Convolutional Codes (Lecture 13)


Convolutional Codes

Goals of this lecture:
- Be able to encode using a convolutional code.
- Be able to decode a convolutional code received over a binary symmetric channel or an additive white Gaussian noise channel.

Convolutional codes are an alternative way of introducing redundancy into the data stream. They are preferred to block codes in many situations because of the ease with which soft decision decoding can be performed. They are called convolutional codes because the encoder output can be written as the convolution of the encoder input with a generator sequence.

A convolutional encoder consists of k shift registers, each of length M or less. The n outputs are linear combinations of the contents of the shift registers and the input. A convolutional code is described by the memory length M (or constraint length K = M + 1), the number of input bits k, the number of output bits n, and the connections between the shift registers and the outputs.

Example (K = 3, M = 2, rate 1/2 code).

Figure 93: Encoder for the rate 1/2, constraint length 3 convolutional code.
Figure 94: State diagram for the rate 1/2, constraint length 3 convolutional code (branches labeled with input/output bits).
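As a concrete illustration of the encoder just described, here is a minimal Python sketch of a rate 1/2, constraint length 3 encoder. The generator taps (octal 7 and 5) are the usual textbook choice and are an assumption here, since the connections in Figure 93 are not reproduced.

```python
# Minimal sketch of a rate-1/2, constraint-length-3 (memory-2) convolutional
# encoder.  Generator taps g0 = 111 (octal 7) and g1 = 101 (octal 5) are the
# usual textbook choice and are an assumption here, not taken from the figure.

def conv_encode(bits, g0=(1, 1, 1), g1=(1, 0, 1)):
    """Encode a bit sequence; returns two output bits per input bit."""
    state = [0, 0]                    # contents of the two shift-register cells
    out = []
    for b in bits:
        window = [b] + state          # current input followed by the past two inputs
        out.append(sum(w * g for w, g in zip(window, g0)) % 2)
        out.append(sum(w * g for w, g in zip(window, g1)) % 2)
        state = [b, state[0]]         # shift the register
    return out

# Example: encode a short message (the two trailing zeros return the encoder to state 00).
print(conv_encode([1, 0, 1, 1, 0, 0]))
```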

We can describe the sequence of states the encoder passes through in time via a trellis diagram. This will be useful for decoding purposes.

Figure 95: Trellis diagram for the rate 1/2, constraint length 3 convolutional code.

Decoding Convolutional Codes

Consider the state diagram for a particular convolutional code. Let $x_m$ be the state at time $m$ and let $x_0$ be the initial state of the process. Later on we will denote the states by the integers $1, 2, \ldots, N$. Since this is a Markov process we have

$p(x_m \mid x_{m-1}, \ldots, x_0) = p(x_m \mid x_{m-1})$.

That is, the state at time $m$ depends only on the state at time $m-1$ and not on any earlier state. Let $w_m = (x_m, x_{m+1})$ be the state transition at time $m$. There is a one-to-one correspondence between state sequences $x = (x_0, \ldots, x_M)$ and transition sequences $w = (w_0, \ldots, w_{M-1})$. By some mechanism (e.g. a noisy channel) a noisy version $z_m$ of the state transition $w_m$ is observed. Based on this noisy version $z = (z_0, \ldots, z_{M-1})$ we wish to estimate the state sequence $x$ or, equivalently, the transition sequence $w$. Since $x$ and $w$ contain the same information, $p(z \mid x) = p(z \mid w)$. If the channel is memoryless then

$p(z \mid w) = \prod_{m=0}^{M-1} p(z_m \mid w_m)$.

So, given an observation $z$, find the state sequence $x$ for which the a posteriori probability $p(x \mid z)$ is largest. This minimizes the probability of choosing the wrong sequence. Thus the optimum (minimum sequence error probability) decoder chooses the $\hat{x}$ which maximizes $p(x \mid z)$:

$\hat{x} = \arg\max_x \, p(x \mid z) = \arg\max_x \, \log p(x \mid z) = \arg\max_x \left[ \log p(z \mid x) + \log p(x) \right]$.

Using the memoryless property of the channel we obtain

$p(z \mid x) = \prod_{m=0}^{M-1} p(z_m \mid w_m)$,

and using the Markov property of the state sequence

$p(x) = p(x_0) \prod_{m=0}^{M-1} p(x_{m+1} \mid x_m)$.

Define $\lambda(w_m)$ as follows:

$\lambda(w_m) = -\ln p(z_m \mid w_m) - \ln p(x_{m+1} \mid x_m)$.

Then

$\hat{x} = \arg\min_x \sum_{m=0}^{M-1} \lambda(w_m)$.

This problem formulation leads to a recursive solution.
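Before turning to the recursion, the branch metric can be made concrete. The sketch below evaluates $\lambda(w_m)$ for a binary symmetric channel, assuming equally likely input bits, so that the prior term $-\ln p(x_{m+1} \mid x_m)$ is the same for every allowed branch and can be dropped.

```python
import math

# Branch metric sketch for a binary symmetric channel with crossover
# probability p.  Assumes equally likely input bits, so the prior term
# -ln p(x_{m+1} | x_m) is constant over allowed branches and is dropped.

def branch_metric(received, branch_label, p=0.05):
    """lambda(w_m) = -ln p(z_m | w_m) for one trellis branch.

    received      -- the n received bits for this trellis stage, e.g. (1, 0)
    branch_label  -- the n encoder output bits labelling the branch, e.g. (1, 1)
    """
    d = sum(r != c for r, c in zip(received, branch_label))   # Hamming distance
    n = len(received)
    return -(d * math.log(p) + (n - d) * math.log(1 - p))

# The metric grows with Hamming distance, so minimizing the total metric is
# equivalent to minimum-Hamming-distance decoding on a BSC.
print(branch_metric((1, 0), (1, 1)), branch_metric((1, 1), (1, 1)))
```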

The recursive solution is called the Viterbi Algorithm by communication engineers and is a form of Dynamic Programming as studied by control engineers. They are really the same thing.

VITERBI ALGORITHM

Let $\Gamma(x_m)$ be the length (the optimization criterion) of the shortest (optimum) path to state $x_m$ at time $m$. Let $\hat{x}(x_m)$ be the shortest path to state $x_m$ at time $m$. Let $\hat{\Gamma}(x_{m+1}, x_m)$ be the length of the path to state $x_{m+1}$ at time $m+1$ that goes through state $x_m$ at time $m$. Then the algorithm works as follows.

Storage: the time index $m$; for each state, the survivor path $\hat{x}(x_m)$ and its length $\Gamma(x_m)$.

Initialization: $m = 0$; $\Gamma(x_0) = 0$ for the known initial state $x_0$ (the entries for the other states are arbitrary, e.g. set to $\infty$); $\hat{x}(x_0) = (x_0)$.

Recursion: for each state $x_{m+1}$,

$\hat{\Gamma}(x_{m+1}, x_m) = \Gamma(x_m) + \lambda(w_m)$,
$\Gamma(x_{m+1}) = \min_{x_m} \hat{\Gamma}(x_{m+1}, x_m)$,
$\hat{x}(x_{m+1}) = (\hat{x}(x_m^*), x_{m+1})$, where $x_m^* = \arg\min_{x_m} \hat{\Gamma}(x_{m+1}, x_m)$.

Justification: Basically we are interested in finding the shortest length path through the trellis. At time $m+1$ we find the shortest length paths to each of the possible states at time $m+1$ by computing all possible ways of getting to state $u$ from a state at time $m$. If the shortest path (denoted by $\hat{x}(u)$) to get to $u$ at time $m+1$ goes through state $v$ at time $m$, i.e. $\hat{x}(u) = (\hat{x}(v), u)$, then the corresponding path $\hat{x}(v)$ to state $v$ must be the shortest path to state $v$ at time $m$: if there were a shorter path, say $x'(v)$, to state $v$ at time $m$, then the path $(x'(v), u)$ to state $u$ at time $m+1$ that used this shorter path to state $v$ would be shorter than what we assumed was the shortest path. Stated another way, if the shortest way of getting to state $u$ at time $m+1$ is by going through state $v$ at time $m$, then the path used to get to state $v$ at time $m$ must be the shortest of all paths to state $v$ at time $m$.
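The recursion translates almost directly into code. The sketch below is a hard-decision Viterbi decoder for the same rate 1/2, memory 2 code, again assuming the (7,5) generator taps, a binary symmetric channel (so the branch metric reduces to Hamming distance), and a trellis that starts and terminates in the all-zero state.

```python
# Hard-decision Viterbi decoder sketch for a rate-1/2, memory-2 convolutional
# code.  Assumptions: (7,5) generator taps, a binary symmetric channel (branch
# metric = Hamming distance), and a trellis starting and ending in state 00.

def viterbi_decode(received, g0=(1, 1, 1), g1=(1, 0, 1)):
    n_states = 4                                   # 2**memory
    INF = float("inf")

    def step(state, bit):
        """Next state and output bits when `bit` enters the encoder in `state`."""
        s0, s1 = state >> 1, state & 1             # (previous input, input before that)
        window = (bit, s0, s1)
        o0 = sum(w * g for w, g in zip(window, g0)) % 2
        o1 = sum(w * g for w, g in zip(window, g1)) % 2
        return (bit << 1) | s0, (o0, o1)

    stages = [tuple(received[i:i + 2]) for i in range(0, len(received), 2)]
    gamma = [0.0] + [INF] * (n_states - 1)         # path metrics Gamma(x_m); start in state 0
    survivors = [[] for _ in range(n_states)]      # shortest input sequence into each state

    for z in stages:
        new_gamma = [INF] * n_states
        new_survivors = [[] for _ in range(n_states)]
        for s in range(n_states):
            if gamma[s] == INF:
                continue
            for bit in (0, 1):
                ns, out = step(s, bit)
                metric = gamma[s] + sum(a != b for a, b in zip(z, out))
                if metric < new_gamma[ns]:         # keep only the shortest path into ns
                    new_gamma[ns] = metric
                    new_survivors[ns] = survivors[s] + [bit]
        gamma, survivors = new_gamma, new_survivors

    return survivors[0]                            # survivor ending in the all-zero state

# Encoding [1,0,1,1,0,0] with the (7,5) taps gives 111000 010111; flip one
# channel bit and the decoder still recovers the input (including the tail zeros).
noisy = [1, 1, 1, 0, 0, 0, 0, 1, 0, 1, 1, 1]
noisy[3] ^= 1
print(viterbi_decode(noisy))                       # -> [1, 0, 1, 1, 0, 0]
```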


Error Bounds for Convolutional Codes

The performance of convolutional codes can be upper bounded by

$P_b \le \sum_{l = d_{\mathrm{free}}}^{\infty} w_l D^l$,

where $w_l$ is the average number of nonzero information bits on paths with Hamming distance $l$, and $D$ is a parameter that depends only on the channel. Usually the summation in the upper bound is truncated to some finite number of terms.

Example (Binary Symmetric Channel with crossover probability $p$): $D = 2\sqrt{p(1-p)}$.
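As a numerical illustration, the sketch below evaluates $D$ for a BSC and a truncated version of the bound. The weight spectrum used (1, 4, 12, 32, 80 for $l = 5, \ldots, 9$) is the commonly quoted bit-weight spectrum of the (7,5) code and is assumed here rather than taken from these notes.

```python
import math

# Bhattacharyya parameter for a BSC with crossover probability p, and a
# truncated evaluation of the bound  P_b <= sum_{l >= d_free} w_l * D**l.
# The spectrum below (w_5..w_9 = 1, 4, 12, 32, 80) is the commonly quoted
# bit-weight spectrum of the (7,5) rate-1/2 code and is an assumption here.

def bsc_D(p):
    return 2.0 * math.sqrt(p * (1.0 - p))

w = {5: 1, 6: 4, 7: 12, 8: 32, 9: 80}   # assumed spectrum, truncated at l = 9

p = 0.01
D = bsc_D(p)
bound = sum(w_l * D ** l for l, w_l in w.items())
print(f"D = {D:.4f}, truncated bound on P_b = {bound:.3e}")
```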

Example (Additive White Gaussian Noise channel): $D = e^{-E/N_0}$, where $E$ is the energy per transmitted code symbol.

Performance Examples

Generally hard decisions require about 2 dB more signal energy than soft decisions for the same bit error probability. Also, soft decisions are only about 0.25 dB better than 8-level quantization.

Standard codes:

Example (Convolutional Code 1): constraint length 3, memory 2, 4-state decoder, rate 1/2. Its bit error probability is bounded by a series of the above form beginning at $D^5$. There is a chip made by Qualcomm and Stanford Telecommunications, operating at data rates on the order of megabits per second, that will do the encoding and decoding.

Example (Convolutional Code 2): constraint length 9, memory 8, 256-state decoder, rate 1/2.

Example (Convolutional Code 3): constraint length 9, memory 8, 256-state decoder, rate 1/3.
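The roughly 2 dB gap between hard and soft decisions can be seen by evaluating both bounds at the same $E_b/N_0$. The sketch assumes antipodal signalling and a rate 1/2 code, and reuses the assumed (7,5) weight spectrum from the previous sketch.

```python
import math

# Compare the Bhattacharyya parameters of hard- and soft-decision decoding at
# the same Eb/N0.  Assumptions: antipodal signalling, code rate R = 1/2, and
# the assumed (7,5) bit-weight spectrum from the previous sketch.

def q_func(x):                                        # Gaussian tail probability Q(x)
    return 0.5 * math.erfc(x / math.sqrt(2.0))

R = 0.5
w = {5: 1, 6: 4, 7: 12, 8: 32, 9: 80}

for ebn0_db in (4.0, 6.0):
    ebn0 = 10 ** (ebn0_db / 10.0)
    es_n0 = R * ebn0                                  # energy per coded symbol
    D_soft = math.exp(-es_n0)                         # soft decisions
    p_hard = q_func(math.sqrt(2.0 * es_n0))           # BSC crossover after hard decisions
    D_hard = 2.0 * math.sqrt(p_hard * (1.0 - p_hard))
    b_soft = sum(c * D_soft ** l for l, c in w.items())
    b_hard = sum(c * D_hard ** l for l, c in w.items())
    print(f"Eb/N0 = {ebn0_db} dB: soft bound {b_soft:.2e}, hard bound {b_hard:.2e}")
```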

Example (K = 3, M = 2, rate 1/2 code): this code has $d_f = 5$, and its weight enumerator polynomial is a power series in $D$ beginning at $D^5$,

$W(D) = D^5 + \cdots$.

A second example code is treated in the same way: it has its own free distance $d_f$ and weight enumerator polynomial for determining the bit error probability.

We can upper bound the bit error probability by

$P_b \le \sum_{l = d_f}^{\infty} w_l P_l \le \sum_{l = d_f}^{\infty} w_l D^l = W(D)$,

where $P_l$ is the probability that a path at Hamming distance $l$ is chosen over the correct path. The first bound is the union bound. It is impossible to evaluate this bound exactly because there are an infinite number of terms in the summation. Dropping all but the first $N$ terms gives an approximation; it may no longer be an upper bound, though. If the weight enumerator is known, we can get arbitrarily close to the union bound and still get a true bound, as follows:

$P_b \le \sum_{l = d_f}^{N} w_l P_l + \sum_{l = N+1}^{\infty} w_l D^l = \sum_{l = d_f}^{N} w_l \left( P_l - D^l \right) + W(D)$.

The second term is the Union-Bhattacharyya (U-B) bound. The first term is clearly less than zero, so we get something that is tighter than the U-B bound. By choosing $N$ sufficiently large we can sometimes get significant improvements over the U-B bound.
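A sketch of this tightened bound follows, assuming a BSC, the assumed (7,5) spectrum truncated at $N = 9$, and the closed-form enumerator $W(D) = D^5/(1-2D)^2$ commonly given for that code.

```python
import math

# Sketch of the tightened bound described above:
#   P_b <= sum_{l=d_f..N} w_l * (P_l - D**l)  +  W(D),
# where P_l is the exact pairwise error probability of a distance-l path and
# W(D) is the weight enumerator evaluated at the Bhattacharyya parameter D.
# Assumptions: BSC with crossover p, and W(D) = D**5 / (1 - 2*D)**2, the
# closed-form enumerator commonly quoted for the (7,5) code.

def pairwise_error_bsc(l, p):
    """Exact pairwise error probability for a path at Hamming distance l on a BSC
    (even-distance ties count as an error with probability 1/2)."""
    terms = [math.comb(l, e) * p**e * (1 - p)**(l - e) for e in range(l + 1)]
    if l % 2 == 1:
        return sum(terms[(l + 1) // 2:])
    return 0.5 * terms[l // 2] + sum(terms[l // 2 + 1:])

p = 0.02
D = 2 * math.sqrt(p * (1 - p))
W_D = D**5 / (1 - 2 * D) ** 2            # Union-Bhattacharyya bound by itself
w = {5: 1, 6: 4, 7: 12, 8: 32, 9: 80}    # first few spectrum terms (N = 9)

correction = sum(w_l * (pairwise_error_bsc(l, p) - D**l) for l, w_l in w.items())
print(f"U-B bound: {W_D:.3e}, tightened bound: {W_D + correction:.3e}")
```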

Figure 96: Error probability of constraint length 3 convolutional codes on an additive white Gaussian noise channel with soft decision decoding (upper bound, simulation, and lower bound); bit error probability $P_{e,b}$ versus $E_b/N_0$ in dB.

Figure 97: Error probability of constraint length 3 convolutional codes on an additive white Gaussian noise channel with soft decision decoding (upper bound, simulation).

Figure 98: Upper bounds on bit error probability for the constraint length 3, rate 1/2 convolutional code (union bound, simulation, and uncoded reference; hard and soft decisions).

Figure 99: Error probability of constraint length 3 convolutional codes on an additive white Gaussian noise channel (hard and soft decisions).

Figure 100: Error probability of constraint length 9 convolutional codes on an additive white Gaussian noise channel (hard and soft decisions); bit error probability (bound) versus $E_b/N_0$ in dB.

Convolutional decoder: state versus time (trellis diagrams).
