S Advanced Digital Communication (4 cr): Convolutional Codes


S Advanced Digital Communication (4 cr): Convolutional Codes

Targets today
- Why to apply convolutional coding?
- Defining convolutional codes
- Practical encoding circuits
- Defining the quality of convolutional codes
- Decoding principles
- Viterbi decoding

Convolutional encoding (n,k,L)

An (n,k,L) encoder takes k message bits in and produces n encoded bits out; each input bit influences n(L+1) output bits. Convolutional codes are applied in applications that require good performance with low implementation complexity. They operate on code streams (not on blocks). Convolutional codes have memory that utilizes previous bits to encode or decode the following bits (block codes are memoryless). Convolutional codes are denoted by (n,k,L), where L is the code (or encoder) memory depth, i.e. the number of register stages. The constraint length C = n(L+1) is defined as the number of encoded bits that a single message bit can influence. Convolutional codes achieve good performance by expanding their memory depth.

Example: Convolutional encoder, k = 1, n = 2

The circuit computes x' = m ⊕ m_1 ⊕ m_2 and x'' = m ⊕ m_2, where m is the current message bit and m_1, m_2 are the two previous bits stored in the shift register; the memory depth is L = 2 and the output stream is x_1' x_1'' x_2' x_2'' ... This is an (n,k,L) = (2,1,2) encoder. A convolutional encoder is a finite state machine (FSM) processing information bits in a serial manner, so the generated code is a function of the input and of the state of the FSM. In this (2,1,2) encoder each message bit influences a span of C = n(L+1) = 6 successive output bits (the constraint length C). Thus, for generation of the n-bit output, we require in this example n shift registers in a k = 1 convolutional encoder.
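To make the finite-state-machine view concrete, here is a minimal Python sketch of this (2,1,2) encoder, assuming the taps x' = m ⊕ m_1 ⊕ m_2 and x'' = m ⊕ m_2 read from the circuit above (the function name is arbitrary):

```python
def encoder_212(message_bits):
    """Shift-register view of the (2,1,2) encoder: x' = m ^ m1 ^ m2, x'' = m ^ m2."""
    m1 = m2 = 0                      # register contents (the two previous message bits)
    encoded = []
    for m in message_bits:
        encoded += [m ^ m1 ^ m2,     # x'
                    m ^ m2]          # x''
        m1, m2 = m, m1               # shift the register
    return encoded

print(encoder_212([1, 0, 1, 1]))     # two output bits per message bit
```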

Example: (n,k,L) = (3,2,1) convolutional encoder

Here k = 2 input bits are shifted in at a time and the n = 3 outputs x', x'' and x''' are formed as modulo-2 sums of the register contents (see the circuit). After each new block of k input bits there follows a transition into a new state; hence, from each state, 2^k different next states may follow. Each message bit influences a span of C = n(L+1) = 3(1+1) = 6 successive output bits.

Generator sequences

An (n,k,L) convolutional code can be described by the generator sequences g(1), g(2), ..., g(n), which are the impulse responses of the encoder's n output branches: g(j) = (g_0(j), g_1(j), ..., g_L(j)). For the (2,1,2) encoder above, g(1) = (1 1 1) and g(2) = (1 0 1). Note that the generator sequence length always exceeds the register depth by one. The generator sequences specify the convolutional code completely through the associated generator matrix: the encoded sequence is produced by (modulo-2) matrix multiplication of the input and the generator matrix.
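The generator sequences can be read out as impulse responses. Below is a small sketch for the (2,1,2) encoder (same assumed taps as above): feeding a single 1 followed by L zeros should return g(1) = (1 1 1) and g(2) = (1 0 1).

```python
def encoder_212(message_bits):
    """(2,1,2) encoder from the example: x' = m ^ m1 ^ m2, x'' = m ^ m2 (assumed taps)."""
    m1 = m2 = 0
    branches = []
    for m in message_bits:
        branches.append((m ^ m1 ^ m2, m ^ m2))
        m1, m2 = m, m1
    return branches

L = 2
impulse = [1] + [0] * L                       # a single 1, then flush with L zeros
response = encoder_212(impulse)
g1 = [x for x, _ in response]                 # impulse response of output branch 1
g2 = [y for _, y in response]                 # impulse response of output branch 2
print(g1, g2)                                 # expected: [1, 1, 1] [1, 0, 1]
```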

Convolution point of view in encoding, and the generator matrix

The encoder outputs are formed by modulo-2 discrete convolutions

v(1) = u * g(1), v(2) = u * g(2), ..., v(n) = u * g(n),

where u = (u_0, u_1, u_2, ...) is the information sequence. Therefore the l:th bit of the j:th output branch is*

v_l(j) = u_l g_0(j) ⊕ u_{l-1} g_1(j) ⊕ ... ⊕ u_{l-L} g_L(j), with u_{l-m} = 0 for l < m.

Hence, for the circuit with g(1) = (1 0 1 1) and g(2) = (1 1 1 1), the following branch equations result:

v_l(1) = u_l ⊕ u_{l-2} ⊕ u_{l-3}
v_l(2) = u_l ⊕ u_{l-1} ⊕ u_{l-2} ⊕ u_{l-3}

and the encoder output is obtained by interleaving the branch outputs:

v = (v_0(1) v_0(2), v_1(1) v_1(2), v_2(1) v_2(2), ...).

*Note that u is reversed in time, as in the definition of convolution x*y(u) = sum_k x(k) y(u-k).

Example: Using the generator matrix

With g(1) = (1 0 1 1) and g(2) = (1 1 1 1), the generator matrix has one row per input bit: the first row is the interleaved pair (g_0(1) g_0(2), g_1(1) g_1(2), ..., g_L(1) g_L(2)), and each subsequent row is the previous one shifted n = 2 positions to the right. The encoded word is then v = u G (modulo 2). Verify that you can obtain the result shown!
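As an illustration of the matrix form, the sketch below builds the generator matrix for the generator sequences g(1) = (1 0 1 1), g(2) = (1 1 1 1) quoted above and encodes the message u = (1 1 1 0 1) used in the state-diagram example on a later slide; it is a sketch of the idea, not of any particular library routine.

```python
g1, g2 = [1, 0, 1, 1], [1, 1, 1, 1]     # generator sequences quoted above
n, memory = 2, len(g1) - 1              # n output branches, L = len(g) - 1 register stages

def generator_matrix(k_bits):
    """One row per input bit: the interleaved generators, shifted n columns per row."""
    first_row = [bit for pair in zip(g1, g2) for bit in pair]
    total_cols = n * (k_bits + memory)
    rows = []
    for i in range(k_bits):
        row = [0] * (n * i) + first_row
        rows.append(row + [0] * (total_cols - len(row)))
    return rows

u = [1, 1, 1, 0, 1]
G = generator_matrix(len(u))
v = [sum(u_i * g_ij for u_i, g_ij in zip(u, column)) % 2 for column in zip(*G)]
print(v)   # interleaved output; grouped in pairs: 11 10 01 01 11 10 11 11
```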

Representing convolutional codes: the code tree

(Reference: S. Lin, D. J. Costello: Error Control Coding, 2nd ed.)

The number of branches leaving each node of the tree equals 2^k. For the (n,k,L) = (2,1,2) encoder with x' = m ⊕ m_1 ⊕ m_2 and x'' = m ⊕ m_2, the output stream is x_1' x_1'' x_2' x_2'' ... The code tree tells how one input bit is transformed into two output bits (initially the register is all zero): starting from the root, each node forks according to the next input bit, and the branch labels give the corresponding output pair x'x''.
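A tiny sketch of how the first two levels of such a code tree can be generated for this encoder (same assumed taps as before):

```python
def branch(state, bit):
    """Output pair and next state for the (2,1,2) encoder; state = (m1, m2)."""
    m1, m2 = state
    return (bit ^ m1 ^ m2, bit ^ m2), (bit, m1)

root = (0, 0)                                  # register initially all zero
for b1 in (0, 1):
    out1, s1 = branch(root, b1)
    print(f"input {b1}: output {out1}")
    for b2 in (0, 1):
        out2, _ = branch(s1, b2)
        print(f"  then input {b2}: output {out2}")
```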

Representing convolutional codes compactly: the code trellis and the state diagram

In the code trellis and the state diagram the nodes are the shift-register states; an input '1' is indicated by a dashed branch.

Inspecting the state diagram reveals the structural properties of convolutional codes:
- Each new block of k input bits causes a transition into a new state.
- Hence there are 2^k branches leaving each state.
- Assuming the encoder starts from the all-zero state, the encoded word for any input can be read directly from the diagram. For instance, for u = (1 1 1 0 1) the encoded word v = (11, 10, 01, 01, 11, 10, 11, 11) is produced. Verify that you obtain the same result!

The figure shows the corresponding encoder state diagram; note that the number of states is 8 = 2^(L+1).
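The same encoding can be traced through the state diagram. Below is a minimal sketch assuming the generator sequences g(1) = (1 0 1 1), g(2) = (1 1 1 1) from the generator-matrix example; walking the states for u = (1 1 1 0 1), followed by flushing zeros, should reproduce v = (11, 10, 01, 01, 11, 10, 11, 11).

```python
g = ([1, 0, 1, 1], [1, 1, 1, 1])                  # generator sequences (assumed, see above)
memory = len(g[0]) - 1

def step(state, bit):
    """One state-diagram transition: returns (output pair, next state)."""
    window = (bit,) + state                       # current bit followed by register bits
    out = tuple(sum(w * gm for w, gm in zip(window, gj)) % 2 for gj in g)
    return out, (bit,) + state[:-1]               # shift the register by one

u = [1, 1, 1, 0, 1]
state = (0,) * memory                             # encoder starts in the all-zero state
v = []
for bit in u + [0] * memory:                      # zeros drive the encoder back to S0
    out, state = step(state, bit)
    v.append(out)
print(v)   # expected: (1,1) (1,0) (0,1) (0,1) (1,1) (1,0) (1,1) (1,1)
```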

Code weight, path gain, and the generating function

The state diagram can be modified to yield information on the code distance properties, i.e. on how good the code is at detecting or correcting errors. Rules (example on the next slide):
(1) Split S_0 into an initial and a final state and remove the self-loop.
(2) Label each branch by the branch gain X^i, where i is the weight* of the n encoded bits on that branch. For a rate-1/2 code, for example, a branch with output 10 has gain X^1 (weight 1) and a branch with output 11 has gain X^2 (weight 2).
(3) Each path connecting the initial state and the final state represents a nonzero code word that diverges from and re-emerges with S_0 only once.

The path gain is the product of the branch gains along a path, and the weight of the associated code word is the power of X in the path gain. The code weight distribution is obtained by using a weighted gain formula to compute the generating function (input-output equation)

T(X) = sum_i A_i X^i,

where A_i is the number of encoded words of weight i.

*In linear codes, the weight is the number of 1's in the encoder output.

Example: the path representing the state sequence S_0 S_1 S_3 S_7 S_6 S_5 S_2 S_4 S_0 has the path gain X^2 · X^1 · X^1 · X^1 · X^2 · X^1 · X^2 · X^2 = X^12, and the corresponding code word has weight 12. Where do these terms come from?

T(X) = sum_i A_i X^i = X^6 + 3X^7 + 5X^8 + ...

Distance properties of convolutional codes

Code strength is measured by the minimum free distance

d_free = min { d(v', v'') : u' ≠ u'' },

where v' and v'' are the encoded words corresponding to the information sequences u' and u''. The code can correct t channel errors provided t < d_free / 2. The minimum free distance d_free equals:
- the minimum weight of all the paths in the state diagram that diverge from and re-merge with the all-zero state S_0,
- the lowest power of X in the code-generating function T(X).

For the example above, T(X) = X^6 + 3X^7 + 5X^8 + ..., so d_free = 6.

Coding gain*: G_c = k d_free / (2n) = R_c d_free / 2.

*For the derivation, see Carlson, p. 583.

Coding gain for some selected convolutional codes

The table lists some selected convolutional codes and their coding gains R_c d_free / 2, expressed for hard decoding also as 10 log10(R_c d_free / 2) dB.
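The free distance and the hard-decision coding gain can be checked numerically by brute force over short messages. The sketch below assumes the g(1) = (1 0 1 1), g(2) = (1 1 1 1) code used in the earlier examples; it should report 6, in line with the d_free read off T(X) above, and about 1.8 dB for the gain.

```python
import math
from itertools import product

g = ([1, 0, 1, 1], [1, 1, 1, 1])        # assumed generator sequences (see earlier slides)
memory = len(g[0]) - 1

def encode(u):
    """Flushed codeword of message u, as a flat list of interleaved output bits."""
    out = []
    for l in range(len(u) + memory):
        for gj in g:
            out.append(sum(gj[m] * (u[l - m] if 0 <= l - m < len(u) else 0)
                           for m in range(memory + 1)) % 2)
    return out

# Minimum codeword weight over all nonzero messages up to length 7 (brute force).
d_free = min(sum(encode(list(u)))
             for k in range(1, 8)
             for u in product([0, 1], repeat=k) if any(u))
R_c = 1 / 2
print(d_free, 10 * math.log10(R_c * d_free / 2))   # d_free and hard-decision gain in dB
```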

Decoding of convolutional codes

Maximum likelihood decoding of convolutional codes means finding the code branch in the code trellis that was most likely transmitted. Therefore maximum likelihood decoding is based on calculating code Hamming distances for each branch that could potentially form the encoded word. Assume that the information symbols applied to an AWGN channel are equally likely and independent. Denote by x the encoded symbols (without errors) and by y the received (potentially erroneous) symbols:

x = x_0 x_1 ... x_j ...,   y = y_0 y_1 ... y_j ...

The probability of receiving y given that x was sent is then

p(y, x) = prod_j p(y_j | x_j),

and the most likely path through the trellis maximizes this metric. Often the logarithm is taken on both sides, because the probabilities are often small numbers, yielding

ln p(y, x) = sum_j ln p(y_j | x_j).

(Note that this corresponds equivalently to the smallest Hamming distance.) The decoder is thus essentially a distance calculation that turns the received code into bit decisions.

Example of exhaustive maximum likelihood detection

Assume a three-bit message is transmitted and encoded by the (2,1,2) convolutional encoder. To clear the decoder, two zero bits are appended after the message; thus 5 bits are encoded, resulting in 10 bits of code. Assume the channel error probability is p = 0.1 and that after the channel a 10-bit word (containing some errors) is received. What comes out of the decoder, i.e. what was most likely the transmitted code word and what were the respective message bits? (The trellis figure shows the states a, b, c, d and the decoder outputs if a particular path is selected.)
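A sketch of this exhaustive search follows; the received word y below is only a placeholder, since the transcription does not preserve the original received bits.

```python
from itertools import product

def encode_212(msg):
    """(2,1,2) encoder from the example: x' = m ^ m1 ^ m2, x'' = m ^ m2."""
    m1 = m2 = 0
    out = []
    for m in msg:
        out += [m ^ m1 ^ m2, m ^ m2]
        m1, m2 = m, m1
    return out

y = [0, 1, 1, 0, 0, 1, 1, 1, 0, 0]           # placeholder received word (10 bits)

# Try all 2^3 three-bit messages (each flushed with two zeros) and keep the
# one whose codeword has the smallest Hamming distance to y.
best = min(
    (sum(a != b for a, b in zip(encode_212(list(msg) + [0, 0]), y)), msg)
    for msg in product([0, 1], repeat=3)
)
print("Hamming distance", best[0], "message", best[1])
```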

p(y, x) = prod_j p(y_j | x_j),   ln p(y, x) = sum_j ln p(y_j | x_j)

Each received bit contributes a weight of ln(1-p) if it was received correctly and ln(p) if it was received in error. Here 8 bits are correct and 2 are in error:

correct: 8 × ln(0.9) ≈ 8 × (-0.11) = -0.88
errors:  2 × ln(0.1) ≈ 2 × (-2.30) = -4.60
total path metric: -5.48

Note also the Hamming distances! This is the largest metric; verify that you get the same result.
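The same arithmetic in a couple of lines (with unrounded logarithms the total is about -5.45; the -5.48 above comes from using the rounded value -0.11 for ln 0.9):

```python
import math

p = 0.1                                            # channel bit error probability
metric = 8 * math.log(1 - p) + 2 * math.log(p)     # 8 correct bits, 2 erroneous bits
print(metric)                                      # about -5.45
```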

The Viterbi algorithm

The problem of optimum decoding is to find the minimum distance path from the initial state back to the initial state (below, from S_0 to S_0). The minimum distance is the smallest among the accumulated path metrics of all paths from S_0 back to S_0. The exhaustive maximum likelihood method must search all the paths in the trellis (2^k paths emerging from and entering each of the 2^(L+1) states of an (n,k,L) code). The Viterbi algorithm gains computational efficiency by concentrating on the survivor paths of the trellis.

The survivor path

Assume for simplicity a convolutional code with k = 1, so that up to 2^k = 2 branches can enter each state in the trellis diagram. Assume the optimal path passes through state S, which can be entered from two predecessor states S_1 and S_2. The metric comparison is done by adding, for each entering branch, its branch metric to the accumulated metric of S_1 or S_2. On the survivor path the accumulated metric is naturally smaller (otherwise it could not be the optimum path). For this reason the non-surviving path can be discarded, so not all path alternatives need to be considered further. (In the trellis the number of nodes is determined by the memory depth, 2^k branches enter each node, and the branch with the larger metric is discarded.) Note that in principle the whole transmitted sequence must be received before a decision can be made; however, in practice, storing the states over an input length of 5L is quite adequate.
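The add-compare-select step at one trellis node, as a minimal sketch (names are arbitrary): two candidate paths enter, and the one with the smaller accumulated Hamming metric survives.

```python
def add_compare_select(acc_metric_1, branch_metric_1, acc_metric_2, branch_metric_2):
    """Keep the entering path with the smaller accumulated metric; discard the other."""
    candidate_1 = acc_metric_1 + branch_metric_1
    candidate_2 = acc_metric_2 + branch_metric_2
    if candidate_1 <= candidate_2:
        return candidate_1, 1            # survivor metric, index of surviving predecessor
    return candidate_2, 2

print(add_compare_select(3, 1, 2, 3))    # -> (4, 1): the path through predecessor 2 is dropped
```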

Example of using the Viterbi algorithm

Assume the received sequence y shown in the figure and the (n,k,L) = (2,1,2) encoder shown below. Determine the Viterbi decoded output sequence! (Note that for this encoder the code rate is 1/2 and the memory depth equals L = 2.)

The maximum likelihood path

After the register length L+1 = 3, the branch pattern begins to repeat. At the first depth with two entries per node, the branch giving the smaller accumulated metric is selected (branch Hamming distances are shown in parentheses). The decoded ML code sequence is the surviving path whose Hamming distance to the received sequence is 4, and the respective decoded message sequence follows from the branch labels (why?). Note that this is the minimum distance path. (Black circles denote the deleted branches; dashed lines indicate that a '1' was applied.)
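Below is a self-contained hard-decision Viterbi sketch for the (2,1,2) encoder of the example. Since the received sequence y is not preserved in this transcription, the demo encodes a made-up three-bit message, flips two bits, and checks that the decoder recovers it.

```python
from itertools import product

def branch(state, bit):
    """Output pair and next state for the (2,1,2) encoder; state = (m1, m2)."""
    m1, m2 = state
    return (bit ^ m1 ^ m2, bit ^ m2), (bit, m1)

def encode(bits):
    state, out = (0, 0), []
    for b in bits:
        pair, state = branch(state, b)
        out.append(pair)
    return out

def viterbi(received_pairs, n_msg):
    """Hard-decision Viterbi; assumes the trellis is flushed back to state (0, 0)."""
    states = list(product([0, 1], repeat=2))
    metric = {s: 0 if s == (0, 0) else float("inf") for s in states}
    path = {s: [] for s in states}
    for r in received_pairs:
        new_metric, new_path = {}, {}
        for s in states:
            best = (float("inf"), [])
            for prev in states:
                if metric[prev] == float("inf"):
                    continue
                for bit in (0, 1):
                    out, nxt = branch(prev, bit)
                    if nxt != s:
                        continue                         # this branch does not enter s
                    d = metric[prev] + (out[0] != r[0]) + (out[1] != r[1])
                    if d < best[0]:
                        best = (d, path[prev] + [bit])   # survivor entering state s
            new_metric[s], new_path[s] = best
        metric, path = new_metric, new_path
    return path[(0, 0)][:n_msg], metric[(0, 0)]

message = [1, 0, 1]                       # hypothetical message, not from the slides
tx = encode(message + [0, 0])             # two flushing zeros -> 5 output pairs
rx = [list(p) for p in tx]
rx[1][0] ^= 1                             # introduce two channel errors
rx[3][1] ^= 1
decoded, distance = viterbi([tuple(p) for p in rx], len(message))
print(decoded, distance)                  # expect [1, 0, 1] with accumulated distance 2
```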

How to end the decoding?

In the previous example it was assumed that the register was finally filled with zeros, which makes it possible to find the minimum distance path. In practice, with long code words, zeroing requires feeding a long sequence of zeros after the message bits: this wastes channel capacity and introduces delay. To avoid this, path memory truncation is applied:
- Trace all the surviving paths back to the depth where they merge.
- The figure shows such a common point at a memory depth J.
- J is a random variable whose applicable magnitude, shown in the figure as about 5L stages of the trellis, has been experimentally found to give a negligible error-rate increase.
- Note that this also introduces a delay of 5L.

Lessons learned
- You understand the differences between cyclic codes and convolutional codes.
- You can create the state diagram for a convolutional encoder.
- You know how to construct convolutional encoder circuits based on known generator sequences.
- You can analyze code strength based on known code generation circuits, state diagrams or generator sequences.
- You understand how to realize maximum likelihood convolutional decoding by exhaustive search.
- You understand the principle of Viterbi decoding.
