
1 Introduction to Information Theory Wireless Information Transmission System Lab. Institute of Communications Engineering, National Sun Yat-sen University

2 Table of Contents: Mathematical Models for Information Sources; A Logarithmic Measure of Information; Average Mutual Information and Entropy; Information Measures for Continuous Random Variables; Coding for Discrete Sources (Coding for Discrete Memoryless Sources, Discrete Stationary Sources, The Lempel-Ziv Algorithm); Channel Models and Channel Capacity (Channel Models, Channel Capacity)

3 Introduction Two types of sources: analog sources and digital sources. Whether a source is analog or discrete, a digital communication system is designed to transmit information in digital form. The output of the source must therefore be converted to a format that can be transmitted digitally. This conversion of the source output to a digital form is generally performed by the source encoder, whose output may be assumed to be a sequence of binary digits. In this chapter, we treat source encoding based on mathematical models of information sources and provide a quantitative measure of the information emitted by a source.

4 Mathematical Models for Information Sources The output of any information source is random. The source output is characterized in statistical terms. To construct a mathematical model for a discrete source, we assume that each letter in the alphabet {x_1, x_2, ..., x_L} has a given probability p_k of occurrence:
$$p_k = P(X = x_k), \quad 1 \le k \le L, \qquad \sum_{k=1}^{L} p_k = 1.$$

5 Mathematical Models for Information Sources Two mathematical models of discrete sources: If the output sequence from the source is statistically independent, i.e. the current output letter is statistically independent of all past and future outputs, then the source is said to be memoryless. Such a source is called a discrete memoryless source (DMS). If the discrete source output is statistically dependent, we may construct a mathematical model based on statistical stationarity. By definition, a discrete source is said to be stationary if the joint probabilities of two sequences of length n, say a_1, a_2, ..., a_n and a_{1+m}, a_{2+m}, ..., a_{n+m}, are identical for all n and for all shifts m. In other words, the joint probabilities for any arbitrary length sequence of source outputs are invariant under a shift in the time origin.

6 Mathematical Models for Information Sources Analog source: An analog source has an output waveform x(t) that is a sample function of a stochastic process X(t). Assume that X(t) is a stationary stochastic process with autocorrelation function φ_xx(τ) and power spectral density Φ_xx(f). When X(t) is a band-limited stochastic process, i.e., Φ_xx(f) = 0 for |f| >= W, the sampling theorem may be used to represent X(t) as
$$X(t) = \sum_{n=-\infty}^{\infty} X\!\left(\frac{n}{2W}\right)\frac{\sin\!\left[2\pi W\left(t-\frac{n}{2W}\right)\right]}{2\pi W\left(t-\frac{n}{2W}\right)}.$$
By applying the sampling theorem, we may convert the output of an analog source into an equivalent discrete-time source.

7 Mathematical Models for Information Sources Analog source (cont.): The output samples {X(n/2W)} from the stationary source are generally continuous, and they cannot be represented in digital form without some loss in precision. We may quantize each sample to a set of discrete values, but the quantization process results in loss of precision. Consequently, the original signal cannot be reconstructed exactly from the quantized sample values.

8 A Logarithmic Measure of Information Consider two discrete random variables with possible outcomes x_i, i = 1, 2, ..., n, and y_j, j = 1, 2, ..., m. When X and Y are statistically independent, the occurrence of Y = y_j provides no information about the occurrence of X = x_i. When X and Y are fully dependent, such that the occurrence of Y = y_j determines the occurrence of X = x_i, the information content is simply that provided by the event X = x_i. Mutual information between x_i and y_j: the information content provided by the occurrence of the event Y = y_j about the event X = x_i is defined as
$$I(x_i; y_j) = \log\frac{P(x_i \mid y_j)}{P(x_i)}.$$

9 A Logarithmic Measure of Information The units of I(x_i; y_j) are determined by the base of the logarithm, which is usually selected as either 2 or e. When the base of the logarithm is 2, the units of I(x_i; y_j) are bits. When the base is e, the units of I(x_i; y_j) are called nats (natural units). The information measured in nats is equal to ln 2 times the information measured in bits, since
$$\ln a = \ln 2 \cdot \log_2 a.$$
When X and Y are statistically independent, P(x_i | y_j) = P(x_i) and I(x_i; y_j) = 0. When X and Y are fully dependent, P(x_i | y_j) = 1, and hence I(x_i; y_j) = -log P(x_i), the information of the event X = x_i.

10 A Logarithmic Measure of Information The self-information of the event X = x_i is defined as I(x_i) = -log P(x_i) >= 0. Note that a high-probability event conveys less information than a low-probability event. If there is only a single event x with probability P(x) = 1, then I(x) = 0. Information = amount of uncertainty. Example: a discrete information source emits a binary digit with equal probability. The information content of each output is
$$I(x_i) = -\log_2 P(x_i) = -\log_2 \tfrac{1}{2} = 1\ \text{bit}, \quad x_i = 0, 1.$$
For a block of k binary digits, if the source is memoryless, there are M = 2^k possible k-bit blocks, each equally probable. The self-information of a block is
$$I(x_i') = -\log_2 2^{-k} = k\ \text{bits}.$$
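
As a quick numerical check of these definitions, the short Python sketch below (not part of the original slides; the function name and arguments are illustrative) evaluates the self-information of the two cases above in bits and in nats.

```python
import math

def self_information(p, base=2):
    """Self-information I(x) = -log(P(x)): bits for base 2, nats for base e."""
    return -math.log(p, base)

print(self_information(0.5))          # 1.0 bit: one equally likely binary digit
print(self_information(2 ** -8))      # 8.0 bits: one of 2^8 equally likely 8-bit blocks
print(self_information(0.5, math.e))  # ~0.693 nats = ln 2 times the value in bits
```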

11 A Logarithmic Measure of Information The information provided by the occurrence of the event Y = y_j about the event X = x_i is identical to the information provided by the occurrence of the event X = x_i about the event Y = y_j, since
$$I(x_i; y_j) = \log\frac{P(x_i \mid y_j)}{P(x_i)} = \log\frac{P(x_i, y_j)}{P(x_i)P(y_j)} = \log\frac{P(y_j \mid x_i)P(x_i)}{P(x_i)P(y_j)} = \log\frac{P(y_j \mid x_i)}{P(y_j)} = I(y_j; x_i).$$
Example: X and Y are binary-valued {0, 1} random variables that represent the input and output of a binary channel. The input symbols are equally likely.

12 A Logarithmic Measure of Information Example (cont.): The output symbols depend on the input according to the conditional probabilities
$$P(Y=0 \mid X=0) = 1 - p_0, \qquad P(Y=1 \mid X=0) = p_0,$$
$$P(Y=1 \mid X=1) = 1 - p_1, \qquad P(Y=0 \mid X=1) = p_1.$$
To find the mutual information about X=0 and X=1, given that Y=0, we first compute
$$P(Y=0) = P(Y=0 \mid X=0)P(X=0) + P(Y=0 \mid X=1)P(X=1) = \tfrac{1}{2}(1-p_0) + \tfrac{1}{2}p_1,$$
$$P(Y=1) = P(Y=1 \mid X=0)P(X=0) + P(Y=1 \mid X=1)P(X=1) = \tfrac{1}{2}p_0 + \tfrac{1}{2}(1-p_1).$$

13 A Logarithmic Measure of Information Example (cont.): The mutual information about X=0, given that Y=0, is
$$I(x_1; y_1) = I(0; 0) = \log_2\frac{P(Y=0 \mid X=0)}{P(Y=0)} = \log_2\frac{2(1-p_0)}{1 - p_0 + p_1}.$$
The mutual information about X=1, given that Y=0, is
$$I(x_2; y_1) = I(1; 0) = \log_2\frac{2p_1}{1 - p_0 + p_1}.$$
If the channel is noiseless, p_0 = p_1 = 0: I(0; 0) = log_2 2 = 1 bit. If the channel is useless, p_0 = p_1 = 0.5: I(0; 0) = log_2 1 = 0 bit.
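
The following Python sketch (an illustration, not from the slides) reproduces this calculation for a binary channel with equally likely inputs and checks the noiseless and useless cases.

```python
import math

def bsc_pointwise_mi(p0, p1):
    """I(X=0; Y=0) for a binary channel with P(1|0)=p0, P(0|1)=p1, equally likely inputs."""
    p_y0 = 0.5 * (1 - p0) + 0.5 * p1       # P(Y=0) by total probability
    return math.log2((1 - p0) / p_y0)       # log2 [ P(Y=0|X=0) / P(Y=0) ]

print(bsc_pointwise_mi(0.0, 0.0))   # noiseless channel: 1.0 bit
print(bsc_pointwise_mi(0.5, 0.5))   # useless channel:   0.0 bit
```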

14 A Logarithmic Measure of Information Conditional self-information is defined as
$$I(x_i \mid y_j) = \log\frac{1}{P(x_i \mid y_j)} = -\log P(x_i \mid y_j) \ge 0,$$
so that
$$I(x_i; y_j) = \log P(x_i \mid y_j) - \log P(x_i) = I(x_i) - I(x_i \mid y_j).$$
We interpret I(x_i | y_j) as the self-information about the event X = x_i after having observed the event Y = y_j. The mutual information between a pair of events can be either positive, negative, or zero, since both I(x_i | y_j) and I(x_i) are greater than or equal to zero.

15 Average Mutual Information and Entropy The average mutual information between X and Y is
$$I(X;Y) = \sum_{i=1}^{n}\sum_{j=1}^{m} P(x_i, y_j)\,I(x_i; y_j) = \sum_{i=1}^{n}\sum_{j=1}^{m} P(x_i, y_j)\log\frac{P(x_i, y_j)}{P(x_i)P(y_j)}.$$
I(X;Y) = 0 when X and Y are statistically independent, and I(X;Y) >= 0 in general. The average self-information H(X) is
$$H(X) = \sum_{i=1}^{n} P(x_i)\,I(x_i) = -\sum_{i=1}^{n} P(x_i)\log P(x_i).$$
When X represents the alphabet of possible output letters from a source, H(X) represents the average self-information per source letter, and it is called the entropy.

16 Average Mutual Information and Entropy In the special case in which the letters from the source are equally probable, P(x_i) = 1/n for all i, we have
$$H(X) = -\sum_{i=1}^{n}\frac{1}{n}\log\frac{1}{n} = \log n.$$
In general, H(X) <= log n for any given set of source letter probabilities. In other words, the entropy of a discrete source is a maximum when the output letters are equally probable.

17 Average Mutual Information and Entropy Example: Consider a source that emits a sequence of statistically independent letters, where each output letter is either 0 with probability q or 1 with probability 1-q. The entropy of this source is
$$H(X) \equiv H(q) = -q\log q - (1-q)\log(1-q).$$
The maximum value of the entropy function occurs at q = 0.5, where H(0.5) = 1 bit.
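
A minimal Python sketch (not part of the slides) of the binary entropy function, tabulated over a few values of q to show that the maximum of 1 bit occurs at q = 0.5:

```python
import math

def binary_entropy(q):
    """H(q) = -q log2 q - (1-q) log2 (1-q), with H(0) = H(1) = 0."""
    if q in (0.0, 1.0):
        return 0.0
    return -q * math.log2(q) - (1 - q) * math.log2(1 - q)

for q in (0.1, 0.25, 0.5, 0.75, 0.9):
    print(q, round(binary_entropy(q), 4))   # maximum H(0.5) = 1 bit
```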

18 Average Mutual Information and Entropy The average conditional self-information is called the conditional entropy and is defined as
$$H(X \mid Y) = \sum_{i=1}^{n}\sum_{j=1}^{m} P(x_i, y_j)\log\frac{1}{P(x_i \mid y_j)}.$$
H(X|Y) is the information (or uncertainty) in X after Y is observed. Expanding the average mutual information,
$$I(X;Y) = \sum_{i=1}^{n}\sum_{j=1}^{m} P(x_i, y_j)\log\frac{P(x_i, y_j)}{P(x_i)P(y_j)} = \sum_{i=1}^{n}\sum_{j=1}^{m} P(x_i, y_j)\left[\log P(x_i \mid y_j) - \log P(x_i)\right]$$
$$= -\sum_{i=1}^{n} P(x_i)\log P(x_i) + \sum_{i=1}^{n}\sum_{j=1}^{m} P(x_i, y_j)\log P(x_i \mid y_j) = H(X) - H(X \mid Y).$$
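
As an illustration (not from the slides), the sketch below computes I(X;Y) directly from a joint PMF and recovers H(X|Y) from the identity above; the joint PMF used is an assumed example (a BSC with crossover 0.1 and equally likely inputs).

```python
import math

def entropy(p):
    """H(X) = -sum P(x) log2 P(x) for a probability vector p."""
    return -sum(x * math.log2(x) for x in p if x > 0)

def mutual_information(joint):
    """I(X;Y) = sum_ij P(x_i,y_j) log2 [P(x_i,y_j) / (P(x_i)P(y_j))]; joint is joint[i][j]."""
    px = [sum(row) for row in joint]
    py = [sum(col) for col in zip(*joint)]
    return sum(pij * math.log2(pij / (px[i] * py[j]))
               for i, row in enumerate(joint)
               for j, pij in enumerate(row) if pij > 0)

joint = [[0.45, 0.05],      # assumed joint PMF: BSC, crossover 0.1, P(X=0)=P(X=1)=0.5
         [0.05, 0.45]]
hx = entropy([sum(row) for row in joint])
ixy = mutual_information(joint)
print(round(ixy, 4), round(hx - ixy, 4))   # I(X;Y) and H(X|Y) = H(X) - I(X;Y)
```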

19 Average Mutual Information and Entropy Since I(X;Y) >= 0, it follows that H(X) >= H(X|Y), with equality if and only if X and Y are statistically independent. H(X|Y) can be interpreted as the average amount of uncertainty (conditional self-information) in X after we observe Y. H(X) can be interpreted as the average amount of uncertainty (self-information) prior to the observation. I(X;Y) is the average amount of information (mutual information) provided about the set X by the observation of the set Y. Since H(X) >= H(X|Y), it is clear that conditioning on the observation Y does not increase the entropy.

20 Average Mutual Information and Entropy Example: Consider the binary channel of the previous example with p_0 = p_1 = p. Let P(X=0) = q and P(X=1) = 1-q. The entropy is
$$H(X) \equiv H(q) = -q\log q - (1-q)\log(1-q).$$

21 Average Mutual Information and Entropy As in the preceding example, when the conditional entropy H(X|Y) is viewed in terms of a channel whose input is X and whose output is Y, H(X|Y) is called the equivocation and is interpreted as the amount of average uncertainty remaining in X after observation of Y. (Equivocation literally means speaking in ambiguous or evasive terms.)

22 Average Mutual Information and Entropy Entropy for two or more random variables:
$$H(X_1 X_2 \cdots X_k) = -\sum_{j_1=1}^{n_1}\sum_{j_2=1}^{n_2}\cdots\sum_{j_k=1}^{n_k} P(x_{j_1} x_{j_2} \cdots x_{j_k})\log P(x_{j_1} x_{j_2} \cdots x_{j_k}).$$
Since
$$P(x_1 x_2 \cdots x_k) = P(x_1)P(x_2 \mid x_1)P(x_3 \mid x_1 x_2)\cdots P(x_k \mid x_1 x_2 \cdots x_{k-1}),$$
it follows that
$$H(X_1 X_2 \cdots X_k) = H(X_1) + H(X_2 \mid X_1) + H(X_3 \mid X_1 X_2) + \cdots + H(X_k \mid X_1 X_2 \cdots X_{k-1}) = \sum_{m=1}^{k} H(X_m \mid X_1 X_2 \cdots X_{m-1}).$$
Since H(X) >= H(X|Y), with X = X_m and Y = X_1 X_2 ... X_{m-1}, we have
$$H(X_1 X_2 \cdots X_k) \le \sum_{m=1}^{k} H(X_m).$$

23 Information Measures for Continuous Random Variables If X and Y are random variables with joint PDF p(x, y) and marginal PDFs p(x) and p(y), the average mutual information between X and Y is defined as
$$I(X;Y) = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} p(x)\,p(y \mid x)\log\frac{p(y \mid x)\,p(x)}{p(x)\,p(y)}\,dx\,dy.$$
Although the definition of the average mutual information carries over to continuous random variables, the concept of self-information does not. The problem is that a continuous random variable requires an infinite number of binary digits to represent it exactly. Hence, its self-information is infinite and, therefore, its entropy is also infinite.

24 Information Measures for Continuous Random Variables The differential entropy of the continuous random variable X is defined as
$$H(X) = -\int_{-\infty}^{\infty} p(x)\log p(x)\,dx.$$
Note that this quantity does not have the physical meaning of self-information, although it may appear to be a natural extension of the definition of entropy for a discrete random variable. The average conditional entropy of X given Y is defined as
$$H(X \mid Y) = -\int_{-\infty}^{\infty}\int_{-\infty}^{\infty} p(x, y)\log p(x \mid y)\,dx\,dy.$$
The average mutual information may be expressed as
$$I(X;Y) = H(X) - H(X \mid Y) = H(Y) - H(Y \mid X).$$

25 Information Measures for Continuous Random Variables Suppose X is discrete and has possible outcomes x_i, i = 1, 2, ..., n, and Y is continuous and is described by its marginal PDF p(y). When X and Y are statistically dependent, we may express p(y) as
$$p(y) = \sum_{i=1}^{n} p(y \mid x_i)P(x_i).$$
The mutual information provided about the event X = x_i by the occurrence of the event Y = y is
$$I(x_i; y) = \log\frac{p(y \mid x_i)P(x_i)}{p(y)P(x_i)} = \log\frac{p(y \mid x_i)}{p(y)}.$$
The average mutual information between X and Y is
$$I(X;Y) = \sum_{i=1}^{n}\int_{-\infty}^{\infty} p(y \mid x_i)P(x_i)\log\frac{p(y \mid x_i)}{p(y)}\,dy.$$

26 Information Measures for Continuous Random Variables Example: Let X be a discrete random variable with two equally probable outcomes x_1 = A and x_2 = -A. Let the conditional PDFs p(y | x_i), i = 1, 2, be Gaussian with mean x_i and variance σ²:
$$p(y \mid A) = \frac{1}{\sqrt{2\pi}\,\sigma}e^{-(y-A)^2/2\sigma^2}, \qquad p(y \mid -A) = \frac{1}{\sqrt{2\pi}\,\sigma}e^{-(y+A)^2/2\sigma^2}.$$
The average mutual information is
$$I(X;Y) = \frac{1}{2}\int_{-\infty}^{\infty}\left[p(y \mid A)\log\frac{p(y \mid A)}{p(y)} + p(y \mid -A)\log\frac{p(y \mid -A)}{p(y)}\right]dy,$$
where
$$p(y) = \frac{1}{2}\left[p(y \mid A) + p(y \mid -A)\right].$$

27 Coding for Discrete Memoryless Sources Consider the process of encoding the output of a source, i.e., the process of representing the source output by a sequence of binary digits. A measure of the efficiency of a source-encoding method can be obtained by comparing the average number of binary digits per output letter from the source to the entropy H(X). The discrete memoryless source (DMS) is by far the simplest model that can be devised for a physical source; few physical sources closely fit this idealized mathematical model. It is always more efficient to encode blocks of symbols instead of encoding each symbol separately. By making the block size sufficiently large, the average number of binary digits per output letter from the source can be made arbitrarily close to the entropy of the source.

28 Coding for Discrete Memoryless Sources Suppose that a DMS produces an output letter or symbol every τ_s seconds. Each symbol is selected from a finite alphabet of symbols x_i, i = 1, 2, ..., L, occurring with probabilities P(x_i), i = 1, 2, ..., L. The entropy of the DMS in bits per source symbol is
$$H(X) = -\sum_{i=1}^{L} P(x_i)\log_2 P(x_i) \le \log_2 L.$$
The equality holds when the symbols are equally probable. The average number of bits per source symbol is H(X), and the source rate in bits/s is defined as H(X)/τ_s.

29 Coding for Discrete Memoryless Sources Fixed-length code words: Consider a block encoding scheme that assigns a unique set of R binary digits to each symbol. Since there are L possible symbols, the number of binary digits per symbol required for unique encoding is R = log_2 L when L is a power of 2, and R = ⌊log_2 L⌋ + 1 when L is not a power of 2, where ⌊log_2 L⌋ denotes the largest integer less than log_2 L. R is the code rate in bits per symbol. Since H(X) <= log_2 L, it follows that R >= H(X).

30 Coding for Discrete Memoryless Sources Fixed-length code words: The efficiency of the encoding for the DMS is defined as the ratio H(X)/R. When L is a power of 2 and the source letters are equally probable, R = H(X). If L is not a power of 2, but the source symbols are equally probable, R differs from H(X) by at most 1 bit per symbol. When log_2 L >> 1, the efficiency of this encoding scheme is high. When L is small, the efficiency of the fixed-length code can be increased by encoding a sequence of J symbols at a time. To achieve this, we need L^J unique code words.

31 Coding for Discrete Memoryless Sources Fixed-length code words: By using sequences of N binary digits, we have 2^N possible code words, so unique encoding requires 2^N >= L^J, i.e., N >= J log_2 L. The minimum integer value of N required is N = ⌊J log_2 L⌋ + 1. The average number of bits per source symbol is N/J = R, and the inefficiency has been reduced by approximately a factor of 1/J relative to the symbol-by-symbol encoding. By making J sufficiently large, the efficiency of the encoding procedure, measured by the ratio H(X)/R = JH(X)/N, can be made as close to unity as desired. The above-mentioned methods introduce no distortion, since the encoding of source symbols or blocks of symbols into code words is unique. This is called noiseless coding.
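
A small numerical illustration (an assumed example, not from the slides): for an alphabet of L = 5 equally probable letters, the sketch below shows how the fixed-length rate per symbol, and hence the efficiency H(X)/R, improves as the block length J grows.

```python
import math

def fixed_length_rate(L, J=1):
    """Bits per source symbol when blocks of J symbols from an L-letter alphabet
    receive fixed-length binary code words of N = ceil(J log2 L) bits per block."""
    N = math.ceil(J * math.log2(L))
    return N / J

L = 5                                  # assumed alphabet size, not a power of 2
H = math.log2(L)                       # entropy when the 5 letters are equally probable
for J in (1, 2, 4, 8):
    R = fixed_length_rate(L, J)
    print(J, R, round(H / R, 4))       # efficiency H(X)/R approaches 1 as J grows
```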

32 Coding for Discrete Memoryless Sources Block coding failure (or distortion) occurs, with probability P_e, when the encoding process is not unique. Source coding theorem I (Shannon): Let X be the ensemble of letters from a DMS with finite entropy H(X). Blocks of J symbols from the source are encoded into code words of length N from a binary alphabet. For any ε > 0, the probability P_e of a block decoding failure can be made arbitrarily small if J is sufficiently large and
$$R = \frac{N}{J} \ge H(X) + \varepsilon.$$
Conversely, if R <= H(X) - ε, P_e becomes arbitrarily close to 1 as J is made sufficiently large.

33 Coding for Discrete Memoryless Sources Variable-length code words: When the source symbols are not equally probable, a more efficient encoding method is to use variable-length code words. In the Morse code, the letters that occur more frequently are assigned short code words and those that occur infrequently are assigned long code words. Entropy coding devises a method for selecting and assigning the code words to source letters.

34 Coding for Discrete Memoryless Sources Variable-length code words:

Letter  P(a_k)  Code I  Code II  Code III
a_1     1/2     1       0        0
a_2     1/4     00      10       01
a_3     1/8     01      110      011
a_4     1/8     10      111      111

Code I is not uniquely decodable (e.g., the sequence 1001 can be parsed as 1, 00, 1 or as 10, 01). Code II is uniquely decodable and instantaneously decodable: the digit 0 indicates the end of a code word, and no code word is longer than three binary digits. Prefix condition: there is no code word of length l < k that is identical to the first l binary digits of another code word of length k > l.
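
The prefix condition is easy to check mechanically. The sketch below (illustrative, not from the slides) tests whether any code word is a prefix of another, using Code II and Code III from the table above.

```python
def satisfies_prefix_condition(codewords):
    """True if no code word is a prefix of another (so decoding is instantaneous)."""
    for a in codewords:
        for b in codewords:
            if a != b and b.startswith(a):
                return False
    return True

code_II  = ["0", "10", "110", "111"]   # Code II: prefix-free
code_III = ["0", "01", "011", "111"]   # Code III: uniquely decodable but not prefix-free
print(satisfies_prefix_condition(code_II), satisfies_prefix_condition(code_III))
```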

35 Coding for Discrete Memoryless Sources Variable-length code words: Code III has a tree structure. The code is uniquely decodable, but it is not instantaneously decodable; this code does not satisfy the prefix condition. Objective: devise a systematic procedure for constructing uniquely decodable variable-length codes that minimizes the average code-word length
$$R = \sum_{k=1}^{L} n_k P(a_k).$$

36 Coding for Discrete Memoryless Sources Kraft inequality: A necessary and sufficient condition for the existence of a binary code with code words having lengths n_1 <= n_2 <= ... <= n_L that satisfy the prefix condition is
$$\sum_{k=1}^{L} 2^{-n_k} \le 1.$$
Proof of the sufficient condition: Consider a code tree that is embedded in the full binary tree of order n = n_L, which has 2^n terminal nodes.
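
A one-line numerical check of the inequality (illustrative, not part of the slides), using the code-word lengths of Code II from the earlier table and a set of lengths for which no prefix code can exist:

```python
def kraft_sum(lengths):
    """Sum of 2^{-n_k} over the code-word lengths; a prefix code exists iff this is <= 1."""
    return sum(2.0 ** -n for n in lengths)

print(kraft_sum([1, 2, 3, 3]))   # Code II lengths: 1.0, so a prefix code exists
print(kraft_sum([1, 1, 2]))      # 1.25 > 1: no prefix code with these lengths
```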

37 Coding for Discrete Memoryless Sources Kraft inequality, proof of the sufficient condition (cont.): Select any node of order n_1 as the first code word C_1. This choice eliminates 2^{n-n_1} terminal nodes (i.e., the fraction 2^{-n_1} of the 2^n terminal nodes). From the remaining available nodes of order n_2, we select one node for the second code word C_2. This choice eliminates 2^{n-n_2} terminal nodes. This process continues until the last code word is assigned at terminal node L. At node j < L, the fraction of the terminal nodes eliminated is
$$\sum_{k=1}^{j} 2^{-n_k} < \sum_{k=1}^{L} 2^{-n_k} \le 1,$$
so at node j < L there is always a node of order k > j available to be assigned to the next code word. Thus, we have constructed a code tree that is embedded in the full tree. Q.E.D.

38 Coding for Discrete Memoryless Sources Kraft inequality, proof of the necessary condition: In a code tree of order n = n_L, the number of terminal nodes eliminated from the total of 2^n terminal nodes satisfies
$$\sum_{k=1}^{L} 2^{n-n_k} \le 2^{n} \;\Longrightarrow\; \sum_{k=1}^{L} 2^{-n_k} \le 1.$$
Source coding theorem II: Let X be the ensemble of letters from a DMS with finite entropy H(X) and output letters x_k, 1 <= k <= L, with corresponding probabilities of occurrence p_k, 1 <= k <= L. It is possible to construct a code that satisfies the prefix condition and has an average length R that satisfies the inequalities
$$H(X) \le R < H(X) + 1.$$

39 Coding for Discrete Memoryless Sources Source coding theorem II (cont.), proof of the lower bound:
$$H(X) - R = \sum_{k=1}^{L} p_k\log_2\frac{1}{p_k} - \sum_{k=1}^{L} p_k n_k = \sum_{k=1}^{L} p_k\log_2\frac{2^{-n_k}}{p_k} = (\log_2 e)\sum_{k=1}^{L} p_k\ln\frac{2^{-n_k}}{p_k}.$$
Since ln x <= x - 1, we have
$$H(X) - R \le (\log_2 e)\sum_{k=1}^{L} p_k\left(\frac{2^{-n_k}}{p_k} - 1\right) = (\log_2 e)\left(\sum_{k=1}^{L} 2^{-n_k} - 1\right) \le 0,$$
where the last step follows from the Kraft inequality. Equality holds if and only if p_k = 2^{-n_k} for 1 <= k <= L. Note that log_a b = log b / log a.

40 Coding for Discrete Memoryless Sources Source coding theorem II (cont.), proof of the upper bound: The upper bound may be established under the constraint that the n_k, 1 <= k <= L, are integers, by selecting the {n_k} such that
$$2^{-n_k} \le p_k < 2^{-n_k + 1}.$$
If the terms 2^{-n_k} <= p_k are summed over 1 <= k <= L, we obtain the Kraft inequality, for which we have demonstrated that there exists a code that satisfies the prefix condition. On the other hand, if we take the logarithm of p_k < 2^{-n_k + 1}, we obtain log_2 p_k < -n_k + 1, or n_k < 1 - log_2 p_k. If we multiply both sides by p_k and sum over 1 <= k <= L, we obtain the desired upper bound.

41 Coding for Discrete Memoryless Sources Huffman coding algorithm:
1. The source symbols are listed in order of decreasing probability. The two source symbols of lowest probability are assigned a 0 and a 1.
2. These two source symbols are regarded as being combined into a new source symbol with probability equal to the sum of the two original probabilities. The probability of the new symbol is placed in the list in accordance with its value.
3. The procedure is repeated until we are left with a final list of source statistics of only two, for which a 0 and a 1 are assigned.
4. The code word for each (original) source symbol is found by working backward and tracing the sequence of 0s and 1s assigned to that symbol as well as its successors.
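
A minimal Python sketch of these steps (not the original slide material; function and symbol names are illustrative), run on the five-symbol source of the example on the next slide:

```python
import heapq
from itertools import count

def huffman_code(probabilities):
    """Build a binary Huffman code for {symbol: probability}; returns {symbol: codeword}.
    Repeatedly merge the two least probable entries, prefixing their code words with 0 and 1."""
    tiebreak = count()                       # keeps heap entries comparable when probabilities tie
    heap = [(p, next(tiebreak), {s: ""}) for s, p in probabilities.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        p1, _, c1 = heapq.heappop(heap)      # lowest probability  -> prefix bit 0
        p2, _, c2 = heapq.heappop(heap)      # second lowest       -> prefix bit 1
        merged = {s: "0" + w for s, w in c1.items()}
        merged.update({s: "1" + w for s, w in c2.items()})
        heapq.heappush(heap, (p1 + p2, next(tiebreak), merged))
    return heap[0][2]

probs = {"S0": 0.4, "S1": 0.2, "S2": 0.2, "S3": 0.1, "S4": 0.1}
code = huffman_code(probs)
avg_len = sum(probs[s] * len(w) for s, w in code.items())
print(code, round(avg_len, 2))               # average length 2.2 bits/symbol
```

Because Huffman encoding is not unique, the bit patterns produced here may differ from those tabulated on the next slide, but the code-word lengths and the average length of 2.2 bits/symbol are the same.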

42 Coding for Discrete Memoryless Sources Example: a five-symbol source with the following probabilities and resulting Huffman code words.

Symbol  Probability  Code Word
S0      0.4          00
S1      0.2          10
S2      0.2          11
S3      0.1          010
S4      0.1          011

Symbol probabilities at each stage of the algorithm (the two lowest are combined at every step):

Stage 1: 0.4  0.2  0.2  0.1  0.1
Stage 2: 0.4  0.2  0.2  0.2
Stage 3: 0.4  0.4  0.2
Stage 4: 0.6  0.4

43 Coding for Discrete Memoryless Sources Huffman coding algorithm: worked example (figure on slide).

44 Coding for Discrete Memoryless Sources Huffman coding algorithm (figure on slide).

45 Coding for Discrete Memoryless Sources Huffman coding algorithm: worked example (figure on slide).

46 Coding for Discrete Memoryless Sources Huffman coding algorithm: This algorithm is optimum in the sense that the average number of binary digits required to represent the source symbols is a minimum, subject to the constraint that the code words satisfy the prefix condition, which allows the received sequence to be uniquely and instantaneously decodable. The Huffman encoding process is not unique. Code words for different Huffman encoding processes can have different lengths; however, the average code-word length is the same. When a combined symbol is moved as high as possible, the resulting Huffman code has a significantly smaller variance than when it is moved as low as possible.

47 Coding for Discrete Memoryless Sources Huffman coding algorithm: The variable-length (Huffman) encoding algorithm described in the above examples generates a prefix code having an average length R that satisfies
$$H(X) \le R < H(X) + 1.$$
A more efficient procedure is to encode blocks of J symbols at a time. In such a case, the bounds of source coding theorem II become
$$JH(X) \le R_J < JH(X) + 1 \;\Longrightarrow\; H(X) \le \frac{R_J}{J} < H(X) + \frac{1}{J},$$
where R_J is the average number of bits per J-symbol block. R = R_J/J can be made as close to H(X) as desired by selecting J sufficiently large. To design a Huffman code for a DMS, we need to know the probabilities of occurrence of all the source letters.

48 Coding for Discrete Memoryless Sources Huffman coding algorithm: worked example (figure on slide).

49 Coding for Discrete Memoryless Sources Huffman coding algorithm: worked example (figure on slide).

50 Discrete Stationary Sources We consider discrete sources for which the sequence of output letters is statistically dependent and statistically stationary. By the chain rule of slide 22, the entropy of a block of random variables X_1 X_2 ... X_k is
$$H(X_1 X_2 \cdots X_k) = \sum_{i=1}^{k} H(X_i \mid X_1 X_2 \cdots X_{i-1}),$$
where H(X_i | X_1 X_2 ... X_{i-1}) is the conditional entropy of the i-th symbol from the source given the previous i-1 symbols. The entropy per letter for the k-symbol block is defined as
$$H_k(X) = \frac{1}{k}H(X_1 X_2 \cdots X_k).$$
The information content of a stationary source is defined as the entropy per letter in the limit as k approaches infinity:
$$H_\infty(X) \equiv \lim_{k\to\infty} H_k(X) = \lim_{k\to\infty}\frac{1}{k}H(X_1 X_2 \cdots X_k).$$

51 Discrete Stationary Sources The entropy per letter from the source can also be defined in terms of the conditional entropy H(X_k | X_1 X_2 ... X_{k-1}) in the limit as k approaches infinity:
$$H_\infty(X) = \lim_{k\to\infty} H(X_k \mid X_1 X_2 \cdots X_{k-1}).$$
(The existence of this limit is established in the textbook.) For a discrete stationary source encoded J letters at a time, with H_J(X) as the entropy per letter, the average code length R_J satisfies
$$H(X_1 \cdots X_J) \le R_J < H(X_1 \cdots X_J) + 1 \;\Longrightarrow\; H_J(X) \le \frac{R_J}{J} < H_J(X) + \frac{1}{J}.$$
In the limit as J approaches infinity, we have
$$H_\infty(X) \le R < H_\infty(X) + \varepsilon.$$

52 The Lempel-Ziv Algorithm For Huffman coding, apart from the estimation of the marginal probabilities {p_k} corresponding to the frequencies of occurrence of the individual source output letters, the computational complexity involved in estimating the joint probabilities (needed for a source with memory) is extremely high. The application of the Huffman coding method to source coding for many real sources with memory is therefore generally impractical. The Lempel-Ziv source coding algorithm is designed to be independent of the source statistics. It belongs to the class of universal source coding algorithms. It is a variable-to-fixed-length algorithm.

53 The Lempel-Ziv Algorithm Operation of the Lempel-Ziv algorithm:
1. The sequence at the output of the discrete source is parsed into variable-length blocks, which are called phrases.
2. A new phrase is introduced every time a block of letters from the source differs from some previous phrase in the last letter.
3. The phrases are listed in a dictionary, which stores the location of the existing phrases.
4. In encoding a new phrase, we simply specify the location of the existing phrase in the dictionary and append the new letter.
For example, the sequence 10101101001001110101000011001110101100011011 is parsed into the phrases 1, 0, 10, 11, 01, 00, 100, 111, 010, 1000, 011, 001, 110, 101, 10001, 1011.

54 The Lempel-Ziv Algorithm Operation of the Lempel-Ziv algorithm (cont.): To encode the phrases, we construct a dictionary listing each phrase, its dictionary location, and its code word (table shown on the slide).

55 The Lempel-Ziv Algorithm Operation of the Lempel-Ziv algorithm (cont.):
5. The code words are determined by listing the dictionary location (in binary form) of the previous phrase that matches the new phrase in all but the last letter.
6. The new output letter is appended to the dictionary location of the previous phrase.
7. The location 0000 is used to encode a phrase that has not appeared previously.
8. The source decoder constructs an identical copy of the dictionary at the receiving end of the communication system and decodes the received sequence in step with the transmitted data sequence.
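
The sketch below (illustrative, not from the slides) carries out this parsing on the example sequence of slide 53 and prints, for each new phrase, the dictionary location of its prefix and the appended letter; encoding those pairs in binary gives the code words described above.

```python
def lz_parse(bits):
    """Parse a binary string into Lempel-Ziv phrases: each new phrase is the shortest
    block that has not appeared before (a previously seen phrase plus one new letter).
    A trailing incomplete phrase, if any, is ignored in this sketch."""
    dictionary = {"": 0}                 # phrase -> dictionary location (0 = empty prefix)
    pairs, current = [], ""
    for b in bits:
        current += b
        if current not in dictionary:
            dictionary[current] = len(dictionary)
            # code word = location of the matching previous phrase + the new last letter
            pairs.append((dictionary[current[:-1]], current[-1]))
            current = ""
    return pairs

sequence = "10101101001001110101000011001110101100011011"
for location, new_bit in lz_parse(sequence):
    print(location, new_bit)
```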

56 The Lempel-Ziv Algorithm Operation of the Lempel-Ziv algorithm (cont.): As the sequence length increases, the encoding procedure becomes more efficient and results in a compressed sequence at the output of the source encoder. No matter how large the dictionary table is, it will eventually overflow. To solve the overflow problem, the source encoder and decoder must use an identical procedure to remove phrases from their dictionaries that are not useful and substitute new phrases in their place. The Lempel-Ziv algorithm is widely used in the compression of computer files, e.g., the compress and uncompress utilities under the UNIX OS.

57 Channel Models Binary symmetric channel (BSC): If the channel noise and other disturbances cause statistically independent errors in the transmitted binary sequence with average probability p, then
$$P(Y=0 \mid X=1) = P(Y=1 \mid X=0) = p,$$
$$P(Y=1 \mid X=1) = P(Y=0 \mid X=0) = 1 - p.$$

58 Channel Models Discrete memoryless channels (DMC): The BSC is a special case of a more general discrete-input, discrete-output channel. The output symbols from the channel encoder are q-ary symbols, i.e., X = {x_0, x_1, ..., x_{q-1}}. The output of the detector consists of Q-ary symbols, where Q >= q. If the channel and modulation are memoryless, we have a set of qQ conditional probabilities
$$P(Y = y_i \mid X = x_j) \equiv P(y_i \mid x_j),$$
where i = 0, 1, ..., Q-1 and j = 0, 1, ..., q-1. Such a channel is called a discrete memoryless channel (DMC).

59 Channel Models Discrete memoryless channels (DMC): For an input sequence u_1, u_2, ..., u_n and an output sequence v_1, v_2, ..., v_n, the conditional probability is given by
$$P(Y_1 = v_1, Y_2 = v_2, \ldots, Y_n = v_n \mid X_1 = u_1, \ldots, X_n = u_n) = \prod_{k=1}^{n} P(Y_k = v_k \mid X_k = u_k).$$
In general, the conditional probabilities P(y | x) can be arranged in the matrix form P = [p_ij], called the probability transition matrix. (Figure: discrete q-ary input, Q-ary output channel.)

60 Channel Models Discrete-input, continuous-output channel: The input is drawn from a discrete alphabet X = {x_0, x_1, ..., x_{q-1}}, and the output of the detector is unquantized (Q = ∞). The most important channel of this type is the additive white Gaussian noise (AWGN) channel, for which
$$Y = X + G,$$
where G is a zero-mean Gaussian random variable with variance σ² and X = x_k, k = 0, 1, ..., q-1. For a given X = x_k,
$$p(y \mid X = x_k) = \frac{1}{\sqrt{2\pi}\,\sigma}e^{-(y-x_k)^2/2\sigma^2}.$$
For a memoryless channel,
$$p(y_1, y_2, \ldots, y_n \mid X_1 = u_1, X_2 = u_2, \ldots, X_n = u_n) = \prod_{i=1}^{n} p(y_i \mid X_i = u_i).$$

61 Channel Models Waveform channels: Assume that a channel has a given bandwidth W, with ideal frequency response C(f) = 1 within the bandwidth W, and that the signal at its output is corrupted by AWGN: y(t) = x(t) + n(t). Expand y(t), x(t), and n(t) into a complete set of orthonormal functions:
$$y(t) = \sum_i y_i f_i(t), \quad x(t) = \sum_i x_i f_i(t), \quad n(t) = \sum_i n_i f_i(t),$$
where
$$y_i = \int_0^T y(t) f_i^*(t)\,dt = \int_0^T \left[x(t) + n(t)\right] f_i^*(t)\,dt = x_i + n_i$$
and
$$\int_0^T f_i(t) f_j^*(t)\,dt = \delta_{ij} = \begin{cases} 1 & (i = j) \\ 0 & (i \ne j). \end{cases}$$

62 Channel Models Waveform channels: Since y_i = x_i + n_i, it follows that
$$p(y_i \mid x_i) = \frac{1}{\sqrt{2\pi}\,\sigma}e^{-(y_i - x_i)^2/2\sigma^2}, \quad i = 1, 2, \ldots$$
Since the functions {f_i(t)} are orthonormal, the {n_i} are uncorrelated; since they are Gaussian, they are also statistically independent:
$$p(y_1, y_2, \ldots, y_N \mid x_1, x_2, \ldots, x_N) = \prod_{i=1}^{N} p(y_i \mid x_i).$$
Samples of x(t) and y(t) may be taken at the Nyquist rate of 2W samples per second. Thus, in a time interval of length T, there are N = 2WT samples.

63 Channel Capacity Consider a DMC having an input alphabet X = {x_0, x_1, ..., x_{q-1}}, an output alphabet Y = {y_0, y_1, ..., y_{Q-1}}, and the set of transition probabilities P(y_i | x_j). The mutual information provided about the event X = x_j by the occurrence of the event Y = y_i is log[P(y_i | x_j)/P(y_i)], where
$$P(y_i) \equiv P(Y = y_i) = \sum_{k=0}^{q-1} P(x_k)P(y_i \mid x_k).$$
Hence, the average mutual information provided by the output Y about the input X is
$$I(X;Y) = \sum_{j=0}^{q-1}\sum_{i=0}^{Q-1} P(x_j)P(y_i \mid x_j)\log\frac{P(y_i \mid x_j)}{P(y_i)}.$$

64 Channel Capacity The value of I(X;Y) maximized over the set of input symbol probabilities P(x_j) is a quantity that depends only on the characteristics of the DMC through the conditional probabilities P(y_i | x_j). This quantity is called the capacity of the channel and is denoted by C:
$$C = \max_{P(x_j)} I(X;Y) = \max_{P(x_j)} \sum_{j=0}^{q-1}\sum_{i=0}^{Q-1} P(x_j)P(y_i \mid x_j)\log\frac{P(y_i \mid x_j)}{P(y_i)}.$$
The maximization of I(X;Y) is performed under the constraints
$$P(x_j) \ge 0 \quad \text{and} \quad \sum_{j=0}^{q-1} P(x_j) = 1.$$

65 Channel Capacity Example: BSC with transition probabilities P(0|1) = P(1|0) = p. The average mutual information is maximized when the input probabilities are P(0) = P(1) = 1/2. The capacity of the BSC is
$$C = 1 + p\log_2 p + (1-p)\log_2(1-p) = 1 - H(p),$$
where H(p) is the binary entropy function.
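
A short Python sketch of this formula (illustrative, not from the slides), evaluated at a few crossover probabilities:

```python
import math

def binary_entropy(p):
    return 0.0 if p in (0.0, 1.0) else -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def bsc_capacity(p):
    """C = 1 - H(p) bits per channel use for a binary symmetric channel with crossover p."""
    return 1.0 - binary_entropy(p)

for p in (0.0, 0.01, 0.1, 0.5):
    print(p, round(bsc_capacity(p), 4))   # C = 1 for a noiseless channel, 0 for p = 0.5
```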

66 Channel Capacity Consider the discrete-time AWGN memoryless channel described by
$$p(y \mid X = x_k) = \frac{1}{\sqrt{2\pi}\,\sigma}e^{-(y-x_k)^2/2\sigma^2}.$$
The capacity of this channel in bits per channel use is the maximum average mutual information between the discrete input X = {x_0, x_1, ..., x_{q-1}} and the continuous output Y ∈ (-∞, ∞):
$$C = \max_{P(x_k)}\sum_{k=0}^{q-1}\int_{-\infty}^{\infty} p(y \mid x_k)P(x_k)\log_2\frac{p(y \mid x_k)}{p(y)}\,dy,$$
where
$$p(y) = \sum_{k=0}^{q-1} p(y \mid x_k)P(x_k).$$

67 Channel Capacity Example: Consider a binary-input AWGN memoryless channel with possible inputs X = A and X = -A. The average mutual information I(X;Y) is maximized when the input probabilities are P(X = A) = P(X = -A) = 1/2, giving
$$C = \frac{1}{2}\int_{-\infty}^{\infty} p(y \mid A)\log_2\frac{p(y \mid A)}{p(y)}\,dy + \frac{1}{2}\int_{-\infty}^{\infty} p(y \mid -A)\log_2\frac{p(y \mid -A)}{p(y)}\,dy.$$
(The slide plots C as a function of the SNR.)
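
This capacity has no closed form, but the integral can be evaluated numerically. The sketch below (an illustration under assumed parameters: unit noise variance, trapezoidal integration over a truncated range) computes C for a few SNR values.

```python
import math

def binary_awgn_capacity(A, sigma, n=20001, span=10.0):
    """C in bits/channel use for inputs +/-A over AWGN with noise std sigma and
    equally likely inputs, via trapezoidal integration of the mutual information."""
    lo, hi = -A - span * sigma, A + span * sigma
    ys = [lo + (hi - lo) * k / (n - 1) for k in range(n)]

    def gauss(y, mean):
        return math.exp(-(y - mean) ** 2 / (2 * sigma ** 2)) / (math.sqrt(2 * math.pi) * sigma)

    def integrand(y):
        p_plus, p_minus = gauss(y, A), gauss(y, -A)
        p_y = 0.5 * (p_plus + p_minus)
        total = 0.0
        for p_cond in (p_plus, p_minus):
            if p_cond > 0.0:
                total += 0.5 * p_cond * math.log2(p_cond / p_y)
        return total

    vals = [integrand(y) for y in ys]
    h = (hi - lo) / (n - 1)
    return h * (sum(vals) - 0.5 * (vals[0] + vals[-1]))

for snr_db in (-10, 0, 10):
    A = 10 ** (snr_db / 20)               # sigma = 1, so SNR = A^2 / sigma^2
    print(snr_db, round(binary_awgn_capacity(A, 1.0), 4))   # C grows toward 1 bit with SNR
```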

68 Channel Capacity It is not always the case that the channel capacity is obtained by assuming that the input symbols are equally probable. Nothing can be said in general about the input probability assignment that maximizes the average mutual information. It can be shown that the necessary and sufficient conditions for the set of input probabilities {P(x_j)} to maximize I(X;Y) and to achieve capacity on a DMC are
$$I(x_j; Y) = C \quad \text{for all } j \text{ with } P(x_j) > 0,$$
$$I(x_j; Y) \le C \quad \text{for all } j \text{ with } P(x_j) = 0,$$
where C is the capacity of the channel and
$$I(x_j; Y) = \sum_{i=0}^{Q-1} P(y_i \mid x_j)\log\frac{P(y_i \mid x_j)}{P(y_i)}.$$

69 Channel Capacity Consider a band-limited waveform channel with AWGN. The capacity of the channel per unit time has been defined by Shannon (1948) as
$$C = \lim_{T\to\infty}\max_{p(x)}\frac{1}{T}I(X;Y).$$
Alternatively, we may use the samples or the coefficients {y_i}, {x_i}, and {n_i} in the series expansions of y(t), x(t), and n(t) to determine the average mutual information between x_N = [x_1 x_2 ... x_N] and y_N = [y_1 y_2 ... y_N], where N = 2WT and y_i = x_i + n_i:
$$I(X_N; Y_N) = \int\!\!\int p(y_N \mid x_N)\,p(x_N)\log\frac{p(y_N \mid x_N)}{p(y_N)}\,dx_N\,dy_N = \sum_{i=1}^{N}\int\!\!\int p(y_i \mid x_i)\,p(x_i)\log\frac{p(y_i \mid x_i)}{p(y_i)}\,dy_i\,dx_i. \quad (*)$$

70 Channel Capacity where
$$p(y_i \mid x_i) = \frac{1}{\sqrt{\pi N_0}}e^{-(y_i - x_i)^2/N_0}.$$
The maximum of I(X_N;Y_N) over the input PDFs p(x_i) is obtained when the {x_i} are statistically independent zero-mean Gaussian random variables, i.e.,
$$p(x_i) = \frac{1}{\sqrt{2\pi}\,\sigma_x}e^{-x_i^2/2\sigma_x^2}.$$
From (*) on slide 69,
$$\max_{p(x)} I(X_N; Y_N) = \frac{N}{2}\log\left(1 + \frac{2\sigma_x^2}{N_0}\right) = WT\log\left(1 + \frac{2\sigma_x^2}{N_0}\right).$$

71 Channel Capacity If we put a constraint on the average power in x(t), i.e.,
$$P_{av} = \frac{1}{T}E\left[\int_0^T x^2(t)\,dt\right] = \frac{1}{T}\sum_{i=1}^{N}E\!\left(x_i^2\right) = \frac{N\sigma_x^2}{T},$$
then
$$\sigma_x^2 = \frac{TP_{av}}{N} = \frac{P_{av}}{2W} \quad\Longrightarrow\quad \max_{p(x)} I(X_N; Y_N) = WT\log\left(1 + \frac{P_{av}}{WN_0}\right).$$
Dividing both sides by T, we obtain the capacity of the band-limited AWGN waveform channel with a band-limited and average-power-limited input:
$$C = W\log_2\left(1 + \frac{P_{av}}{WN_0}\right).$$
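
The sketch below (illustrative; the power and noise-density values are assumed) evaluates this capacity formula for increasing bandwidth, showing the approach to the asymptote P_av/(N_0 ln 2) discussed two slides later.

```python
import math

def awgn_capacity(W, P_av, N0):
    """C = W log2(1 + P_av / (W N0)) bits/s for a band-limited AWGN channel."""
    return W * math.log2(1.0 + P_av / (W * N0))

P_av, N0 = 1.0, 1e-3                        # assumed average power and noise density
for W in (100, 1000, 10000, 100000):
    print(W, round(awgn_capacity(W, P_av, N0), 1))
print(round(P_av / (N0 * math.log(2)), 1))  # asymptotic capacity P_av/(N0 ln 2) as W -> infinity
```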

72 Channel Capacity (Figures: normalized channel capacity C/W as a function of SNR for the band-limited AWGN channel, and channel capacity as a function of bandwidth with a fixed transmitted average power.)

73 Channel Capacity Note that as W approaches infinity, the capacity of the channel approaches the asymptotic value
$$C_\infty = \frac{P_{av}}{N_0}\log_2 e = \frac{P_{av}}{N_0\ln 2}\ \text{bits/s}.$$
Since P_av represents the average transmitted power and C is the rate in bits/s, it follows that P_av = Cε_b, where ε_b is the energy per bit. Hence, we have
$$\frac{C}{W} = \log_2\left(1 + \frac{C}{W}\frac{\varepsilon_b}{N_0}\right),$$
and consequently
$$\frac{\varepsilon_b}{N_0} = \frac{2^{C/W} - 1}{C/W}.$$

74 Channel Capacity When C/W = 1, ε_b/N_0 = 1 (0 dB). When C/W → ∞,
$$\frac{\varepsilon_b}{N_0} \approx \frac{2^{C/W}}{C/W} = \frac{\exp\!\left[(C/W)\ln 2\right]}{C/W},$$
so ε_b/N_0 increases exponentially as C/W → ∞. When C/W → 0,
$$\frac{\varepsilon_b}{N_0} = \lim_{C/W\to 0}\frac{2^{C/W} - 1}{C/W} = \ln 2 \;(\approx -1.6\ \text{dB}).$$
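
A short numerical illustration (not from the slides) of the required ε_b/N_0 as a function of the spectral efficiency C/W, showing the 0 dB point at C/W = 1 and the Shannon limit of about -1.59 dB as C/W approaches 0:

```python
import math

def ebn0_required(spectral_efficiency):
    """Minimum Eb/N0 (linear) to operate at C/W = spectral_efficiency on the AWGN channel:
    Eb/N0 = (2^(C/W) - 1) / (C/W)."""
    return (2.0 ** spectral_efficiency - 1.0) / spectral_efficiency

for r in (0.001, 0.5, 1.0, 2.0, 6.0):
    print(r, round(10 * math.log10(ebn0_required(r)), 2))   # in dB; -> -1.59 dB as C/W -> 0
```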

75 Channel Capacity The channel capacity formulas serve as upper limits on the transmission rate for reliable communication over a noisy channel. Noisy channel coding theorem (Shannon, 1948): There exist channel codes (and decoders) that make it possible to achieve reliable communication, with as small an error probability as desired, if the transmission rate R < C, where C is the channel capacity. If R > C, it is not possible to make the probability of error tend toward zero with any code.


University of Washington Department of Chemistry Chemistry 453 Winter Quarter 2015

University of Washington Department of Chemistry Chemistry 453 Winter Quarter 2015 Lecture 2. 1/07/15-1/09/15 Unversty of Washngton Department of Chemstry Chemstry 453 Wnter Quarter 2015 We are not talkng about truth. We are talkng about somethng that seems lke truth. The truth we want

More information

STAT 511 FINAL EXAM NAME Spring 2001

STAT 511 FINAL EXAM NAME Spring 2001 STAT 5 FINAL EXAM NAME Sprng Instructons: Ths s a closed book exam. No notes or books are allowed. ou may use a calculator but you are not allowed to store notes or formulas n the calculator. Please wrte

More information

Feb 14: Spatial analysis of data fields

Feb 14: Spatial analysis of data fields Feb 4: Spatal analyss of data felds Mappng rregularly sampled data onto a regular grd Many analyss technques for geophyscal data requre the data be located at regular ntervals n space and/or tme. hs s

More information

Asymptotic Quantization: A Method for Determining Zador s Constant

Asymptotic Quantization: A Method for Determining Zador s Constant Asymptotc Quantzaton: A Method for Determnng Zador s Constant Joyce Shh Because of the fnte capacty of modern communcaton systems better methods of encodng data are requred. Quantzaton refers to the methods

More information

Entropy of Markov Information Sources and Capacity of Discrete Input Constrained Channels (from Immink, Coding Techniques for Digital Recorders)

Entropy of Markov Information Sources and Capacity of Discrete Input Constrained Channels (from Immink, Coding Techniques for Digital Recorders) Entropy of Marov Informaton Sources and Capacty of Dscrete Input Constraned Channels (from Immn, Codng Technques for Dgtal Recorders). Entropy of Marov Chans We have already ntroduced the noton of entropy

More information

Markov chains. Definition of a CTMC: [2, page 381] is a continuous time, discrete value random process such that for an infinitesimal

Markov chains. Definition of a CTMC: [2, page 381] is a continuous time, discrete value random process such that for an infinitesimal Markov chans M. Veeraraghavan; March 17, 2004 [Tp: Study the MC, QT, and Lttle s law lectures together: CTMC (MC lecture), M/M/1 queue (QT lecture), Lttle s law lecture (when dervng the mean response tme

More information

4 Analysis of Variance (ANOVA) 5 ANOVA. 5.1 Introduction. 5.2 Fixed Effects ANOVA

4 Analysis of Variance (ANOVA) 5 ANOVA. 5.1 Introduction. 5.2 Fixed Effects ANOVA 4 Analyss of Varance (ANOVA) 5 ANOVA 51 Introducton ANOVA ANOVA s a way to estmate and test the means of multple populatons We wll start wth one-way ANOVA If the populatons ncluded n the study are selected

More information

CSCE 790S Background Results

CSCE 790S Background Results CSCE 790S Background Results Stephen A. Fenner September 8, 011 Abstract These results are background to the course CSCE 790S/CSCE 790B, Quantum Computaton and Informaton (Sprng 007 and Fall 011). Each

More information

FREQUENCY DISTRIBUTIONS Page 1 of The idea of a frequency distribution for sets of observations will be introduced,

FREQUENCY DISTRIBUTIONS Page 1 of The idea of a frequency distribution for sets of observations will be introduced, FREQUENCY DISTRIBUTIONS Page 1 of 6 I. Introducton 1. The dea of a frequency dstrbuton for sets of observatons wll be ntroduced, together wth some of the mechancs for constructng dstrbutons of data. Then

More information

APPENDIX A Some Linear Algebra

APPENDIX A Some Linear Algebra APPENDIX A Some Lnear Algebra The collecton of m, n matrces A.1 Matrces a 1,1,..., a 1,n A = a m,1,..., a m,n wth real elements a,j s denoted by R m,n. If n = 1 then A s called a column vector. Smlarly,

More information

Statistics and Probability Theory in Civil, Surveying and Environmental Engineering

Statistics and Probability Theory in Civil, Surveying and Environmental Engineering Statstcs and Probablty Theory n Cvl, Surveyng and Envronmental Engneerng Pro. Dr. Mchael Havbro Faber ETH Zurch, Swtzerland Contents o Todays Lecture Overvew o Uncertanty Modelng Random Varables - propertes

More information

A PROBABILITY-DRIVEN SEARCH ALGORITHM FOR SOLVING MULTI-OBJECTIVE OPTIMIZATION PROBLEMS

A PROBABILITY-DRIVEN SEARCH ALGORITHM FOR SOLVING MULTI-OBJECTIVE OPTIMIZATION PROBLEMS HCMC Unversty of Pedagogy Thong Nguyen Huu et al. A PROBABILITY-DRIVEN SEARCH ALGORITHM FOR SOLVING MULTI-OBJECTIVE OPTIMIZATION PROBLEMS Thong Nguyen Huu and Hao Tran Van Department of mathematcs-nformaton,

More information

NUMERICAL DIFFERENTIATION

NUMERICAL DIFFERENTIATION NUMERICAL DIFFERENTIATION 1 Introducton Dfferentaton s a method to compute the rate at whch a dependent output y changes wth respect to the change n the ndependent nput x. Ths rate of change s called the

More information

The Minimum Universal Cost Flow in an Infeasible Flow Network

The Minimum Universal Cost Flow in an Infeasible Flow Network Journal of Scences, Islamc Republc of Iran 17(2): 175-180 (2006) Unversty of Tehran, ISSN 1016-1104 http://jscencesutacr The Mnmum Unversal Cost Flow n an Infeasble Flow Network H Saleh Fathabad * M Bagheran

More information

Chapter 11: Simple Linear Regression and Correlation

Chapter 11: Simple Linear Regression and Correlation Chapter 11: Smple Lnear Regresson and Correlaton 11-1 Emprcal Models 11-2 Smple Lnear Regresson 11-3 Propertes of the Least Squares Estmators 11-4 Hypothess Test n Smple Lnear Regresson 11-4.1 Use of t-tests

More information

LINEAR REGRESSION ANALYSIS. MODULE VIII Lecture Indicator Variables

LINEAR REGRESSION ANALYSIS. MODULE VIII Lecture Indicator Variables LINEAR REGRESSION ANALYSIS MODULE VIII Lecture - 7 Indcator Varables Dr. Shalabh Department of Maematcs and Statstcs Indan Insttute of Technology Kanpur Indcator varables versus quanttatve explanatory

More information

Uncertainty in measurements of power and energy on power networks

Uncertainty in measurements of power and energy on power networks Uncertanty n measurements of power and energy on power networks E. Manov, N. Kolev Department of Measurement and Instrumentaton, Techncal Unversty Sofa, bul. Klment Ohrdsk No8, bl., 000 Sofa, Bulgara Tel./fax:

More information

Negative Binomial Regression

Negative Binomial Regression STATGRAPHICS Rev. 9/16/2013 Negatve Bnomal Regresson Summary... 1 Data Input... 3 Statstcal Model... 3 Analyss Summary... 4 Analyss Optons... 7 Plot of Ftted Model... 8 Observed Versus Predcted... 10 Predctons...

More information

ANSWERS. Problem 1. and the moment generating function (mgf) by. defined for any real t. Use this to show that E( U) var( U)

ANSWERS. Problem 1. and the moment generating function (mgf) by. defined for any real t. Use this to show that E( U) var( U) Econ 413 Exam 13 H ANSWERS Settet er nndelt 9 deloppgaver, A,B,C, som alle anbefales å telle lkt for å gøre det ltt lettere å stå. Svar er gtt . Unfortunately, there s a prntng error n the hnt of

More information

Stanford University CS359G: Graph Partitioning and Expanders Handout 4 Luca Trevisan January 13, 2011

Stanford University CS359G: Graph Partitioning and Expanders Handout 4 Luca Trevisan January 13, 2011 Stanford Unversty CS359G: Graph Parttonng and Expanders Handout 4 Luca Trevsan January 3, 0 Lecture 4 In whch we prove the dffcult drecton of Cheeger s nequalty. As n the past lectures, consder an undrected

More information