Chapter 5. STATISTICAL CODING: Presentation, Entropy


1 Chapter 5 STATISTICAL CODING Presentation Entropy

2 Information data coding

Information data coding: coded representation of information; injective correspondence message {b_i}. Multiple roles of coding:
- Preparing the transformation message => transmitted signal
- Adapting the source bit rate to the channel capacity (compression)
- Protective encoding against transmission errors (error detection / correction)
- Encrypting (secretive communications)
- Tattooing (ownership markers)
- Transcoding (alphabet changes, transmission constraints)

The goal of a communication system is to transport messages from a sender (the information source) towards a recipient (the information user). The signal supporting the information being transmitted has to be compatible with the characteristics of the transmission channel. Information coding must establish an injective correspondence between the message produced by the source and the sequence of information {b_i} sent to the transmitter. This information coding plays numerous roles:
- It prepares the transformation of a message into a signal (carried out in the transmitter part: signal formation encoding);
- It adapts the source of information to the capacity of the transmitting channel;
- It can be used to protect information against transmission errors (within certain limits), so as to detect and/or correct them (due to channel disturbances);
- It can also be used in certain cases for secret communications (encryption) and watermarking to protect ownership.

A given transmission channel must be able to transport messages of various types; that is why transcoding is necessary: it transforms the message representation from a codebook M_k using an alphabet A_k into a representation of the same message from a codebook M_0 using an alphabet A_0. This particular alphabet is often the binary set {0, 1}, but it is not the only one.

3 Information data coding

Definitions:
- Message source S: production of a sequence of messages, each of them being selected in a set M of messages (M: codebook of possible messages, M = {m_1, m_2, ...}; the m_i are also called "words")
- Message: finite sequence of symbols (characters taken from A: alphabet)
- Alphabet: finite set of symbols A = {a_1, a_2, ..., a_q}

Definitions: A message is any finite sequence of characters taken from an alphabet A: a finite set of symbols (for example: letters, digits, etc.). A message source S is the system that produces a temporal series of messages m_i, each of them taken from a set of possible messages M. M is called a message (or word) codebook. The transmitted message is in fact a text formed by the syntactic rules of elementary messages called words: M = {m_1, m_2, ...}; each word is written with a fixed, finite set of symbols taken from the alphabet A. Depending on the applications, the message sources S can use dictionaries of very different types: from messages written using the characters of the alphabet, numbers, punctuation and tab marks, to visual messages where the messages are digital images in which, for example, each word is a pixel represented as a sequence of 8 binary symbols taken from the alphabet {0, 1} (bits).

4 Entropy of a source (SHANNON 1948)

Definition of uncertainty and of entropy:
- Uncertainty I of an event E: I(E) = -log₂ Pr{E}. Units: bit (BInary uniT if log₂), nat (NAtural uniT if natural logarithm): 1 nat = 1.443 bits
- If the source is simple: I(s_n) = Σ_{i=1..n} I(m_αi)
- Entropy H of a discrete random variable X: H(X) = E_X[I(X)] = Σ_{i=1..N} p_i·I(x_i) = -Σ_{i=1..N} p_i·log₂(p_i)
- Properties of entropy: H ≥ 0; H is continuous, symmetrical; H(p_1, ..., p_N) ≤ log₂ N; if (p_1, ..., p_N) and (q_1, ..., q_N) are 2 probability distributions, then Σ_{i=1..N} p_i·log₂(q_i/p_i) ≤ 0, because ln x ≤ x - 1

The uncertainty I of an event E of probability Pr(E) is defined by: I(E) = log₂(1/Pr(E)) = -log₂ Pr(E).

Notes:
- if Pr(E) = 1/2 then I(E) = 1 (unitary uncertainty);
- if Pr(E) = 1 then I(E) = 0: uncertainty is null for a certain event;
- the uncertainty unit is the bit (BInary uniT). It is not the same as the bit: BInary digiT;
- we can use the natural logarithm instead of the base 2 logarithm, in which case the unit is the nat (NAtural uniT): 1 nat = 1.443 bits.

5 We now consider that the events E are in fact realizations of a discrete random variable X. We define the entropy H as the average uncertainty of the random variable X. If we consider each event x_i, i ∈ [1, N], as a realization of a random variable X (i.e. X is a random variable with values in {x_1, x_2, ..., x_N}):

H(X) = E_X{I(X)} = Σ_{i=1..N} Pr{X = x_i}·I(x_i) = Σ_{i=1..N} p_i·I(x_i), with p_i = Pr{X = x_i}

The entropy depends on the probability law of X but it is not a function of the values taken by X. It is expressed in bits (or nats) and represents the average number of bits necessary to binary encode the different realizations of X.

Now let's consider an information source S defined by a set of possible messages m_i (codebook): S = {m_1, m_2, ..., m_N}, and by a mechanism for emitting messages such as: s_n = {m_α1, m_α2, ..., m_αn}, with m_α1: 1st emitted message, ..., m_αn: n-th emitted message. Warning: the index i of αi defines the temporal index in the sequence of messages emitted by the source, while αi defines the index of the i-th emitted message in the codebook M of possible messages; in general, n ≠ N. The choice of m_αi occurs according to a given probability law. The emission of a discrete source of information thus corresponds to a sequence of random variables X_i, i ∈ [1, n]. The probability of s_n can be expressed as a product of conditional probabilities:

Pr(s_n) = Pr{X_1 = m_α1}·Pr{X_2 = m_α2 / X_1 = m_α1} ··· Pr{X_n = m_αn / X_1 = m_α1, ..., X_{n-1} = m_α(n-1)}

In the case of simple sources, the random variables X_i are independent and of the same law, which gives: ∀(i, j) ∈ [1, n] × [1, N], Pr{X_i = m_j} = p_j, and Pr{s_n} = p_α1·p_α2 ··· p_αn. Hence:

I(s_n) = -log₂ Pr{s_n} = -log₂(p_α1·p_α2 ··· p_αn) = Σ_{i=1..n} (-log₂ p_αi) = Σ_{i=1..n} I(m_αi)

In the case of a discrete source of messages m_i, where each message m_i is associated with a probability p_i, the entropy H of the source S is given by:

H(S) = -Σ_{i=1..N} p_i·log₂ p_i

6 Properties of entropy:

- As 0 ≤ p_i ≤ 1 and Σ_{i=1..N} p_i = 1, then H(X) ≥ 0: the entropy is positive.
- Given (p_1, p_2, ..., p_N) and (q_1, q_2, ..., q_N) two probability laws, then Σ_{i=1..N} p_i·log₂(q_i/p_i) ≤ 0. Indeed, for x > 0 we have ln x ≤ x - 1, so ln(q_i/p_i) ≤ q_i/p_i - 1, that is log₂(q_i/p_i) ≤ (1/ln 2)·(q_i/p_i - 1); thus Σ_{i=1..N} p_i·log₂(q_i/p_i) ≤ (1/ln 2)·Σ_{i=1..N} p_i·(q_i/p_i - 1) = (1/ln 2)·(Σ_{i=1..N} q_i - Σ_{i=1..N} p_i) = (1/ln 2)·(1 - 1) = 0.
- The entropy of a random variable X with N possible values is maximal and is worth log₂ N when X follows a uniform probability law. By taking q_1 = q_2 = ... = q_N = 1/N (uniform law) in the previous property: Σ_{i=1..N} p_i·log₂(1/(N·p_i)) ≤ 0, hence Σ_{i=1..N} p_i·log₂(1/p_i) ≤ log₂ N·Σ_{i=1..N} p_i, i.e. H(X) ≤ log₂ N.
- The entropy is continuous and symmetrical.

For the rest of this course, we will systematically use the logarithm in base 2.

Simple example: Let's consider a source S, of uniform law, that sends messages from the 26-character French alphabet (a, b, c, ..., z). To this alphabet we add the "space" character as a word separator. The alphabet is then made up of 27 characters: H(S) = -log₂(1/27) = log₂(27) = 4.75 bits of information per character. Actually, the entropy is close to 4 bits of information per character on a very large amount of French text.
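To make the definition concrete, here is a minimal Python sketch (our own illustration; the function name is not from the course) that computes H for a discrete distribution and checks the 27-character example:

```python
import math

def entropy(probs):
    """H = -sum(p_i * log2(p_i)) in bits; zero-probability terms contribute nothing."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Uniform law over the 27-character alphabet (26 letters + space)
print(entropy([1 / 27] * 27))  # 4.7548... bits per character, i.e. log2(27)
```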

7 Chapter 5 STATISTICAL CODING Huffman Coding

8 Optimal statistical coding

Definitions:
- S: discrete and simple source of messages m_i with probability law p = (p_1, ..., p_N) (homogeneous source)
- Coding with an alphabet A = {a_1, a_2, ..., a_q}
- Entropy of the source H(S) and average length of code-words E(n)

MacMillan's theorem: There exists at least one irreducible inverting code that matches: H/log₂ q ≤ E(n) < (H/log₂ q) + 1. Equality if the p_i are of the form p_i = q^(-n_i) (if q = 2 => n_i = -log₂ p_i).

Shannon's theorem (1st theorem, on noiseless coding): H/log₂ q ≤ E(n) < (H/log₂ q) + ε, with ε as small as we want.

In the previous resource, "Defining coding and properties", we saw that the objectives of coding are mainly to transcribe information and to reduce the quantity of symbols necessary to represent it. By optimizing the coding, we attempt to reduce the quantity of symbols as much as possible. Let's consider a simple source of homogeneous information S = {m_1, m_2, ..., m_N} armed with a probability law p = {p_1, p_2, ..., p_N} where p_i = Pr{m = m_i}. If M_i is the code-word that corresponds to the message m_i, we call n_i = n(M_i) the number of characters belonging to the alphabet A (Card(A) = q) needed for the coding of m_i; n_i is thus the length of the code-word M_i. The average length of the code-words is then: E(n) = Σ_{i=1..N} p_i·n_i. The average uncertainty of a source is the entropy H. The average uncertainty per character of the alphabet A is equal to H/E(n), so we get: H/E(n) ≤ log₂ q, because the alphabet A contains "q" characters. From this inequality, we deduce that E(n) ≥ H/log₂ q.

9 To optimize coding, we want to reduce E(n), the average length of the code-words. This average length cannot be lower than H/log₂ q. Nevertheless, two theorems show that it is possible to approach the equality E(n) = H/log₂ q and an optimal code:

MacMillan's theorem: An information source S with entropy H, coded in an inverting way with an alphabet counting q characters, is such that E(n) ≥ H/log₂ q, and there exists at least one irreducible code of the given law such that E(n) < H/log₂ q + 1. Equality is reached if p_i = q^(-n_i) (i.e. n_i = -log_q p_i), and we then have an optimal coding. Note: In the particular case of a binary alphabet {0, 1}, we have q = 2. If the relationship n_i = -log₂ p_i is true, we then have E(n) = H: the entropy is the bottom limit of the set of code-word average lengths, and this lower limit is reached.

Shannon's theorem of noiseless coding: Any homogeneous source of information is such that there exists an irreducible coding for which the average length of code-words is as close as we want to the lower limit H/log₂ q. The demonstration of this theorem uses irreducible coding of blocks (block coding assigns a code-word to each block of "k" messages of S, consecutive or not).
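As a quick numerical check of these bounds (our own illustration, not part of the course), take the Shannon lengths n_i = ⌈-log₂ p_i⌉ for a binary alphabet (q = 2): they satisfy the Kraft inequality, so an irreducible code with these lengths exists, and the resulting E(n) falls inside [H, H + 1[:

```python
import math

probs = [0.4, 0.18, 0.1, 0.1, 0.07, 0.06, 0.05, 0.04]
lengths = [math.ceil(-math.log2(p)) for p in probs]   # Shannon lengths n_i
H = -sum(p * math.log2(p) for p in probs)             # source entropy
E_n = sum(p * n for p, n in zip(probs, lengths))      # average code-word length

print(sum(2 ** -n for n in lengths) <= 1)  # Kraft inequality holds: True
print(H <= E_n < H + 1)                    # MacMillan / Shannon bound: True
```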

10 Optimal statistical coding

- Fano-Shannon coding
- Arithmetic coding (block encoding, interval-type encoding): possibilities of on-line adaptation
- Huffman coding, 3 basic principles:
  - if p_i < p_j => n_i ≥ n_j
  - the 2 unlikeliest code-words have the same length
  - the 2 unlikeliest code-words (of length n_max) have the same prefix of length n_max - 1

The presentation above shows three types of codes that are close to the optimal code:

Shannon-Fano coding: It tries to approach as much as possible the most compact irreducible code, but the probability p_i is not usually equal to 2^(-n_i), so the coding can only be close to optimal. The probabilities p_i associated with the messages m_i are arranged in decreasing order, then we fix each n_i so that: log₂(1/p_i) ≤ n_i < log₂(1/p_i) + 1. Finally, we choose each code-word M_i of length n_i so that none of the previously chosen code-words forms a prefix of it (this avoids decoding ambiguity, cf. "classification of the codes").

Arithmetic coding: The code is associated with a sequence of messages from the source and not with each message. Unlike Huffman coding, which must use an integer number of bits per message and therefore does not always allow an optimal compression, arithmetic coding lets you code a message on a non-integer number of bits: this is the most effective method, but it is also the slowest. The different aspects of this coding are more fully developed in the resource "Statistical coding: arithmetic coding".

11 Huffman coding: Huffman coding is the optimal irreducible code. It is based on three principles:

- if p_j > p_i then n_j ≤ n_i,
- the two most unlikely words have equal lengths,
- the latter are written with the same n_max - 1 first characters.

By using this procedure iteratively, we build the code-words M_i of the messages m_i.

Example: Let S = {m_1, m_2, ..., m_8} be a source with the probability law: p_1 = 0.4; p_2 = 0.18; p_3 = p_4 = 0.1; p_5 = 0.07; p_6 = 0.06; p_7 = 0.05; p_8 = 0.04. We place these probabilities in decreasing order in the column p^(0) of the table below. In the column p^(0) the probabilities of the messages m_7 and m_8 are the smallest; we add them and reorder the probabilities, still in decreasing order, to create the column p^(1). Generally, we add the two smallest probabilities in the column p^(k), then we reorder the probabilities in decreasing order to obtain the column p^(k+1). Finally we get the following table:

12 We assign the bits 0 and 1 to the last two elements of each column. For each message m_i, we go through the table from left to right and in each column we can see the associated probability p_i^(k) (blue path on the illustration below). The code-word M_i is then obtained by starting from the last column on the right and moving back to the first column on the left, selecting the bits associated with the probabilities p_i^(k) of the message m_i (green rectangles on the illustration below). For example, to determine the code-word M_6 of the message m_6, we locate all the probabilities p_6^(k):

13 The code-word M_6 is thus obtained by simply reading from right to left the bits contained in the green rectangles. By following the same procedure for each message, we obtain the complete code. The average length of the code-words is equal to:

E(n) = Σ_{i=1..8} p_i·n_i = 0.4×1 + 0.18×3 + 0.1×3 + 0.1×4 + 0.07×4 + 0.06×4 + 0.05×5 + 0.04×5 = 2.61

We can compare this length with the entropy H of the source:

H = -Σ_{i=1..8} p_i·log₂ p_i = 2.552

The efficiency η of the Huffman coding for this example is thus η = 2.552/2.61 = 97.8%. For comparison purposes, 3 bits are needed to code 8 different messages with natural binary (2³ = 8). For this example, the efficiency of the natural binary coding is only 2.552/3 = 85%.
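A compact Python sketch of this construction (our own illustration; names are not from the course) using a priority queue; it reproduces the average length 2.61 found above, though individual code-words may differ by tie-breaking:

```python
import heapq

def huffman_code(probs):
    """Build a binary Huffman code; returns {symbol: code-word} for a dict of probabilities."""
    # Each heap entry: (probability, tie-breaker, {symbol: partial code-word})
    heap = [(p, i, {s: ""}) for i, (s, p) in enumerate(probs.items())]
    heapq.heapify(heap)
    count = len(heap)
    while len(heap) > 1:
        p0, _, c0 = heapq.heappop(heap)  # the two least likely groups...
        p1, _, c1 = heapq.heappop(heap)
        merged = {s: "0" + w for s, w in c0.items()}
        merged.update({s: "1" + w for s, w in c1.items()})  # ...get distinct bits prepended
        heapq.heappush(heap, (p0 + p1, count, merged))
        count += 1
    return heap[0][2]

probs = {"m1": 0.4, "m2": 0.18, "m3": 0.1, "m4": 0.1,
         "m5": 0.07, "m6": 0.06, "m7": 0.05, "m8": 0.04}
code = huffman_code(probs)
print(code)
print(sum(probs[s] * len(w) for s, w in code.items()))  # average length E(n) = 2.61
```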

14 Chapter 5 STATISTICAL CODING Arithmetic Coding

15 Arithmetic Codes

Basic principles:
- the code is associated with the sequence of symbols m_i (messages), not with every symbol in the sequence;
- coding of intervals of type [c, d[ for each symbol;
- iteration on the selected interval for the next symbol of the sequence;
- one codes the sequence of symbols with a real value in [0, 1[.

Arithmetic codes allow you to encode a sequence of events by using estimates of the probabilities of the events. The arithmetic code assigns one code-word to each possible data set. This technique differs from Huffman codes, which assign one code-word to each possible event. The code-word assigned to one sequence is a real number which belongs to the half-open unit interval [0, 1[. Arithmetic codes are calculated by successive subdivisions of this original unit interval. For each new event in the sequence, the subinterval is refined using the probabilities of all the individual events. Finally we obtain a half-open subinterval of [0, 1[ such that any value in this subinterval encodes the original sequence.

16 Arithmetic Codes

Definitions:
- Let S = {s_1, s_2, ..., s_N} be a source and p_k = Pr(s_k)
- [Ls_k, Hs_k[ is the interval corresponding to the symbol s_k, with Hs_k - Ls_k = p_k

Encoding algorithm:
1) Initialization: L_c = 0; H_c = 1
2) Calculate the code sub-intervals
3) Get the next input symbol s_k
4) Update the code sub-interval
5) Repeat from step 2 until the whole sequence has been encoded

Let S = {s_1, ..., s_N} be a source which can produce N source symbols. The probabilities of the source symbols are denoted by: ∀k ∈ [1, N], Pr{s_k} = p_k. Here is the basic algorithm for the arithmetic coding of a sequence s^M = {s_α1, s_α2, ..., s_αM} of M source symbols (s_αk stands for the k-th source symbol that occurs in the sequence we want to encode):

Step 1: We begin with a current half-open interval [L_c, H_c[ initialized to [0, 1[ (this interval corresponds to the probability of choosing the first source symbol s_α1). The length of the current interval is defined by: length = H_c - L_c.

Step 2: For each source symbol in the sequence, we subdivide the current interval into half-open subintervals, one for each possible source symbol s_k. The size of the subinterval associated with s_k depends on its probability p_k: the symbol s_k is given the interval [Ls_k, Hs_k[, with Ls_k = Σ_{i=1..k-1} p_i and Hs_k = Σ_{i=1..k} p_i (so that Hs_k - Ls_k = p_k), and its subinterval inside the current interval is [L_c + length·Ls_k, L_c + length·Hs_k[.

17 Step 3: We select the subinterval corresponding to the source symbol s_k that occurs next and make it the new current interval [L_c, H_c[ (both bounds being computed from the previous L_c and length):

L_c = L_c + length·Ls_k
H_c = L_c + length·Hs_k

Step 4: This new current interval is subdivided again as described in Step 2.

Step 5: Steps 2, 3, and 4 are repeated until the whole sequence of source symbols has been encoded.
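Here is a minimal Python sketch of this encoding loop (our own illustration; the function name is ours, and it uses float arithmetic for readability, whereas a practical coder works with integer arithmetic to avoid precision loss):

```python
def arithmetic_encode(sequence, probs):
    """Return the final half-open interval [L_c, H_c[ that encodes the sequence.

    probs: dict {symbol: probability}; cumulative sums give each [Ls_k, Hs_k[.
    """
    # Step 2 precomputation: symbol s_k covers [Ls_k, Hs_k[ inside [0, 1[
    cum, low_high = 0.0, {}
    for s, p in probs.items():
        low_high[s] = (cum, cum + p)
        cum += p

    L_c, H_c = 0.0, 1.0                          # Step 1: initialization
    for s in sequence:                           # Steps 3-5: one refinement per symbol
        length = H_c - L_c
        Ls, Hs = low_high[s]
        L_c, H_c = L_c + length * Ls, L_c + length * Hs
    return L_c, H_c
```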

18 Example: Arithmetic Coding

The source S = {-2, -1, 0, 1, 2} is a set of 5 possible motion vector values (used for video coding), 0 being the null motion vector. Y is the random variable associated with the motion vector values, with the following probabilities:

Pr{Y = -2} = p_1 = 0.1
Pr{Y = -1} = p_2 = 0.2
Pr{Y = 0} = p_3 = 0.4
Pr{Y = 1} = p_4 = 0.2
Pr{Y = 2} = p_5 = 0.1

We want to encode the motion vector sequence (0, -1, 0, 2). Here is an example for encoding motion vectors of a video signal with an arithmetic code.

19 Arithmetic Coding: subdivisions of the current sub-intervals for s_α1 = 0, s_α2 = -1, s_α3 = 0, s_α4 = 2.

To encode the sequence {s_α1, s_α2, s_α3, s_α4} = {s_3, s_2, s_3, s_5} = {0, -1, 0, 2}, we subdivide the unit current interval [0, 1[ into 5 half-open intervals, then we select the subinterval corresponding to the first motion vector that occurs in the sequence (value 0). This subinterval is the new current interval. We subdivide it and we select the subinterval corresponding to the next event (the motion vector -1). We repeat these steps for each source symbol of the sequence (here these source symbols are the motion vectors). Consequently, we can encode the sequence (0, -1, 0, 2) of vertical motion vectors by any value in the half-open range [0.3928, 0.396[. The value 0.3945 (binary 0.01100101) encodes this sequence; therefore we need 8 bits to encode these 4 symbols, i.e. 2 bits/symbol.
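Running the encoder sketch above on this example reproduces the interval found here:

```python
probs = {-2: 0.1, -1: 0.2, 0: 0.4, 1: 0.2, 2: 0.1}
print(arithmetic_encode([0, -1, 0, 2], probs))
# (0.3928..., 0.396...): any value of this half-open interval encodes the sequence
```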

20 Arithmetic Coding: decoding algorithm

1) Initialization: L_c = 0; H_c = 1
2) Calculate the code sub-interval length: length = H_c - L_c
3) Find the symbol sub-interval [Ls_k, Hs_k[, with 1 ≤ k ≤ N, such that: Ls_k ≤ (codeword - L_c)/length < Hs_k
4) Output symbol: s_k
5) Update the subinterval: L_c = L_c + length·Ls_k; H_c = L_c + length·Hs_k
6) Repeat from step 2 until the last symbol is decoded

The figure above describes the decoding algorithm for a codeword obtained after having encoded a sequence of source symbols with an arithmetic code. This decoding algorithm is now performed on the previous example. Let us consider the codeword M_c = 0.3945:

[Algorithm beginning]

Step 1: We initialize the current interval [L_c, H_c[: L_c = 0 and H_c = 1.

Step 2: We calculate the length L of the current interval: L = H_c - L_c = 1.

Step 3: We calculate the value (M_c - L_c)/L = 0.3945, and we select the subinterval [Ls_k, Hs_k[ that contains it. Here, the selected subinterval is [0.3, 0.7[, which corresponds to the half-open interval [Ls_3, Hs_3[.

Step 4: The first symbol s_α1 of the sequence is thus s_3 (motion vector 0).

21 Step 5: We create the new current interval [L_c, H_c[ for decoding the next source symbol:

L_c = L_c + L·Ls_3 = 0 + 1×0.3 = 0.3
H_c = L_c + L·Hs_3 = 0 + 1×0.7 = 0.7

Step 2: We calculate the length L of the current interval: L = H_c - L_c = 0.7 - 0.3 = 0.4.

Step 3: (M_c - L_c)/L = (0.3945 - 0.3)/0.4 = 0.2363. This value belongs to the subinterval [Ls_2, Hs_2[ = [0.1, 0.3[.

Step 4: The second symbol s_α2 of the sequence is thus s_2 (motion vector -1).

Step 5: L_c = L_c + L·Ls_2 = 0.3 + 0.4×0.1 = 0.34; H_c = L_c + L·Hs_2 = 0.3 + 0.4×0.3 = 0.42.

Step 2: We calculate the length L of the current interval: L = H_c - L_c = 0.42 - 0.34 = 0.08.

Step 3: (M_c - L_c)/L = (0.3945 - 0.34)/0.08 = 0.6813. This value belongs to the subinterval [Ls_3, Hs_3[ = [0.3, 0.7[.

Step 4: The third symbol s_α3 of the sequence is thus s_3 (motion vector 0).

Step 5: L_c = L_c + L·Ls_3 = 0.34 + 0.08×0.3 = 0.364; H_c = L_c + L·Hs_3 = 0.34 + 0.08×0.7 = 0.396.

Step 2: We calculate the length L of the current interval: L = H_c - L_c = 0.396 - 0.364 = 0.032.

22 Step 3: (M_c - L_c)/L = (0.3945 - 0.364)/0.032 = 0.9531. This value belongs to the subinterval [Ls_5, Hs_5[ = [0.9, 1[.

Step 4: The fourth symbol s_α4 of the sequence is thus s_5 (motion vector 2).

[Algorithm end]

The decoding of the value 0.3945 allows us to rebuild the original sequence {s_α1, s_α2, s_α3, s_α4} = {s_3, s_2, s_3, s_5} = {0, -1, 0, 2}.

Contrary to Huffman codes, arithmetic codes allow you to allocate fractional numbers of bits to symbols, so data compression with arithmetic codes is more efficient. However, arithmetic coding is slower than Huffman coding, and decoding cannot start before the entire sequence of symbols has been received, which is possible with Huffman coding. The compression rate can also be increased by using probability models which are not static: the probabilities are adapted according to the current and the previous sequences, so arithmetic coding can handle adaptive coding.
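The matching decoder, as the same kind of minimal Python sketch as the encoder above (our own illustration; float arithmetic, and the number of symbols is assumed to be known to the decoder, e.g. signaled separately):

```python
def arithmetic_decode(codeword, probs, n_symbols):
    """Decode n_symbols from a codeword in [0, 1[, mirroring the encoder's subdivisions."""
    cum, low_high = 0.0, {}
    for s, p in probs.items():                   # symbol s_k covers [Ls_k, Hs_k[ in [0, 1[
        low_high[s] = (cum, cum + p)
        cum += p

    L_c, H_c, decoded = 0.0, 1.0, []
    for _ in range(n_symbols):
        length = H_c - L_c
        ratio = (codeword - L_c) / length        # Step 3: locate the symbol sub-interval
        s = next(s for s, (Ls, Hs) in low_high.items() if Ls <= ratio < Hs)
        decoded.append(s)                        # Step 4: output the symbol
        Ls, Hs = low_high[s]
        L_c, H_c = L_c + length * Ls, L_c + length * Hs  # Step 5: update the interval
    return decoded

probs = {-2: 0.1, -1: 0.2, 0: 0.4, 1: 0.2, 2: 0.1}
print(arithmetic_decode(0.3945, probs, 4))  # [0, -1, 0, 2]
```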

23 Chapter 5 STATISTICAL CODING Presentation: Classification of the statistical codes

24 Information data coding

Objectives:
- Transcription of information to facilitate coding: code => signal (transcoding)
- Information compression: reducing information size
- Protection against transmission errors: against loss and decision errors
- Keeping transmitted information secret: encryption

Definition of a code: an application of S into A = {a_1, a_2, ..., a_q}: to each message m_i of S corresponds a code-word M_i of M, the set of finite sequences of characters of A.

Information coding consists of transcribing messages from an information source in the form of a sequence of characters taken from a predefined alphabet. The objectives of coding fall into four main categories:

- transcribing information in a form that makes it easy to create a signal that can carry the information, or easy to handle the information automatically. To do this, different codes for representing the information are used depending on the envisaged application, with transcoding operations frequently being used;
- reducing the number of symbols needed to represent the information (in terms of the total number of symbols used): this is a space-saving role;
- preventing quality loss (distortion, noise) caused by the transmission channel, which leads to errors when you reconstruct the information as it leaves the transmission channel (upon reception);
- protecting confidential information by making it unintelligible except for its intended recipient.

25 Definition of a code: Given a set called alphabet A made up of q characters a_i: A = {a_1, a_2, ..., a_q}, and M the set of finite sequences M of characters of A (for example: M = a_10 a_4 a_7). Given a finite set of messages emitted by a message source S: S = {m_1, ..., m_N}. A code refers to any application of S into M: coding of S through the use of the alphabet A. The element M_i of M which corresponds to the message m_i of S is called the code-word of m_i. Its length, noted n_i, is the number of characters belonging to A which compose M_i. The decoding of a sequence of sent messages m_i involves being able to separate the code-words in a received sequence of code-words M_i. This is why we sometimes use a special spacing character in an alphabet.

26 Information data coding (4)

Alphabet A = {a_1, a_2, ..., a_q}
Finite set of messages S = {m_1, m_2, ..., m_i, ..., m_N}
Coding C = {M_1, M_2, ..., M_i, ..., M_N}
Length of code-words: n_i = n(M_i)
Average length of code-words: E(n) = Σ_{i=1..N} p_i·n_i
Entropy of the source H: H(p_1, ..., p_N) ≤ log₂ N
Average quantity of information per character = H/E(n), and H/E(n) ≤ log₂ q => E(n) ≥ H/log₂ q
Rate of a source of information coded with an average of D characters per second: R = D·H/E(n) => R ≤ D·log₂ q bits/second

From here on, we shall call m_i the messages produced by the information source and M_i the code-words associated with them. We will call n_i = n(M_i) the number of characters belonging to an alphabet A (Card(A) = q) necessary for coding m_i, n_i being the length of the code-word M_i. If the source uses N possible different messages, the average length of the code-words is given by: E(n) = Σ_{i=1..N} p_i·n_i, where p_i = Pr{m_i}.

H is the average uncertainty (i.e. the entropy) of the source S per message sent, so the average uncertainty (i.e. the entropy) per character (of the alphabet A) equals H/E(n), and we have: H/E(n) ≤ log₂ q (because the alphabet A contains q characters), so: E(n) ≥ H/log₂ q.

Finally, if the coded information source produces D characters per second taken from the alphabet A, H/E(n) being the average information transported per character in bits/character, the information rate R is: R = D·H/E(n). This rate is then limited by: R ≤ D·log₂ q.

27 Coding and decoding of information (5)

Efficiency η of a code: η = n_min/E(n) => η = H/(E(n)·log₂ q)
Redundancy ρ of a code: ρ = 1 - η
Simple examples: codes C_1 and C_2
Constraints: separation of code-words & unambiguous reading of code-words => regular and inverting codes
Regular code: if m_i ≠ m_j ==> M_i ≠ M_j (injective application)
Inverting code: 2 sequences of distinct messages ==> 2 distinct code sequences: if (m_α1, ..., m_αi) ≠ (m_β1, ..., m_βj) => (M_α1, ..., M_αi) ≠ (M_β1, ..., M_βj); examples: fixed-length codes, codes with separator
Irreducible code: inverting code that can be decoded without any device: M_i is not a prefix of M_j, ∀i ≠ j

Some definitions and properties linked to information encoding and decoding:

Efficiency: For a given alphabet A, the efficiency of a code is η given by: η = n_min/E(n) = (H/log₂ q)/E(n) = H/(E(n)·log₂ q), with η ∈ [0, 1].

Redundancy: The mathematical redundancy is defined by the factor ρ = 1 - η. Redundancy can be used to increase the robustness of the coding when faced with transmission errors on the coded information (error detection and correction).

28 Here is a simple example: we consider a source of 4 possible messages {m_1, m_2, m_3, m_4} with probabilities p_1 = 0.5; p_2 = 0.25; p_3 = p_4 = 0.125, respectively. Given the following two codes C_1 (simple binary coding) and C_2 (variable-length code):

Messages: m_1, m_2, m_3, m_4
C_1: 00, 01, 10, 11
C_2: 0, 10, 110, 111

Here H = 0.5×1 + 0.25×2 + 0.125×3 + 0.125×3 = 1.75 bits. For C_1: η = 1.75/2 = 0.875 and ρ = 1 - η = 0.125. For C_2: η = 1.75/1.75 = 1 and ρ = 1 - η = 0. The code C_2 is of maximum (unitary) efficiency while code C_1 is not.

Regular code: Any given code-word is associated with only one possible message (the application S → M is injective): if m_i ≠ m_j then M_i ≠ M_j.

Inverting code: The code is inverting if two distinct sequences of messages (m_α1, ..., m_αi) and (m_β1, ..., m_βj) necessarily lead to distinct codings (for example codes of fixed length such as C_1, and codes with separator). An inverting code is then a special case of a regular code.

Irreducible code: This is a decipherable code that can be read directly without any special device (fixed length, separator). For that, no code-word M_i of a message m_i may have another code-word M_j as a prefix. In this way, we can create a hierarchical classification to characterize a code's type:
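The efficiency computation for these two codes, as a small Python check (our own helper, using the code-word lengths from the tables above):

```python
import math

def efficiency(probs, lengths, q=2):
    """Efficiency eta = H / (E(n) * log2 q) of a code with given code-word lengths."""
    H = -sum(p * math.log2(p) for p in probs)
    E_n = sum(p * n for p, n in zip(probs, lengths))
    return H / (E_n * math.log2(q))

probs = [0.5, 0.25, 0.125, 0.125]
for name, lengths in [("C1", [2, 2, 2, 2]), ("C2", [1, 2, 3, 3])]:
    eta = efficiency(probs, lengths)
    print(name, eta, 1 - eta)  # C1: 0.875 and 0.125; C2: 1.0 and 0.0
```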

29 Code examples: regular codes / inverting codes / irreducible codes

Messages | Proba. | C_1 | C_2 | C_3 | C_4
m_1 | 0.5 | 0 | 0 | 0 | 0
m_2 | 0.25 | 0 | 1 | 10 | 01
m_3 | 0.125 | 1 | 00 | 110 | 011
m_4 | 0.125 | 1 | 01 | 111 | 0111

C_1 is a non-regular code; C_2 is a non-inverting code; C_3 is an inverting and irreducible code; C_4 is only an inverting code.

Here are four codes C_1, C_2, C_3 and C_4 given as examples of the previous definitions and properties. We suppose that the four messages m_1, m_2, m_3 and m_4 are distinct. The code C_1 is not regular: m_1 ≠ m_2 but C_1(m_1) = C_1(m_2), and also C_1(m_3) = C_1(m_4). The code C_2 is a non-inverting code: the two texts {m_1, m_2} and {m_4} are different, but they lead to the same code "01". The code C_3 is an inverting and irreducible code: two distinct texts made up of sequences of messages, for example {m_1, m_3, m_4} and {m_1, m_2}, always lead to different codes, and no code-word M_i = C_3(m_i) is prefixed by another code-word M_j = C_3(m_j). The code C_4 is an inverting code but not irreducible: two distinct texts always lead to different codes, but each code-word M_i = C_4(m_i) is a prefix of every code-word M_j = C_4(m_j) with i < j.
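A small Python helper (ours, with the code-word tables assumed above) that tests the irreducibility (prefix) condition on these codes:

```python
def is_irreducible(codewords):
    """True if no code-word is a prefix of another (prefix-free, i.e. irreducible code)."""
    words = list(codewords)
    return not any(i != j and words[j].startswith(words[i])
                   for i in range(len(words)) for j in range(len(words)))

C3 = ["0", "10", "110", "111"]
C4 = ["0", "01", "011", "0111"]
print(is_irreducible(C3))  # True: C3 can be decoded on the fly
print(is_irreducible(C4))  # False: each C4 word is a prefix of the next
```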
