On Decoding Binary Perfect and Quasi-Perfect Codes Over Markov Noise Channels


IEEE TRANSACTIONS ON COMMUNICATIONS, VOL. 57, NO. 4, APRIL 2009

Haider Al-Lawati and Fady Alajaji, Senior Member, IEEE

Abstract: We study the decoding problem when a binary linear perfect or quasi-perfect code is transmitted over a binary channel with additive Markov noise. After examining the properties of the channel block transition distribution, we derive sufficient conditions under which strict maximum-likelihood decoding is equivalent to strict minimum Hamming distance decoding when the code is perfect. Additionally, we show a near-equivalence relationship between strict maximum-likelihood and strict minimum distance decoding for quasi-perfect codes for a range of channel parameters and the code's minimum distance. As a result, an improved (complete) minimum distance decoder is proposed and simulations illustrating its benefits are provided.

Index Terms: Binary channels with memory, Markov noise, maximum-likelihood decoding, minimum Hamming distance decoding, linear block codes, perfect and quasi-perfect codes.

(Paper approved by M. Skoglund, the Editor for Source/Channel Coding of the IEEE Communications Society. Manuscript received August 24, 2007; revised March 31. This work was supported in part by the Natural Sciences and Engineering Research Council (NSERC) of Canada and the Premier's Research Excellence Award (PREA) of Ontario. Parts of this paper were presented at the Tenth Canadian Workshop on Information Theory, Edmonton, Alberta, June 2007. The authors are with the Department of Mathematics and Statistics, Queen's University, Kingston, Ontario K7L 3N6, Canada. E-mail: {haider, fady}@mast.queensu.ca.)

I. INTRODUCTION

CONVENTIONAL communication systems employ coding schemes that are designed for memoryless channels. However, since most real-world channels have memory, interleaving is used in an attempt to spread the channel noise in a uniform fashion over the set of received words so that the channel appears memoryless to the decoder. This in fact adds more complexity and delay to the system, while failing to exploit the benefits of the channel memory. Progress has been achieved on the statistical and information-theoretic modeling of channels with memory (e.g., see [2], [7], [11], [12]), as well as on the design of effective iterative decoders for such channels (e.g., see [3], [4], [8], [9]). However, little is known about the structure of optimal maximum-likelihood (ML) decoders for such channels. We herein focus on one of the simplest models for a channel with memory, the binary channel with additive Markov noise, and analyze the performance of binary perfect and quasi-perfect codes, which can be useful for complexity- and delay-constrained applications such as wireless sensor networks. Since it is well known that ML decoding of binary codes over the memoryless binary symmetric channel (with bit error rate less than 1/2) is equivalent to minimum Hamming distance decoding, it is natural to investigate whether a relation exists between these two decoding methods for the Markov noise channel. For such a channel (with memory), one would expect that ML decoding is not equivalent to minimum distance decoding for general codes; however, for certain codes with good (coset) properties (such as perfect and quasi-perfect codes), some equivalence may be established. Indeed, we provide a partial answer to this problem by showing (after elucidating some properties of the Markov channel distribution) that the strict ML decoding of a binary linear perfect code can be equivalent to its strict minimum distance decoding, while the strict ML decoding of a quasi-perfect code can be nearly equivalent to its strict minimum distance decoding.
As a result, we propose a (complete) decoder which is an improved version of the minimum distance decoder, and we illustrate its performance via simulation results. In a related work [5], the optimality of the binary perfect Hamming codes and the near-optimality of subcodes of Hamming codes are demonstrated for the same Markov noise channel.

II. SYSTEM DEFINITION AND PROPERTIES

We consider a binary additive noise channel whose output symbol Y_k at time k is described by Y_k = X_k ⊕ Z_k, k = 1, 2, ..., where ⊕ denotes addition modulo 2, X_k ∈ {0,1} is the kth input symbol and Z_k ∈ {0,1} is the kth noise symbol. We assume that the input and noise processes are independent of each other, and that the noise process {Z_k} is a stationary (first-order) Markov source with transition probability matrix given by

Q = [Q_ij] = [ ε + (1−ε)(1−p)   (1−ε)p
               (1−ε)(1−p)       ε + (1−ε)p ],    (1)

where Q_ij ≜ Pr(Z_k = j | Z_{k−1} = i), i, j ∈ {0,1}. Here p = Pr(Z_k = 1) is the channel bit error rate (CBER), and ε ≜ Cov(Z_k, Z_{k−1}) / Var(Z_k) = [Pr(Z_k = 1, Z_{k−1} = 1) − p^2] / [p(1−p)] is the correlation coefficient of the noise process, where Cov(Z_k, Z_{k−1}) ≜ E[Z_k Z_{k−1}] − E[Z_k]E[Z_{k−1}] is the covariance of Z_k and Z_{k−1}, and Var(Z_k) ≜ E[Z_k^2] − E[Z_k]^2 is the variance of Z_k. We assume that 0 < p < 1/2 and that 0 ≤ ε < 1, ensuring that the noise process is irreducible. When ε = 0, the noise process becomes independent and identically distributed (i.i.d.) and the resulting channel reduces to the (memoryless) binary symmetric channel with crossover probability or CBER p (which we denote by BSC(p)). Note that this (memory-one) Markov noise channel is a special case of the Gilbert-Elliott channel [7] (realized when the probability of causing an error equals zero in the good state and one in the bad state).
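As a concrete illustration of the channel model (the following sketch is illustrative Python code, not part of the paper; function names are ours), one can build the transition matrix Q of (1) from the pair (p, ε) and generate a stationary noise sequence; its empirical bit error rate and lag-one correlation should be close to p and ε.

```python
import numpy as np

def markov_noise(n, p, eps, rng=np.random.default_rng(0)):
    """Generate n samples of the stationary first-order Markov noise {Z_k}.

    p   : channel bit error rate, Pr(Z_k = 1), with 0 < p < 1/2
    eps : noise correlation coefficient, 0 <= eps < 1
    """
    # Transition matrix Q[i, j] = Pr(Z_k = j | Z_{k-1} = i), as in (1).
    Q = np.array([[eps + (1 - eps) * (1 - p), (1 - eps) * p],
                  [(1 - eps) * (1 - p),       eps + (1 - eps) * p]])
    z = np.empty(n, dtype=int)
    z[0] = rng.random() < p                       # Z_1 ~ Bernoulli(p): stationary start
    for k in range(1, n):
        z[k] = rng.random() < Q[z[k - 1], 1]      # next noise bit given the previous one
    return z

z = markov_noise(200_000, p=0.05, eps=0.3)
print("empirical CBER:", z.mean())                          # close to 0.05
print("empirical eps :", np.corrcoef(z[:-1], z[1:])[0, 1])   # close to 0.3
```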

For x^n = (x_1, ..., x_n) ∈ {0,1}^n and y^n = (y_1, ..., y_n) ∈ {0,1}^n, the channel block transition probability Pr(Y^n = y^n | X^n = x^n) can be expressed in terms of the channel noise block distribution as follows:

Pr(Y^n = y^n | X^n = x^n) = Pr(Z^n = z^n) = L ∏_{k=2}^{n} [z_{k−1} ε + (1−ε)p]^{z_k} [(1−z_{k−1}) ε + (1−ε)(1−p)]^{1−z_k},

where z_k = x_k ⊕ y_k, k = 1, ..., n, and L ≜ Pr(Z_1 = z_1) = p^{z_1} (1−p)^{1−z_1}. Given z^n = (z_1, ..., z_n) ∈ {0,1}^n, let t_ij(z^n) denote the number of times two consecutive bits in z^n are equal to (i, j), where i, j ∈ {0,1}; i.e.,

t_00(z^n) = Σ_{k=1}^{n−1} (1−z_k)(1−z_{k+1}),    t_11(z^n) = Σ_{k=1}^{n−1} z_k z_{k+1},
t_10(z^n) = Σ_{k=1}^{n−1} z_k (1−z_{k+1}),       t_01(z^n) = Σ_{k=1}^{n−1} (1−z_k) z_{k+1}.

In terms of the t_ij(z^n)'s, Pr(Z^n = z^n) can be written as

Pr(Z^n = z^n) = L [ε + (1−ε)(1−p)]^{t_00} [(1−ε)p]^{t_01} [(1−ε)(1−p)]^{t_10} [ε + (1−ε)p]^{t_11}.    (2)

But from the definition of the t_ij(z^n)'s, we have the following:

t_10(z^n) = n − 1 − w(z^n) − t_00(z^n) + z_1,    (3)
t_01(z^n) = w(z^n) − z_1 − t_11(z^n),            (4)

where w(z^n) = Σ_{k=1}^{n} z_k is the Hamming weight of z^n. Substituting (3) and (4) into (2) yields the following expression for the noise block distribution, which will be instrumental in our analysis:

Pr(Z^n = z^n) = (1−ε)^{n−1} (1−p)^n (p/(1−p))^{w(z^n)} [ (ε + (1−ε)(1−p)) / ((1−ε)(1−p)) ]^{t_00(z^n)} [ (ε + (1−ε)p) / ((1−ε)p) ]^{t_11(z^n)}.    (5)

The properties of t_00(z^n) and t_11(z^n) in terms of only n and w(z^n) are as follows.
1) If w(z^n) = 0, then t_00(z^n) = n − 1 and t_11(z^n) = 0.
2) If 0 < w(z^n) = l ≤ n − 1, then t_00(z^n) ≤ n − l − 1 with equality if and only if all the 0's in z^n occur consecutively. Also t_11(z^n) ≤ l − 1 with equality if and only if all the 1's in z^n occur consecutively.
3) If 0 < w(z^n) = l ≤ n/2, then t_00(z^n) ≥ max{n − 2l − 1, 0} and t_11(z^n) ≥ 0.
4) If n/2 < w(z^n) = l ≤ n − 1, then t_00(z^n) ≥ 0 and t_11(z^n) ≥ 2l − n − 1.
5) If w(z^n) = n, then t_11(z^n) = n − 1 and t_00(z^n) = 0.

When there is no possibility of confusion, we will write t_00(z^n) and t_11(z^n) as t_00 and t_11, respectively. We also assume throughout that the blocklength n ≥ 2.
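The identity between (2) and (5) can be checked numerically. The sketch below (illustrative code, not from the paper; helper names are ours) counts the adjacent-pair statistics t_ij(z^n) and evaluates Pr(Z^n = z^n) both ways; the two values agree, up to floating-point rounding, for any noise word z^n.

```python
import numpy as np

def pair_counts(z):
    """Return (t00, t01, t10, t11): counts of adjacent pairs (z_k, z_{k+1})."""
    z = np.asarray(z)
    a, b = z[:-1], z[1:]
    return (int(((1 - a) * (1 - b)).sum()), int(((1 - a) * b).sum()),
            int((a * (1 - b)).sum()),       int((a * b).sum()))

def prob_via_2(z, p, eps):
    """Noise block probability using all four pair counts, as in (2)."""
    t00, t01, t10, t11 = pair_counts(z)
    L = p if z[0] == 1 else 1 - p
    return (L * (eps + (1 - eps) * (1 - p)) ** t00 * ((1 - eps) * p) ** t01
              * ((1 - eps) * (1 - p)) ** t10 * (eps + (1 - eps) * p) ** t11)

def prob_via_5(z, p, eps):
    """Noise block probability using only w(z^n), t00 and t11, as in (5)."""
    n, w = len(z), int(np.sum(z))
    t00, _, _, t11 = pair_counts(z)
    A = (eps + (1 - eps) * (1 - p)) / ((1 - eps) * (1 - p))
    B = (eps + (1 - eps) * p) / ((1 - eps) * p)
    return (1 - eps) ** (n - 1) * (1 - p) ** n * (p / (1 - p)) ** w * A ** t00 * B ** t11

rng = np.random.default_rng(1)
z = rng.integers(0, 2, size=12)
# Both expressions give the same probability (up to rounding).
print(prob_via_2(z, p=0.1, eps=0.25), prob_via_5(z, p=0.1, eps=0.25))
```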
III. ANALYSIS OF THE NOISE BLOCK DISTRIBUTION

Lemma 1: Let 0^n be the all-zero word (of length n) and let z^n ≠ 0^n be any non-zero binary word. Then Pr(Z^n = z^n) < Pr(Z^n = 0^n).

Proof: Using (2), we have

Pr(Z^n = z^n) = L [ε + (1−ε)(1−p)]^{t_00} [(1−ε)p]^{t_01} [(1−ε)(1−p)]^{t_10} [ε + (1−ε)p]^{t_11}
             < (1−p) [ε + (1−ε)(1−p)]^{t_00} [ε + (1−ε)(1−p)]^{t_01} [ε + (1−ε)(1−p)]^{t_10} [ε + (1−ε)(1−p)]^{t_11}
             = (1−p) [ε + (1−ε)(1−p)]^{t_00 + t_01 + t_10 + t_11}
             = (1−p) [ε + (1−ε)(1−p)]^{n−1}
             = Pr(Z^n = 0^n),

where the strict inequality holds since L = p < 1−p if z_1 = 1, and since (1−ε)p < ε + (1−ε)(1−p) with t_01 > 0 (since z^n ≠ 0^n) if z_1 = 0.

Lemma 2: Let z_1^n ≠ 0^n be a non-zero noise word with Hamming weight w(z_1^n) < n, t_00 = n − w(z_1^n) − 1 and t_11 = w(z_1^n) − 1 (i.e., z_1^n is of the form (0···0 1···1) or (1···1 0···0)). Let z_2^n be another non-zero noise word with w(z_2^n) = w(z_1^n) but with different t_00 and/or t_11. Then, if ε > 0, Pr(Z^n = z_1^n) > Pr(Z^n = z_2^n).

Proof: From (5), we note that Pr(Z^n = z^n) strictly increases with both t_00 and t_11 when the weight is kept constant and ε > 0. Since z_1^n has maximum values for both t_00 and t_11 amongst all noise words of weight w(z_1^n) (but with different t_00 and/or t_11), the strict inequality above follows. Note that when ε = 0, obviously all noise words with the same weight have identical distributions (since the channel reduces to the BSC(p)).

Lemma 3: Suppose that

u < ū ≜ ln((1−p)/p) / ln( [(ε + (1−ε)(1−p)) / ((1−ε)(1−p))] · [(ε + (1−ε)p) / ((1−ε)p)] ) − 1

and 0 < ε < (1−2p)/(2(1−p)). Let z^n be a noise word of weight w(z^n) = m such that 0 ≤ m ≤ u + 1 ≤ n/2. Then Pr(Z^n = z^n) > Pr(Z^n = z̃^n), where z̃^n is any noise word with weight w(z̃^n) = l > m.

Proof: First, note that the result directly holds if m = 0 by Lemma 1. Now let z^n be a noise word of nonzero weight m ≤ u + 1, and let z̃^n be another noise word with w(z̃^n) > m.

Case 1: Assume that w(z̃^n) = m + i, where i ∈ {1, 2, ..., n − m − 1}. Then by (5), we have

Pr(Z^n = z̃^n) / Pr(Z^n = z^n)
  ≤ [ (ε + (1−ε)(1−p)) / ((1−ε)(1−p)) ]^{m} [ (ε + (1−ε)p) / ((1−ε)p) ]^{m+i−1} (p/(1−p))^{i}
  ≤ [ (ε + (1−ε)(1−p)) / ((1−ε)(1−p)) ]^{m} [ (ε + (1−ε)p) / ((1−ε)p) ]^{m} (p/(1−p)) ≜ f(m).

The first inequality follows from (5) and by applying the bounds on t_00 and t_11 described at the end of the previous section, while the second inequality follows by noting that the right-hand side of the first inequality decreases in i for a fixed m. Since f(m) is strictly increasing in m (when ε > 0), and m ≤ u + 1 < ū + 1, we obtain that f(m) < f(ū + 1) = 1, and hence Pr(Z^n = z̃^n) / Pr(Z^n = z^n) < 1.

Case 2: Assume that w(z̃^n) = n. Let ẑ^n be another noise word with w(ẑ^n) = n − 1, t_11(ẑ^n) = n − 2 and t_00(ẑ^n) = 0. Then

Pr(Z^n = z̃^n) / Pr(Z^n = z^n) = [ Pr(Z^n = ẑ^n) / Pr(Z^n = z^n) ] · [ Pr(Z^n = z̃^n) / Pr(Z^n = ẑ^n) ]
  < Pr(Z^n = z̃^n) / Pr(Z^n = ẑ^n)
  = [ (ε + (1−ε)p) / ((1−ε)p) ] (p/(1−p))
  = (ε + (1−ε)p) / ((1−ε)(1−p)) < 1,

where the first strict inequality holds since Pr(Z^n = ẑ^n) < Pr(Z^n = z^n) by Case 1, and the last strict inequality holds since ε < (1−2p)/(2(1−p)).

Remark: Note that the weight limit ū in the above lemma is independent of the codeword length.

IV. DECODING PERFECT AND QUASI-PERFECT CODES

We next study the relationship between strict maximum-likelihood (SML) decoding and strict minimum (Hamming) distance decoding for binary linear perfect and quasi-perfect codes sent over the additive Markov noise channel. SML decoding is an (incomplete) optimal decoder, where optimality is in the sense of minimizing the probability of codeword error (PCE) when the codewords are equally likely (which we herein assume). Let F_2^n = {0,1}^n denote the set of all binary words of length n. A non-empty subset C of F_2^n is called a binary linear code if it is a subgroup of F_2^n. The elements of C are called codewords. We usually describe C with the triplet (n, M, d) to indicate that n is the blocklength of its codewords, M is its size and d is its minimum Hamming distance; in other words, d ≜ min_{c_1, c_2 ∈ C: c_1 ≠ c_2} d(c_1, c_2), where d(c_1, c_2) = w(c_1 ⊕ c_2) is the Hamming distance between c_1 and c_2 and the modulo-2 operation ⊕ is applied component-wise on c_1 and c_2.

Definition 1: [10], [6] An (n, M, d) linear code C is said to be a perfect code if, for some non-negative integer t, it has all patterns (i.e., elements of {0,1}^n) of Hamming weight t or less, and no others, as coset leaders.

Definition 2: [10], [6] An (n, M, d) binary linear code C is said to be quasi-perfect if, for some non-negative integer t, it has all patterns of weight t or less, some of weight t + 1, and none of greater weight as coset leaders.

An equivalent definition of quasi-perfectness is that, for some non-negative integer t, C has a packing radius equal to t and a covering radius equal to t + 1; i.e., the spheres with (Hamming) radius t around the codewords of C are disjoint, and the spheres with radius t + 1 around the codewords cover {0,1}^n. On the other hand, perfectness means that both packing and covering radii are equal. For these two classes of codes, t = ⌊(d−1)/2⌋ (with d = 2t + 1 for perfect codes and d = 2t + 1 or d = 2t + 2 for quasi-perfect codes). The (2^m − 1, 2^{2^m − 1 − m}, 3) Hamming codes (m ≥ 2), the (n, 2, n) repetition codes with n odd and the (23, 2^{12}, 7) Golay code are the only members of the family of binary perfect linear codes. Examples of quasi-perfect binary linear codes include the (n, 2, n) repetition codes with n even, the (2^m, 2^{2^m − 1 − m}, 4) extended Hamming codes as well as the (2^m − 2, 2^{2^m − 2 − m}, 3) shortened Hamming codes (m ≥ 2), the (2^m − 1, 2^{2^m − 1 − 2m}, 5) double-error-correcting BCH codes (m ≥ 3), and the (24, 2^{12}, 8) extended Golay code.
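As a small illustration of Definition 1 (the code below is an illustrative sketch, not from the paper), one can enumerate the cosets of the (7, 2^4, 3) Hamming code and verify that the coset leaders are exactly the patterns of weight at most t = 1, each unique within its coset.

```python
import itertools
import numpy as np

# Parity-check matrix of the (7, 2^4, 3) Hamming code: columns are 1..7 in binary.
H = np.array([[int(b) for b in format(j, "03b")] for j in range(1, 8)]).T  # shape 3 x 7

cosets = {}
for v in itertools.product([0, 1], repeat=7):
    v = np.array(v)
    s = tuple(H.dot(v) % 2)                 # the syndrome identifies the coset
    cosets.setdefault(s, []).append(v)

for s, coset in cosets.items():
    wmin = min(int(v.sum()) for v in coset)
    n_min = sum(int(v.sum()) == wmin for v in coset)
    assert wmin <= 1 and n_min == 1         # unique leader of weight <= t = 1

print(len(cosets), "cosets, each with a unique leader of weight at most 1")  # 8 cosets
```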
Perfect codes as well as quasi-perfect codes are not powerful error-correcting codes due to their small Hamming distances. However, they can be useful in complexity- and delay-constrained applications where codes with short blocklengths are needed. Suppose that a codeword of a quasi-perfect code C is transmitted over the Markov noise channel and that y^n is received at the decoder. The following are possible decoding rules one can use to recover the transmitted codeword.

ML Decoding: y^n is decoded into codeword c_0 ∈ C if Pr(Y^n = y^n | X^n = c_0) ≥ Pr(Y^n = y^n | X^n = c) for all c ∈ C. If there is more than one codeword for which the above condition holds, then the decoder picks one of such codewords at random.

Strict ML (SML) Decoding: It is identical to the ML rule with the exception of replacing the inequality with a strict inequality; if no codeword c_0 satisfies the strict inequality, the decoder declares a decoding failure.

Minimum Distance (MD) Decoding: y^n is decoded into codeword c_0 ∈ C if w(c_0 ⊕ y^n) ≤ w(c ⊕ y^n) for all c ∈ C. If there is more than one codeword for which the above condition holds, then the decoder picks one of such codewords at random.

Strict Minimum Distance (SMD) Decoding: It is identical to the MD rule with the exception of replacing the inequality with a strict inequality; if no codeword c_0 satisfies the strict inequality, the decoder declares a decoding failure. Recall that the ML and MD decoders are complete decoders, i.e., they always select a codeword to decode the received word, while the SML and SMD decoders are incomplete decoders as they declare a decoding failure when there is more than one codeword with minimal decoding metric.
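A brute-force sketch of the MD and ML rules above is given next (illustrative Python with a toy codebook; ties are resolved here by list order rather than at random). The ML metric is the probability of the implied noise word c ⊕ y^n under the Markov model.

```python
import numpy as np

def noise_prob(z, p, eps):
    """Pr(Z^n = z^n) for the first-order Markov noise, computed term by term."""
    pr = p if z[0] else 1 - p
    for prev, cur in zip(z[:-1], z[1:]):
        p1 = eps * prev + (1 - eps) * p          # Pr(Z_k = 1 | Z_{k-1} = prev)
        pr *= p1 if cur else 1 - p1
    return pr

def md_decode(y, codebook):
    """Minimum (Hamming) distance decoding; ties resolved by list order here."""
    return min(codebook, key=lambda c: int(np.sum(np.bitwise_xor(c, y))))

def ml_decode(y, codebook, p, eps):
    """Maximum-likelihood decoding over the Markov noise channel."""
    return max(codebook, key=lambda c: noise_prob(np.bitwise_xor(c, y), p, eps))

# Toy example: the (3, 2, 3) repetition code; both rules decode y into (1, 1, 1).
codebook = [np.array([0, 0, 0]), np.array([1, 1, 1])]
y = np.array([1, 0, 1])
print(md_decode(y, codebook), ml_decode(y, codebook, p=0.1, eps=0.3))
```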

Lemma 4: Let C be an (n, M, d) perfect code to be used over the Markov noise channel. Assume that

(d − 1)/2 < ln((1−p)/p) / ln( [(ε + (1−ε)(1−p)) / ((1−ε)(1−p))] · [(ε + (1−ε)p) / ((1−ε)p)] )

and 0 < ε < (1−2p)/(2(1−p)). Then SMD and SML decoding are equivalent.

Proof: First note that for perfect codes, the element of minimum weight within each coset (i.e., the coset leader) is unique. Also notice that the coset leader is of weight less than or equal to (d−1)/2 ≤ n/2. Assume that y^n is received; then there exists a unique ĉ ∈ C such that w(ĉ ⊕ y^n) < w(c ⊕ y^n) for all c ∈ C\{ĉ}. Using Lemma 3 with u = (d−1)/2 − 1, we conclude that for all c ∈ C\{ĉ},

Pr(Z^n = ĉ ⊕ y^n) > Pr(Z^n = c ⊕ y^n), i.e., Pr(Y^n = y^n | X^n = ĉ) > Pr(Y^n = y^n | X^n = c).

Hence, given a received word y^n, the codeword with the smallest Hamming distance to y^n will be the most likely codeword that was sent over the channel amongst all the codewords in C. Therefore, SMD and SML decoding are equivalent.

Observations: The above lemma also proves that for perfect codes MD and ML decoding are equivalent under the same assumptions on d, ε and p. This is because for such codes SMD and MD are the same due to the uniqueness of their coset leaders, which results in no ties in the MD decoder. Similarly, the uniqueness of coset leaders coupled with the proof of the above lemma also implies that SML and ML are equivalent for perfect codes under the range of channel parameters given in the lemma. In a related work [5], Hamada showed that for the Markov channel with a non-negative noise correlation coefficient (i.e., ε ≥ 0) and bit error rate p < 1/2, the binary perfect Hamming codes (of minimum distance 3) are optimal (under ML decoding) in the sense of minimizing the probability of decoding error amongst all codes having the same blocklength and rate, provided that ε < (1−2p)/(2(1−p)). Thus, in light of the above lemma, for a communication system employing codes with short blocklength due to delay constraints, Hamming codes used with MD decoding will be optimal over the Markov noise channel amongst all codes of the same blocklength and rate.

Lemma 5: Let C be an (n, M, d) binary linear quasi-perfect code to be used over the Markov noise channel. Assume that

(d − 1)/2 < ln((1−p)/p) / ln( [(ε + (1−ε)(1−p)) / ((1−ε)(1−p))] · [(ε + (1−ε)p) / ((1−ε)p)] ) − 1

and 0 < ε < (1−2p)/(2(1−p)). Then, for a given word y^n received at the channel output, the following hold.

(a) If there exists ĉ ∈ C such that w(ĉ ⊕ y^n) < w(c ⊕ y^n) for all c ∈ C\{ĉ}, then Pr(Y^n = y^n | X^n = ĉ) > Pr(Y^n = y^n | X^n = c) for all c ∈ C\{ĉ}.

(b) If there exists ĉ ∈ C such that Pr(Y^n = y^n | X^n = ĉ) > Pr(Y^n = y^n | X^n = c) for all c ∈ C\{ĉ}, then w(ĉ ⊕ y^n) ≤ w(c ⊕ y^n) for all c ∈ C.

Proof: (a) Let ĉ ∈ C be such that w(ĉ ⊕ y^n) < w(c ⊕ y^n) for all c ∈ C\{ĉ}. Obviously, ĉ ⊕ y^n is a coset leader, thus w(ĉ ⊕ y^n) ≤ t + 1 ≤ n/2 since C is quasi-perfect. By Lemma 3,

Pr(Z^n = ĉ ⊕ y^n) > Pr(Z^n = c ⊕ y^n) for all c ∈ C\{ĉ}, i.e., Pr(Y^n = y^n | X^n = ĉ) > Pr(Y^n = y^n | X^n = c) for all c ∈ C\{ĉ}.

(b) Let ĉ ∈ C be such that Pr(Y^n = y^n | X^n = ĉ) > Pr(Y^n = y^n | X^n = c) for all c ∈ C\{ĉ}. Assume that there exists c̃ ∈ C\{ĉ} such that w(c̃ ⊕ y^n) < w(ĉ ⊕ y^n); such a c̃ can always be chosen so that c̃ ⊕ y^n is the coset leader of the coset C ⊕ y^n. Thus, we can assume that w(c̃ ⊕ y^n) ≤ t + 1 ≤ n/2, since the coset leader has weight at most t + 1 (as C is quasi-perfect). Then by Lemma 3,

Pr(Z^n = ĉ ⊕ y^n) < Pr(Z^n = c̃ ⊕ y^n), i.e., Pr(Y^n = y^n | X^n = ĉ) < Pr(Y^n = y^n | X^n = c̃),

which contradicts our assumption that ĉ maximizes Pr(y^n | c) over all codewords. Hence, w(ĉ ⊕ y^n) ≤ w(c ⊕ y^n) for all c ∈ C.

Note that the above lemma implies that if a quasi-perfect code has no decoding failures in its SMD decoder, then its SMD and SML decoders are equivalent under the stated conditions on the Markov channel parameters (p, ε) and the code's minimum distance. (In contrast, recall that for the BSC(p) with p < 1/2, SML and SMD decoding are equivalent for all binary codes; the same equivalence also holds between ML and MD decoding. Note also that when ε → 0, the conditions in the above lemma reduce to (d−1)/2 < ∞ and p < 1/2, which is consistent with what was just mentioned.) Inspired by the above result, Lemma 2 and (5), we next propose the following complete decoder that improves over MD decoding.
It includes SMD decoding and exploits the knowledge of t_00 and t_11 to resolve ties (which occur when there is more than one codeword closest to the received word).

MD+ Decoding: Assume that y^n is received at the channel output. The decoder outputs the codeword c̃ satisfying the MD decoding condition. If there is more than one such codeword, then the decoder chooses the c̃ that maximizes t_00(c̃ ⊕ y^n) + t_11(c̃ ⊕ y^n). If there is still a tie, then the decoder chooses the c̃ that maximizes t_11(c̃ ⊕ y^n). Finally, if there is still a tie, then the codeword c̃ is picked at random. (Clearly, MD+ and MD decoding are equivalent for the BSC, since for this channel it does not matter which codeword the decoder selects when there is a tie, as long as it is one of the codewords closest to the received word.)
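The MD+ rule can be sketched as follows (illustrative code; the helper names and the toy codebook are ours, not the paper's). Among the codewords closest to y^n, it prefers the error pattern with the largest t_00 + t_11, then the largest t_11, and only then picks at random.

```python
import numpy as np

def t00_t11(z):
    """Adjacent-pair counts t_00(z^n) and t_11(z^n)."""
    a, b = np.asarray(z)[:-1], np.asarray(z)[1:]
    return int(((1 - a) * (1 - b)).sum()), int((a * b).sum())

def md_plus_decode(y, codebook, rng=np.random.default_rng()):
    """MD+ decoding: minimum Hamming distance, ties broken via t_00 and t_11."""
    dists = [int(np.sum(np.bitwise_xor(c, y))) for c in codebook]
    closest = [c for c, d in zip(codebook, dists) if d == min(dists)]
    if len(closest) == 1:
        return closest[0]
    keys = []
    for c in closest:
        t00, t11 = t00_t11(np.bitwise_xor(c, y))
        keys.append((t00 + t11, t11))           # first maximize t00+t11, then t11
    best = max(keys)
    finalists = [c for c, k in zip(closest, keys) if k == best]
    return finalists[rng.integers(len(finalists))]  # remaining ties: pick at random

# Toy usage with the (6, 2, 6) repetition code: both codewords are at distance 3
# from y, but the error pattern (0,1,1,1,0,0) has larger t_11 than (1,0,0,0,1,1),
# so the all-zero codeword is returned.
codebook = [np.zeros(6, dtype=int), np.ones(6, dtype=int)]
print(md_plus_decode(np.array([0, 1, 1, 1, 0, 0]), codebook))
```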

V. SIMULATION RESULTS AND DISCUSSION

Given an (n, M, d) perfect (respectively, quasi-perfect) code and a fixed CBER p, we let ε_{t−1} (respectively, ε_t) be the largest ε for which Lemma 4 (respectively, Lemma 5) holds, where t ≜ ⌊(d−1)/2⌋. In Table I, we provide the values of ε_t for t = 0, 1, 2, 3 and different values of p.

TABLE I. Values of ε_t for different p and t (columns ε_0 through ε_3). Lemma 4 holds for all ε ≤ ε_{t−1} and Lemma 5 holds for all ε ≤ ε_t. (Entries not reproduced here.)

Fig. 1. PCE vs. CBER p under MD and ML decoding for the Hamming (15, 2^{11}, 3) code over the Markov channel, for two values of the noise correlation ε: a smaller value satisfying Lemma 4, and ε = 0.5.

Fig. 2. PCE vs. CBER p under MD, MD+ and ML decoding for the extended Hamming (8, 2^4, 4) code over the Markov channel with noise correlation ε = 0.25.

Fig. 3. PCE vs. CBER p under MD, MD+ and ML decoding for the BCH (15, 2^7, 5) code over the Markov channel with noise correlation ε = 0.05.

We first examine the perfect (15, 2^{11}, 3) Hamming code under different channel conditions, and show that indeed MD decoding and ML decoding are equivalent for the channel conditions specified by Lemma 4, as illustrated in Table I. A long sequence from a uniformly distributed binary i.i.d. source was generated, encoded via one of these codes and sent over the Markov channel. Typical results are shown in Fig. 1 for two values of ε: a smaller value that satisfies the conditions of Lemma 4, and ε = 0.5, which does not. The simulation results show that MD and ML are identical in the former case and almost identical at ε = 0.5.

We next present simulation results for decoding two quasi-perfect codes, the binary (8, 2^4, 4) extended Hamming code and the (15, 2^7, 5) BCH code. For the Hamming code, t = 1; thus the values of ε_1 in Table I provide the largest values of ε for which Lemma 5 holds for different CBERs p. As a result, we simulated the Hamming system for the 5 values of p listed in Table I and four values of ε between 0.05 and 0.25. Similarly, since t = 2 for the BCH code, the values of ε_2 apply, and the BCH system was simulated for ε = 0.05 and all but one of the values of p in Table I. A typical Hamming code simulation result is presented in Fig. 2 for ε = 0.25, and the BCH code simulation is shown in Fig. 3 for ε = 0.05. The results indicate that MD+ performs nearly identically to ML decoding and provides significant gain over MD decoding. By comparing the two figures, we also note that the performance gap between MD and ML decoding decreases with ε (which is consistent with the fact that MD and ML decoding are equivalent when ε = 0). Additional results are available in [1].
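In principle, quantities of the kind reported in Table I can be obtained by solving the lemma conditions for ε. The sketch below is illustrative only (it relies on the conditions of Lemmas 3-5 as stated above, and its printed CBER values are arbitrary, so its outputs should not be read as the published Table I entries): it finds, by bisection, the largest ε satisfying t < ln((1−p)/p)/ln(A·B) − 1, capped by ε < (1−2p)/(2(1−p)).

```python
import numpy as np

def lemma3_margin(u, p, eps):
    """Positive iff u < ln((1-p)/p) / ln(A*B) - 1, the weight-limit condition of Lemma 3."""
    A = (eps + (1 - eps) * (1 - p)) / ((1 - eps) * (1 - p))
    B = (eps + (1 - eps) * p) / ((1 - eps) * p)
    return np.log((1 - p) / p) / np.log(A * B) - 1 - u

def eps_t(t, p, iters=80):
    """Largest eps (up to the cap (1-2p)/(2(1-p))) for which the weight limit u = t is admissible."""
    cap = (1 - 2 * p) / (2 * (1 - p))
    lo, hi = 1e-12, cap
    if lemma3_margin(t, p, hi) > 0:         # the condition is still satisfied at the cap
        return cap
    for _ in range(iters):                  # bisection: the margin is decreasing in eps
        mid = (lo + hi) / 2
        if lemma3_margin(t, p, mid) > 0:
            lo = mid
        else:
            hi = mid
    return lo

for p in (0.001, 0.01, 0.05):               # illustrative CBERs, not Table I's values
    print(p, [round(eps_t(t, p), 4) for t in (0, 1, 2, 3)])
```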

As this work is a basic first step towards understanding the structure of ML decoders for channels with memory, there are several directions for future work. For example, one limitation of Lemmas 4 and 5 is that their conditions are too stringent to accommodate codes with larger minimum distance, unless the channel correlation ε is substantially decreased towards 0, thus rendering the Markov channel nearly memoryless (as ε decreases towards 0, the channel noise becomes less and less bursty, behaving more and more like a memoryless process); see, e.g., how ε_t decreases as t increases in Table I. The determination of less stringent conditions is an interesting topic for future work. Another possible future direction is to design a decoder that exploits the memory between blocks by using estimates of the previous noise samples. This can result in an improved performance over the block-by-block ML and MD+ methods (studied here) at a cost of increased complexity. Finally, extending this work to channels with Mth-order Markovian noise [12] or hidden Markovian noise [7], which are good models for correlated fading channels, may be a worthwhile endeavor.

REFERENCES

[1] H. Al-Lawati, "Performance Analysis of Linear Block Codes Over the Queue-Based Channel," M.Sc. Thesis, Mathematics and Engineering, Department of Mathematics and Statistics, Queen's University, Kingston, Ontario K7L 3N6, Canada, Aug. 2007. Available [Online] at http://…/web/publications.html/
[2] F. Alajaji and T. Fuja, "A communication channel modeled on contagion," IEEE Trans. Inform. Theory, vol. 40, Nov. 1994.
[3] A. Eckford, F. Kschischang and S. Pasupathy, "Analysis of low-density parity-check codes for the Gilbert-Elliott channel," IEEE Trans. Inform. Theory, vol. 51, Nov. 2005.
[4] J. Garcia-Frias, "Decoding of low-density parity-check codes over finite-state binary Markov channels," IEEE Trans. Commun., vol. 52, Nov. 2004.
[5] M. Hamada, "Near-optimality of subcodes of Hamming codes on the two-state Markovian additive channel," IEICE Trans. Fundamentals of Electronics, Commun. and Comp. Sciences, vol. E84-A, Oct. 2001.
[6] F. J. MacWilliams and N. J. A. Sloane, The Theory of Error-Correcting Codes, North-Holland, Amsterdam, 1977.
[7] M. Mushkin and I. Bar-David, "Capacity and coding for the Gilbert-Elliott channel," IEEE Trans. Inform. Theory, vol. 35, Nov. 1989.
[8] V. Nagarajan and O. Milenkovic, "Performance analysis of structured LDPC over the Polya-urn channel with finite memory," in Proc. CCECE, May 2004.
[9] C. Nicola, F. Alajaji, and T. Linder, "Decoding LDPC codes over binary channels with additive Markov noise," in Proc. CWIT '05, Montreal, June 2005.
[10] W. W. Peterson and E. J. Weldon Jr., Error-Correcting Codes, 2nd Ed., MIT Press, 1972.
[11] C. Pimentel and I. F. Blake, "Modeling burst channels using partitioned Fritchman's Markov models," IEEE Trans. Veh. Technol., Aug. 1998.
[12] L. Zhong, F. Alajaji and G. Takahara, "A model for correlated Rician fading channels based on a finite queue," IEEE Trans. Veh. Technol., vol. 57, no. 1, Jan. 2008.
