How to Find Good Finite-Length Codes: From Art Towards Science


Abdelaziz Amraoui, Andrea Montanari and Ruediger Urbanke

arXiv:cs.IT/0607064 v1, 3 Jul 2006

Abstract — We explain how to optimize finite-length LDPC codes for transmission over the binary erasure channel. Our approach relies on an analytic approximation of the erasure probability. This is in turn based on a finite-length scaling result to model large-scale erasures and on a union bound involving minimal stopping sets to take into account small error events. We show that the performance of optimized ensembles, as observed in simulations, is well described by our approximation. Although we only address the case of transmission over the binary erasure channel, our method should be applicable in a more general setting.

I. INTRODUCTION

In this paper we consider transmission using random elements from the standard ensemble of low-density parity-check (LDPC) codes defined by the degree distribution pair (λ, ρ). For an introduction to LDPC codes and the standard notation see [1]. In [2], one of the authors (AM) suggested that the probability of error of iterative coding systems follows a scaling law. In [3], [5] it was shown that this is indeed true for LDPC codes, assuming that transmission takes place over the binary erasure channel (BEC). Strictly speaking, scaling laws describe the asymptotic behavior of the error probability close to the threshold for increasing blocklengths. However, as observed empirically in the papers mentioned above, scaling laws provide good approximations to the error probability also away from the threshold and already for modest blocklengths. This is the starting point for our finite-length optimization. In [3], [5] the form of the scaling law for transmission over the BEC was derived, and it was shown how to compute the scaling parameters by solving a system of ordinary differential equations. This system was called covariance evolution, in analogy to density evolution. Density evolution concerns the evolution of the average number of erasures still contained in the graph during the decoding process, whereas covariance evolution concerns the evolution of its variance.
This invited paper is an enhanced version of the work presented at the 4th International Symposium on Turbo Codes and Related Topics, Munich, Germany, 2006. The work presented in this paper was supported in part by the National Competence Center in Research on Mobile Information and Communication Systems (NCCR-MICS), a center supported by the Swiss National Science Foundation. AM has been partially supported by the EU under the integrated project EVERGROW. A. Amraoui is with EPFL, School of Computer and Communication Sciences, CH-1015 Lausanne, Switzerland (abdelaziz.amraoui@epfl.ch). A. Montanari is with Ecole Normale Supérieure, Laboratoire de Physique Théorique, 75231 Paris Cedex 05, France (montanari@lpt.ens.fr). R. Urbanke is with EPFL, School of Computer and Communication Sciences, CH-1015 Lausanne, Switzerland (ruediger.urbanke@epfl.ch).

Luby et al. [6] found an explicit solution to the density evolution equations; to date, no such solution is known for the system of covariance equations. Covariance evolution must therefore be integrated numerically. Unfortunately, the dimension of the ODE system ranges from hundreds to thousands for typical examples. As a consequence, numerical integration can be quite time consuming. This is a serious problem if we want to use scaling laws in the context of optimization, where the computation of the scaling parameters must be repeated for many different ensembles during the optimization process. In this paper we make two main contributions. First, we derive explicit analytic expressions for the scaling parameters as a function of the degree distribution pair and of quantities which appear in density evolution. Second, we provide an accurate approximation of the erasure probability stemming from small stopping sets, which gives rise to the erasure floor. The paper is organized as follows. Section II describes our approximation of the error probability, the scaling law being discussed in Section II-B and the error floor in Section II-C.
We combine these results and give in Section II-D an approximation of the erasure probability curve, denoted by P(n, λ, ρ, ǫ), that can be computed efficiently for any blocklength, degree distribution pair, and channel parameter. The basic ideas behind the explicit determination of the scaling parameters, together with the resulting expressions, are collected in Section III. Finally, the most technical and tricky part of this computation is deferred to Section IV. As a motivation for some of the rather technical points to come, we start in Section I-A by showing how P(n, λ, ρ, ǫ) can be used to perform an efficient finite-length optimization.

A. Optimization

The optimization procedure takes as input a blocklength n, the BEC erasure probability ǫ, and a target probability of erasure, call it P_target. Either the bit or the block erasure probability can be considered. We want to find a degree distribution pair (λ, ρ) of maximum rate so that P(n, λ, ρ, ǫ) ≤ P_target, where P(n, λ, ρ, ǫ) is the approximation discussed in the introduction. Let us describe an efficient procedure to accomplish this optimization locally; many equivalent approaches are possible. Although providing a global optimization scheme goes beyond the scope of this paper, the local procedure was found empirically to converge often to the global optimum. It is well known [1] that the design rate r(λ, ρ) associated to a degree distribution pair (λ, ρ) is equal to

r(λ, ρ) = 1 − ∫₀¹ ρ(x) dx / ∫₀¹ λ(x) dx.   (1)
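As a small illustration of Eq. (1), the design rate can be evaluated directly from the edge-perspective coefficients, since ∫₀¹ λ(x) dx = Σ_i λ_i/i and likewise for ρ. The helper below is a minimal sketch (the function name and the dictionary representation of the degree distributions are our own, not from the paper):

```python
# Hypothetical helper: design rate r(λ, ρ) = 1 - (Σ_i ρ_i / i) / (Σ_i λ_i / i),
# where lam/rho map a degree i to the edge-perspective coefficient of x^(i-1).

def design_rate(lam, rho):
    """Design rate of the LDPC(λ, ρ) ensemble (a lower bound on the true rate)."""
    int_lam = sum(c / i for i, c in lam.items())  # integral of λ over [0, 1]
    int_rho = sum(c / i for i, c in rho.items())  # integral of ρ over [0, 1]
    return 1.0 - int_rho / int_lam

# The (3,6)-regular ensemble λ(x) = x², ρ(x) = x⁵ has design rate 1/2.
print(design_rate({3: 1.0}, {6: 1.0}))  # → 0.5
```

The same routine applies to irregular pairs, e.g. design_rate({2: 0.5, 3: 0.5}, {6: 1.0}).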

For most ensembles, the actual rate of a randomly chosen element of the ensemble LDPC(n, λ, ρ) is close to this design rate [7]. In any case, r(λ, ρ) is always a lower bound. Assume we change the degree distribution pair slightly by Δλ(x) = Σ_i Δλ_i x^{i−1} and Δρ(x) = Σ_i Δρ_i x^{i−1}, where Δλ(1) = 0 = Δρ(1), and assume that the change is sufficiently small so that λ + Δλ as well as ρ + Δρ are still valid degree distributions (non-negative coefficients). A quick calculation then shows that the design rate changes by

r(λ + Δλ, ρ + Δρ) − r(λ, ρ) ≈ ( (1 − r(λ, ρ)) Σ_i Δλ_i/i − Σ_i Δρ_i/i ) / ∫₀¹ λ(x) dx.   (2)

In the same way, the erasure probability changes, according to the approximation, by

P(n, λ + Δλ, ρ + Δρ, ǫ) − P(n, λ, ρ, ǫ) ≈ Σ_i (∂P/∂λ_i) Δλ_i + Σ_i (∂P/∂ρ_i) Δρ_i.   (3)

Equations (2) and (3) give rise to a simple linear program to optimize the degree distribution locally. Start with some initial degree distribution pair (λ, ρ). If P(n, λ, ρ, ǫ) ≤ P_target, then increase the rate by repeated application of the following linear program.

LP 1: [Linear program to increase the rate]

max{ Σ_i (∂r/∂λ_i) Δλ_i + Σ_i (∂r/∂ρ_i) Δρ_i :
Σ_i Δλ_i = 0;  −min{δ, λ_i} ≤ Δλ_i ≤ δ;
Σ_i Δρ_i = 0;  −min{δ, ρ_i} ≤ Δρ_i ≤ δ;
Σ_i (∂P/∂λ_i) Δλ_i + Σ_i (∂P/∂ρ_i) Δρ_i ≤ P_target − P(n, λ, ρ, ǫ) }.

Hereby, δ is a sufficiently small non-negative number which ensures that the degree distribution pair changes only slightly at each step, so that the changes of the rate and of the probability of erasure are accurately described by the linear approximation. The value of δ is best adapted dynamically to ensure convergence: one can start with a large value and decrease it the closer we get to the final answer. The objective function in LP 1 is equal to the total derivative of the rate as a function of the change of the degree distribution. Several rounds of this linear program will gradually improve the rate of the code ensemble, while keeping the erasure probability below the target (last inequality). Sometimes it is necessary to initialize the optimization procedure with degree distribution pairs that do not fulfill the target erasure probability constraint. This is for instance the case if the optimization is repeated for a large number of randomly chosen initial conditions.
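A single round of LP 1 can be sketched with an off-the-shelf LP solver. Here grad_r and grad_P stand for the gradients appearing in (2) and (3); their numerical values below are hypothetical placeholders (in practice they come from the analytic expressions of this paper), the slack is P_target − P(n, λ, ρ, ǫ), and, for brevity, this sketch treats the whole variable vector as one block with a single sum-to-zero constraint:

```python
import numpy as np
from scipy.optimize import linprog

# One LP 1 round, as a sketch. grad_r, grad_P, slack and delta are assumed
# inputs; the concrete numbers used at the bottom are hypothetical.
def lp_round(grad_r, grad_P, slack, delta):
    k = len(grad_r)
    c = -np.asarray(grad_r, float)               # maximize grad_r @ x
    A_eq = np.ones((1, k)); b_eq = [0.0]         # changes sum to zero
    A_ub = np.asarray(grad_P, float).reshape(1, -1)
    b_ub = [slack]                               # linearized target constraint
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=[(-delta, delta)] * k, method="highs")
    return res.x

moves = lp_round(grad_r=[0.3, -0.1, 0.2], grad_P=[1.0, 2.0, -1.0],
                 slack=0.05, delta=0.01)
print(moves)
```

Repeating such rounds, while recomputing the gradients and shrinking δ, mirrors the local procedure described above.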
In this way we can check whether the procedure always converges to the same point (thus suggesting that a global optimum was found), or otherwise pick the best outcome of many trials. To this end we define a linear program that decreases the erasure probability.

LP 2: [Linear program to decrease P(n, λ, ρ, ǫ)]

min{ Σ_i (∂P/∂λ_i) Δλ_i + Σ_i (∂P/∂ρ_i) Δρ_i :
Σ_i Δλ_i = 0;  −min{δ, λ_i} ≤ Δλ_i ≤ δ;
Σ_i Δρ_i = 0;  −min{δ, ρ_i} ≤ Δρ_i ≤ δ }.

Example 1: [Sample Optimization] Let us show a sample optimization. Assume we transmit over a BEC with channel erasure probability ǫ = 0.5. We are interested in a blocklength of n = 5000 bits, and the maximum variable and check degrees we allow are l_max = 13 and r_max = 10, respectively. We constrain the block erasure probability to be smaller than P_target = 10⁻⁴. We further count only erasures of size larger than or equal to s_min = 6 bits. This corresponds to looking at an expurgated ensemble, i.e., we are looking at the subset of codes of the ensemble that do not contain stopping sets of size smaller than 6. Alternatively, we can interpret this constraint in the sense that we use an outer code which cleans up the remaining small erasures. Using the techniques discussed in Section II-C, we can compute the probability that a randomly chosen element of the ensemble does not contain stopping sets of size smaller than 6. If this probability is not too small, then we have a good chance of finding such a code in the ensemble by sampling a sufficient number of random elements. This can be checked at the end of the optimization procedure. We start with an arbitrary degree distribution pair:

λ(x) = 0.39976 x + ⋯ + λ_13 x^{12},   (4)
ρ(x) = ρ_2 x + ⋯ + ρ_10 x^9.   (5)

This pair was generated randomly, by choosing each coefficient uniformly in [0, 1] and then normalizing so that λ(1) = ρ(1) = 1. The approximation of the block erasure probability curve of this code, as given in Section II-D, is shown in Fig. 1.

Fig. 1: Approximation of the block erasure probability for the initial ensemble with degree distribution pair given in (4) and (5).

For this initial degree distribution pair we have r(λ, ρ) = 0.229 and P_B(n = 5000, λ, ρ, ǫ = 0.5) = 0.552 > P_target. Therefore,

we start by reducing P_B(n = 5000, λ, ρ, ǫ = 0.5) over the choice of λ and ρ, using LP 2, until it becomes lower than P_target. After a number of LP 2 rounds we obtain the degree distribution pair

λ(x) = 0.93 x + ⋯ + λ_13 x^{12},   (6)
ρ(x) = ρ_2 x + 0.94 x² + ⋯ + ρ_10 x^9.   (7)

Fig. 2: Approximation of the block erasure probability for the ensemble obtained after the first part of the optimization (see (6) and (7)). The erasure probability has been lowered below the target.

For this degree distribution pair we have P_B(n = 5000, λ, ρ, ǫ = 0.5) = 0.997 · 10⁻⁴ ≤ P_target and r(λ, ρ) = 0.28. We show the corresponding approximation in Fig. 2. Now we start the second phase of the optimization and optimize the rate, while ensuring that the block erasure probability remains below the target, using LP 1. The resulting degree distribution pair is

λ(x) = 0.73996 x + ⋯ + λ_13 x^{12},   (8)
ρ(x) = 0.39753 x + ⋯ + ρ_10 x^9,   (9)

where r(λ, ρ) = 0.465. The block erasure probability plot for the result of the optimization is shown in Fig. 3.

Fig. 3: Error probability curve for the result of the optimization (see (8) and (9)). The solid curve is P_B(n = 5000, λ, ρ, ǫ = 0.5), while the small dots correspond to simulation points. The dotted curve shows the result with a more aggressive expurgation.

Each LP step takes on the order of seconds on a standard PC. In total, the optimization for a given set of parameters (n, ǫ, P_target, l_max, r_max, s_min) takes on the order of minutes. Recall that the whole optimization procedure was based on P_B(n, λ, ρ, ǫ), which is only an approximation of the true block erasure probability. In principle, the actual performance of the optimized ensemble could be worse, or better, than predicted by P_B(n, λ, ρ, ǫ). To validate the procedure, we computed the block erasure probability of the optimized degree distribution also by means of simulations and compared the two. The simulation results are shown in Fig. 3 (dots, with 95% confidence intervals): the analytic approximation and the numerical results are in almost perfect agreement!
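The random initialization used at the start of the example (each coefficient uniform in [0, 1], then normalized so that λ(1) = ρ(1) = 1) can be sketched as follows; the function name and the choice of maximum degrees are illustrative:

```python
import random

# Generate a random edge-perspective degree distribution over degrees
# 2 .. max_degree: coefficients uniform in [0, 1], then normalized to sum to 1.
def random_degree_distribution(max_degree, rng):
    coeffs = {i: rng.random() for i in range(2, max_degree + 1)}
    total = sum(coeffs.values())
    return {i: c / total for i, c in coeffs.items()}

rng = random.Random(42)
lam = random_degree_distribution(13, rng)  # λ_2, ..., λ_13
rho = random_degree_distribution(10, rng)  # ρ_2, ..., ρ_10
print(sum(lam.values()), sum(rho.values()))  # both 1.0 up to rounding
```

A fixed seed is used only to make the sketch reproducible; in the optimization one would draw many independent initial pairs.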
How hard is it to find a code without stopping sets of size smaller than 6 within the ensemble LDPC(5000, λ, ρ), with (λ, ρ) given by Eqs. (8) and (9)? As discussed in more detail in Section II-C, in the limit of large blocklengths the number of small stopping sets has a joint Poisson distribution. As a consequence, if Ã_s denotes the expected number of minimal stopping sets of size s in a random element from LDPC(5000, λ, ρ), the probability that it contains no stopping set of size smaller than 6 is approximately exp{−Σ_{s=1}^{5} Ã_s}. For the optimized ensemble we get exp{−Σ_{s=1}^{5} Ã_s} ≈ 0.753, a quite large probability. We repeated the optimization procedure with various different random initial conditions and always ended up with essentially the same degree distribution. Therefore, we can be quite confident that the result of our local optimization is close to the global optimal degree distribution pair for the given constraints (n, ǫ, P_target, l_max, r_max, s_min). There are many ways of improving the result. E.g., if we allow higher degrees or apply a more aggressive expurgation, we can obtain degree distribution pairs with higher rate. For the choice l_max = 15 and s_min = 18, the resulting degree distribution pair is

λ(x) = 0.253 x + ⋯ + λ_15 x^{14},   (10)
ρ(x) = 0.6829 x + ⋯,   (11)

yielding a higher design rate r(λ, ρ). The corresponding curve is depicted in Fig. 3 as a dotted line. However, this time the probability that a random element from LDPC(5000, λ, ρ) has no stopping set of size smaller than 18 is much smaller. It will therefore be harder to find a code that fulfills the expurgation requirement. It is worth stressing that our results could be improved further by applying the same approach to more powerful ensembles, e.g., multi-edge-type ensembles or ensembles defined by protographs. The steps to be accomplished are: (i) derive the scaling laws and define the scaling parameters for such ensembles; (ii) find efficiently computable expressions for the scaling parameters; (iii) optimize the ensemble with respect to its defining parameters (e.g., the degree distribution), as above. Each of these steps is a manageable task, albeit not a trivial one.
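The expurgation check used in the example, i.e., the Poisson-limit probability exp{−Σ_{s<s_min} Ã_s} that a random element contains no stopping set of size smaller than s_min, is a one-liner once the Ã_s are available. The Ã_s values below are hypothetical placeholders (in practice they come from the enumeration of Section II-C):

```python
import math

# Probability that a random ensemble element contains no stopping set of
# size < s_min, using the joint-Poisson limit: exp(-sum_{s < s_min} Ã_s).
def expurgation_probability(A_tilde, s_min):
    return math.exp(-sum(A_tilde.get(s, 0.0) for s in range(1, s_min)))

A_tilde = {1: 0.02, 2: 0.05, 3: 0.08, 4: 0.07, 5: 0.06}  # hypothetical Ã_s
print(expurgation_probability(A_tilde, 6))  # ≈ exp(-0.28) ≈ 0.7558
```

If this probability is reasonably large, sampling a modest number of random codes from the ensemble suffices to find one satisfying the expurgation constraint.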
Another generalization of our approach, which is slated for future work, is the extension to general binary memoryless

symmetric channels. Empirical evidence suggests that scaling laws should also hold in this case, see [2], [3]. How to prove this fact, or how to compute the required parameters, is however an open issue. In the rest of this paper we describe in detail the approximation P(n, λ, ρ, ǫ) for the BEC.

II. APPROXIMATIONS P_B(n, λ, ρ, ǫ) AND P_b(n, λ, ρ, ǫ)

In order to derive approximations of the erasure probability, we separate the contributions to this erasure probability into two parts: the contributions due to large erasure events and the ones due to small erasure events. The large erasure events give rise to the so-called waterfall curve, whereas the small erasure events are responsible for the erasure floor. In Section II-B we recall that the waterfall curve follows a scaling law, and we discuss how to compute the scaling parameters. We denote this approximation of the waterfall curve by P^W_{B/b}(n, λ, ρ, ǫ). We next show in Section II-C how to approximate the erasure floor. We call this approximation P^E_{B/b,s_min}(n, λ, ρ, ǫ). Hereby, s_min denotes the expurgation parameter, i.e., we only count error events involving at least s_min erasures. Finally, we collect our results in Section II-D and give an approximation of the total erasure probability. We start in Section II-A with a short review of density evolution.

A. Density Evolution

The initial analysis of the performance of LDPC codes, assuming that transmission takes place over the BEC, is due to Luby, Mitzenmacher, Shokrollahi, Spielman and Stemann (see [6]), and it is based on the so-called peeling algorithm. In this algorithm we peel off one variable node at a time (together with all its adjacent check nodes and edges), creating a sequence of residual graphs. Decoding is successful if and only if the final residual graph is the empty graph. A variable node can be peeled off if it is connected to at least one check node which has residual degree one.
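The peeling algorithm just described can be sketched in a few lines. The code below is a minimal, unoptimized illustration on a hand-made toy graph (the graph and the function name are our own, not from the paper): known variables are removed first, and afterwards any check of residual degree one determines and peels its remaining variable; decoding succeeds iff the residual graph empties.

```python
# Minimal peeling decoder for the BEC. checks: list of variable-index lists
# (one list per check node); erased: set of erased variable indices.
def peel(checks, erased):
    residual = [set(c) & erased for c in checks]  # known bits already removed
    unresolved = set(erased)
    progress = True
    while progress:
        progress = False
        for c in residual:
            if len(c) == 1:              # degree-one check: peel its variable
                v = c.pop()
                unresolved.discard(v)
                for other in residual:   # remove v from all residual checks
                    other.discard(v)
                progress = True
    return len(unresolved) == 0          # success iff residual graph is empty

# Toy code: three checks over four bits.
checks = [[0, 1], [1, 2], [2, 3]]
print(peel(checks, {1, 2}))        # → True  (recoverable)
print(peel(checks, {0, 1, 2, 3}))  # → False (no degree-one check exists)
```

The second call illustrates the stopping condition of the text: the decoder halts as soon as no degree-one check node remains.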
Initially, we start with the complete Tanner graph representing the code, and in the first step we delete all variable nodes which have been received (i.e., which have not been erased), together with all connected check nodes and edges. From the description of the algorithm it should be clear that the number of degree-one check nodes plays a crucial role. The algorithm stops if and only if no degree-one check node remains in the residual graph. Luby et al. were able to give analytic expressions for the expected number of degree-one check nodes as a function of the size of the residual graph, in the limit of large blocklengths. They further showed that most instances of the graph and of the channel follow closely this ensemble average. More precisely, let r₁ denote the fraction of degree-one check nodes in the decoder. This means that the actual number of degree-one check nodes is equal to n r̄ r₁, where n is the blocklength and r is the design rate of the code. Then, as shown in [6], r₁ is given parametrically by

r₁(y) = ǫ λ(y) [ y − 1 + ρ(1 − ǫλ(y)) ],   (12)

where y is determined so that ǫL(y) is the (fractional, with respect to n) size of the residual graph. Hereby,

L(x) = Σ_i L_i x^i = ∫₀ˣ λ(u) du / ∫₀¹ λ(u) du

is the node-perspective variable node distribution, i.e., L_i is the fraction of variable nodes of degree i in the Tanner graph. Analogously, we let R_i denote the fraction of degree-i check nodes and set R(x) = Σ_i R_i x^i. With an abuse of notation we shall sometimes denote the irregular LDPC ensemble by LDPC(n, L, R). The threshold noise parameter ǫ* = ǫ*(λ, ρ) is the supremum value of ǫ such that r₁(y) > 0 for all y ∈ (0, 1] (in which case iterative decoding is successful with high probability). In Fig. 4 we show the function r₁(y) for the ensemble with λ(x) = x² and ρ(x) = x⁵, at ǫ = ǫ*.

Fig. 4: r₁(y) for y ∈ [0, 1] at the threshold. The degree distribution pair is λ(x) = x² and ρ(x) = x⁵, and the threshold is ǫ* ≈ 0.4294.

Since the fraction of degree-one check nodes concentrates around r₁(y), the decoder will fail with high probability only in two possible ways. The first relates to y ≈ 0 and corresponds to small erasure events.
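The threshold ǫ* can be computed numerically from density evolution. The sketch below uses the standard fixed-point condition ǫλ(1 − ρ(1 − x)) < x for all x ∈ (0, ǫ] (equivalent to r₁(y) > 0 for y ∈ (0, 1]) together with bisection, for the (λ(x) = x², ρ(x) = x⁵) ensemble of Fig. 4; the grid size and iteration counts are pragmatic choices, not from the paper:

```python
# Density-evolution success check and threshold ε* by bisection for the
# (3,6)-regular ensemble: λ(x) = x², ρ(x) = x⁵.
lam = lambda y: y ** 2
rho = lambda x: x ** 5

def decodes(eps, steps=10000):
    # ε λ(1 - ρ(1 - x)) < x must hold on a grid of x in (0, ε]
    xs = (eps * k / steps for k in range(1, steps + 1))
    return all(eps * lam(1 - rho(1 - x)) < x for x in xs)

def threshold(lo=0.0, hi=1.0, iters=60):
    for _ in range(iters):
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if decodes(mid) else (lo, mid)
    return lo

print(round(threshold(), 4))  # ≈ 0.4294, the (3,6) BEC threshold
```

The same routine works for any pair (λ, ρ) by swapping in the corresponding polynomials.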
The second one corresponds to the value y* such that r₁(y*) = 0. In this case the fraction of variable nodes that cannot be decoded concentrates around ν* = ǫ* L(y*). We call a point y* where the function y − 1 + ρ(1 − ǫλ(y)) and its derivative both vanish a critical point. At threshold, i.e., for ǫ = ǫ*, there is at least one critical point, but there may be more than one. Notice that the function r₁(y) always vanishes, together with its derivative, at y = 0 (cf. Fig. 4). However, this does not imply that y = 0 is a critical point, because of the extra factor λ(y) in the definition of r₁(y). Note that if an ensemble has a single critical point and this point is strictly positive, then the number of remaining erasures, conditioned on decoding failure, concentrates around ν* = ǫ* L(y*). In the rest of this paper we will consider ensembles with a single critical point and separate the two above contributions. We will consider in Section II-B erasures of size at least nγν*, with γ ∈ (0, 1). In Section II-C we will instead focus on erasures of size smaller than nγν*. We will finally combine the two results in Section II-D.

B. Waterfall Region

It was proved in [3] that the erasure probability due to large failures obeys a well-defined scaling law. For our purposes it is best to consider a refined scaling law which was conjectured in the same paper. For the convenience of the reader we restate it here.

Conjecture 1: [Refined Scaling Law] Consider transmission over a BEC of erasure probability ǫ using random elements from the ensemble LDPC(n, λ, ρ) = LDPC(n, L, R). Assume that the ensemble has a single critical point y* > 0, and let ν* = ǫ* L(y*), where ǫ* is the threshold erasure probability. Let P^W_b(n, λ, ρ, ǫ) (respectively P^W_B(n, λ, ρ, ǫ)) denote the expected bit (block) erasure probability due to erasures of size at least nγν*, where γ ∈ (0, 1). Fix z := √n (ǫ* − βn^{−2/3} − ǫ). Then, as n tends to infinity,

P^W_B(n, λ, ρ, ǫ) = Q(z/α) + O(n^{−1/3}),
P^W_b(n, λ, ρ, ǫ) = ν* Q(z/α) + O(n^{−1/3}),

where α = α(λ, ρ) and β = β(λ, ρ) are constants which depend on the ensemble. In [3], [5] a procedure called covariance evolution was defined to compute the scaling parameter α through the solution of a system of ordinary differential equations. The number of equations in the system is equal to the square of the number of variable node degrees plus the largest check node degree minus one. As an example, for an ensemble with 5 different variable node degrees and r_max = 30, the number of coupled equations in covariance evolution is (5 + 30 − 1)² = 1156. The computation of the scaling parameter can therefore become a challenging task. The main result of this paper is to show that it is possible to compute the scaling parameter α without explicitly solving covariance evolution. This is the crucial ingredient allowing for efficient code optimization.

Lemma 1: [Expression for α] Consider transmission over a BEC with erasure probability ǫ using random elements from the ensemble LDPC(n, λ, ρ) = LDPC(n, L, R). Assume that the ensemble has a single critical point y* > 0, and let ǫ* denote the threshold erasure probability. Then the scaling parameter α in Conjecture 1 is given by

α = ( [ ρ(x̄*)² − ρ(x̄*²) + ρ′(x̄*)(1 − 2x* ρ(x̄*)) − x̄*² ρ′(x̄*²) ] / ( L′(1) λ(y*)² ρ′(x̄*)² )
  + [ ǫ*² λ(y*)² − ǫ*² λ(y*²) − y*² ǫ*² λ′(y*²) ] / ( L′(1) λ(y*)² ) )^{1/2},

where x* = ǫ* λ(y*) and x̄* = 1 − x*. The derivation of this expression is explained in Section III. For completeness, and for the convenience of the reader, we repeat here also an explicit characterization of the shift parameter β, which appeared already, in a slightly different form, in [3], [5].
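Given the scaling parameters, the waterfall approximation of Conjecture 1 is cheap to evaluate; this is what makes it usable inside an optimization loop. A minimal sketch follows, where the numerical values of ǫ*, α and β are hypothetical placeholders (the closed forms of Lemma 1 and Conjecture 2 would supply them):

```python
import math

def Q(z):
    """Gaussian tail function Q(z) = P(N(0,1) > z)."""
    return 0.5 * math.erfc(z / math.sqrt(2.0))

def waterfall_PB(n, eps, eps_star, alpha, beta):
    """Block-erasure waterfall term Q(sqrt(n)(ε* - β n^(-2/3) - ε) / α)."""
    z = math.sqrt(n) * (eps_star - beta * n ** (-2.0 / 3.0) - eps)
    return Q(z / alpha)

# Hypothetical parameters in the ballpark of a (3,6)-like ensemble:
print(waterfall_PB(n=5000, eps=0.41, eps_star=0.4294, alpha=0.58, beta=0.61))
```

Note how the shift βn^{−2/3} moves the effective threshold towards the channel parameter as n decreases, which is exactly the finite-length penalty the scaling law captures.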
Conjecture 2: [Scaling Parameter β] Consider transmission over a BEC of erasure probability ǫ using random elements from the ensemble LDPC(n, λ, ρ) = LDPC(n, L, R). Assume that the ensemble has a single critical point y* > 0, and let ǫ* denote the threshold erasure probability. Then the scaling parameter β in Conjecture 1 is given by

β/Ω = ( ǫ*⁴ r₂² [ ǫ* λ′(y*)² r₂ − x̄* λ″(y*) r₂ + λ′(y*) x̄*² ] / ( L′(1)² ρ′(x̄*)³ x*² [ 2 ǫ* λ′(y*)² r₃ − λ″(y*) r₂ x̄*³ ] ) )^{1/3},

where x* = ǫ* λ(y*), x̄* = 1 − x*, and, for i ≥ 2,

r_i = Σ_m binom(m−1, i) ρ_m (ǫ* λ(y*))^i (1 − ǫ* λ(y*))^{m−1−i}.

Further, Ω is a universal (code-independent) constant defined in Refs. [3], [5]. We also recall that Ω is numerically quite close to 1; in the rest of this paper we shall always approximate Ω by 1.

C. Error Floor

Lemma 2: [Error Floor] Consider transmission over a BEC of erasure probability ǫ using random elements from an ensemble LDPC(n, λ, ρ) = LDPC(n, L, R). Assume that the ensemble has a single critical point y* > 0. Let ν* = ǫ* L(y*), where ǫ* is the threshold erasure probability. Let P^E_{b,s_min}(n, λ, ρ, ǫ) (respectively P^E_{B,s_min}(n, λ, ρ, ǫ)) denote the expected bit (block) erasure probability due to stopping sets of size between s_min and nγν*, where γ ∈ (0, 1). Then, for any ǫ < ǫ*,

P^E_{b,s_min}(n, λ, ρ, ǫ) = Σ_{s ≥ s_min} s Ã_s ǫ^s (1 + o(1)),   (14)
P^E_{B,s_min}(n, λ, ρ, ǫ) = ( 1 − e^{−Σ_{s ≥ s_min} Ã_s ǫ^s} ) (1 + o(1)),   (15)

where Ã_s = coef{ log A(x), x^s } for s ≥ 1, with A(x) = Σ_s A_s x^s and

A_s = Σ_e coef{ Π_i (1 + x y^i)^{n L_i}, x^s y^e } · coef{ Π_i ((1 + x)^i − ix)^{n r̄ R_i}, x^e } / binom(nL′(1), e).   (16)

Discussion: In the lemma we only claim a multiplicative error term of the form (1 + o(1)), since this is easy to prove. This weak statement would remain valid if we replaced the expression for A_s given in (16) with the explicit, and much easier to compute, asymptotic expression derived in [11]. In practice, however, the approximation is much better than the stated (1 + o(1)) error term if we use the finite-length averages given by (16). The hurdle in proving stronger error terms is due to the fact that, for a given length, it is not clear how to relate the number of stopping sets to the number of minimal stopping sets. However, this relationship becomes easy in the limit of large blocklengths. Proof: The key in deriving this erasure floor expression is in focusing on the number of minimal stopping sets.
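Equations (14) and (15) are straightforward to evaluate once the expected numbers Ã_s of minimal stopping sets are known. The sketch below uses hypothetical Ã_s values (in practice they come from (16) via Ã(x) = log A(x)):

```python
import math

# Error-floor terms of Lemma 2:
#   P_b ≈ Σ_{s ≥ s_min} s Ã_s ε^s,   P_B ≈ 1 - exp(-Σ_{s ≥ s_min} Ã_s ε^s).
def erasure_floor(A_tilde, eps, s_min):
    S = sum(A_tilde[s] * eps ** s for s in A_tilde if s >= s_min)
    Pb = sum(s * A_tilde[s] * eps ** s for s in A_tilde if s >= s_min)
    return Pb, 1.0 - math.exp(-S)

A_tilde = {6: 1.2e-2, 7: 3.0e-3, 8: 8.0e-4}  # hypothetical Ã_s values
Pb, PB = erasure_floor(A_tilde, eps=0.5, s_min=6)
print(Pb, PB)
```

Since each minimal stopping set of size s contributes s bit erasures but only one block error, the bit floor exceeds the block floor when the exponents are small, as the assertion-style check below confirms.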
These are stopping sets that are not the union of smaller stopping sets. The asymptotic distribution of the number of minimal stopping sets contained in an LDPC graph was already studied in [11]. We recall that the distribution of the number of minimal stopping sets tends to a Poisson distribution with independent components as the length tends to infinity. Because of this independence, one can relate the number of minimal stopping sets to the number of stopping sets: any combination of minimal stopping sets gives rise to a stopping set. In the limit of infinite blocklengths, the minimal stopping sets are non-overlapping with probability one, so that the weight of the resulting stopping set is just the sum of the weights of the individual stopping sets. For example, the number of stopping sets of size two is equal to the number of minimal stopping sets of size two, plus the number of stopping sets we get by taking all pairs of minimal stopping sets of size one.

Therefore, define Ã(x) = Σ_s Ã_s x^s, with Ã_s the expected number of minimal stopping sets of size s in the graph. Define further A(x) = Σ_s A_s x^s, with A_s the expected number of stopping sets of size s in the graph (not necessarily minimal). We then have

A(x) = e^{Ã(x)} = 1 + Ã(x) + Ã(x)²/2! + Ã(x)³/3! + ⋯,

so that, conversely, Ã(x) = log A(x). It remains to determine the number of stopping sets. As remarked right after the statement of the lemma, any expression which converges in the limit of large blocklength to the asymptotic value would satisfy the statement of the lemma, but we get the best empirical agreement for short lengths if we use the exact finite-length averages. These averages were already computed in [11] and are given in (16). Consider now, e.g., the bit erasure probability. We first compute A(x) using (16), and then Ã(x) by means of Ã(x) = log A(x). Consider one minimal stopping set of size s. The probability that its s associated bits are all erased is equal to ǫ^s, and if this is the case, this stopping set causes s erasures. Since there are, in expectation, Ã_s minimal stopping sets of size s, and minimal stopping sets are non-overlapping with increasing probability as the blocklength increases, a simple union bound is asymptotically tight. The expression for the block erasure probability is derived in a similar way. Now we are interested in the probability that a particular graph and noise realization results in no small stopping set. Using the fact that the distribution of minimal stopping sets follows a Poisson distribution, we get equation (15).

D. Complete Approximation

In Section II-B we have studied the erasure probability stemming from failures of size bigger than nγν*, where γ ∈ (0, 1) and ν* = ǫ* L(y*), i.e., ν* is the asymptotic fractional number of erasures remaining after decoding at the threshold. In Section II-C we have studied the probability of erasures resulting from stopping sets of size between s_min and nγν*. Combining the results of the two previous sections we get

P_B(n, λ, ρ, ǫ) = P^W_B(n, λ, ρ, ǫ) + P^E_{B,s_min}(n, λ, ρ, ǫ)
  = Q( √n (ǫ* − βn^{−2/3} − ǫ) / α ) + 1 − e^{−Σ_{s ≥ s_min} Ã_s ǫ^s},   (17)
P_b(n, λ, ρ, ǫ) = P^W_b(n, λ, ρ, ǫ) + P^E_{b,s_min}(n, λ, ρ, ǫ)
  ≈ ν* Q( √n (ǫ* − βn^{−2/3} − ǫ) / α ) + Σ_{s ≥ s_min} s Ã_s ǫ^s.   (18)
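The step Ã(x) = log A(x) in the proof is a plain power-series logarithm, which can be carried out coefficient by coefficient. A minimal sketch (the toy input sequence is our own, chosen so that the answer is known in closed form):

```python
import math

# Coefficients of log A(x) given A as [A_0 = 1, A_1, A_2, ...], using the
# standard recurrence B_m = A_m - (1/m) Σ_{k=1}^{m-1} k B_k A_{m-k},
# obtained from B'(x) A(x) = A'(x).
def series_log(A):
    n = len(A)
    B = [0.0] * n
    for m in range(1, n):
        B[m] = A[m] - sum(k * B[k] * A[m - k] for k in range(1, m)) / m
    return B

# Sanity check: if Ã(x) = x (a single Poisson type of size 1), then
# A(x) = e^x and A_s = 1/s!; the series log must recover Ã(x) = x.
A = [1.0 / math.factorial(s) for s in range(8)]
print([round(c, 10) for c in series_log(A)])  # → [0.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]
```

Applied to the finite-length averages A_s of (16), truncated at the maximum stopping-set size of interest, this yields the Ã_s needed in (14), (15), (17) and (18).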
Here we assume that there is a single critical point. If the degree distribution has several critical points (at different values of the channel parameter, say ǫ*₁, ǫ*₂, . . .), then we simply take a sum of terms P^W_B(n, λ, ρ, ǫ), one for each critical point. Let us finally notice that summing the probabilities of the different error types provides, in principle, only an upper bound on the overall error probability. However, for each given channel parameter ǫ, only one of the terms in Eqs. (17), (18) dominates. As a consequence, the bound is actually tight.

III. ANALYTIC DETERMINATION OF α

Let us now show how the scaling parameter α can be determined analytically. We accomplish this in two steps. We first compute the variance of the number of erasure messages. Then we show, in a second step, how to relate this variance to the scaling parameter α.

A. Variance of the Messages

Consider the ensemble LDPC(n, λ, ρ) and assume that transmission takes place over a BEC with parameter ǫ. Perform ℓ iterations of BP decoding. Set μ_i^{(ℓ)} equal to 1 if the message sent out along edge i from variable to check node is an erasure, and to 0 otherwise. Consider the variance of these messages in the limit of large blocklengths. More precisely, consider

V^{(ℓ)} := lim_{n→∞} ( E[(Σ_i μ_i^{(ℓ)})²] − E[Σ_i μ_i^{(ℓ)}]² ) / (nL′(1)).

Lemma 3 in Section IV contains an analytic expression for this quantity as a function of the degree distribution pair (λ, ρ), the channel parameter ǫ, and the number of iterations ℓ. Let us consider this variance as a function of the parameter ǫ and of the number of iterations ℓ. Fig. 5 shows the result of this evaluation for the case L(x) = (2/5)x² + (3/5)x³, R(x) = (3/10)x² + (7/10)x³. The threshold for this example is ǫ*.

Fig. 5: The variance as a function of ǫ, for ℓ = 1, . . . , 9, for L(x) = (2/5)x² + (3/5)x³ and R(x) = (3/10)x² + (7/10)x³.

This value is indicated as a vertical line in the figure. As we can see from this figure, the variance is a unimodal function of the channel parameter: it is zero for the extremal values of ǫ (either all messages are known or all are erased), and it takes on a maximum value for a parameter ǫ which approaches the critical value ǫ* as ℓ increases.
Further, for increasing ℓ, the maximum value of the variance increases. The limit of these curves as ℓ tends to infinity, V = lim_{ℓ→∞} V^{(ℓ)}, is also shown (bold curve): the variance is zero below threshold; above threshold it is positive and diverges as the threshold is approached. In Section IV we state the exact form of the limiting curve. We show that, for ǫ approaching ǫ* from above,

V = γ / (1 − ǫλ′(y)ρ′(x̄))² + O( 1 / (1 − ǫλ′(y)ρ′(x̄)) ),   (19)

where

γ = ǫ*² λ′(y*)² { [ ρ(x̄*)² − ρ(x̄*²) + ρ′(x̄*)(1 − 2x* ρ(x̄*)) − x̄*² ρ′(x̄*²) ] + ǫ*² ρ′(x̄*)² [ λ(y*)² − λ(y*²) − y*² λ′(y*²) ] }.

Here y* is the unique critical point, x* = ǫ* λ(y*), and x̄* = 1 − x*. Since 1 − ǫλ′(y)ρ′(x̄) = Θ(√(ǫ − ǫ*)), Eq. (19) implies a divergence at ǫ*.

B. Relation Between γ and α

Now that we know the asymptotic variance of the edge messages, let us discuss how this quantity can be related to the scaling parameter α. Think of a decoder operating above the threshold of the code. Then, for large blocklengths, it will get stuck with high probability before correcting all nodes. In Fig. 6 we show R₁, the number of degree-one check nodes, as a function of the number of erasure messages, for a few decoding runs.

Fig. 6: Number of degree-one check nodes as a function of the number of erasure messages in the corresponding BP decoder for LDPC(n = 8192, λ(x) = x², ρ(x) = x⁵). The thin lines represent decoding trajectories that stop when r₁ = 0, and the thick line is the mean curve predicted by density evolution.

Let V* represent the normalized variance of the number of erased messages in the decoder after an infinite number of iterations,

V* := lim_{n→∞} lim_{ℓ→∞} ( E[(Σ_i μ_i^{(ℓ)})²] − E[Σ_i μ_i^{(ℓ)}]² ) / (nL′(1)).

In other words, V* is the variance of the point at which the decoding trajectories hit the r₁ = 0 axis. This quantity can be related to the variance of the number of degree-one check nodes through the slope of the density evolution curve. Normalize all quantities by nL′(1), the number of edges in the graph. Consider the curve r₁(ǫ, x) given by density evolution, representing the fraction of degree-one check nodes in the residual graph around the critical point, for an erasure probability above the threshold (see Fig. 6). The real decoding process stops when hitting the r₁ = 0 axis. Think of a virtual process, identical to the decoding one for r₁ > 0, but that continues below the r₁ = 0 axis (for a proper definition see [3]). A simple calculation shows that, if the point at which the curve hits the x-axis varies by Δx while keeping the minimum at x*, the height of the curve varies by

Δr₁ = ∂²_x r₁(ǫ*, x*) (x* − x) Δx + o(Δx).
Taking the expectation of the square on both sides, and letting ǫ tend to ǫ*, we obtain the normalized variance of R₁ at threshold,

δ_{r₁,r₁} = lim_{ǫ→ǫ*} ( ∂²_x r₁(ǫ*, x*) )² (x − x*)² V* + o((x − x*)²)
         = ( x*² / (ǫ* λ′(y*))² ) lim_{ǫ→ǫ*} (1 − ǫλ′(y)ρ′(x̄))² V.

The transition between the first and the second line comes from the relationship between ǫ and x implied by r₁(ǫ, x) = 0 as ǫ tends to ǫ*. The quantity V* differs from V computed in the previous paragraphs because of the different order of the limits in n and ℓ. However, it can be proved that the order does not matter, and V = V*. Using the result (19) we finally get

δ_{r₁,r₁} = x*² γ / (ǫ* λ′(y*))².

We conclude that the scaling parameter α can be obtained as

α = √( δ_{r₁,r₁} / ( L′(1) (∂_ǫ r₁(ǫ*, x*))² ) ) = √( γ / ( L′(1) x*² λ′(y*)² ρ′(x̄*)² ) ).

The last expression is equal to the one in Lemma 1.

IV. MESSAGE VARIANCE

Consider the ensemble LDPC(n, λ, ρ) and transmission over the BEC with erasure probability ǫ. As pointed out in the previous section, the scaling parameter α can be related to the normalized variance (with respect to the choice of the graph and of the channel realization) of the number of erased edge messages sent from the variable nodes. Although what really matters is the limit of this quantity as the blocklength and the number of iterations tend to infinity (in this order), we start by providing an exact expression for a finite number of iterations ℓ at infinite blocklength. At the end of this section we shall take the limit ℓ → ∞. To be definite, we initialize the iterative decoder by setting all check-to-variable messages to be erased at time 0. We let x_i (respectively y_i) be the fraction of erased messages sent from variable to check nodes (from check to variable nodes) at iteration i, in the infinite blocklength limit. These values are determined by the density evolution [1] recursions

y_{i+1} = 1 − ρ(x̄_i),   x_i = ǫ λ(y_i),

where we use the notation x̄ = 1 − x. The above initialization implies y₀ = 1. For future convenience we also set x_i = y_i = 1 for i < 0. Using these variables, we have the following characterization of V^{(ℓ)}, the normalized variance after ℓ iterations.

Lemma 3: Let G be chosen uniformly at random from LDPC(n, λ, ρ) and consider transmission over the BEC with erasure probability ǫ.
Label the nL′(1) edges of G in some fixed order by the elements of {1, . . . , nL′(1)}. Assume that the receiver performs ℓ rounds of belief propagation decoding, and let μ_i^{(ℓ)} be equal to one if the message sent at the end of the ℓ-th iteration along edge i from a variable node to a check

node is an erasure, and to zero otherwise. Then the normalized variance

V^{(ℓ)} := lim_{n→∞} ( E[(Σ_i μ_i^{(ℓ)})²] − E[Σ_i μ_i^{(ℓ)}]² ) / (nL′(1))   (20)

equals x_ℓ plus the sum of the contributions of the four classes of edges T₁, T₂, T₃ and T₄ defined in the proof below. Each contribution is a finite sum of products of the matrices defined next, evaluated along the density evolution trajectory (x_i, y_i); the contribution of the edges below the root also involves the vectors U_k, U′_k, the coefficients F_i, and the quantities W(ℓ, ·) and D(ℓ, ·) introduced at the end of the statement. Throughout, we use the shorthand

(VC)_a^b := Π_{k=a}^{b} V_k C_k.   (21)

We define the matrices

V_i = ( ǫλ′(y_i)   λ′(1) − ǫλ′(y_i) ; 0   λ′(1) ),   (22)
C_i = ( ρ′(1) − ρ′(x̄_i)   ρ′(x̄_i) ; ρ′(1)   0 ),   (23)

for i ≥ 0, and

V_i = λ′(1) I,   (24)
C_i = ρ′(1) I,   (25)

for i < 0. Further, U_k and U′_k are computed through the following recursion. For k = 0, set

U₀ = ( ȳ_ℓ ǫλ′(y_ℓ), y_ℓ ǫλ′(y_ℓ) )^T,   U′₀ = (1, 1)^T,

while for the terms with index i > ℓ the initialization is analogous, with the partial product (VC)₁^{2ℓ−i} applied to the corresponding vectors. The recursion is

U_k = M₁(k) C_{ℓ+k} U_{k−1} + M₂(k) [ N₁(k) U_{k−1} + N₂(k) U′_{k−1} ],   (26)
U′_k = V_{ℓ+k} [ N₁(k) U_{k−1} + N₂(k) U′_{k−1} ],   (27)

where the 2 × 2 matrices M₁(k), M₂(k) are built from the entries ǫλ′(y_{ℓ−k}), λ′(y_{ℓ+k}), ǫλ′(y_{max{ℓ−k, ℓ+k}}) and ǫλ′(y_{min{ℓ−k, ℓ+k}}), and N₁(k), N₂(k) from the corresponding entries ρ′(x̄_{ℓ−k}), ρ′(x̄_{ℓ+k}), ρ′(x̄_{max{ℓ−k, ℓ+k}}) and ρ′(x̄_{min{ℓ−k, ℓ+k}}), with the indicator functions 𝟙{i < 2k} and 𝟙{i > 2k} selecting the appropriate regime. The coefficients F_i are given by

F_i = Π_{k=i+1}^{ℓ} ǫλ′(y_k) ρ′(x̄_k),   (28)

and W(ℓ, α) = Σ_{k=1}^{2ℓ} (VC)_{ℓ−k+1}^{ℓ} A(ℓ − k, α), where the vector A(ℓ − k, α) collects the generating-function terms ǫ ᾱ y_{ℓ−k} λ′(ᾱ y_{ℓ−k}) + ǫλ(ᾱ y_{ℓ−k}) and ᾱλ′(ᾱ) + λ(ᾱ) − ǫ ᾱ y_{ℓ−k} λ′(ᾱ y_{ℓ−k}) − ǫλ(ᾱ y_{ℓ−k}), for k ≤ ℓ and k > ℓ respectively,

D(ℓ,α) is the analogous sum of matrix products built from the check-side factors ρ'(x̄_{ℓ−k}), ρ(x̄_{ℓ−k}) and ρ'(1).

Proof: Expand V^(ℓ), as defined above, as

$$V^{(\ell)}=\lim_{n\to\infty}\frac{1}{n\Lambda'(1)}\sum_{i,j}\Big(\mathbb{E}\big[\mu_{i}^{(\ell)}\mu_{j}^{(\ell)}\big]-\mathbb{E}\big[\mu_{i}^{(\ell)}\big]\,\mathbb{E}\big[\mu_{j}^{(\ell)}\big]\Big)
=\lim_{n\to\infty}\Big(\mathbb{E}\Big[\mu_{i}^{(\ell)}\sum_{j}\mu_{j}^{(\ell)}\Big]-n\Lambda'(1)\,x_{\ell}^{2}\Big).\;(29)$$

In the last step we have used the symmetry of the ensemble together with the fact that x_ℓ = E[µ_i^(ℓ)] for any i ∈ {1, …, nΛ'(1)}. Let us look more carefully at the first term of (29). After a finite number of iterations, each message µ_j^(ℓ) depends upon the received symbols of a subset of the variable nodes. Since ℓ is kept finite, this subset remains finite in the large-blocklength limit and, by standard arguments, is a tree with high probability. As usual, we refer to the subgraph containing all such variable nodes, as well as the check nodes connecting them, as the computation tree for µ_j^(ℓ).

It is useful to split the sum in the first term of Eq. (29) into two contributions: the first contribution stems from edges j such that the computation trees of µ_i^(ℓ) and µ_j^(ℓ) intersect, and the second one stems from the remaining edges. More precisely, we write

$$\lim_{n\to\infty}\Big(\mathbb{E}\Big[\mu_{i}^{(\ell)}\sum_{j}\mu_{j}^{(\ell)}\Big]-n\Lambda'(1)x_{\ell}^{2}\Big)
=\lim_{n\to\infty}\mathbb{E}\Big[\mu_{i}^{(\ell)}\sum_{j\in T}\mu_{j}^{(\ell)}\Big]
+\lim_{n\to\infty}\Big(\mathbb{E}\Big[\mu_{i}^{(\ell)}\sum_{j\in T^{c}}\mu_{j}^{(\ell)}\Big]-n\Lambda'(1)x_{\ell}^{2}\Big).\;(30)$$

We define T to be that subset of the variable-to-check edge indices j such that, if j ∈ T, then the computation trees of µ_i^(ℓ) and µ_j^(ℓ) intersect. This means that T includes all the edges whose messages depend on some of the received values that are used in the computation of µ_i^(ℓ). For convenience, we complete T by including all edges that are connected to the same variable nodes as edges that are already in T. T^c is the complement in {1, …, nΛ'(1)} of the set of indices T. The set of indices T depends on the number of iterations performed and on the graph realization. For any fixed ℓ, T is a tree with high probability in the large-blocklength limit and admits a simple characterization. It contains two sets of edges: the ones above and the ones below edge i (we call this the root edge, and the variable node it is connected to the root variable node).
Edges above the root are the ones departing from a variable node that can be reached by a non-reversing path starting with the root edge and involving at most ℓ variable nodes, not including the root one. Edges below the root are the ones departing from a variable node that can be reached by a non-reversing path starting with the opposite of the root edge and involving at most 2ℓ variable nodes, not including the root one. Edges departing from the root variable node are considered below the root, apart from the root edge itself.

Fig. 7: Graph representing all edges contained in T for the case of ℓ = 2. The small letters represent messages sent along the edges from a variable node to a check node, and the capital letters represent variable nodes. The message µ_i^(ℓ) is represented by a.

We have depicted in Fig. 7 an example for the case of an irregular graph with ℓ = 2. In the middle of the figure, the edge a carries the message µ_i^(ℓ). We will call µ_i^(ℓ) the root message. We expand the graph starting from this root node. We consider ℓ variable-node levels above the root. As an example, notice that the channel output on node A affects µ_i^(ℓ) as well as the message sent on b at the ℓ-th iteration. Therefore the corresponding computation trees intersect and, according to our definition, b ∈ T. On the other hand, the computation tree of c does not intersect the one of a, but c ∈ T because it shares a variable node with b. We also expand 2ℓ levels below the root. For instance, the value received on node B affects both µ_i^(ℓ) and the message sent on g at the ℓ-th iteration. We compute the two terms in (30) separately. Define

$$S=\lim_{n\to\infty}\mathbb{E}\Big[\mu_{i}^{(\ell)}\sum_{j\in T}\mu_{j}^{(\ell)}\Big]\quad\text{and}\quad
S^{c}=\lim_{n\to\infty}\Big(\mathbb{E}\Big[\mu_{i}^{(\ell)}\sum_{j\in T^{c}}\mu_{j}^{(\ell)}\Big]-n\Lambda'(1)\,x_{\ell}^{2}\Big).$$

Computation of S: Having defined T, we can further identify four different types of terms appearing in S and write

$$S=\lim_{n\to\infty}\mathbb{E}\Big[\mu_{i}^{(\ell)}\sum_{j\in T_{1}}\mu_{j}^{(\ell)}\Big]
+\lim_{n\to\infty}\mathbb{E}\Big[\mu_{i}^{(\ell)}\sum_{j\in T_{2}}\mu_{j}^{(\ell)}\Big]
+\lim_{n\to\infty}\mathbb{E}\Big[\mu_{i}^{(\ell)}\sum_{j\in T_{3}}\mu_{j}^{(\ell)}\Big]
+\lim_{n\to\infty}\mathbb{E}\Big[\mu_{i}^{(\ell)}\sum_{j\in T_{4}}\mu_{j}^{(\ell)}\Big].$$

Fig. 8: Size of T. It contains ℓ layers of variable nodes above the root edge and 2ℓ layers of variable nodes below the root variable node. The gray area represents the computation tree of the message µ_i^(ℓ). It contains ℓ layers of variable nodes below the root variable node.

The subset T_1 ⊆ T contains the edges above the root variable node that carry messages that point upwards (we include the root edge in T_1). In Fig. 7, the message sent on edge b is of this type. T_2 contains all edges above the root that point downwards, such as c in Fig. 7. T_3 contains the edges below the root that carry upward messages, like d and f. Finally, T_4 contains the edges below the root variable node that point downwards, like e and g.

Let us start with the simplest term, involving the messages in T_2. If j ∈ T_2, then the computation trees of µ_i^(ℓ) and µ_j^(ℓ) are with high probability disjoint in the large-blocklength limit. In this case the messages µ_i^(ℓ) and µ_j^(ℓ) do not depend on any common channel observation. The messages are nevertheless correlated: conditioned on the computation graph of the root edge, the degree distribution of the computation graph of edge j is biased (assume that the computation graph of the root edge contains an unusual number of high-degree check nodes; then the computation graph of edge j must contain, in expectation, an unusually low number of high-degree check nodes). This correlation is however of order O(1/n) and, since T only contains a finite number of edges, the contribution of this correlation vanishes as n → ∞. We obtain therefore

$$\lim_{n\to\infty}\mathbb{E}\Big[\mu_{i}^{(\ell)}\sum_{j\in T_{2}}\mu_{j}^{(\ell)}\Big]=x_{\ell}^{2}\,\rho'(1)\sum_{i=0}^{\ell-1}\big(\lambda'(1)\rho'(1)\big)^{i},$$

where we used lim_{n→∞} E[µ_i^(ℓ)µ_j^(ℓ)] = x_ℓ² and the fact that the expected number of edges in T_2 is ρ'(1)Σ_{i=0}^{ℓ−1}(λ'(1)ρ'(1))^i. For the edges in T_1 we obtain

$$\lim_{n\to\infty}\mathbb{E}\Big[\mu_{i}^{(\ell)}\sum_{j\in T_{1}}\mu_{j}^{(\ell)}\Big]=x_{\ell}+x_{\ell}\sum_{i=1}^{\ell}(1,0)\Big[\prod_{k=1}^{i}V(\ell-k)\,C(\ell-k)\Big](1,1)^{T},\;(31)$$

with the matrices V and C defined in Eqs. (22) and (23). In order to understand this expression, consider the following case (cf. Fig. 9 for an illustration). We are at the i-th iteration of BP decoding and we pick an edge at random in the graph. It is connected to a check node of degree j with probability ρ_j.
Assume further that the message carried by this edge from the variable node to the check node (the incoming message) is erased with probability p and known with probability 1 − p. We want to compute the expected numbers of erased and known messages sent out by the check node on its other edges (the outgoing messages). If the incoming message is erased, then the number of erased outgoing messages is exactly j − 1. Averaging over the check-node degrees gives us ρ'(1). If the incoming message is known, then the expected number of erased outgoing messages is (j − 1)(1 − x̄^{j−2}). Averaging over the check-node degrees gives us ρ'(1) − ρ'(x̄). The expected number of erased outgoing messages is therefore p ρ'(1) + (1 − p)(ρ'(1) − ρ'(x̄)). Analogously, the expected number of known outgoing messages is (1 − p) ρ'(x̄). This result can be written using matrix notation: the expected number of erased (respectively known) outgoing messages is the first (respectively second) component of the vector C(p, 1−p)^T, with C being defined in Eq. (23). The situation is similar if we consider a variable node instead of the check node, with the matrix V replacing C. The result is generalized to several layers of check and variable nodes by taking the product of the corresponding matrices (cf. Fig. 9).

Fig. 9: Number of outgoing erased messages as a function of the probability of erasure of the incoming message: one check-node layer gives C(p, 1−p)^T, a further variable-node layer gives V C(p, 1−p)^T, and so on.

The contribution of the edges in T_1 to S is obtained by writing

$$\lim_{n\to\infty}\mathbb{E}\Big[\mu_{i}^{(\ell)}\sum_{j\in T_{1}}\mu_{j}^{(\ell)}\Big]=\lim_{n\to\infty}\mathbb{P}\{\mu_{i}^{(\ell)}=1\}\,\mathbb{E}\Big[\sum_{j\in T_{1}}\mu_{j}^{(\ell)}\,\Big|\,\mu_{i}^{(\ell)}=1\Big].\;(32)$$

The conditional expectation on the right-hand side is given by

$$1+\sum_{i=1}^{\ell}(1,0)\Big[\prod_{k=1}^{i}V(\ell-k)\,C(\ell-k)\Big](1,1)^{T},\;(33)$$

where the 1 is due to the fact that E[µ_i^(ℓ) | µ_i^(ℓ) = 1] = 1, and each summand (1,0)[∏ V C](1,1)^T is the expected number of erased messages in the i-th layer of edges in T_1, conditioned on the fact that the root edge is erased at iteration
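The single-layer bookkeeping just described is easy to write down concretely. The following sketch is our illustration (the names `check_matrix` and `variable_matrix` are ours): it builds the 2×2 matrices acting on the vector (p, 1−p) of incoming erasure probabilities and reproduces the counts derived above.

```python
def check_matrix(rho_prime, x_bar):
    """Expected (erased, known) outgoing counts at a check node, applied to
    (p, 1-p) for an incoming message erased with probability p.
    rho_prime(t) must evaluate rho'(t)."""
    return [[rho_prime(1.0), rho_prime(1.0) - rho_prime(x_bar)],
            [0.0,            rho_prime(x_bar)]]

def variable_matrix(lam_prime, eps, y):
    """Same bookkeeping at a variable node with channel erasure prob. eps.
    lam_prime(t) must evaluate lambda'(t)."""
    return [[eps * lam_prime(y),                  0.0],
            [lam_prime(1.0) - eps * lam_prime(y), lam_prime(1.0)]]

def apply(m, v):
    """2x2 matrix times 2-vector."""
    return [m[0][0] * v[0] + m[0][1] * v[1],
            m[1][0] * v[0] + m[1][1] * v[1]]

# Example with rho(x) = x^5, so rho'(t) = 5 t^4, and an erased incoming message:
C = check_matrix(lambda t: 5.0 * t ** 4, 0.6)
erased, known = apply(C, [1.0, 0.0])   # expected counts after one check layer
```

An erased incoming message produces ρ'(1) erased and no known outgoing messages; a known one produces ρ'(1) − ρ'(x̄) erased and ρ'(x̄) known ones, and stacking layers amounts to multiplying the corresponding matrices, exactly as in Eq. (31).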

ℓ (notice that µ_i^(ℓ) = 1 implies µ_i^(k) = 1 for all k ≤ ℓ). Now multiplying (33) by P{µ_i^(ℓ) = 1} = x_ℓ gives us (31). The computation is similar for the edges in T_3 and results in

$$\lim_{n\to\infty}\mathbb{E}\Big[\mu_{i}^{(\ell)}\sum_{j\in T_{3}}\mu_{j}^{(\ell)}\Big]=x_{\ell}\sum_{i=1}^{2\ell}(1,0)\Big[\prod_{k=1}^{i}V(\ell-k)\,C(\ell-k)\Big](1,1)^{T}.$$

In this sum, when i > ℓ, we have to evaluate the matrices V and C for negative indices, using the definitions given in (24) and (25). The meaning of this case is simple: if i > ℓ, then the observations in these layers do not influence the message µ_i^(ℓ). Therefore, for these steps we only need to count the expected number of edges.

In order to obtain S, it remains to compute the contribution of the edges in T_4. This case is slightly more involved than the previous ones. Recall that T_4 includes all the edges that are below the root node and point downwards. In Fig. 7, edges e and g are elements of T_4. We claim that

$$\lim_{n\to\infty}\mathbb{E}\Big[\mu_{i}^{(\ell)}\sum_{j\in T_{4}}\mu_{j}^{(\ell)}\Big]
=\sum_{i=0}^{\ell-1}\big(y_{\ell-i-1}\,U^{e}_{i}+\bar{y}_{\ell-i-1}\,U^{k}_{i}\big)
+\sum_{i=\ell+1}^{2\ell}(1,0)\Big[\prod_{k=\ell+1}^{i}V(\ell-k)\,C(2\ell-k)\Big]\big(y_{2\ell-i}\,U^{e}_{i-\ell}+\bar{y}_{2\ell-i}\,U^{k}_{i-\ell}\big).\;(34)$$

The first sum on the right-hand side corresponds to messages µ_j^(ℓ), j ∈ T_4, whose computation tree contains the root variable node. In the case of Fig. 7, where ℓ = 2, the contribution of edge e would be counted in this first sum. The second term in (34) corresponds to edges j ∈ T_4 that are separated from the root edge by more than ℓ + 1 variable nodes. In Fig. 7, edge g is of this type.

In order to understand the first sum in (34), consider the root edge and an edge j ∈ T_4 separated from the root edge by i + 1 variable nodes, with i ∈ {0, …, ℓ−1}. For this edge j in T_4, consider two messages it carries: the message that is sent from the variable node to the check node at the ℓ-th iteration (this outgoing message participates in our second-moment calculation) and the one sent from the check node to the variable node at the (ℓ−i−1)-th iteration (incoming). Define the two-component vector U^e_i as follows. Its first component is the joint probability that both the root and the outgoing messages are erased, conditioned on the fact that the incoming message is erased, multiplied by the expected number of edges in T_4 whose distance from the root is the same as for edge j.
Its second component is the joint probability that the root message is erased and that the outgoing message is known, again conditioned on the incoming message being erased and multiplied by the expected number of edges in T_4 at the same distance from the root. The vector U^k_i is defined in exactly the same manner, except that in this case we condition on the incoming message being known. The superscript e or k accounts, respectively, for the cases where the incoming message is erased or known. From these definitions it is clear that the contribution to S of the edges that are in T_4 and separated from the root edge by i + 1 variable nodes, with i ∈ {0, …, ℓ−1}, is y_{ℓ−i−1}U^e_i + ȳ_{ℓ−i−1}U^k_i.

We still have to evaluate U^e_i and U^k_i. In order to do this, we define the vectors U^e_k and U^k_k, with k ≤ i, analogously to the case k = i, except that this time we consider the root edge and an edge in T_4 separated from the root edge by k + 1 variable nodes. The outgoing message we consider is the one at the (ℓ−i+k)-th iteration, and the incoming message we condition on is the one at the (ℓ−i−k)-th iteration. It is easy to check that U^e_i and U^k_i can be computed in a recursive manner using U^e_{k−1} and U^k_{k−1}. The initial conditions are

$$U^{e}_{0}=\begin{pmatrix}y_{\ell}\,\epsilon\lambda'(y_{\ell})\\ y_{\ell}\big(\lambda'(1)-\epsilon\lambda'(y_{\ell})\big)\end{pmatrix},\qquad
U^{k}_{0}=\begin{pmatrix}0\\ y_{\ell}\,\lambda'(1)\end{pmatrix},$$

and the recursion for k ∈ {1, …, i} is the one given in Lemma 3 (cf. Eqs. (26) and (27)). Notice that any received value which is on the path between the root edge and the edge in T_4 affects both messages µ_i^(ℓ) and µ_j^(ℓ) on the corresponding edges. This is why this recursion is slightly more involved than the one for T_1. The situation is depicted in the left side of Fig. 10.

Fig. 10: The two situations that arise when computing the contribution of T_4. On the left side we show the case where the two edges are separated by at most ℓ + 1 variable nodes, and on the right side the case where they are separated by more than ℓ + 1 variable nodes.

Consider now the case of edges in T_4 that are separated from the root edge by more than ℓ + 1 variable nodes (cf. the right picture in Fig. 10). In this case, not all of the received values along the path connecting the two edges affect both messages.
We therefore have to adapt the previous recursion. We start from the root edge and compute the effect of the received values that only affect this message, resulting in an expression similar to the one we used to compute the contribution of T_1. This gives us the following initial conditions:

$$U^{e}_{i-\ell}=\Big[\prod_{k=1}^{i-\ell}V(\ell-k)\,C(2\ell-k)\Big]^{T}\begin{pmatrix}\epsilon\lambda'(y_{2\ell-i})\\ \lambda'(1)-\epsilon\lambda'(y_{2\ell-i})\end{pmatrix},\qquad
U^{k}_{i-\ell}=\Big[\prod_{k=1}^{i-\ell}V(\ell-k)\,C(2\ell-k)\Big]^{T}\begin{pmatrix}0\\ \lambda'(1)-\epsilon\lambda'(y_{2\ell-i})\end{pmatrix}.$$

We then apply the recursion given in Lemma 3 to the intersection of the computation trees. We have to stop the recursion at k = ℓ (the end of the intersection of the computation trees). It remains to account for the received values that only affect the messages on the edge in T_4. This is done by writing

$$\sum_{i=\ell+1}^{2\ell}(1,0)\Big[\prod_{k=\ell+1}^{i}V(\ell-k)\,C(2\ell-k)\Big]\big(y_{2\ell-i}\,U^{e}_{i-\ell}+\bar{y}_{2\ell-i}\,U^{k}_{i-\ell}\big),$$

which is the second term on the right-hand side of Eq. (34).

Computation of S^c: We still need to compute

$$S^{c}=\lim_{n\to\infty}\Big(\mathbb{E}\Big[\mu_{i}^{(\ell)}\sum_{j\in T^{c}}\mu_{j}^{(\ell)}\Big]-n\Lambda'(1)\,x_{\ell}^{2}\Big).$$

Recall that, by definition, all the messages that are carried by edges in T^c at the ℓ-th iteration are functions of a set of received values distinct from the ones µ_i^(ℓ) depends on. At first sight, one might think that such messages are independent of µ_i^(ℓ). This is indeed the case when the Tanner graph is regular, i.e., for the degree distributions λ(x) = x^{l−1} and ρ(x) = x^{r−1}. We then have

$$S^{c}=\lim_{n\to\infty}\Big(\mathbb{E}\big[\mu_{i}^{(\ell)}\big]\sum_{j\in T^{c}}\mathbb{E}\big[\mu_{j}^{(\ell)}\big]-n\Lambda'(1)x_{\ell}^{2}\Big)
=\lim_{n\to\infty}\big(|T^{c}|-n\Lambda'(1)\big)x_{\ell}^{2}
=-|T|\,x_{\ell}^{2},$$

with the cardinality of T being

$$|T|=\sum_{i=0}^{2\ell}\big((l-1)(r-1)\big)^{i}+\sum_{i=0}^{\ell}\big((l-1)(r-1)\big)^{i}.$$

Consider now an irregular ensemble, and let G_T be the graph composed of the edges in T and of the variable and check nodes connecting them. Unlike in the regular case, G_T is no longer fixed: both its size and its structure depend on the graph realization. It is clear that the root message µ_i^(ℓ) depends on the realization of G_T. We will see that the messages carried by the edges in T^c also depend on the realization of G_T. On the other hand, they are clearly conditionally independent given G_T, because conditioned on G_T, µ_i^(ℓ) is just a deterministic function of the received symbols in its computation tree. If we let j denote a generic edge in T^c (for instance, the one with the lowest index), we can therefore write

$$S^{c}=\lim_{n\to\infty}\Big(\mathbb{E}_{G_{T}}\Big[\mathbb{E}\Big[\mu_{i}^{(\ell)}\sum_{j\in T^{c}}\mu_{j}^{(\ell)}\,\Big|\,G_{T}\Big]\Big]-n\Lambda'(1)x_{\ell}^{2}\Big)
=\lim_{n\to\infty}\Big(\mathbb{E}_{G_{T}}\big[|T^{c}|\,\mathbb{E}[\mu_{i}^{(\ell)}|G_{T}]\,\mathbb{E}[\mu_{j}^{(\ell)}|G_{T}]\big]-n\Lambda'(1)x_{\ell}^{2}\Big)$$
$$=\lim_{n\to\infty}\Big(n\Lambda'(1)\,\mathbb{E}_{G_{T}}\big[\mathbb{E}[\mu_{i}^{(\ell)}|G_{T}]\,\mathbb{E}[\mu_{j}^{(\ell)}|G_{T}]\big]-n\Lambda'(1)x_{\ell}^{2}\Big)
-\lim_{n\to\infty}\mathbb{E}_{G_{T}}\big[|T|\,\mathbb{E}[\mu_{i}^{(\ell)}|G_{T}]\,\mathbb{E}[\mu_{j}^{(\ell)}|G_{T}]\big].\;(35)$$
We need to compute E[µ_j^(ℓ) | G_T] for a fixed realization of G_T and an arbitrary edge j taken from T^c (the expectation does not depend on j ∈ T^c: we can therefore consider it as a random edge as well). This value differs slightly from x_ℓ for two reasons. The first one is that we are dealing with a fixed-size Tanner graph (although we later take the limit n → ∞), and therefore the degrees of the nodes in G_T are correlated with the degrees of nodes in its complement G\G_T. Intuitively, if G_T contains an unusually large number of high-degree variable nodes, the rest of the graph will contain an unusually small number of high-degree variable nodes, affecting the average E[µ_j^(ℓ) | G_T]. The second reason why E[µ_j^(ℓ) | G_T] differs from x_ℓ is that certain messages carried by edges in T^c which are close to G_T are affected by messages that flow out of G_T.

The first effect can be characterized by computing the degree distribution of G\G_T as a function of G_T. Define V_i(G_T) (respectively C_i(G_T)) to be the number of variable nodes (check nodes) of degree i in G_T, and let V(x; G_T) = Σ_i V_i(G_T) x^i and C(x; G_T) = Σ_i C_i(G_T) x^i. We shall also need the derivatives of these polynomials, V'(x; G_T) and C'(x; G_T). It is easy to check that if we take a bipartite graph having a variable degree distribution λ(x) and remove a variable node of degree i, the variable degree distribution changes by

$$\delta\lambda(x)=\frac{i\big(\lambda(x)-x^{i-1}\big)}{n\Lambda'(1)}+O(1/n^{2}).$$

Therefore, if we remove G_T from the bipartite graph, the remaining graph will have a variable-perspective degree distribution that differs from the original one by

$$\delta\lambda(x)=\frac{V'(1;G_{T})\,\lambda(x)-V'(x;G_{T})}{n\Lambda'(1)}+O(1/n^{2}).$$

In the same way, the check degree distribution when we remove G_T changes by

$$\delta\rho(x)=\frac{C'(1;G_{T})\,\rho(x)-C'(x;G_{T})}{n\Lambda'(1)}+O(1/n^{2}).$$

If the degree distributions change by δλ(x) and δρ(x), the fraction x_ℓ of erased variable-to-check messages changes by δx_ℓ. To linear order we get

$$\delta x_{\ell}=\sum_{i=0}^{\ell-1}\Big\{\prod_{k=i+1}^{\ell-1}\epsilon\lambda'(y_{k})\,\rho'(\bar{x}_{k})\Big\}\big[\epsilon\,\delta\lambda(y_{i})-\epsilon\lambda'(y_{i})\,\delta\rho(\bar{x}_{i})\big]$$
$$=\frac{1}{n\Lambda'(1)}\sum_{i=0}^{\ell-1}F_{i}\,\Big[\epsilon\big(V'(1;G_{T})\lambda(y_{i})-V'(y_{i};G_{T})\big)-\epsilon\lambda'(y_{i})\big(C'(1;G_{T})\rho(\bar{x}_{i})-C'(\bar{x}_{i};G_{T})\big)\Big]+O(1/n^{2}),$$

with F_i defined as in Eq. (28). Imagine now that we fix the degree distribution of G\G_T.
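The first-order formula for δλ(x) is easy to check numerically. The sketch below is our illustration (names are ours): it removes one variable node of degree i from a finite degree profile, recomputes the edge-perspective coefficients exactly, and compares with the O(1/n) prediction.

```python
def edge_perspective(counts):
    """counts[d] = number of variable nodes of degree d.
    Returns (edge-perspective lambda_d coefficients, total number of edges)."""
    n_edges = sum(d * c for d, c in counts.items())
    return {d: d * c / n_edges for d, c in counts.items()}, n_edges

# A finite profile: 10^4 variable nodes of degrees 2, 3 and 8.
counts = {2: 5000, 3: 4000, 8: 1000}
lam, E = edge_perspective(counts)

# Remove a single variable node of degree i = 8 and recompute exactly.
i = 8
fewer = dict(counts)
fewer[i] -= 1
lam_new, _ = edge_perspective(fewer)

# First-order prediction: delta lambda_d = i * (lambda_d - [d == i]) / E,
# where E plays the role of n * Lambda'(1).
pred = {d: i * (lam[d] - (1.0 if d == i else 0.0)) / E for d in counts}
```

The exact change and the first-order prediction agree up to O(1/E²), which is the content of the δλ(x) formula above with nΛ'(1) = E.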
The conditional expectation E[µ_j^(ℓ) | G_T] still depends on the detailed structure of G_T. The reason is that the messages that flow out of the boundary of G_T (both their number and their values) depend on G_T, and these messages affect messages in G\G_T. Since the fraction of such boundary messages is O(1/n), their effect can again be evaluated perturbatively. Call B the number of edges forming the boundary of G_T (edges emanating upwards from the variable nodes that are ℓ

levels above the root edge, and emanating downwards from the variable nodes that are 2ℓ levels below the root variable node), and let B_i be the number of erased messages carried at the i-th iteration by these edges. Let x̃_i be the fraction of erased messages incoming to check nodes in G\G_T from variable nodes in G\G_T at the i-th iteration. Taking into account the messages coming from variable nodes in G_T (i.e., corresponding to boundary edges), the overall fraction will be x̃_i + δx̃_i, where

$$\delta\tilde{x}_{i}=\frac{B_{i}-B\,\tilde{x}_{i}}{n\Lambda'(1)}+O(1/n^{2}).$$

This expression simply comes from the fact that at the i-th iteration we have nΛ'(1) + O(1) messages in the complement of G_T, of which a fraction x̃_i is erased, plus B messages incoming from the boundary, of which B_i are erasures. Combining the two above effects, we have, for an edge j ∈ T^c,

$$\mathbb{E}[\mu_{j}^{(\ell)}|G_{T}]=x_{\ell}
+\frac{1}{n\Lambda'(1)}\sum_{i=0}^{\ell-1}F_{i}\Big[\epsilon\big(V'(1;G_{T})\lambda(y_{i})-V'(y_{i};G_{T})\big)-\epsilon\lambda'(y_{i})\big(C'(1;G_{T})\rho(\bar{x}_{i})-C'(\bar{x}_{i};G_{T})\big)\Big]
+\frac{1}{n\Lambda'(1)}\sum_{i=0}^{\ell-1}F_{i}\big[B_{i}-B\,x_{i}\big]+O(1/n^{2}).$$

We can now use this expression in (35) to obtain

$$S^{c}=\sum_{i=0}^{\ell-1}F_{i}\,\epsilon\Big(\lambda(y_{i})\,\mathbb{E}\big[\mu^{(\ell)}V'(1;G_{T})\big]-\mathbb{E}\big[\mu^{(\ell)}V'(y_{i};G_{T})\big]\Big)
-\sum_{i=0}^{\ell-1}F_{i}\,\epsilon\lambda'(y_{i})\Big(\rho(\bar{x}_{i})\,\mathbb{E}\big[\mu^{(\ell)}C'(1;G_{T})\big]-\mathbb{E}\big[\mu^{(\ell)}C'(\bar{x}_{i};G_{T})\big]\Big)$$
$$+\sum_{i=0}^{\ell-1}F_{i}\,\mathbb{E}\big[\mu^{(\ell)}B_{i}\big]-\sum_{i=0}^{\ell-1}F_{i}\,x_{i}\,\mathbb{E}\big[\mu^{(\ell)}B\big]-x_{\ell}\,\mathbb{E}\big[\mu^{(\ell)}V'(1;G_{T})\big],$$

where we took the limit n → ∞ and replaced |T| by V'(1;G_T). It is clear what each of these values represents. For example, E[µ^(ℓ)V'(1;G_T)] is the expectation of µ^(ℓ) times the number of edges that are in G_T. Each of these terms can be computed through recursions that are similar in spirit to the ones used to compute S. These recursions are provided in the body of Lemma 3. We will just explain in further detail how the terms E[µ^(ℓ)B] and E[µ^(ℓ)B_i] are computed. We claim that

$$\mathbb{E}\big[\mu^{(\ell)}B\big]=\Big(x_{\ell}+(1,0)\Big[\prod_{k=1}^{\ell}V(\ell-k)\,C(\ell-k)\Big](1,1)^{T}\Big)\big[(\lambda'(1)-1)(\rho'(1)-1)\big]^{\ell}.$$

The reason is that µ^(ℓ) depends only on the realization of its computation tree and not on the whole G_T. From the definitions of G_T, the boundary of G_T is on average ((λ'(1)−1)(ρ'(1)−1))^ℓ times larger than the boundary of the computation tree. Finally, the expectation of µ^(ℓ) times the number of edges in the boundary of its computation tree is computed analogously to what has been done for the contribution of S.
The result is

$$x_{\ell}+(1,0)\Big[\prod_{k=1}^{\ell}V(\ell-k)\,C(\ell-k)\Big](1,1)^{T}$$

(the term x_ℓ accounts for the root edge and the other one for the lower boundary of the computation tree). Multiplying this by ((λ'(1)−1)(ρ'(1)−1))^ℓ, we obtain the above expression. The calculation of E[µ^(ℓ)B_i] is similar. We start by computing the expectation of µ^(ℓ) multiplied by the number of edges in the boundary of its computation tree. This number has to be multiplied by the transposed product of the matrices V and C that accounts for what happens between the boundary of the computation tree and the boundary of G_T. We therefore obtain

$$\mathbb{E}\big[\mu^{(\ell)}B_{i}\big]=\Big(x_{\ell}+(1,0)\Big[\prod_{k=1}^{\ell}V(\ell-k)\,C(\ell-k)\Big](1,1)^{T}\Big)\,(1,0)\Big[\prod_{k}V(\ell+k)\,C(\ell+k)\Big]^{T}(1,1)^{T}.$$

The expression provided in the above lemma has been used to plot V^(ℓ), as a function of ε and for several values of ℓ, in the case of an irregular ensemble, in Fig. 5. It remains to determine the asymptotic behavior of this quantity as the number of iterations converges to infinity.

Lemma 4: Let G be chosen uniformly at random from LDPC(n, λ, ρ) and consider transmission over the BEC of erasure probability ε. Label the nΛ'(1) edges of G in some fixed order by the elements of {1, …, nΛ'(1)}. Set µ_i^(ℓ) equal to one if the message along edge i, from variable to check node, after ℓ iterations, is an erasure, and equal to zero otherwise. Then

$$\lim_{\ell\to\infty}\lim_{n\to\infty}\frac{\mathbb{E}\big[(\sum_{i}\mu_{i}^{(\ell)})^{2}\big]-\mathbb{E}\big[\sum_{i}\mu_{i}^{(\ell)}\big]^{2}}{n\Lambda'(1)}
=\frac{\epsilon^{2}\lambda'(y)^{2}\big[\rho(\bar{x})^{2}-\rho(\bar{x}^{2})+\rho'(\bar{x})\big(1-2x\rho(\bar{x})\big)-\bar{x}^{2}\rho'(\bar{x}^{2})\big]}{\big(1-\epsilon\lambda'(y)\rho'(\bar{x})\big)^{2}}$$
$$+\frac{\epsilon^{2}\lambda'(y)^{2}\rho'(\bar{x})^{2}\big[\epsilon^{2}\lambda(y)^{2}-\epsilon^{2}\lambda(y^{2})-y^{2}\epsilon^{2}\lambda'(y^{2})\big]}{\big(1-\epsilon\lambda'(y)\rho'(\bar{x})\big)^{2}}
+\frac{x-\epsilon^{2}\lambda(y^{2})-y^{2}\epsilon^{2}\lambda'(y^{2})}{1-\epsilon\lambda'(y)\rho'(\bar{x})}
+\frac{\epsilon\,y^{2}\lambda'(y^{2})}{1-\epsilon\lambda'(y)\rho'(\bar{x})},$$

where (x, y) is the fixed point of density evolution at erasure probability ε (x = ελ(y), y = 1 − ρ(1 − x)) and x̄ = 1 − x. The proof is a particularly tedious calculus exercise and we omit it here for the sake of space.

REFERENCES

[1] T. Richardson and R. Urbanke. Modern Coding Theory. Cambridge University Press, 2006. In preparation.
[2] A. Montanari. Finite-size scaling of good codes. In Proc. 39th Annual Allerton Conference on Communication, Control and Computing, Monticello, IL, 2001.
[3] A. Amraoui, A. Montanari, T. Richardson, and R. Urbanke. Finite-length scaling for iteratively decoded LDPC ensembles. Submitted to IEEE Trans. Inform. Theory, June 2004.
[4] A. Amraoui, A. Montanari, T. Richardson, and R. Urbanke. Finite-length scaling and finite-length shift for low-density parity-check codes. In Proc.
42nd Annual Allerton Conference on Communication, Control and Computing, Monticello, IL, 2004.
[5] A. Amraoui, A. Montanari, and R. Urbanke. Finite-length scaling of irregular LDPC code ensembles. In Proc. IEEE Information Theory Workshop, Rotorua, New Zealand, Aug.–Sept. 2005.
[6] M. Luby, M. Mitzenmacher, A. Shokrollahi, D. A. Spielman, and V. Stemann. Practical loss-resilient codes. In Proceedings of the 29th Annual ACM Symposium on Theory of Computing.
[7] C. Méasson, A. Montanari, and R. Urbanke. Maxwell's construction: The hidden bridge between maximum-likelihood and iterative decoding. Submitted to IEEE Transactions on Information Theory, 2005.


More information

For now, let us focus on a specific model of neurons. These are simplified from reality but can achieve remarkable results.

For now, let us focus on a specific model of neurons. These are simplified from reality but can achieve remarkable results. Neural Networks : Dervaton compled by Alvn Wan from Professor Jtendra Malk s lecture Ths type of computaton s called deep learnng and s the most popular method for many problems, such as computer vson

More information

CSci 6974 and ECSE 6966 Math. Tech. for Vision, Graphics and Robotics Lecture 21, April 17, 2006 Estimating A Plane Homography

CSci 6974 and ECSE 6966 Math. Tech. for Vision, Graphics and Robotics Lecture 21, April 17, 2006 Estimating A Plane Homography CSc 6974 and ECSE 6966 Math. Tech. for Vson, Graphcs and Robotcs Lecture 21, Aprl 17, 2006 Estmatng A Plane Homography Overvew We contnue wth a dscusson of the major ssues, usng estmaton of plane projectve

More information

n α j x j = 0 j=1 has a nontrivial solution. Here A is the n k matrix whose jth column is the vector for all t j=0

n α j x j = 0 j=1 has a nontrivial solution. Here A is the n k matrix whose jth column is the vector for all t j=0 MODULE 2 Topcs: Lnear ndependence, bass and dmenson We have seen that f n a set of vectors one vector s a lnear combnaton of the remanng vectors n the set then the span of the set s unchanged f that vector

More information

Lecture 14 (03/27/18). Channels. Decoding. Preview of the Capacity Theorem.

Lecture 14 (03/27/18). Channels. Decoding. Preview of the Capacity Theorem. Lecture 14 (03/27/18). Channels. Decodng. Prevew of the Capacty Theorem. A. Barg The concept of a communcaton channel n nformaton theory s an abstracton for transmttng dgtal (and analog) nformaton from

More information

Error Probability for M Signals

Error Probability for M Signals Chapter 3 rror Probablty for M Sgnals In ths chapter we dscuss the error probablty n decdng whch of M sgnals was transmtted over an arbtrary channel. We assume the sgnals are represented by a set of orthonormal

More information

= z 20 z n. (k 20) + 4 z k = 4

= z 20 z n. (k 20) + 4 z k = 4 Problem Set #7 solutons 7.2.. (a Fnd the coeffcent of z k n (z + z 5 + z 6 + z 7 + 5, k 20. We use the known seres expanson ( n+l ( z l l z n below: (z + z 5 + z 6 + z 7 + 5 (z 5 ( + z + z 2 + z + 5 5

More information

Homework Assignment 3 Due in class, Thursday October 15

Homework Assignment 3 Due in class, Thursday October 15 Homework Assgnment 3 Due n class, Thursday October 15 SDS 383C Statstcal Modelng I 1 Rdge regresson and Lasso 1. Get the Prostrate cancer data from http://statweb.stanford.edu/~tbs/elemstatlearn/ datasets/prostate.data.

More information

Stanford University CS359G: Graph Partitioning and Expanders Handout 4 Luca Trevisan January 13, 2011

Stanford University CS359G: Graph Partitioning and Expanders Handout 4 Luca Trevisan January 13, 2011 Stanford Unversty CS359G: Graph Parttonng and Expanders Handout 4 Luca Trevsan January 3, 0 Lecture 4 In whch we prove the dffcult drecton of Cheeger s nequalty. As n the past lectures, consder an undrected

More information

COMPARISON OF SOME RELIABILITY CHARACTERISTICS BETWEEN REDUNDANT SYSTEMS REQUIRING SUPPORTING UNITS FOR THEIR OPERATIONS

COMPARISON OF SOME RELIABILITY CHARACTERISTICS BETWEEN REDUNDANT SYSTEMS REQUIRING SUPPORTING UNITS FOR THEIR OPERATIONS Avalable onlne at http://sck.org J. Math. Comput. Sc. 3 (3), No., 6-3 ISSN: 97-537 COMPARISON OF SOME RELIABILITY CHARACTERISTICS BETWEEN REDUNDANT SYSTEMS REQUIRING SUPPORTING UNITS FOR THEIR OPERATIONS

More information

Errors for Linear Systems

Errors for Linear Systems Errors for Lnear Systems When we solve a lnear system Ax b we often do not know A and b exactly, but have only approxmatons  and ˆb avalable. Then the best thng we can do s to solve ˆx ˆb exactly whch

More information

The Geometry of Logit and Probit

The Geometry of Logit and Probit The Geometry of Logt and Probt Ths short note s meant as a supplement to Chapters and 3 of Spatal Models of Parlamentary Votng and the notaton and reference to fgures n the text below s to those two chapters.

More information

Week 5: Neural Networks

Week 5: Neural Networks Week 5: Neural Networks Instructor: Sergey Levne Neural Networks Summary In the prevous lecture, we saw how we can construct neural networks by extendng logstc regresson. Neural networks consst of multple

More information

On the correction of the h-index for career length

On the correction of the h-index for career length 1 On the correcton of the h-ndex for career length by L. Egghe Unverstet Hasselt (UHasselt), Campus Depenbeek, Agoralaan, B-3590 Depenbeek, Belgum 1 and Unverstet Antwerpen (UA), IBW, Stadscampus, Venusstraat

More information

Supplement: Proofs and Technical Details for The Solution Path of the Generalized Lasso

Supplement: Proofs and Technical Details for The Solution Path of the Generalized Lasso Supplement: Proofs and Techncal Detals for The Soluton Path of the Generalzed Lasso Ryan J. Tbshran Jonathan Taylor In ths document we gve supplementary detals to the paper The Soluton Path of the Generalzed

More information

Affine transformations and convexity

Affine transformations and convexity Affne transformatons and convexty The purpose of ths document s to prove some basc propertes of affne transformatons nvolvng convex sets. Here are a few onlne references for background nformaton: http://math.ucr.edu/

More information

Salmon: Lectures on partial differential equations. Consider the general linear, second-order PDE in the form. ,x 2

Salmon: Lectures on partial differential equations. Consider the general linear, second-order PDE in the form. ,x 2 Salmon: Lectures on partal dfferental equatons 5. Classfcaton of second-order equatons There are general methods for classfyng hgher-order partal dfferental equatons. One s very general (applyng even to

More information

U.C. Berkeley CS294: Beyond Worst-Case Analysis Luca Trevisan September 5, 2017

U.C. Berkeley CS294: Beyond Worst-Case Analysis Luca Trevisan September 5, 2017 U.C. Berkeley CS94: Beyond Worst-Case Analyss Handout 4s Luca Trevsan September 5, 07 Summary of Lecture 4 In whch we ntroduce semdefnte programmng and apply t to Max Cut. Semdefnte Programmng Recall that

More information

MASSACHUSETTS INSTITUTE OF TECHNOLOGY 6.265/15.070J Fall 2013 Lecture 12 10/21/2013. Martingale Concentration Inequalities and Applications

MASSACHUSETTS INSTITUTE OF TECHNOLOGY 6.265/15.070J Fall 2013 Lecture 12 10/21/2013. Martingale Concentration Inequalities and Applications MASSACHUSETTS INSTITUTE OF TECHNOLOGY 6.65/15.070J Fall 013 Lecture 1 10/1/013 Martngale Concentraton Inequaltes and Applcatons Content. 1. Exponental concentraton for martngales wth bounded ncrements.

More information

Chapter 13: Multiple Regression

Chapter 13: Multiple Regression Chapter 13: Multple Regresson 13.1 Developng the multple-regresson Model The general model can be descrbed as: It smplfes for two ndependent varables: The sample ft parameter b 0, b 1, and b are used to

More information

Lecture 20: Lift and Project, SDP Duality. Today we will study the Lift and Project method. Then we will prove the SDP duality theorem.

Lecture 20: Lift and Project, SDP Duality. Today we will study the Lift and Project method. Then we will prove the SDP duality theorem. prnceton u. sp 02 cos 598B: algorthms and complexty Lecture 20: Lft and Project, SDP Dualty Lecturer: Sanjeev Arora Scrbe:Yury Makarychev Today we wll study the Lft and Project method. Then we wll prove

More information

Dynamic Programming. Preview. Dynamic Programming. Dynamic Programming. Dynamic Programming (Example: Fibonacci Sequence)

Dynamic Programming. Preview. Dynamic Programming. Dynamic Programming. Dynamic Programming (Example: Fibonacci Sequence) /24/27 Prevew Fbonacc Sequence Longest Common Subsequence Dynamc programmng s a method for solvng complex problems by breakng them down nto smpler sub-problems. It s applcable to problems exhbtng the propertes

More information

Lecture 4: Universal Hash Functions/Streaming Cont d

Lecture 4: Universal Hash Functions/Streaming Cont d CSE 5: Desgn and Analyss of Algorthms I Sprng 06 Lecture 4: Unversal Hash Functons/Streamng Cont d Lecturer: Shayan Oves Gharan Aprl 6th Scrbe: Jacob Schreber Dsclamer: These notes have not been subjected

More information

Formulas for the Determinant

Formulas for the Determinant page 224 224 CHAPTER 3 Determnants e t te t e 2t 38 A = e t 2te t e 2t e t te t 2e 2t 39 If 123 A = 345, 456 compute the matrx product A adj(a) What can you conclude about det(a)? For Problems 40 43, use

More information

EEE 241: Linear Systems

EEE 241: Linear Systems EEE : Lnear Systems Summary #: Backpropagaton BACKPROPAGATION The perceptron rule as well as the Wdrow Hoff learnng were desgned to tran sngle layer networks. They suffer from the same dsadvantage: they

More information

Expected Value and Variance

Expected Value and Variance MATH 38 Expected Value and Varance Dr. Neal, WKU We now shall dscuss how to fnd the average and standard devaton of a random varable X. Expected Value Defnton. The expected value (or average value, or

More information

LOW BIAS INTEGRATED PATH ESTIMATORS. James M. Calvin

LOW BIAS INTEGRATED PATH ESTIMATORS. James M. Calvin Proceedngs of the 007 Wnter Smulaton Conference S G Henderson, B Bller, M-H Hseh, J Shortle, J D Tew, and R R Barton, eds LOW BIAS INTEGRATED PATH ESTIMATORS James M Calvn Department of Computer Scence

More information

Maximizing the number of nonnegative subsets

Maximizing the number of nonnegative subsets Maxmzng the number of nonnegatve subsets Noga Alon Hao Huang December 1, 213 Abstract Gven a set of n real numbers, f the sum of elements of every subset of sze larger than k s negatve, what s the maxmum

More information

1 Matrix representations of canonical matrices

1 Matrix representations of canonical matrices 1 Matrx representatons of canoncal matrces 2-d rotaton around the orgn: ( ) cos θ sn θ R 0 = sn θ cos θ 3-d rotaton around the x-axs: R x = 1 0 0 0 cos θ sn θ 0 sn θ cos θ 3-d rotaton around the y-axs:

More information

Econ107 Applied Econometrics Topic 3: Classical Model (Studenmund, Chapter 4)

Econ107 Applied Econometrics Topic 3: Classical Model (Studenmund, Chapter 4) I. Classcal Assumptons Econ7 Appled Econometrcs Topc 3: Classcal Model (Studenmund, Chapter 4) We have defned OLS and studed some algebrac propertes of OLS. In ths topc we wll study statstcal propertes

More information

Global Sensitivity. Tuesday 20 th February, 2018

Global Sensitivity. Tuesday 20 th February, 2018 Global Senstvty Tuesday 2 th February, 28 ) Local Senstvty Most senstvty analyses [] are based on local estmates of senstvty, typcally by expandng the response n a Taylor seres about some specfc values

More information

4DVAR, according to the name, is a four-dimensional variational method.

4DVAR, according to the name, is a four-dimensional variational method. 4D-Varatonal Data Assmlaton (4D-Var) 4DVAR, accordng to the name, s a four-dmensonal varatonal method. 4D-Var s actually a drect generalzaton of 3D-Var to handle observatons that are dstrbuted n tme. The

More information

Snce h( q^; q) = hq ~ and h( p^ ; p) = hp, one can wrte ~ h hq hp = hq ~hp ~ (7) the uncertanty relaton for an arbtrary state. The states that mnmze t

Snce h( q^; q) = hq ~ and h( p^ ; p) = hp, one can wrte ~ h hq hp = hq ~hp ~ (7) the uncertanty relaton for an arbtrary state. The states that mnmze t 8.5: Many-body phenomena n condensed matter and atomc physcs Last moded: September, 003 Lecture. Squeezed States In ths lecture we shall contnue the dscusson of coherent states, focusng on ther propertes

More information

Lecture 4 Hypothesis Testing

Lecture 4 Hypothesis Testing Lecture 4 Hypothess Testng We may wsh to test pror hypotheses about the coeffcents we estmate. We can use the estmates to test whether the data rejects our hypothess. An example mght be that we wsh to

More information

Assortment Optimization under MNL

Assortment Optimization under MNL Assortment Optmzaton under MNL Haotan Song Aprl 30, 2017 1 Introducton The assortment optmzaton problem ams to fnd the revenue-maxmzng assortment of products to offer when the prces of products are fxed.

More information

Tornado and Luby Transform Codes. Ashish Khisti Presentation October 22, 2003

Tornado and Luby Transform Codes. Ashish Khisti Presentation October 22, 2003 Tornado and Luby Transform Codes Ashsh Khst 6.454 Presentaton October 22, 2003 Background: Erasure Channel Elas[956] studed the Erasure Channel β x x β β x 2 m x 2 k? Capacty of Noseless Erasure Channel

More information

The Feynman path integral

The Feynman path integral The Feynman path ntegral Aprl 3, 205 Hesenberg and Schrödnger pctures The Schrödnger wave functon places the tme dependence of a physcal system n the state, ψ, t, where the state s a vector n Hlbert space

More information

18.1 Introduction and Recap

18.1 Introduction and Recap CS787: Advanced Algorthms Scrbe: Pryananda Shenoy and Shjn Kong Lecturer: Shuch Chawla Topc: Streamng Algorthmscontnued) Date: 0/26/2007 We contnue talng about streamng algorthms n ths lecture, ncludng

More information

Introduction to Vapor/Liquid Equilibrium, part 2. Raoult s Law:

Introduction to Vapor/Liquid Equilibrium, part 2. Raoult s Law: CE304, Sprng 2004 Lecture 4 Introducton to Vapor/Lqud Equlbrum, part 2 Raoult s Law: The smplest model that allows us do VLE calculatons s obtaned when we assume that the vapor phase s an deal gas, and

More information

Statistics II Final Exam 26/6/18

Statistics II Final Exam 26/6/18 Statstcs II Fnal Exam 26/6/18 Academc Year 2017/18 Solutons Exam duraton: 2 h 30 mn 1. (3 ponts) A town hall s conductng a study to determne the amount of leftover food produced by the restaurants n the

More information

Perfect Competition and the Nash Bargaining Solution

Perfect Competition and the Nash Bargaining Solution Perfect Competton and the Nash Barganng Soluton Renhard John Department of Economcs Unversty of Bonn Adenauerallee 24-42 53113 Bonn, Germany emal: rohn@un-bonn.de May 2005 Abstract For a lnear exchange

More information

Uncertainty and auto-correlation in. Measurement

Uncertainty and auto-correlation in. Measurement Uncertanty and auto-correlaton n arxv:1707.03276v2 [physcs.data-an] 30 Dec 2017 Measurement Markus Schebl Federal Offce of Metrology and Surveyng (BEV), 1160 Venna, Austra E-mal: markus.schebl@bev.gv.at

More information

Density matrix. c α (t)φ α (q)

Density matrix. c α (t)φ α (q) Densty matrx Note: ths s supplementary materal. I strongly recommend that you read t for your own nterest. I beleve t wll help wth understandng the quantum ensembles, but t s not necessary to know t n

More information

Graph Reconstruction by Permutations

Graph Reconstruction by Permutations Graph Reconstructon by Permutatons Perre Ille and Wllam Kocay* Insttut de Mathémathques de Lumny CNRS UMR 6206 163 avenue de Lumny, Case 907 13288 Marselle Cedex 9, France e-mal: lle@ml.unv-mrs.fr Computer

More information

Generalized Linear Methods

Generalized Linear Methods Generalzed Lnear Methods 1 Introducton In the Ensemble Methods the general dea s that usng a combnaton of several weak learner one could make a better learner. More formally, assume that we have a set

More information

princeton univ. F 13 cos 521: Advanced Algorithm Design Lecture 3: Large deviations bounds and applications Lecturer: Sanjeev Arora

princeton univ. F 13 cos 521: Advanced Algorithm Design Lecture 3: Large deviations bounds and applications Lecturer: Sanjeev Arora prnceton unv. F 13 cos 521: Advanced Algorthm Desgn Lecture 3: Large devatons bounds and applcatons Lecturer: Sanjeev Arora Scrbe: Today s topc s devaton bounds: what s the probablty that a random varable

More information

Inductance Calculation for Conductors of Arbitrary Shape

Inductance Calculation for Conductors of Arbitrary Shape CRYO/02/028 Aprl 5, 2002 Inductance Calculaton for Conductors of Arbtrary Shape L. Bottura Dstrbuton: Internal Summary In ths note we descrbe a method for the numercal calculaton of nductances among conductors

More information

Société de Calcul Mathématique SA

Société de Calcul Mathématique SA Socété de Calcul Mathématque SA Outls d'ade à la décson Tools for decson help Probablstc Studes: Normalzng the Hstograms Bernard Beauzamy December, 202 I. General constructon of the hstogram Any probablstc

More information

Psychology 282 Lecture #24 Outline Regression Diagnostics: Outliers

Psychology 282 Lecture #24 Outline Regression Diagnostics: Outliers Psychology 282 Lecture #24 Outlne Regresson Dagnostcs: Outlers In an earler lecture we studed the statstcal assumptons underlyng the regresson model, ncludng the followng ponts: Formal statement of assumptons.

More information

CHAPTER 14 GENERAL PERTURBATION THEORY

CHAPTER 14 GENERAL PERTURBATION THEORY CHAPTER 4 GENERAL PERTURBATION THEORY 4 Introducton A partcle n orbt around a pont mass or a sphercally symmetrc mass dstrbuton s movng n a gravtatonal potental of the form GM / r In ths potental t moves

More information

The optimal delay of the second test is therefore approximately 210 hours earlier than =2.

The optimal delay of the second test is therefore approximately 210 hours earlier than =2. THE IEC 61508 FORMULAS 223 The optmal delay of the second test s therefore approxmately 210 hours earler than =2. 8.4 The IEC 61508 Formulas IEC 61508-6 provdes approxmaton formulas for the PF for smple

More information

VQ widely used in coding speech, image, and video

VQ widely used in coding speech, image, and video at Scalar quantzers are specal cases of vector quantzers (VQ): they are constraned to look at one sample at a tme (memoryless) VQ does not have such constrant better RD perfomance expected Source codng

More information

Section 8.3 Polar Form of Complex Numbers

Section 8.3 Polar Form of Complex Numbers 80 Chapter 8 Secton 8 Polar Form of Complex Numbers From prevous classes, you may have encountered magnary numbers the square roots of negatve numbers and, more generally, complex numbers whch are the

More information

APPENDIX A Some Linear Algebra

APPENDIX A Some Linear Algebra APPENDIX A Some Lnear Algebra The collecton of m, n matrces A.1 Matrces a 1,1,..., a 1,n A = a m,1,..., a m,n wth real elements a,j s denoted by R m,n. If n = 1 then A s called a column vector. Smlarly,

More information

Lecture 5 Decoding Binary BCH Codes

Lecture 5 Decoding Binary BCH Codes Lecture 5 Decodng Bnary BCH Codes In ths class, we wll ntroduce dfferent methods for decodng BCH codes 51 Decodng the [15, 7, 5] 2 -BCH Code Consder the [15, 7, 5] 2 -code C we ntroduced n the last lecture

More information

Case A. P k = Ni ( 2L i k 1 ) + (# big cells) 10d 2 P k.

Case A. P k = Ni ( 2L i k 1 ) + (# big cells) 10d 2 P k. THE CELLULAR METHOD In ths lecture, we ntroduce the cellular method as an approach to ncdence geometry theorems lke the Szemeréd-Trotter theorem. The method was ntroduced n the paper Combnatoral complexty

More information

PHYS 705: Classical Mechanics. Calculus of Variations II

PHYS 705: Classical Mechanics. Calculus of Variations II 1 PHYS 705: Classcal Mechancs Calculus of Varatons II 2 Calculus of Varatons: Generalzaton (no constrant yet) Suppose now that F depends on several dependent varables : We need to fnd such that has a statonary

More information

LECTURE 9 CANONICAL CORRELATION ANALYSIS

LECTURE 9 CANONICAL CORRELATION ANALYSIS LECURE 9 CANONICAL CORRELAION ANALYSIS Introducton he concept of canoncal correlaton arses when we want to quantfy the assocatons between two sets of varables. For example, suppose that the frst set of

More information

Chapter Newton s Method

Chapter Newton s Method Chapter 9. Newton s Method After readng ths chapter, you should be able to:. Understand how Newton s method s dfferent from the Golden Secton Search method. Understand how Newton s method works 3. Solve

More information

BOUNDEDNESS OF THE RIESZ TRANSFORM WITH MATRIX A 2 WEIGHTS

BOUNDEDNESS OF THE RIESZ TRANSFORM WITH MATRIX A 2 WEIGHTS BOUNDEDNESS OF THE IESZ TANSFOM WITH MATIX A WEIGHTS Introducton Let L = L ( n, be the functon space wth norm (ˆ f L = f(x C dx d < For a d d matrx valued functon W : wth W (x postve sem-defnte for all

More information

Lecture 3: Shannon s Theorem

Lecture 3: Shannon s Theorem CSE 533: Error-Correctng Codes (Autumn 006 Lecture 3: Shannon s Theorem October 9, 006 Lecturer: Venkatesan Guruswam Scrbe: Wdad Machmouch 1 Communcaton Model The communcaton model we are usng conssts

More information

Lossy Compression. Compromise accuracy of reconstruction for increased compression.

Lossy Compression. Compromise accuracy of reconstruction for increased compression. Lossy Compresson Compromse accuracy of reconstructon for ncreased compresson. The reconstructon s usually vsbly ndstngushable from the orgnal mage. Typcally, one can get up to 0:1 compresson wth almost

More information