IEEE JOURNAL ON SELECTED AREAS IN COMMUNICATIONS, VOL. 28, NO. 2, FEBRUARY 2010

An Iteratively Decodable Tensor Product Code with Application to Data Storage

Hakim Alhussien, Member, IEEE, and Jaekyun Moon, Fellow, IEEE

arXiv: v1 [cs.IT] 29 Mar 2010

Abstract—The error pattern correcting code (EPCC) can be constructed to provide a syndrome decoding table targeting the dominant error events of an inter-symbol interference channel at the output of the Viterbi detector. For the size of the syndrome table to be manageable and the list of possible error events to be reasonable in size, the codeword length of EPCC needs to be short enough. However, the rate of such a short-length code will be too low for hard drive applications. To accommodate the required large redundancy, it is possible to record only a highly compressed function of the parity bits of EPCC's tensor product with a symbol correcting code. In this paper, we show that the proposed tensor error-pattern correcting code (T-EPCC) is linear-time encodable and also devise a low-complexity soft iterative decoding algorithm for EPCC's tensor product with q-ary LDPC (T-EPCC-qLDPC). Simulation results show that T-EPCC-qLDPC achieves almost similar performance to single-level qLDPC with a 1/2 KB sector at 50% reduction in decoding complexity. Moreover, 1 KB T-EPCC-qLDPC surpasses the performance of 1/2 KB single-level qLDPC at the same decoder complexity.

Index Terms—Tensor product codes, inter-symbol interference, turbo equalization, error-pattern correction, q-ary LDPC, multi-level log likelihood ratio, tensor symbol signatures, signature-correcting code, detection postprocessing.

I. INTRODUCTION

The advent of high recording density enabling technologies, pioneered by galloping improvements in head and media design and manufacturing processes, has pushed for similar advances in read channel design and error correction coding, driving research efforts into developing channel-capacity-approaching coding schemes based on soft iterative decoding that are also implementation friendly [1], [2].
Soft decodable error correction codes (ECC), mainly low-density parity-check (LDPC) codes, would eventually replace the conventional Reed-Solomon (RS) outer ECC, which, despite its large minimum distance, possesses a dense parity check matrix that does not lend itself easily to powerful belief propagation (BP) decoding. There exists vast literature on the various design aspects of LDPC coded systems for magnetic recording applications. This includes code construction [3]-[6], efficient encoding [7], [8], decoder optimization [9]-[11], and performance evaluation [12]-[14]. In this work, we propose an LDPC coded system optimized for the magnetic recording channel that spans contributions in most of these areas.

Manuscript received January 5, 2009; revised August, 2009. This work was supported in part by the NSF Theoretical Foundation Grant. Hakim Alhussien is with Link-A-Media Devices, Santa Clara, CA 95051, USA (e-mail: hakima@link-a-media.com). Jaekyun Moon is a Professor of Electrical Engineering at KAIST, Yuseong-gu, Daejeon, Republic of Korea (e-mail: jaemoon@ee.kaist.ac.kr).

The error-pattern correcting code (EPCC) is proposed in [15]-[17], motivated by the well-known observation that the error rate at the channel detector output of an ISI channel is dominated by a few specific known error cluster patterns. This is due to the fact that the channel output energies associated with these error patterns are smaller than those of other patterns. A multiparity cyclic EPCC was first described in [16], with an RS outer ECC, possessing distinct syndrome sets for all such dominant error patterns. To reduce the code rate penalty, which is a severe SNR degradation in recording applications, a method to increase the code rate was introduced in [17] that also improved EPCC's algebraic single and multiple error-pattern correction capability. In this method, the generator polynomial of the short base EPCC is multiplied by a primitive polynomial that is not already a factor of the generator polynomial. Also, the primitive polynomial degree is chosen so as to achieve a certain desired codeword length.
Moreover, [17] describes a Viterbi detection postprocessor that provides error-event-reliability information aiding the syndrome-mapping of EPCC to improve its correction accuracy. However, improving the EPCC code rate by extending its codeword length increases the probability of multiple dominant error patterns occurring within the codeword, and this requires increasing the size of the syndrome table considerably to maintain the same correction power, which eventually results in prohibitive decoding complexity. To maintain correction power with a manageable-size syndrome decoding table, [18] discusses a more efficient method based on a list decoding strategy that delivers satisfactory sector error rate (SER) gain with an outer RS ECC. Later, this list decoding scheme was formulated as a soft-input soft-output block in [19] and utilized to enhance the performance of turbo equalization based on convolutional codes (CC). Nevertheless, the serial concatenation scheme that proved successful with RS hard decoding and CC-based turbo equalization does not work as well in a serial concatenation of long-EPCC and LDPC. The reason is that when the LDPC decoder fails, especially in the waterfall region, the sector contains a large number of multiple error occurrences. When many such error events occur in a given EPCC codeword, decoding by any reasonable-size list decoder is formidable. Thus, an inner EPCC cannot in any capacity reduce the SER of a serially concatenated outer LDPC. On the other hand, if the EPCC codeword length is decreased substantially, then the number of errors per codeword is reasonable, as long as the overall code rate is somehow kept high. Here, the concept of tensor product construction comes into play. Tensor product parity codes (TPPC) were first proposed in [21] as the null-space of the parity check matrix resulting

from the tensor product of two other parity check matrices corresponding to a variety of code families. As a result, based on the choice of the concatenated codes, TPPC would be classified as an error correction code if constructed from two ECCs, an error detection code (EDC) if constructed from two EDCs, and an error location code (ELC) if constructed from an EDC and an ECC in a special order. As a matter of fact, ELCs were introduced earlier in [24] and their algebraic properties studied in detail, but later incorporated in the unified theme of TPPCs in [21]. Furthermore, a generalized class of hard-decodable ELCs was later suggested for application in the magnetic recording channel in [25]. In addition, TPPCs can be generalized by combining a number of codes on various extension fields with shorter binary codes. For this more general case, a decoding algorithm was developed in [26]. An ECC-type TPPC was applied to longitudinal magnetic recording in [22], and to perpendicular magnetic recording in [23]. In [22], a hard decodable tensor product code based on a single parity code and a BCH code is proposed as an inner code for RS. This code is suitable for low density longitudinal recording channels for which dominant errors have odd weights, such as {+} and {+, -, +}. Also, [22] proposes that the hard decoder passes the corrected parity bits to a Viterbi detector reflecting channel states and parity code states in order to compute the decoder output. Later, [23] presented two methods for combining a tensor-product single parity code with a distance-enhancing constrained code. This code combination achieved more satisfactory performance with RS as an outer code in high density perpendicular recording channels. Our goal in this work is to utilize the concept of tensor product concatenation to construct high rate soft-decodable EPCCs on the symbol-level of the outer ECC. The EPCC target error list is matched to the dominant error events normally observed in high density perpendicular recording channels.
Since dominant error events in perpendicular recording are not only of odd weight [2], this requires that our EPCC be a multiparity code. However, in this case, a Viterbi detector matching the channel and parity will have prohibitive complexity. In spite of this, the performance of the optimal decoder of the baseline parity-coded channel can be approached by the low complexity detection postprocessing technique in [18]. We also present in detail a low complexity, highly parallel soft decoder for T-EPCC and show that it achieves a better performance-complexity tradeoff compared to conventional iterative decoding schemes.

A. Notations and Definitions

For a linear code C : (n, k, p), n denotes the codeword length, k denotes the user data length, and p = n - k denotes the number of code parity bits. For a certain parity check matrix H corresponding to a linear code {C : Hc^t = 0, c in C}, a syndrome s is the range of a perturbation of a codeword: H(c + e)^t = s. A signature refers to the range under H of any bit block, not necessarily a codeword formed of data and parity bits. The multilevel log-likelihood ratio (mlLLR) of a random variable beta in GF(q), corresponding to the p.m.f. (probability mass function) p_i(beta) = Pr(beta = i), i = 0, ..., q-1, can be defined as gamma_i(beta) = log( p_i(beta) / p_0(beta) ), with gamma_0(beta) = 0. [x]_i^j denotes a local segment [x_i, x_{i+1}, ..., x_j] of the sequence x_k. The period of a generator polynomial on GF(2) corresponding to a linear code is equal to the order of that polynomial, as defined in [36]. Also, for a syndrome set {s_i}_{i=0}^{L-1} that corresponds to all the L possible starting positions of an error event, the period P is defined as the smallest integer such that s_{rho+P} = s_rho [18]. Assume alpha_L = log(alpha) and beta_L = log(beta); then (alpha + beta)_L = log( e^{alpha_L} + e^{beta_L} ). Define max*(alpha_L, beta_L) = (alpha + beta)_L = max(alpha_L, beta_L) + log( 1 + e^{-|alpha_L - beta_L|} ). Also, max*{gamma_k}_{k=a}^{b} and max*_{k=a}^{b} gamma_k are two different representations of the recursive implementation of max* acting on the elements of the set {gamma_k}_{k=a}^{b}.

B. Acronyms

TPPC: Tensor Product Parity Code.
qLDPC: q-ary Low-Density Parity-Check code.
RS: Reed-Solomon code.
BCJR: Bahl-Cocke-Jelinek-Raviv.
T-EPCC: Tensor product Error Pattern Correction Code.
T-EPCC-qLDPC and T-EPCC-RS: Tensor product of EPCC and qLDPC or RS, respectively.
LLR: Log-Likelihood Ratio.
mlLLR: multi-level Log-Likelihood Ratio.
ML: Maximum Likelihood.
MAP: Maximum A Posteriori.
QC: Quasi-Cyclic.
SPA: Sum-Product Algorithm.

II. REVIEW OF EPCC AND THE TENSOR PRODUCT CODING PARADIGM

In this section we give a brief review of the concept of EPCC, including the design of two example codes that will be utilized later in the simulation study. Also, we review the tensor product coding paradigm and present an encoding method that allows for EPCC-based linear-time-encodable TPPCs.

A. EPCC Review and Examples

We review the construction of a cyclic code targeting the set of l_max dominant error events {e^(1)_k(x), e^(2)_k(x), ..., e^(l_max)_k(x)}, represented as polynomials over GF(2), that can occur at any starting position k in a codeword of length l_T. A syndrome of error type e^(i)(x) at position k is defined as s^(i)_k(x) = e^(i)_k(x) mod g(x), where g(x) is the generator polynomial of the code and mod is the polynomial modulus operation. A syndrome set S_i for error type e^(i)(x) contains elements corresponding to all cyclic shifts of the polynomial e^(i)(x); elements of S_i are thus related by s^(i)_{k+1}(x) = x * s^(i)_k(x) mod g(x).
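These definitions are easy to check numerically. The sketch below (Python; the integer bit-convention and helper names are ours, not the paper's) computes the syndrome sets of the dominant error polynomials e^(i)(x) = 1 + x + ... + x^(i-1) for the two generator polynomials used in Examples 1 and 2 of this section, confirming the syndrome-set periods quoted there:

```python
# Sketch: GF(2) polynomials encoded as Python ints (bit i = coefficient of x^i).
# Helper names are illustrative, not from the paper.

def gf2_mod(a, g):
    """Remainder of polynomial a modulo polynomial g over GF(2)."""
    dg = g.bit_length() - 1
    while a and a.bit_length() - 1 >= dg:
        a ^= g << (a.bit_length() - 1 - dg)
    return a

def syndrome_set(e, g, n):
    """Syndromes s_k = x^k e(x) mod g(x) for all n cyclic shifts of error e
    in an n-bit cyclic codeword (shifts taken modulo x^n + 1)."""
    return [gf2_mod(gf2_mod(e << k, (1 << n) | 1), g) for k in range(n)]

def periods(g, n, n_errors):
    """Period of each syndrome set S_i; the sequence s_k is purely periodic
    (x is invertible mod g), so the period equals the number of distinct
    syndromes in the set."""
    return {i: len(set(syndrome_set((1 << i) - 1, g, n)))
            for i in range(1, n_errors + 1)}

# Example 1: g(x) = 1 + x + x^3 + x^5 + x^6, l_T = 12, 5 target errors
assert periods(0b1101011, 12, 5) == {1: 12, 2: 12, 3: 6, 4: 12, 5: 12}
# Example 2: g(x) = 1 + x^2 + x^3 + x^5 + x^6 + x^8, l_T = 18, 10 target errors
assert periods(0b101101101, 18, 10) == {1: 18, 2: 9, 3: 18, 4: 9, 5: 18,
                                        6: 9, 7: 18, 8: 9, 9: 2, 10: 9}
```

For Example 1 the five sets are also pairwise disjoint, which is what permits unambiguous decoding of the error type.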

Fig. 1. TPPC C1(n1, k1, p1) (x) C2(n2, k2, p2) codeword structure: n2 tensor symbols of n1 bits each, with the parity carried in the last p2 tensor symbols.

For unambiguous decoding of e^(i)(x) and e^(j)(x), i != j, we must have S_i intersect S_j = empty. This design requirement constrains g(x) to have distinct greatest common divisors with e^(i)(x), for all targeted i [16]. However, even if this constraint is satisfied, an element in S_i can still map to more than one error position, i.e., the period of the syndrome set (and the period of g(x)) can be less than l_T. Moreover, this constraint is only sufficient but not necessary. As shown in [16], there may exist a lower degree g(x) that can yield distinct syndrome sets for the targeted error polynomials, resulting in a higher rate EPCC. A search method to find this g(x) is already discussed in detail in [16] and [18]. We next give two example EPCC constructions that will be used throughout the paper. We target the dominant error events of the ideal equalized monic channel 1 - D in AWGN, which is suitable as a partial response target in perpendicular magnetic recording read channels. For this channel, the dominant errors are given by: e^(1)(x) = 1, e^(2)(x) = 1 + x, e^(3)(x) = 1 + x + x^2, etc., i.e., they can be represented as polynomials over GF(2) for which all powers of x have nonzero coefficients. The two EPCCs are:

Example 1: Targeting error polynomials up to degree 4, we get the generator polynomial g(x) = 1 + x + x^3 + x^5 + x^6 of period 12 via the search procedure of [16]. Choosing a codeword length of 12, 5 distinct, non-overlapping syndrome sets are utilized to distinguish the 5 target errors. Then, syndrome set S_3 will have period 6, while all other sets have period 12. A syndrome set of period 6 means that each syndrome decodes to one of 2 possible error positions within the 12-bit codeword. Nonetheless, e^(3)(x) can be decoded reliably via channel reliability information and the polarity of data support.
The low code rate of 0.5 makes this code unattractive as an inner code in a serial concatenation setup for recording channel applications. However, as we will see later, a tensor code setup makes it practical to use such powerful codes for recording applications.

Example 2: Targeting error polynomials up to degree 9, we have to record more redundancy. To accomplish this feat, a cyclic code with 8 parity bits, code rate 0.556, and a generator polynomial g(x) = 1 + x^2 + x^3 + x^5 + x^6 + x^8 of period 18 is found by the search procedure in [16]. Then, syndrome sets S_1, S_3, S_5, and S_7 each have period 18 and thus can be decoded without ambiguity, while syndrome sets S_2, S_4, S_6, S_8, and S_10 each have period 9, decoding to one of two positions. The worst is S_9, of period 2, which would decode to one of 9 possible positions. Still, the algebraic decoder can quickly shrink this number to a few positions by checking the data support, and then would choose the one position with highest local reliability.

B. Tensor Product Parity Codes

1) Construction and Properties of the TPPC Parity Check Matrix: Consider a binary linear code C1 : (n1, k1, p1) derived from the null space of parity check matrix H_c1, and assume C1 corrects any error event that belongs to class eps1. Also, consider a non-binary linear code C2 : (n2, k2, p2) derived from the null space of parity check matrix H_c2 and defined over elements of GF(2^p1). Moreover, assume this code corrects any symbol error type that belongs to class eps2. As a preliminary step, convert the binary p1 x n1 matrix H_c1, column by column, into a string of GF(2^p1) elements of dimension 1 x n1. Then, construct the matrix H_c3 = H_c1 (x) H_c2 as a p2 x (n1*n2) array of GF(2^p1) elements. Finally, convert the elements of H_c3 into p1-bit columns, based on the same primitive polynomial of degree p1 used all over in the construction method. The null space of the (p1*p2) x (n1*n2) binary H_c3 corresponds to a linear binary code C3 : (n1*n2, k3, p1*p2). As shown in Fig. 1, a C3 codeword is composed of n2 blocks termed tensor-symbols, each having n1 bits.
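As a concrete illustration of this construction, the following sketch builds H_c3 for a toy pair of component codes of our own choosing (much smaller than the paper's examples): C1 is the (3, 1, 2) binary repetition code, so signatures live in GF(4), and C2 is a (3, 2, 1) code over GF(4) with H_c2 = [1 alpha alpha^2]. It checks that the binary null space has 2^(n1*n2 - p1*p2) = 128 words and that a word lies in it exactly when its GF(4)-mapped tensor-symbol signatures satisfy H_c2:

```python
# Toy TPPC construction sketch; the component codes and helper names here are
# illustrative choices, not the paper's examples.
# GF(4) symbols coded as ints 0..3: bit 0 = coeff of 1, bit 1 = coeff of alpha;
# addition is XOR, multiplication via table (using alpha^2 = alpha + 1).
from itertools import product

GF4_MUL = [[0, 0, 0, 0], [0, 1, 2, 3], [0, 2, 3, 1], [0, 3, 1, 2]]

p1, n1 = 2, 3
p2, n2 = 1, 3
H_c1 = [[1, 1, 0],      # p1 x n1: parity checks of the (3, 1) repetition code
        [1, 0, 1]]
H_c2 = [[1, 2, 3]]      # p2 x n2 over GF(4): [1, alpha, alpha^2]

def signature(block):
    """p1-bit signature of an n1-bit block under H_c1, mapped to GF(4)."""
    bits = [sum(h * b for h, b in zip(row, block)) % 2 for row in H_c1]
    return bits[0] | (bits[1] << 1)

# Step 1: read the columns of H_c1 as GF(4) symbols.
col_syms = [H_c1[0][j] | (H_c1[1][j] << 1) for j in range(n1)]
# Step 2: H_c3 = H_c1 (x) H_c2 as a p2 x (n1*n2) array of GF(4) symbols.
H_c3_sym = [[GF4_MUL[h2][s] for h2 in row2 for s in col_syms] for row2 in H_c2]
# Step 3: expand every GF(4) entry into p1 bits -> binary (p1*p2) x (n1*n2).
H_c3_bin = [[(sym >> b) & 1 for sym in row] for row in H_c3_sym for b in range(p1)]

def in_null_space(c):
    return all(sum(h * x for h, x in zip(row, c)) % 2 == 0 for row in H_c3_bin)

def signatures_satisfy_H_c2(c):
    sigs = [signature(c[t * n1:(t + 1) * n1]) for t in range(n2)]
    ok = True
    for row2 in H_c2:
        acc = 0
        for h2, s in zip(row2, sigs):
            acc ^= GF4_MUL[h2][s]   # GF(4) addition is XOR in this coding
        ok &= (acc == 0)
    return ok

words = list(product((0, 1), repeat=n1 * n2))
n_codewords = sum(in_null_space(c) for c in words)   # 2^(9 - 2) = 128
agree = all(in_null_space(c) == signatures_satisfy_H_c2(c) for c in words)
```

The exhaustive check over all 512 nine-bit words is only feasible at toy scale, but it exercises exactly the column-symbolization, tensor product, and bit-expansion steps described above.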
Also, it can be shown that C3 can correct any collection of tensor symbol errors belonging to class eps2, provided that all errors within each tensor symbol belong to class eps1 [21]. Note that a tensor symbol is not an actual C1 codeword, and as such, using the terms inner and outer codes would not be completely accurate. In addition, the tensor symbols are not codewords themselves: as can be seen in Fig. 1, the first k2 tensor symbols are all data bits to start with, and even the last p2 tensor symbols, which are composed of data and parity bits, have non-zero syndromes under H_c1. Thus, a TPPC codeword does not correspond directly to either H_c1 or H_c2, and as a result, the component codebooks they describe are not recorded directly on the channel. Another interesting property of the resulting TPPC is that the symbol-mapping of the sequence of tensor-symbol signatures under H_c1 forms a codeword of C2, which we refer to as the signature-correcting component code.

2) Encoding of Tensor Product Parity Codes: The encoding of a TPPC can be performed using its binary parity check matrix, but the corresponding binary generator matrix is not guaranteed to possess algebraic properties that enable linear time encodability. Thus, an implementation-friendly approach would be to utilize the encoders of the constituent codes, which can be chosen to be linear time encodable. Consider a binary code C1 : (n1, k1, p1) that is the null space of parity check matrix H_c1, and a non-binary code C2 : (n2, k2, p2) defined on GF(2^p1); the tensor-product concatenation is a binary C3 : (n3, k3, p3), where:

n3 = n2 * n1,  k3 = n1 * n2 - p1 * p2,  p3 = p1 * p2
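Plugging in numbers makes the dimension bookkeeping concrete. The short sketch below (function names ours) computes (n3, k3, p3) for the component pair used in Section III, an (18, 10, 8) EPCC with a (255, 195) RS over GF(2^8), together with the SNR cost of code rate under the two rate-penalty models 10*log10(1/R) and 10*log10(1/R^2) quoted there:

```python
import math

def tppc_dims(n1, p1, n2, p2):
    """C3 = C1 (x) C2 dimensions: n3 = n1*n2, p3 = p1*p2, k3 = n3 - p3."""
    n3, p3 = n1 * n2, p1 * p2
    return n3, n3 - p3, p3

def rate_penalty_db(rate, exponent=1):
    """SNR penalty 10*log10(1/R^exponent) of a code of rate R."""
    return 10.0 * exponent * math.log10(1.0 / rate)

# (18,10,8) binary EPCC with a (255,195) RS over GF(2^8) (60 parity symbols):
n3, k3, p3 = tppc_dims(18, 8, 255, 60)      # (4590, 4110, 480)

# SNR gained by recording the high-rate T-EPCC instead of rate-0.556 EPCC:
gain_1 = rate_penalty_db(10 / 18) - rate_penalty_db(k3 / n3)        # ~2.07 dB
gain_2 = rate_penalty_db(10 / 18, 2) - rate_penalty_db(k3 / n3, 2)  # ~4.15 dB
```

The two gains reproduce the roughly 2 dB and 4.1 dB improvements cited for the Section III example.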

Fig. 2. TPPC encoder of C1(n1, k1, p1) (x) C2(n2, k2, p2): (a) signatures calculated under H_c1, then the p1-bit signatures are mapped to GF(2^p1) symbols; (b) k2 signatures encoded by generator matrix G_c2 into a C2 codeword, then parity symbols are mapped back to GF(2); (c) p1*p2 TPPC parity bits are calculated by back substitution.

Assume that C1 is a cyclic code, and C2 is any of the linear time encodable codes, where we choose a quasi-cyclic (QC) component code for the purpose of this study. Then, the encoders of C1 and C2 communicate via the following algorithm to generate a codeword of C3 (see Fig. 2):

i) Receive a block of n1*k2 + k1*(n2 - k2) bits from the data source; call it major block alpha.
ii) Divide major block alpha into minor block beta of n1*k2 bits, and minor block gamma of k1*(n2 - k2), i.e., k1*p2, bits.
iii) Divide block beta into k2 columns, each of n1 bits. Then, for each column, calculate the intermediate p1-bit signature under the parity check matrix of C1. Using a feedback shift register (FSR) to calculate the signatures, the computational cost is O(n1) operations per signature, and O(n1*k2) for this entire step.
iv) Convert the intermediate signatures from p1-bit strings into GF(2^p1) symbols.
v) Encode the k2 non-binary signatures into a C2 codeword of length n2. Using FSRs to encode the quasi-cyclic C2, the computational complexity of this step is O(n2).
vi) Convert the computed signatures back into p1-bit strings.
vii) Divide block gamma into p2 columns, each of k1 bits. Add p1 blanks in each column, to be filled with the parity bits of C3. Then, align each column with the p2 signatures computed in the previous step.
viii) Fill the blanks such that the signature of data plus parity blanks under C1 equals the corresponding aligned signature from step vi). The parity can be calculated using the systematic H_c1 and the method of back substitution, which requires a computational complexity of O(n1) per column.

The total computational complexity of this encoding algorithm is O(n1*k2 + n2 + n1*p2), i.e.,
it is O(n1*n2) = O(n3), which is the TPPC codeword length. Thus, we have shown, with some constraints, that if C1 and C2 are linear time encodable, then C3 = C1 (x) C2 is linear time encodable.

III. T-EPCC-RS CODES

To demonstrate the algebraic properties of TPPC codes, we present an example code suitable for recording applications with 1/2 KB sector size. Consider two component codes:

i) A binary cyclic (18, 10) EPCC of Example 2 above, with rate 0.556, 8 parity bits, and parity check matrix over GF(2^8):

H_epcc = [ alpha  alpha^2  alpha^3  alpha^4  alpha^5 ... alpha^6  alpha^7  alpha^33  alpha^34  alpha^96  alpha^182  alpha^236  alpha^234  alpha^27  alpha^92  alpha^93 ]  (a 1 x 18 row of GF(2^8) symbols)

ii) A (255, 195) RS code over GF(2^8), of rate 0.765, t = 30, and 60 parity symbols.

The resulting TPPC is a binary (4590, 4110) code, of rate 0.896 and redundancy 480 parity bits. For this code, a codeword is made of 255 18-bit tensor symbols, of which any combination of 30 or fewer tensor symbol errors is correctable, provided that each 18-bit tensor symbol has a single or multiple occurrence of a dominant error that is correctable by EPCC, those being combinations of error polynomials up to degree 9. Furthermore, although the EPCC constituent code has a very low rate of 0.556, the resulting T-EPCC has a high rate of 0.896. Notably, in the view of the 18-bit EPCC, this 6% reduction in recorded redundancy corresponds to an SNR improvement of 2 dB in a channel with rate penalty 10*log10(1/R), and 4.1 dB in a channel with rate penalty 10*log10(1/R^2).

A. Hard Decoding of T-EPCC-RS Codes

Hard decoding of T-EPCC-RS directly reflects the code's algebraic properties and thus serves to further clarify the concept of tensor product codes. Hence, we discuss the hard decoding approach before going into the design of soft decoding of T-EPCC codes. The decoding algorithm is summarized by the following procedure (see Fig. 3):

i) After hard slicing the output of the Viterbi channel detector, the signature of each tensor symbol is calculated under H_epcc.
ii) Each signature is then mapped into a Galois field symbol, where the sequence of non-binary signatures

Fig. 3. Hard decoder of (18, 10, 8) EPCC (x) (255, 195, 2t) RS, t = 30, over GF(2^8): tensor-symbol signatures are computed under H_epcc and mapped to GF(2^8); RS hard decoding (or any list soft decoding algorithm) corrects the signature sequence; the corrected signatures are converted back to binary EPCC error syndromes, from which the most likely single and double errors are found and added to the ML word.

constitute an RS codeword, that is, if the channel detector did not suffer any errors.
iii) Any hard-input RS decoder, such as the Berlekamp-Massey decoder, acts to find a legitimate RS codeword based on the observed signature-sequence. If the number of signature-symbols in error is larger than the RS correction power, RS decoding fails and the tensor product decoder halts.
iv) Otherwise, if RS decoding is deemed successful, the corrected signature-symbol sequence is added to the original observed signature-symbol sequence to generate the error syndrome-symbol sequence.
v) Each error syndrome-symbol is mapped into an EPCC bit-syndrome of the corresponding tensor symbol.
vi) Finally, EPCC decodes each tensor symbol to satisfy the error-syndrome generated by the component RS, in which it faces two scenarios: A zero error-syndrome at the output of RS decoding indicates either that no error occurred, or a multiple error occurrence that has a zero EPCC-syndrome, which goes undetected; in this case, the EPCC decoder is turned off to save power. A non-zero error-syndrome will turn EPCC correction on. If the error-syndrome indicates a single error occurrence in the target set, then the EPCC single error algebraic decoder is turned on.
On the other hand, if the error-syndrome is not recognized, then EPCC list decoding is turned on with a reasonable-size list of test words. Note that although the number of EPCC codewords (tensor symbols) is huge, the decoder complexity is reasonable since EPCC decoding is turned on only for nonzero error-syndromes.

IV. T-EPCC-qLDPC CODES

We learned from the design of T-EPCC-RS that the component signature-correcting codeword length can be substantially shorter than that of the competing single level code. Although the minimum distance is bound to be hurt if the increased redundancy does not compensate for the shorter codeword length, employing iterative soft decoding of the component signature-correcting code can recover performance if designed properly. While LDPC codes have strictly lower minimum distances compared to RS codes of comparable rate and code length, the sparsity of the LDPC parity check matrix allows for effective belief propagation (BP) decoding. BP decoding of LDPC codes consistently performs better than the best known soft decoding algorithm for RS codes. Since the TPPC expansion enables the use of a 2 to 4 times shorter component LDPC compared to a competing single level LDPC, a class of LDPC codes efficient at such short lengths is critical. LDPC codes on high order fields represent such good candidates. In that respect, [29] showed that the performance of binary LDPC codes in AWGN can be significantly enhanced by a move to fields of higher orders, extensions of GF(2) being an example. Moreover, [29] established that for a monotonic improvement in waterfall performance with field order, the parity check matrix for very short blocks has to be very sparse. Specifically, column weight 3 codes over GF(q) exhibit worse bit-error-rate (BER) as q increases, whereas column weight 2 codes over GF(q) exhibit monotonically lower BER as q increases. These results were later confirmed in [30], where it was also shown, through a density evolution study of large-q codes, that optimum degree sequences favor a regular graph of degree 2 in all symbol nodes.
On the other hand, for satisfactory error floor performance, we found that using a column weight higher than 2 was necessary. This becomes more important as the minimum distance decreases for lower q. For instance, we found that a column weight of 3 improved the error floor behavior of GF(2^6)-LDPC at the expense of performance degradation in the waterfall region.

A. Design and Construction of qLDPC

The low rate and relatively low column weight design of qLDPC in a TPPC results in a very sparse parity check matrix, allowing the usage of high girth component qLDPC codes. To optimize the girth for a given rate, we employ the progressive edge growth (PEG) algorithm [30] in qLDPC code design. PEG optimizes the placement of a new edge connecting a particular symbol node to a check node on the Tanner graph, such that the largest possible local girth is achieved. Furthermore, PEG construction is very flexible, allowing arbitrary code rates, Galois field sizes, and column weights. In addition, modified PEG construction with linear-time encoding can be achieved without noticeable performance degradation, facilitating the design of linear time encodable tensor product codes. Of the two approaches to achieve linear time encodability, namely, the upper triangular parity check matrix construction [30] and PEG construction with a QC constraint [31], we choose the latter approach, for which the designed codes have better error floor behavior. T-EPCC-qLDPC lends itself to iterative soft decoding quite naturally. Next, we present a low complexity soft decoder utilizing this important feature.

B. Soft Decoding of T-EPCC-qLDPC

To fully utilize the power of the component codes in T-EPCC-qLDPC, we need to develop a soft iterative version of the hard decoder of T-EPCC-RS. To limit the complexity of the proposed soft decoder, sub-optimal detection post-processing is adopted instead of the maximum a posteriori (MAP) detector to evaluate tensor symbol signature reliabilities. The complexity of the optimal MAP detector matched to both the channel of memory length L and H_epcc of row length p1 is exponential in p1 + L. We present a practical soft detection scheme that separates soft channel detection from tensor symbol signature detection, yet, through a component signature-correcting LDPC in a TPPC setup, approaches the joint MAP performance through channel iterations.
The main stages of the decoder are (see Fig. 4):

1) Detection postprocessing: Utilizing a priori information from the previous decoding iteration, the binary Viterbi detector generates the hard ML word based on channel observations, for which the error sequence is calculated and passed to the correlator bank. A bank of local correlators estimates the probability of dominant error type/location pairs for all positions inside each tensor symbol.
2) Signature p.m.f. calculation: For each tensor symbol, the list of most likely error patterns is constructed. This list includes single occurrences and a predetermined set of their combinations. The list is then divided into sublists, each under the signature value it satisfies. For each tensor symbol, using each signature value's error likelihood list, we find the signature p.m.f. of that symbol.
3) q-ary LDPC decoding: Using the observed sequence of signature p.m.f.s, we decode the component q-ary LDPC via FFT-based SPA. For each tensor symbol, the LDPC-corrected signature p.m.f. is convolved with the observed signature p.m.f. at its input to generate the error-syndrome p.m.f.
4) EPCC decoding: For each tensor symbol, we find the list of most probable error-syndromes and generate a list of test error words to satisfy each syndrome in the list. A bank of parallel EPCC single-error correcting decoders generates a list of most probable codewords along with their reliabilities.
5) Bit-LLR feedback: Using the codeword reliabilities, we generate bit-level reliabilities that are fed back to the Viterbi detector and the detection postprocessing stage. Those bit-level reliabilities, serving as a priori information, favor paths which satisfy both the ISI and parity constraints.

We explain each of these steps in the following sections, but we replace any occurrence in the text of syndrome (signature) p.m.f. by syndrome (signature) multi-level log-likelihood ratios (mlLLR), as decoding will be entirely in the log domain for reasons explained below.
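Because decoding runs entirely in the log domain, the stages above accumulate reliabilities with the max* operator defined in Section I-A. A minimal helper (names ours) behaves as follows:

```python
import math
from functools import reduce

def max_star(a, b):
    """max*(a, b) = log(e^a + e^b) = max(a, b) + log(1 + e^-|a - b|),
    the Jacobian logarithm used to add probabilities in the log domain."""
    return max(a, b) + math.log1p(math.exp(-abs(a - b)))

def max_star_seq(values):
    """Recursive max* over a set {gamma_k}, as used to merge the log-domain
    reliabilities of all error combinations mapping to one signature value."""
    return reduce(max_star, values)

# Merging the logs of a p.m.f.'s masses recovers the log of their sum:
probs = [0.5, 0.25, 0.125, 0.125]
total = max_star_seq([math.log(p) for p in probs])   # ~log(1.0) = 0.0
```

Working with max* instead of explicit exponentials avoids underflow when the per-pattern reliabilities are strongly negative log-probabilities.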
Fig. 5. Bank of parallel error-matched correlators to find error pattern type/position reliabilities.

1) Detection Postprocessing: At this decoder stage we prepare a reliability matrix C(E) for error type/position pairs, captured in a tensor symbol of length l_T, that is usable by the next stage to calculate the tensor symbol's signature mlLLR:

C(E) = [ C(e^(1)_0)      C(e^(1)_1)      ...  C(e^(1)_{l_T-1})
         C(e^(2)_0)      C(e^(2)_1)      ...  C(e^(2)_{l_T-1})
         ...
         C(e^(l_max)_0)  C(e^(l_max)_1)  ...  C(e^(l_max)_{l_T-1}) ]    (2)

where C(e^(i)_k) is the reliability measure of error pattern type i at position k, computed by the maximum a posteriori (MAP)-based error-pattern correlator shown in Fig. 5. The bank of local correlators discussed here was also employed in [18]

Fig. 4. T-EPCC-qLDPC soft decoder of (12, 6, 6) EPCC (x) (380, 323) GF(2^6)-LDPC: binary Viterbi detection feeds a bank of correlators matched to errors 1 through l_max; tensor-symbol signature mlLLRs drive the GF(2^6)-LDPC log-FFT-SPA decoder, whose output is convolved with the observed signature p.m.f. to produce syndromes for the EPCC list decoder; codeword reliabilities are mapped to bit-level a priori information for the next global iteration.

for AWGN channels, and in [19] for data-dependent noise environments. We now discuss how to generate these local metrics. Let r_k be the channel detector input sequence, r_k = c_k * h_k + w_k, where c_k is the bipolar representation of the recorded codeword sequence, h_k is the partial response channel of length l_h, and w_k is zero-mean AWGN noise with variance sigma^2. Also, let q_k = r_k - c^_k * h_k = (c_k - c^_k) * h_k + w_k be the channel detector's output error sequence, where c^_k denotes the hard ML word. If a target error pattern sequence e_k occurs at positions j through j + l - 1, then q_k can be written as

q_k = [c - c^]_j^{j+l-1} * h_k + w_k = [e]_j^{j+l-1} * h_k + w_k = [s]_j^{j+l'_h} + w_k

where s_k is the channel response of the error sequence, given by s_k = e_k * h_k, and l'_h = l + l_h - 2. Note that we define the start of the tensor symbol at 0, so if j < 0, the error pattern starting position is in a preceding tensor symbol. The reliability of each error pattern with starting position j can be computed by the local a posteriori probabilities, ignoring tensor symbol boundaries for now:

Pr( [e^(i)]_j^{j+l-1} | [r]_j^{j+l'_h}, [c^]_{j-l_h+1}^{j+l'_h} ) = Pr( [s^(i)]_j^{j+l'_h} | [q]_j^{j+l'_h}, [c^]_{j-l_h+1}^{j+l'_h} )    (3)

The most likely assumed error type/position pair in a tensor symbol maximizes the a posteriori probability ratio of its reliability to the reliability of the most probable error event (the competing event in this case would be the ML word itself, with no error occurrence assumed at the output of Viterbi detection).
Hence, utilizing (3) and Bayes' rule, the ratio to maximize becomes

Pr( e^(i)_j | c^, [q]_j^{j+l'_h} ) / Pr( [ML word] | c^, [q]_j^{j+l'_h} )
  = [ Pr( [q]_j^{j+l'_h} | [c^]_{j-l_h+1}^{j+l'_h}, [s^(i)]_j^{j+l'_h} ) / Pr( [q]_j^{j+l'_h} | [c^]_{j-l_h+1}^{j+l'_h}, [s~]_j^{j+l'_h} ) ] * [ Pr( [s^(i)]_j^{j+l'_h} ) / Pr( [s~]_j^{j+l'_h} ) ]    (4)

where [s~]_j^{j+l'_h} is the ML word's noiseless channel response. Given the noise model, [q]_j^{j+l'_h} is a sequence of independent Gaussian random variables with variance sigma^2. Therefore, maximizing (4) can be shown to be equivalent to maximizing the log-likelihood local measure [18]:

C(e^(i)_j) = (1 / 2 sigma^2) * Sum_{k=j}^{j+l'_h} ( q_k^2 - (q_k - s^(i)_k)^2 ) + B_j    (5)

where the a priori bias B_j in (5) is evaluated as:

B_j = log( Pr([s~]_j^{j+l'_h}) / Pr([s^(i)]_j^{j+l'_h}) ) = Sum_{k=j, c^_k=+1}^{j+l'_h} lambda_k - Sum_{k=j, c^_k=-1}^{j+l'_h} lambda_k    (6)

where lambda_k is the a priori LLR of the error-event bit at position k as received from the outer soft decoder, and we are assuming here that error event sequences do not include 0 bits, i.e., the ML sequence and error sequence do not agree for the entire duration of the error event. Equation (5) represents the local error-pattern correlator output in the sense that it essentially describes the correlation operation between q_k and the channel output version of the dominant error pattern e^(i) within the local region [j, j + l'_h]. However, equation (5) ignores that errors can span tensor symbol boundaries, when j < 0 or j + l - 1 > l_T - 1. For instance, an error in the first bit of the tensor symbol can result from a single error event in that
bit, a double error event in the last bit of the preceding tensor symbol, a triple error event occurring two bits into the previous symbol, and so on. Hence, the probability of an error in the first bit is the sum of all these parent error-event probabilities. This is easily generalized to boundary errors extending beyond the first bit. In a similar manner, an error in the last bit of a tensor symbol can result from a single error event in that bit, a double error event starting in that bit and continuing into the next tensor symbol, a triple error starting at the last bit and continuing into the next tensor symbol, and so on. Again, the probability of an error event in that bit is the sum of the probabilities of all these parent events. Moreover, we have to nullify the probability of the parent error events in the modified reliability matrix, since they are already accounted for in the last bit's reliability calculation. This, too, generalizes to error events starting earlier than the last bit and extending into the next tensor symbol. In summary, to calculate a modified metric relevant to the current tensor symbol, we utilize the following procedure:

(i) At i = 0, modify C_0(e^{(j)}) ← max( C_0(e^{(j)}), max_{k=1,…,l_max−1} C_{−k}(e^{(j+k)}) ), independently for each j, where l_max is the maximum length of a targeted error pattern.
(ii) Starting at j = 1 and i = l_T − 1, do: C_i(e^{(j)}) ← max_{k=0,…,l_max−j} C_i(e^{(j+k)}).
(iii) For k > j, set C_i(e^{(k)}) = −∞.
(iv) Set j ← j + 1, i ← i − 1.
(v) If j < l_max, go back to (ii).

We assume here that dominant error events span only two tensor symbols at a time and that they do not include error-free gaps, which is certainly true for the case study of this paper. Following this procedure, we obtain the modified reliability matrix CE.

2) Signature mlLLR Calculation: For each tensor symbol, utilizing CE, we need to find the p.m.f., or its log-domain mlLLR, of the tensor symbol's signature Sg ∈ GF(2^{p_EPCC}), for an EPCC with p_EPCC parity bits. To limit the computational complexity of this calculation, we construct a signature only from the dominant errors and a subset of their multiple occurrences.
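The boundary fold-in just described, at the left edge of a tensor symbol, can be sketched as follows: a length-(j + k) parent event starting k bits inside the preceding symbol ends at the same bits as a length-j event starting at position 0, so its reliability is folded into that entry by a max (the log-max surrogate for summing parent probabilities) and then nullified. The dictionary keyed by (start position, pattern length) and the exact index ranges are our own illustrative assumptions, not the paper's data structure.

```python
# Hedged sketch of the left-edge boundary fold-in of the reliability matrix.
# Starts i < 0 lie in the preceding tensor symbol.
NEG_INF = float("-inf")

def fold_left_boundary(C, l_max):
    """C[(i, j)] holds the max-log reliability of a length-j event at start i."""
    Cm = dict(C)
    for j in range(1, l_max + 1):
        parents = [(-k, j + k) for k in range(1, l_max - j + 1)]
        Cm[(0, j)] = max([C.get((0, j), NEG_INF)] +
                         [C.get(p, NEG_INF) for p in parents])
        for p in parents:             # parents are now accounted for at (0, j)
            if p in Cm:
                Cm[p] = NEG_INF
    return Cm

# Toy reliabilities: the double event starting one bit inside the previous
# symbol, (-1, 2), dominates the single event at the first bit, (0, 1).
C = {(0, 1): -3.0, (-1, 2): -1.0, (-2, 3): -7.0, (1, 1): -2.5}
Cm = fold_left_boundary(C, l_max=3)
```

A symmetric pass handles the right edge, where events start near the last bit and extend into the next tensor symbol.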
Denote by P̂r(Sg = α_i) the running estimate of the signature p.m.f. at α_i, and by γ̂(Sg = α_i) = log P̂r(Sg = α_i) the running estimate of the mlLLR. Denote by p_rc = p_c · l_max + p_r a one-dimensional index of CE, corresponding to the p_r-th row and p_c-th column of CE and to the error E(p_rc). We choose the dominant list as the L patterns with the largest corresponding elements of CE, having indexes {p_rc^{(i)}}_{i=1}^{L}. Based on this list, we developed the following procedure to compute γ̂(Sg = α_i):

Step 1 (Single occurrences):

γ̂(Sg = α_i) = max_{k = p_rc^{(1)}, …, p_rc^{(L)}} C(k),  k : G_f( H_EPCC [ĉ + E(k)]^t ) = α_i,   (7)

where q_EPCC = 2^{p_EPCC}, and G_f(·) is an operator that maps p_EPCC-bit vectors into GF(q_EPCC) symbols.

Step 2 (Double occurrences):

γ̂(Sg = α_i) ← max( γ̂(Sg = α_i), max_{k, m = p_rc^{(1)}, …, p_rc^{(L)}} [ C(k) + C(m) ] ),
{k, m} : D( E(k), E(m) ) > E_free,  G_f( H_EPCC [ĉ + E(k) + E(m)]^t ) = α_i,   (8)

where D(·,·) is the error-free distance between the two errors, and E_free = l_h − 1 is the error-free distance of the channel beyond which the errors are independent.
...
Step M (M occurrences):

γ̂(Sg = α_i) ← max( γ̂(Sg = α_i), max_{ {q_1, q_2, …, q_M} } Σ_{ξ=1}^{M} C(q_ξ) ),
{q_1, …, q_M} : q_ξ ∈ {p_rc^{(1)}, …, p_rc^{(L)}}, ξ = 1, …, M;  D( E(q_s), E(q_t) ) > E_free, s ≠ t;
G_f( H_EPCC [ĉ + Σ_{ξ=1}^{M} E(q_ξ)]^t ) = α_i.   (9)

Step M+1 (ML-signature reliability), computed so that the resulting signature p.m.f. sums to 1:

γ̂(Sg = α_{β_ML}) ← max( γ̂(Sg = α_{β_ML}), log( 1 − Σ_{β ≠ β_ML} exp γ̂(Sg = α_β) ) ),   (10)

where β_ML indexes the signature value of the ML tensor symbol.

Step M+2 (Normalization):

γ(Sg = α_i) = γ̂(Sg = α_i) − γ̂(Sg = α_0),  i = 0, 1, …, q_EPCC − 1.   (13)

In steps 1 through M, to calculate the log-likelihood of the signature assuming value α_i, we sum the probabilities of all presumed single and multiple errors in the ML word whose signatures equal α_i. This is equivalent to performing the max operation in the log domain on the error reliabilities dictated by CE. However, to limit the complexity of this stage, we use only a truncated set of possible error combinations in all steps from 1 to M. Also, for signature values that do not correspond to any of the combinations, we set their reliability
to −∞ or, more precisely, to a reasonably large negative value in a practical decoder implementation. Since there are many such signature values, the constructed p.m.f. will be sparse. In step M+1, the likelihood of the ML signature value is computed so that the p.m.f. of the tensor-symbol signature sums to 1. In this step, the max operation reflects the fact that in the previous steps, 1 through M, some multiple error occurrences have the same signature as the ML tensor-symbol value; we have to account for such error instances in the running estimate of the ML signature reliability. These events correspond to cases where error events are not detectable by H_EPCC, i.e., they belong to the null space of H_EPCC. In step M+2, the mlLLR of the tensor symbol is centered around γ̂(Sg = α_0) to prevent the qLDPC SPA messages from saturating after a few BP iterations.

3) q-ary LDPC Decoding: Now, the sequence of signature mlLLRs is passed as multi-level channel observations to the qLDPC decoder. We choose to implement the log-domain q-ary fast-Fourier-transform-based SPA (FFT-SPA) decoder of [35] for this purpose. The choice of log-domain decoding is essential: if we used the signature p.m.f. as input, the SPA would run into numerical instability resulting from the sparse p.m.f. generated by the preceding stage. The LDPC output posterior mlLLRs correspond to the signatures of tensor symbols, rather than to the syndromes of errors expected by EPCC decoding. Similar to the decoder of T-EPCC-RS, the error syndrome Syn_e is the finite-field sum of the LDPC's input channel observation of the signature, Sg_ch, and the output posterior signature reliability, Sg_p. Moreover, the addition of hard signatures corresponds to the convolution of their p.m.f.'s, and this convolution in the probability domain corresponds to the following operation in the log domain:

γ̂(Syn_e = α_{β_e}) = max_{ {β_ch, β_p} : α_{β_ch} + α_{β_p} = α_{β_e} } [ γ(Sg_ch = α_{β_ch}) + γ(Sg_p = α_{β_p}) ],
α_{β_ch}, α_{β_p} ∈ GF(q_EPCC);  β_ch = 0, 1, …, q_EPCC − 1;  β_p = 0, 1, …, q_EPCC − 1.   (14)

The error-syndrome mlLLR is later normalized, similar to LDPC BP mlLLR message normalization, according to

γ(Syn_e = α_{β_e}) = γ̂(Syn_e = α_{β_e}) − γ̂(Syn_e = α_0),  β_e = 0, 1, …, q_EPCC − 1.   (15)

4) EPCC Decoding: An error syndrome will decode to many possible error events, due to the low minimum distance of the single-error-correcting EPCC. However, EPCC relies on local channel side information to implement a list-decoding-like procedure that enhances its multiple-error-correction capability. Moreover, the short codeword length of EPCC reduces the probability of such multiple error occurrences considerably. To minimize power consumption, EPCC is turned on for a tensor symbol only if the most likely value of the error-syndrome mlLLR is nonzero, i.e.,

arg max_{α_β ∈ GF(q_EPCC)} γ(Syn_e = α_β) ≠ α_0,

indicating that a resolvable error has occurred. After this, a few of the syndrome values most likely according to the mlLLR (3 in our case) are decoded in parallel. For each of these syndromes, the list-decoding algorithm proceeds as in [18], [19]:
(i) A test error word list is generated by inserting the most probable combinations of local error patterns into the ML tensor symbol.
(ii) An array of parallel EPCC single-pattern-correcting decoders decodes the test words to produce a list of valid codewords that satisfy the current error syndrome.
(iii) The probability of a candidate codeword is computed as the sum of the likelihoods of its parent test word and of the error pattern separating the two.
(iv) Each candidate codeword probability is biased by the likelihood of the error syndrome it is supposed to satisfy.
In addition, when generating test words, we only combine independent error patterns, that is, patterns separated by the error-free distance of the ISI channel.

5) Soft Bit-level Feedback LLR Calculation: The list of candidate codewords and their probabilities is used to generate bit-level probabilities in a similar manner to [19], [27].
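Stepping back to the error-syndrome computation just described: the finite-field sum of signatures corresponds, in the log domain, to a max-"convolution" of their mlLLR vectors. Since addition in GF(2^m) is a bitwise XOR of the m-bit symbol labels, the operation can be sketched directly; the toy mlLLR vectors below are our own illustration, not decoder output.

```python
import numpy as np

def maxlog_xor_convolve(g_ch, g_p):
    """Max-log 'convolution' of two mlLLR vectors over GF(2^m).

    Field addition of the m-bit symbol labels is bitwise XOR, so the syndrome
    reliability is g_syn[a] = max over (b_ch XOR b_p == a) of
    g_ch[b_ch] + g_p[b_p], followed by centering on the zero syndrome.
    """
    q = len(g_ch)
    g_syn = np.full(q, -np.inf)
    for b_ch in range(q):
        for b_p in range(q):
            a = b_ch ^ b_p
            g_syn[a] = max(g_syn[a], g_ch[b_ch] + g_p[b_p])
    return g_syn - g_syn[0]           # center on the zero syndrome

# Toy mlLLRs over GF(2^2); index = 2-bit label of the field element.
g_ch = np.log(np.array([0.7, 0.1, 0.1, 0.1]))   # channel signature mlLLR
g_p = np.log(np.array([0.6, 0.2, 0.1, 0.1]))    # posterior signature mlLLR
g_syn = maxlog_xor_convolve(g_ch, g_p)
most_likely = int(np.argmax(g_syn))   # 0 here: no resolvable error, EPCC stays off
```

When the most likely syndrome is nonzero, the few strongest syndrome hypotheses would be handed to the EPCC list decoder in parallel.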
The conversion of word-level reliability into bit-level reliability for a given bit position is done by grouping the candidate codewords into two groups, according to the binary value of the hard-decision bit in that position, and then performing group-wise summing of the word-level probabilities. Three scenarios are possible for this calculation:

1) The candidate codewords do not all agree on the bit decision for location k; then, given the list of codewords and their accompanying a posteriori probabilities, the reliability λ_k of the coded bit c_k is evaluated as

λ_k = log [ Σ_{c ∈ S_k^+} Pr(c | ĉ, r) / Σ_{c ∈ S_k^−} Pr(c | ĉ, r) ],   (16)

where S_k^+ is the set of candidate codewords in which c_k = +1, and S_k^− is the set of candidate codewords in which c_k = −1.

2) Although rare for such short codeword lengths, in the event that all codewords do agree on the decision for c_k, a method inspired by [27] is adopted for generating soft information:

λ_k = β_iter · λ_max · d̂_k,   (17)

where d̂_k is the bipolar representation of the agreed-upon decision, λ_max is a preset value for the maximum reliability at convergence of turbo performance, and the multiplier β_iter < 1 is a scaling factor. β_iter is small in the first global iterations and is increased toward 1 as more global iterations are performed and the confidence in bit decisions improves. Thus, this back-off control process reduces the risk of error propagation.

3) The heuristic scaling in (17) is again useful when EPCC is turned off for a tensor symbol, in case the most likely error syndrome is 0. Then, the base hard value of the tensor symbol corresponds to the most likely error event
found as a side product in stage 2 of the T-EPCC-qLDPC decoder.

C. Stopping Criterion for T-EPCC-qLDPC and RS Erasure Decoding

Due to the ambiguity in mapping tensor symbols to signatures and syndromes to errors (in stages 2 and 4 of the decoder, respectively), and due to the possibility of non-targeted error patterns, or of errors with zero error syndrome that are transparent to H_EPCC, a second line of defense is essential to take care of undetected errors. Therefore, an outer RS code of small correction power t_out is concatenated with T-EPCC-qLDPC to take care of the imperfections of the component EPCC. Several concurrent functions are offered by this code, including:

Stopping flag: if the RS syndrome is zero, then global iterations are halted and decisions are released.
Outer ECC: attempt to correct residual errors at the output of EPCC after each global iteration.
Erasure decoding: if the RS syndrome is nonzero, then, for those tensor symbols for which EPCC was turned on, declare their bits as erasures. Next, find the corresponding RS symbol erasures and attempt RS erasure decoding, which is capable of correcting up to 2·t_out such erasures. In this case, T-EPCC acts as an error-locating code.

V. SIMULATION RESULTS AND DISCUSSION

We compare three coding systems based on LDPC: conventional binary LDPC, q-ary LDPC, and T-EPCC-qLDPC, where all the component LDPC codes are regular and constructed by PEG with a QC constraint. We study their sector error rate (SER) performance on the ideal equalized partial-response target 1 + 0.85D corrupted by AWGN, with coding-rate penalty 10·log10(1/R). The nominal systems run at a coding rate of 0.9. The minimum SNR required to achieve reliable recording at this rate is 3.9 dB, estimated by following the same approach as in [28].

A. Single-level BLDPC & qLDPC Simulation Results

In Fig. 6, we compare the SER of the following LDPC codes, each constructed by PEG with a QC constraint:

A (4550,4095) GF(2)-LDPC, of column weight 5 and circulant size 91 bits.
The channel detector is a 2-state binary BCJR.

A (570,513) GF(2^8)-LDPC, of codeword length 4560 bits, column weight 2, and circulant size 15 symbols. The channel detector is a symbol-BCJR with 256 branches emanating from each of 2 states.

A (760,684) GF(2^6)-LDPC, of codeword length 4560 bits, column weight 2, and circulant size 19 symbols. The channel detector is a symbol-BCJR with 64 branches emanating from each of 2 states.

A (775,700) GF(2^6)-LDPC, of codeword length 4650 bits, column weight 3, and circulant size 25 symbols. The channel detector is a symbol-BCJR with 64 branches emanating from each of 2 states.

For the binary LDPC turbo equalizer, we run a maximum of 10×50 iterations (10 global and 50 LDPC BP iterations). For the q-ary turbo equalizers, on the other hand, we run a maximum of 3×50 iterations. A column weight of 2 gives the best waterfall performance of q-ary LDPC. However, GF(2^6)-LDPC exhibits an error floor as early as SER = 6×10^−4, whereas the higher-order field GF(2^8) does not show such a tendency down to 10^−5. Nevertheless, the prohibitive complexity of the GF(2^8) symbol-BCJR makes GF(2^6)-LDPC a more attractive choice. Still, we need to sacrifice some of GF(2^6)-LDPC's waterfall performance gains to guarantee a lower error floor. For that purpose, we move to a column-weight-3 GF(2^6)-LDPC that is 1.37 dB away at 10^−5 from the independent uniformly-distributed capacity C_I.U.D. of the channel [28], and 0.37 dB away from GF(2^8)-LDPC at the same SER. In this simulation study, we have observed that while binary LDPC can gain up to 0.4 dB through 10 channel iterations before the gain saturates, GF(2^8)-LDPC and GF(2^6)-LDPC achieve very little iterative gain by going back to the channel, between 0.09 and 0.2 dB through 3 channel iterations. One way to explain this phenomenon is that symbol-level LDPC decoding divides the bit stream into LDPC symbols that capture the error events introduced by the channel detector, rendering the binary inter-symbol-interference-limited channel into a memoryless multi-level AWGN-limited channel.
Nonetheless, error events spanning symbol boundaries reintroduce correlations between LDPC symbols that are broken only by going back to the channel. In other words, were it not for such boundary effects, a q-ary LDPC equalizer would not exhibit any iterative turbo gain whatsoever. Even so, a full-blown symbol-BCJR is still too complex to justify salvaging the small iterative gain by performing extra channel iterations [33]. This is where error-event-matched decoding comes into the picture, which leads us to the results of the next section.

B. T-EPCC-qLDPC Simulation Results

We first construct two T-EPCC-qLDPC codes of rate 0.9, the same rate as the competing single-level qLDPC. These TPPCs are based on the EPCC(12,6) of Example 1. The codes constructed are:

TPPC-A: a 1/2 KB sector, binary (4680,4212) TPPC of rate 0.9 with 468 parity bits, based on a component (390,312) PEG-optimized QC GF(2^6)-LDPC of rate 0.8, column weight 3, and circulant size 26.

TPPC-B: a 1 KB sector, binary (9360,8424) TPPC of rate 0.9 with 936 parity bits, based on a component (780,624) PEG-optimized QC GF(2^6)-LDPC of rate 0.8, column weight 3, and circulant size 52.

First, we study the SER of T-EPCC-qLDPC just up to the component GF(2^6)-LDPC decoder, and only at the first channel pass. This SER is a function of the Viterbi symbol error rate and of the accuracy of the generated signature mlLLRs, in addition to the component LDPC employed. This SER represents the best that the TPPC code can do under the assumption of a perfect component EPCC, i.e., as long as qLDPC generates a clean codeword of signature symbols, then EPCC generates
Fig. 6. Comparing the SER of: 10×50 iterations of binary LDPC, 3×50 iterations of GF(2^8)-LDPC of column weight 2, and 3×50 iterations of GF(2^6)-LDPC of column weights 2 and 3. The minimum SNR to achieve reliable recording at coding rate 0.9 is 3.9 dB for 1 + 0.85D.

a clean codeword of data symbols. Fig. 7 shows the ideal SER of these two TPPC codes, assuming a perfect EPCC, compared to single-level GF(2^6)-LDPC and GF(2^8)-LDPC. The ideal 1/2 KB TPPC has about the same SER as single-level GF(2^6)-LDPC. In the 1/2 KB TPPC, the component GF(2^6)-LDPC has half the codeword length of its single-level counterpart, saving 50% of the decoder complexity while delivering similar SER performance. The TPPC component qLDPC faces a harsher channel than single-level qLDPC, because the symbol error probability of 6-bit data symbols is strictly less than that of 6-bit signature symbols, the signature symbols being compressed down from 12-bit data symbols. Also, the shorter codeword length of the component qLDPC hurts its minimum distance. Still, these impairments are effectively compensated for by the increased redundancy of the TPPC component LDPC. On the other hand, if we match the codeword length of TPPC's component LDPC to that of the single-level LDPC, as part of constructing the 1 KB TPPC, then the 1 KB TPPC will have decoder complexity similar to that of the 1/2 KB single-level LDPC, with about a 0.2 dB SNR advantage for the 1 KB ideal TPPC. Due to the imperfections of the EPCC design, including miscorrection due to the one-to-many syndrome-to-error-position mapping and undetected errors due to EPCC's small minimum distance, achieving the ideal performance of Fig. 7 is not possible in one channel pass. In addition, an outer code is necessary to protect against undetected errors and provide a stopping flag for the iterative decoder.
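Recalling the stopping criterion and erasure fallback of the outer RS code described earlier, the overall iteration control can be sketched as control flow. The decoder-stage interfaces below are hypothetical stubs standing in for one channel pass (Viterbi, correlators, qLDPC), the EPCC stage, outer RS decoding, and RS erasure decoding; they are our own illustration, not the paper's implementation.

```python
# Control-flow sketch: halt on a clean outer RS syndrome, fall back to RS
# erasure decoding of the EPCC-flagged tensor symbols, else iterate again.
def run_global_iterations(channel_pass, epcc_stage, rs_decode,
                          rs_erasure_decode, max_global=3):
    word = None
    for it in range(max_global):
        llrs = channel_pass(it)                # Viterbi + correlators + qLDPC
        word, flagged = epcc_stage(llrs)       # flagged: symbols with EPCC on
        ok, word = rs_decode(word)             # zero RS syndrome -> stop
        if ok:
            return word, "halted"
        ok, word = rs_erasure_decode(word, flagged)   # up to 2*t_out erasures
        if ok:
            return word, "erasure-corrected"
    return word, "fail"                        # release best-effort decisions

# Toy run: the outer RS succeeds on the second global iteration.
calls = []
decoded, status = run_global_iterations(
    channel_pass=lambda it: calls.append(it) or it,
    epcc_stage=lambda llrs: (llrs, []),
    rs_decode=lambda w: (w == 1, w),
    rs_erasure_decode=lambda w, f: (False, w))
```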
Hence, one can think of an implementation of the full T-EPCC-GF(2^6)-LDPC decoder that includes an outer t = 6 (421,409) RS code for the 1/2 KB case and an outer t = 12 (842,818) RS code for the 1 KB case, so as to protect against EPCC residual errors. These outer RS codes are defined on GF(2^10) and have rate ≈ 0.97. However, this concatenation setup will run at a lower code rate of 0.875, which can incur an SNR degradation larger than 0.25 dB for a noise environment characterized by the rate penalty 10·log10(1/R^δ), δ = 2. In a more thoughtful approach, one can preserve the nominal code rate of 0.9 and redistribute the redundancy between the inner TPPC and the outer RS to achieve an improved tradeoff between miscorrection probability and the strength of the inner TPPC's component LDPC code. In that spirit, we construct the following concatenated codes:

TPPC-C: a 1/2 KB sector, binary (4560,4218) TPPC of rate 0.925 with 342 parity bits, based on a component (380,323) PEG-optimized QC GF(2^6)-LDPC of rate 0.85, column weight 3, and circulant size 19. An outer t = 6 (422,410) RS code of rate ≈ 0.97 is included, resulting in a total system rate of 0.9.

TPPC-D: a 1 KB sector, binary (9120,8436) TPPC of rate 0.925 with 684 parity bits, based on a component (760,646) PEG-optimized QC GF(2^6)-LDPC of rate 0.85, column weight 3, and circulant size 38. An outer t = 12 (844,820) RS code of rate ≈ 0.97 is included, resulting in a total system rate of 0.9.

The control mechanism of iterative decoding for these codes is as follows: if EPCC results in fewer than 6 RS symbol errors for the 1/2 KB design (or fewer than 12 for the 1 KB design), or if EPCC generates more errors than this but declares fewer than 12 erasures for 1/2 KB (or 24 erasures for 1 KB), then decoding halts and decisions are released. Otherwise, one more channel iteration is done by passing the EPCC soft bit-level LLRs to Viterbi detection and to the bank of error-matched correlators. Simulation results in Fig.
8, for a noise environment with rate penalty 10·log10(1/R), demonstrate that after 3 channel iterations the ideal and practical performances of the new TPPC codes almost lock, while incurring minimal SNR degradation. Also, 1/2 KB TPPC saves 50% of the decoder complexity while achieving the same SER performance as single-level LDPC, at an additional SNR cost of 0.04 dB at SER = 10^−5. Hence, TPPC-C represents a tradeoff between the lower complexity of GF(2)-LDPC and the performance advantage of GF(2^6)-LDPC, whereas 1 KB TPPC has the same decoding complexity as single-level LDPC while furnishing a 0.18 dB gain. In terms of channel detector implementation complexity, the complexity and latency of the GF(2^6)-BCJR in the single-level code far exceed the overall complexity of the non-LDPC parts of the two-level T-EPCC-GF(2^6)-LDPC, including Viterbi detection. At the same time, signature mlLLR generation, EPCC decoding, and bit-LLR generation are all implemented tensor symbol by tensor symbol, achieving full parallelism at the tensor-symbol level. Furthermore, it is only when qLDPC finds a syndrome error that EPCC decoding is turned on for a given tensor symbol. To eliminate redundant computations in the iterative decoder, the branch metric computations of Viterbi detection and of (5) are required only at the first pass. For all subsequent iterations, only the a priori bias in the second term of (5) is updated, along with the branch update of Viterbi detection [34]. One very important feature of the TPPC setup, which single-level LDPC lacks, is its robustness to boundary error events.

Fig. 7. Comparing the SER of: 10×50 iterations of binary LDPC, 3×50 iterations of GF(2^8)-LDPC of column weight 2, 3×50 iterations of GF(2^6)-LDPC of column weight 3, and 10×50 iterations of ideal 1/2 KB and 1 KB T-EPCC-GF(2^6)-LDPC based on column-weight-3 LDPC.

The presence of a syndrome constraint means that errors spanning boundaries are broken by EPCC when it attempts to satisfy the adjacent tensor-symbol syndromes independently; then, in the next turbo iteration, adjacent tensor symbols are decorrelated. This mechanism enables TPPC to recover from these errors through iterative decoding. However, for errors with a zero error syndrome, which go undetected by EPCC, outer RS protection becomes handy. Based on the fact that TPPC enables an increase in the redundancy of its component LDPC, and on simulation results demonstrating the utility of such a lowered rate in combating the harsher compressed channel, we conjecture that as the sector lengths of both TPPC and single-level LDPC are driven to infinity, TPPC will achieve strict error-rate SNR gains. This is mainly because of its surplus of redundancy compared to the single-level code at the same rate penalty, whereas channel conditions and EPCC correction power do not change with replication of tensor symbols, and the error-rate performance of LDPC asymptotically approaches the noise threshold in the limit of infinite codeword length. Therefore, within a channel-capacity-achieving argument, in the limit of infinite codeword length, we take the view that TPPC will bridge the gap to capacity further than any single-level system could.
Moreover, the advantage of TPPC for larger sector sizes is more timely than ever as the industry moves to the larger 4 KB sector format [33].

Fig. 8. Comparing the SER, in an environment with rate penalty 10·log10(1/R), of: 10×50 iterations of binary LDPC, 3×50 iterations of GF(2^8)-LDPC of column weight 2, 3×50 iterations of GF(2^6)-LDPC of column weight 3, and 3×50 iterations of practical 1/2 KB T-EPCC-GF(2^6)-LDPC + RS(t = 6) and 1 KB T-EPCC-GF(2^6)-LDPC + RS(t = 12), both based on column-weight-3 LDPC.

VI. CONCLUSIONS

In a tensor product setup, codes of short codeword length and low rate can be combined into high-rate codes with nice algebraic properties. We showed that encoding of tensor product codes is linear-time if the component codes are linear-time encodable. We also demonstrated how the codeword length and rate of channel-matched EPCC can be substantially increased by combining it with a strong RS or LDPC code of short codeword length. We also incorporated an outer RS code of low correction power to clean out the residual errors of the T-EPCC-RS or T-EPCC-LDPC TPPCs. In conclusion, this work established T-EPCC-qLDPC as a reasonable-complexity approach to introducing non-binary LDPC to the perpendicular recording read channel architecture, paving the way to reliable higher recording densities.

ACKNOWLEDGMENT

The authors would like to thank the anonymous reviewers for their constructive comments, which helped enhance the technical quality and presentation of this paper.

REFERENCES

[1] Hao Zhong, Wei Xu, Ningde Xie, and Tong Zhang, Area-efficient min-sum decoder design for high-rate quasi-cyclic low-density parity-check codes in magnetic recording, IEEE Transactions on Magnetics, vol. 43, no. 12, Dec. 2007.
[2] Hao Zhong, Tong Zhang, and Erich F. Haratsch, Quasi-cyclic LDPC codes for the magnetic recording channel: Code design and VLSI implementation, IEEE Transactions on Magnetics, vol. 43, no. 3, Mar. 2007.
[3] N. Varnica and A. Kavcic, Optimized low-density parity-check codes for partial response channels, IEEE Communications Letters, vol. 7, no. 4, Apr. 2003.
[4] S. Sankaranarayanan, B. Vasic, and E.
M. Kurtas, Irregular low-density parity-check codes: Construction and performance on perpendicular magnetic recording channels, IEEE Transactions on Magnetics, vol. 39, no. 5, Sept. 2003.
[5] B. Vasic and O. Milenkovic, Combinatorial constructions of low-density parity-check codes for iterative decoding, IEEE Transactions on Information Theory, vol. 50, no. 6, June 2004.

[6] G. Liva, W. E. Ryan, and M. Chiani, Quasi-cyclic generalized LDPC codes with low error floors, IEEE Transactions on Communications, vol. 56, no. 1, Jan. 2008.
[7] M. Yang, W. E. Ryan, and Y. Li, Design of efficiently encodable moderate-length high-rate irregular LDPC codes, IEEE Transactions on Communications, Apr. 2004.
[8] Z. Li, L. Chen, L. Zeng, S. Lin, and W. H. Fong, Efficient encoding of quasi-cyclic low-density parity-check codes, IEEE Transactions on Communications, vol. 54, no. 1, pp. 71-81, Jan. 2006.
[9] Y. Han and W. E. Ryan, Concatenating a structured LDPC code and an RLL code to preserve soft-decoding, structure, and burst correction, IEEE Trans. Magnetics, Oct. 2007.
[10] B. M. Kurkoski, P. H. Siegel, and J. K. Wolf, Joint message-passing decoding of LDPC codes and partial-response channels, IEEE Transactions on Information Theory, vol. 49, no. 8, Aug. 2003.
[11] Y. Han and W. E. Ryan, LDPC decoder strategies for achieving low error floors, in Proc. Information Theory and Applications Workshop, Jan.-Feb. 2008.
[12] R. D. Cideciyan, E. Eleftheriou, and T. Mittelholzer, Perpendicular and longitudinal recording: A signal-processing and coding perspective, IEEE Trans. Magn., vol. 38, no. 4, Jul. 2002.
[13] X. Hu and B. V. K. V. Kumar, Evaluation of low-density parity-check codes on perpendicular magnetic recording model, IEEE Transactions on Magnetics, vol. 43, no. 2, Feb. 2007.
[14] Hongxin Song, R. M. Todd, and J. R. Cruz, Applications of low-density parity-check codes to magnetic recording channels, IEEE Journal on Selected Areas in Communications, vol. 19, no. 5, May 2001.
[15] J. Moon and J. Park, Detection of prescribed error events: Application to perpendicular recording, in Proc. IEEE ICC, vol. 3, May 2005.
[16] J. Park and J. Moon, High-rate error correction codes targeting dominant error patterns, IEEE Trans. Magn., vol. 42, no. 10, Oct. 2006.
[17] J. Park and J.
Moon, A new class of error-pattern-correcting codes capable of handling multiple error occurrences, IEEE Trans. Magn., vol. 43, no. 6, Jun. 2007.
[18] J. Park and J. Moon, Error-pattern-correcting cyclic codes tailored to a prescribed set of error cluster patterns, IEEE Trans. Inform. Theory, vol. 55, no. 4, Apr. 2009.
[19] H. AlHussien, J. Park, and J. Moon, Iterative decoding based on error-pattern correction, IEEE Trans. Magn., vol. 44, no. 1, pp. 181-186, Jan. 2008.
[20] J. K. Wolf, An introduction to tensor product codes and applications to digital storage systems, in Proc. IEEE Information Theory Workshop (ITW '06), Chengdu, pp. 6-10, Oct. 2006.
[21] J. Wolf, On codes derivable from the tensor product of check matrices, IEEE Transactions on Information Theory, vol. 11, no. 2, Apr. 1965.
[22] P. Chaichanavong and P. H. Siegel, Tensor-product parity code for magnetic recording, IEEE Transactions on Magnetics, vol. 42, no. 2, Feb. 2006.
[23] P. Chaichanavong and P. H. Siegel, Tensor-product parity codes: Combination with constrained codes and application to perpendicular recording, IEEE Transactions on Magnetics, vol. 42, no. 2, Feb. 2006.
[24] J. Wolf and B. Elspas, Error-locating codes: A new concept in error control, IEEE Transactions on Information Theory, vol. 9, no. 2, pp. 113-117, Apr. 1963.
[25] A. Fahrner, H. Grießer, R. Klarer, and V. V. Zyablov, Low-complexity GEL codes for digital magnetic storage systems, IEEE Transactions on Magnetics, vol. 40, no. 4, Jul. 2004.
[26] H. Imai and H. Fujiya, Generalized tensor product codes, IEEE Transactions on Information Theory, vol. 27, no. 2, pp. 181-187, Mar. 1981.
[27] R. M. Pyndiah, Near-optimum decoding of product codes: Block turbo codes, IEEE Transactions on Communications, vol. 46, Aug. 1998.
[28] D. Arnold and H. A. Loeliger, On the information rate of binary-input channels with memory, in Proc. IEEE Int. Conf. Communications 2001, Helsinki, Finland, Jun. 2001.
[29] M. C. Davey and D. J. C. MacKay, Low density parity check codes over GF(q), in Proc. Information Theory Workshop, pp. 70-71, Jun. 1998.
[30] X.-Y. Hu, E. Eleftheriou, and D. M.
Arnold, Regular and irregular progressive edge-growth Tanner graphs, IEEE Transactions on Information Theory, vol. 51, no. 1, Jan. 2005.
[31] Z. Li and B. V. K. V. Kumar, A class of good quasi-cyclic low-density parity check codes based on progressive edge growth graph, in Conference Record of the Thirty-Eighth Asilomar Conference on Signals, Systems and Computers, vol. 2, 7-10 Nov. 2004.
[32] W. Chang and J. R. Cruz, Performance and decoding complexity of nonbinary LDPC codes for magnetic recording, IEEE Trans. Magn., vol. 44, no. 1, Jan. 2008.
[33] W. Chang and J. R. Cruz, Nonbinary LDPC codes for 4-kB sectors, IEEE Trans. Magn., vol. 44, no. 11, Nov. 2008.
[34] W. Chang and J. R. Cruz, Comments on "Performance and decoding complexity of nonbinary LDPC codes for magnetic recording", IEEE Transactions on Magnetics, vol. 44, no. 10, Oct. 2008.
[35] H. Song and J. R. Cruz, Reduced-complexity decoding of Q-ary LDPC codes for magnetic recording, IEEE Trans. Magn., vol. 39, no. 2, Mar. 2003.
[36] R. Lidl and H. Niederreiter, Introduction to Finite Fields and Their Applications, 2nd ed. New York: Cambridge University Press, 1994.

Hakim Alhussien received the B.S. and M.S. degrees in electrical engineering with high honors from Jordan University of Science and Technology (JUST), Irbid, in 2001 and 2003, respectively, and the MSEE and Ph.D. degrees in electrical engineering from the University of Minnesota Twin Cities, Minneapolis, in 2008 and 2009, respectively. From 2003 to 2004, he was an Instructor with the Department of Electrical Engineering at Yarmouk University, Jordan. From 2004 to 2008, he held Research and Teaching Assistant positions with the Department of Electrical and Computer Engineering, University of Minnesota. Since September 2008, he has been a Systems Design Engineer with Link-A-Media Devices, Santa Clara. His main research interests are in the applications of coding, signal processing, and information theory to data storage systems.

Jaekyun Moon is a Professor of Electrical Engineering at KAIST. Prof.
Moon received a BSEE degree with high honor from SUNY Stony Brook and then M.S. and Ph.D. degrees in Electrical and Computer Engineering at Carnegie Mellon University. From 1990 through early 2009, he was with the faculty of the Department of Electrical and Computer Engineering at the University of Minnesota, Twin Cities. Prof. Moon's research interests are in the areas of channel characterization, signal processing, and coding for data storage and digital communication. He received the McKnight Land-Grant Professorship from the University of Minnesota. He also received the IBM Faculty Development Awards as well as the IBM Partnership Awards. He was awarded the National Storage Industry Consortium (NSIC) Technical Achievement Award for the invention of the maximum transition run (MTR) code, a widely-used error-control/modulation code in commercial storage systems. He served as Program Chair for the 1997 IEEE Magnetic Recording Conference. He is also Past Chair of the Signal Processing for Storage Technical Committee of the IEEE Communications Society. In 2001, he co-founded Bermai, Inc., a fabless semiconductor start-up, and served as founding President and CTO. He served as a guest Editor for the 2001 IEEE J-SAC issue on Signal Processing for High Density Recording. He also served as an Editor for IEEE Transactions on Magnetics in the area of signal processing and coding. He worked as consulting Chief Scientist at DSPG, Inc. beginning in 2004, and also worked as Chief Technology Officer at Link-A-Media Devices Corp. He is an IEEE Fellow.


More information

DC-Free Turbo Coding Scheme Using MAP/SOVA Algorithms

DC-Free Turbo Coding Scheme Using MAP/SOVA Algorithms Proceedngs of the 5th WSEAS Internatonal Conference on Telecommuncatons and Informatcs, Istanbul, Turkey, May 27-29, 26 (pp192-197 DC-Free Turbo Codng Scheme Usng MAP/SOVA Algorthms Prof. Dr. M. Amr Mokhtar

More information

Lectures - Week 4 Matrix norms, Conditioning, Vector Spaces, Linear Independence, Spanning sets and Basis, Null space and Range of a Matrix

Lectures - Week 4 Matrix norms, Conditioning, Vector Spaces, Linear Independence, Spanning sets and Basis, Null space and Range of a Matrix Lectures - Week 4 Matrx norms, Condtonng, Vector Spaces, Lnear Independence, Spannng sets and Bass, Null space and Range of a Matrx Matrx Norms Now we turn to assocatng a number to each matrx. We could

More information

Problem Set 9 Solutions

Problem Set 9 Solutions Desgn and Analyss of Algorthms May 4, 2015 Massachusetts Insttute of Technology 6.046J/18.410J Profs. Erk Demane, Srn Devadas, and Nancy Lynch Problem Set 9 Solutons Problem Set 9 Solutons Ths problem

More information

Tornado and Luby Transform Codes. Ashish Khisti Presentation October 22, 2003

Tornado and Luby Transform Codes. Ashish Khisti Presentation October 22, 2003 Tornado and Luby Transform Codes Ashsh Khst 6.454 Presentaton October 22, 2003 Background: Erasure Channel Elas[956] studed the Erasure Channel β x x β β x 2 m x 2 k? Capacty of Noseless Erasure Channel

More information

Consider the following passband digital communication system model. c t. modulator. t r a n s m i t t e r. signal decoder.

Consider the following passband digital communication system model. c t. modulator. t r a n s m i t t e r. signal decoder. PASSBAND DIGITAL MODULATION TECHNIQUES Consder the followng passband dgtal communcaton system model. cos( ω + φ ) c t message source m sgnal encoder s modulator s () t communcaton xt () channel t r a n

More information

Notes on Frequency Estimation in Data Streams

Notes on Frequency Estimation in Data Streams Notes on Frequency Estmaton n Data Streams In (one of) the data streamng model(s), the data s a sequence of arrvals a 1, a 2,..., a m of the form a j = (, v) where s the dentty of the tem and belongs to

More information

Lecture 10 Support Vector Machines II

Lecture 10 Support Vector Machines II Lecture 10 Support Vector Machnes II 22 February 2016 Taylor B. Arnold Yale Statstcs STAT 365/665 1/28 Notes: Problem 3 s posted and due ths upcomng Frday There was an early bug n the fake-test data; fxed

More information

For now, let us focus on a specific model of neurons. These are simplified from reality but can achieve remarkable results.

For now, let us focus on a specific model of neurons. These are simplified from reality but can achieve remarkable results. Neural Networks : Dervaton compled by Alvn Wan from Professor Jtendra Malk s lecture Ths type of computaton s called deep learnng and s the most popular method for many problems, such as computer vson

More information

Lecture Notes on Linear Regression

Lecture Notes on Linear Regression Lecture Notes on Lnear Regresson Feng L fl@sdueducn Shandong Unversty, Chna Lnear Regresson Problem In regresson problem, we am at predct a contnuous target value gven an nput feature vector We assume

More information

P R. Lecture 4. Theory and Applications of Pattern Recognition. Dept. of Electrical and Computer Engineering /

P R. Lecture 4. Theory and Applications of Pattern Recognition. Dept. of Electrical and Computer Engineering / Theory and Applcatons of Pattern Recognton 003, Rob Polkar, Rowan Unversty, Glassboro, NJ Lecture 4 Bayes Classfcaton Rule Dept. of Electrcal and Computer Engneerng 0909.40.0 / 0909.504.04 Theory & Applcatons

More information

Feature Selection: Part 1

Feature Selection: Part 1 CSE 546: Machne Learnng Lecture 5 Feature Selecton: Part 1 Instructor: Sham Kakade 1 Regresson n the hgh dmensonal settng How do we learn when the number of features d s greater than the sample sze n?

More information

Lecture 3: Shannon s Theorem

Lecture 3: Shannon s Theorem CSE 533: Error-Correctng Codes (Autumn 006 Lecture 3: Shannon s Theorem October 9, 006 Lecturer: Venkatesan Guruswam Scrbe: Wdad Machmouch 1 Communcaton Model The communcaton model we are usng conssts

More information

Errors for Linear Systems

Errors for Linear Systems Errors for Lnear Systems When we solve a lnear system Ax b we often do not know A and b exactly, but have only approxmatons  and ˆb avalable. Then the best thng we can do s to solve ˆx ˆb exactly whch

More information

princeton univ. F 17 cos 521: Advanced Algorithm Design Lecture 7: LP Duality Lecturer: Matt Weinberg

princeton univ. F 17 cos 521: Advanced Algorithm Design Lecture 7: LP Duality Lecturer: Matt Weinberg prnceton unv. F 17 cos 521: Advanced Algorthm Desgn Lecture 7: LP Dualty Lecturer: Matt Wenberg Scrbe: LP Dualty s an extremely useful tool for analyzng structural propertes of lnear programs. Whle there

More information

Lecture 12: Classification

Lecture 12: Classification Lecture : Classfcaton g Dscrmnant functons g The optmal Bayes classfer g Quadratc classfers g Eucldean and Mahalanobs metrcs g K Nearest Neghbor Classfers Intellgent Sensor Systems Rcardo Guterrez-Osuna

More information

3.1 Expectation of Functions of Several Random Variables. )' be a k-dimensional discrete or continuous random vector, with joint PMF p (, E X E X1 E X

3.1 Expectation of Functions of Several Random Variables. )' be a k-dimensional discrete or continuous random vector, with joint PMF p (, E X E X1 E X Statstcs 1: Probablty Theory II 37 3 EPECTATION OF SEVERAL RANDOM VARIABLES As n Probablty Theory I, the nterest n most stuatons les not on the actual dstrbuton of a random vector, but rather on a number

More information

ISSN: ISO 9001:2008 Certified International Journal of Engineering and Innovative Technology (IJEIT) Volume 3, Issue 1, July 2013

ISSN: ISO 9001:2008 Certified International Journal of Engineering and Innovative Technology (IJEIT) Volume 3, Issue 1, July 2013 ISSN: 2277-375 Constructon of Trend Free Run Orders for Orthogonal rrays Usng Codes bstract: Sometmes when the expermental runs are carred out n a tme order sequence, the response can depend on the run

More information

Composite Hypotheses testing

Composite Hypotheses testing Composte ypotheses testng In many hypothess testng problems there are many possble dstrbutons that can occur under each of the hypotheses. The output of the source s a set of parameters (ponts n a parameter

More information

Week 5: Neural Networks

Week 5: Neural Networks Week 5: Neural Networks Instructor: Sergey Levne Neural Networks Summary In the prevous lecture, we saw how we can construct neural networks by extendng logstc regresson. Neural networks consst of multple

More information

MMA and GCMMA two methods for nonlinear optimization

MMA and GCMMA two methods for nonlinear optimization MMA and GCMMA two methods for nonlnear optmzaton Krster Svanberg Optmzaton and Systems Theory, KTH, Stockholm, Sweden. krlle@math.kth.se Ths note descrbes the algorthms used n the author s 2007 mplementatons

More information

Hongyi Miao, College of Science, Nanjing Forestry University, Nanjing ,China. (Received 20 June 2013, accepted 11 March 2014) I)ϕ (k)

Hongyi Miao, College of Science, Nanjing Forestry University, Nanjing ,China. (Received 20 June 2013, accepted 11 March 2014) I)ϕ (k) ISSN 1749-3889 (prnt), 1749-3897 (onlne) Internatonal Journal of Nonlnear Scence Vol.17(2014) No.2,pp.188-192 Modfed Block Jacob-Davdson Method for Solvng Large Sparse Egenproblems Hongy Mao, College of

More information

Outline and Reading. Dynamic Programming. Dynamic Programming revealed. Computing Fibonacci. The General Dynamic Programming Technique

Outline and Reading. Dynamic Programming. Dynamic Programming revealed. Computing Fibonacci. The General Dynamic Programming Technique Outlne and Readng Dynamc Programmng The General Technque ( 5.3.2) -1 Knapsac Problem ( 5.3.3) Matrx Chan-Product ( 5.3.1) Dynamc Programmng verson 1.4 1 Dynamc Programmng verson 1.4 2 Dynamc Programmng

More information

Chapter 5. Solution of System of Linear Equations. Module No. 6. Solution of Inconsistent and Ill Conditioned Systems

Chapter 5. Solution of System of Linear Equations. Module No. 6. Solution of Inconsistent and Ill Conditioned Systems Numercal Analyss by Dr. Anta Pal Assstant Professor Department of Mathematcs Natonal Insttute of Technology Durgapur Durgapur-713209 emal: anta.bue@gmal.com 1 . Chapter 5 Soluton of System of Lnear Equatons

More information

Temperature. Chapter Heat Engine

Temperature. Chapter Heat Engine Chapter 3 Temperature In prevous chapters of these notes we ntroduced the Prncple of Maxmum ntropy as a technque for estmatng probablty dstrbutons consstent wth constrants. In Chapter 9 we dscussed the

More information

Min Cut, Fast Cut, Polynomial Identities

Min Cut, Fast Cut, Polynomial Identities Randomzed Algorthms, Summer 016 Mn Cut, Fast Cut, Polynomal Identtes Instructor: Thomas Kesselhem and Kurt Mehlhorn 1 Mn Cuts n Graphs Lecture (5 pages) Throughout ths secton, G = (V, E) s a mult-graph.

More information

ANSWERS. Problem 1. and the moment generating function (mgf) by. defined for any real t. Use this to show that E( U) var( U)

ANSWERS. Problem 1. and the moment generating function (mgf) by. defined for any real t. Use this to show that E( U) var( U) Econ 413 Exam 13 H ANSWERS Settet er nndelt 9 deloppgaver, A,B,C, som alle anbefales å telle lkt for å gøre det ltt lettere å stå. Svar er gtt . Unfortunately, there s a prntng error n the hnt of

More information

ECE559VV Project Report

ECE559VV Project Report ECE559VV Project Report (Supplementary Notes Loc Xuan Bu I. MAX SUM-RATE SCHEDULING: THE UPLINK CASE We have seen (n the presentaton that, for downlnk (broadcast channels, the strategy maxmzng the sum-rate

More information

College of Computer & Information Science Fall 2009 Northeastern University 20 October 2009

College of Computer & Information Science Fall 2009 Northeastern University 20 October 2009 College of Computer & Informaton Scence Fall 2009 Northeastern Unversty 20 October 2009 CS7880: Algorthmc Power Tools Scrbe: Jan Wen and Laura Poplawsk Lecture Outlne: Prmal-dual schema Network Desgn:

More information

EEE 241: Linear Systems

EEE 241: Linear Systems EEE : Lnear Systems Summary #: Backpropagaton BACKPROPAGATION The perceptron rule as well as the Wdrow Hoff learnng were desgned to tran sngle layer networks. They suffer from the same dsadvantage: they

More information

Tracking with Kalman Filter

Tracking with Kalman Filter Trackng wth Kalman Flter Scott T. Acton Vrgna Image and Vdeo Analyss (VIVA), Charles L. Brown Department of Electrcal and Computer Engneerng Department of Bomedcal Engneerng Unversty of Vrgna, Charlottesvlle,

More information

The Geometry of Logit and Probit

The Geometry of Logit and Probit The Geometry of Logt and Probt Ths short note s meant as a supplement to Chapters and 3 of Spatal Models of Parlamentary Votng and the notaton and reference to fgures n the text below s to those two chapters.

More information

FREQUENCY DISTRIBUTIONS Page 1 of The idea of a frequency distribution for sets of observations will be introduced,

FREQUENCY DISTRIBUTIONS Page 1 of The idea of a frequency distribution for sets of observations will be introduced, FREQUENCY DISTRIBUTIONS Page 1 of 6 I. Introducton 1. The dea of a frequency dstrbuton for sets of observatons wll be ntroduced, together wth some of the mechancs for constructng dstrbutons of data. Then

More information

Report on Image warping

Report on Image warping Report on Image warpng Xuan Ne, Dec. 20, 2004 Ths document summarzed the algorthms of our mage warpng soluton for further study, and there s a detaled descrpton about the mplementaton of these algorthms.

More information

Boostrapaggregating (Bagging)

Boostrapaggregating (Bagging) Boostrapaggregatng (Baggng) An ensemble meta-algorthm desgned to mprove the stablty and accuracy of machne learnng algorthms Can be used n both regresson and classfcaton Reduces varance and helps to avod

More information

Limited Dependent Variables

Limited Dependent Variables Lmted Dependent Varables. What f the left-hand sde varable s not a contnuous thng spread from mnus nfnty to plus nfnty? That s, gven a model = f (, β, ε, where a. s bounded below at zero, such as wages

More information

Difference Equations

Difference Equations Dfference Equatons c Jan Vrbk 1 Bascs Suppose a sequence of numbers, say a 0,a 1,a,a 3,... s defned by a certan general relatonshp between, say, three consecutve values of the sequence, e.g. a + +3a +1

More information

U.C. Berkeley CS294: Spectral Methods and Expanders Handout 8 Luca Trevisan February 17, 2016

U.C. Berkeley CS294: Spectral Methods and Expanders Handout 8 Luca Trevisan February 17, 2016 U.C. Berkeley CS94: Spectral Methods and Expanders Handout 8 Luca Trevsan February 7, 06 Lecture 8: Spectral Algorthms Wrap-up In whch we talk about even more generalzatons of Cheeger s nequaltes, and

More information

Supporting Information

Supporting Information Supportng Informaton The neural network f n Eq. 1 s gven by: f x l = ReLU W atom x l + b atom, 2 where ReLU s the element-wse rectfed lnear unt, 21.e., ReLUx = max0, x, W atom R d d s the weght matrx to

More information

High-Speed Decoding of the Binary Golay Code

High-Speed Decoding of the Binary Golay Code Hgh-Speed Decodng of the Bnary Golay Code H. P. Lee *1, C. H. Chang 1, S. I. Chu 2 1 Department of Computer Scence and Informaton Engneerng, Fortune Insttute of Technology, Kaohsung 83160, Tawan *hpl@fotech.edu.tw

More information

The Concept of Beamforming

The Concept of Beamforming ELG513 Smart Antennas S.Loyka he Concept of Beamformng Generc representaton of the array output sgnal, 1 where w y N 1 * = 1 = w x = w x (4.1) complex weghts, control the array pattern; y and x - narrowband

More information

EEL 6266 Power System Operation and Control. Chapter 3 Economic Dispatch Using Dynamic Programming

EEL 6266 Power System Operation and Control. Chapter 3 Economic Dispatch Using Dynamic Programming EEL 6266 Power System Operaton and Control Chapter 3 Economc Dspatch Usng Dynamc Programmng Pecewse Lnear Cost Functons Common practce many utltes prefer to represent ther generator cost functons as sngle-

More information

Case A. P k = Ni ( 2L i k 1 ) + (# big cells) 10d 2 P k.

Case A. P k = Ni ( 2L i k 1 ) + (# big cells) 10d 2 P k. THE CELLULAR METHOD In ths lecture, we ntroduce the cellular method as an approach to ncdence geometry theorems lke the Szemeréd-Trotter theorem. The method was ntroduced n the paper Combnatoral complexty

More information

NP-Completeness : Proofs

NP-Completeness : Proofs NP-Completeness : Proofs Proof Methods A method to show a decson problem Π NP-complete s as follows. (1) Show Π NP. (2) Choose an NP-complete problem Π. (3) Show Π Π. A method to show an optmzaton problem

More information

Resource Allocation with a Budget Constraint for Computing Independent Tasks in the Cloud

Resource Allocation with a Budget Constraint for Computing Independent Tasks in the Cloud Resource Allocaton wth a Budget Constrant for Computng Independent Tasks n the Cloud Wemng Sh and Bo Hong School of Electrcal and Computer Engneerng Georga Insttute of Technology, USA 2nd IEEE Internatonal

More information

Formulas for the Determinant

Formulas for the Determinant page 224 224 CHAPTER 3 Determnants e t te t e 2t 38 A = e t 2te t e 2t e t te t 2e 2t 39 If 123 A = 345, 456 compute the matrx product A adj(a) What can you conclude about det(a)? For Problems 40 43, use

More information

Linear Approximation with Regularization and Moving Least Squares

Linear Approximation with Regularization and Moving Least Squares Lnear Approxmaton wth Regularzaton and Movng Least Squares Igor Grešovn May 007 Revson 4.6 (Revson : March 004). 5 4 3 0.5 3 3.5 4 Contents: Lnear Fttng...4. Weghted Least Squares n Functon Approxmaton...

More information

Section 8.3 Polar Form of Complex Numbers

Section 8.3 Polar Form of Complex Numbers 80 Chapter 8 Secton 8 Polar Form of Complex Numbers From prevous classes, you may have encountered magnary numbers the square roots of negatve numbers and, more generally, complex numbers whch are the

More information

S Advanced Digital Communication (4 cr) Targets today

S Advanced Digital Communication (4 cr) Targets today S.72-3320 Advanced Dtal Communcaton (4 cr) Convolutonal Codes Tarets today Why to apply convolutonal codn? Defnn convolutonal codes Practcal encodn crcuts Defnn qualty of convolutonal codes Decodn prncples

More information

TOPICS MULTIPLIERLESS FILTER DESIGN ELEMENTARY SCHOOL ALGORITHM MULTIPLICATION

TOPICS MULTIPLIERLESS FILTER DESIGN ELEMENTARY SCHOOL ALGORITHM MULTIPLICATION 1 2 MULTIPLIERLESS FILTER DESIGN Realzaton of flters wthout full-fledged multplers Some sldes based on support materal by W. Wolf for hs book Modern VLSI Desgn, 3 rd edton. Partly based on followng papers:

More information

Chapter 13: Multiple Regression

Chapter 13: Multiple Regression Chapter 13: Multple Regresson 13.1 Developng the multple-regresson Model The general model can be descrbed as: It smplfes for two ndependent varables: The sample ft parameter b 0, b 1, and b are used to

More information

NUMERICAL DIFFERENTIATION

NUMERICAL DIFFERENTIATION NUMERICAL DIFFERENTIATION 1 Introducton Dfferentaton s a method to compute the rate at whch a dependent output y changes wth respect to the change n the ndependent nput x. Ths rate of change s called the

More information

The Gaussian classifier. Nuno Vasconcelos ECE Department, UCSD

The Gaussian classifier. Nuno Vasconcelos ECE Department, UCSD he Gaussan classfer Nuno Vasconcelos ECE Department, UCSD Bayesan decson theory recall that we have state of the world X observatons g decson functon L[g,y] loss of predctng y wth g Bayes decson rule s

More information

On the Stopping Distance and the Stopping Redundancy of Codes

On the Stopping Distance and the Stopping Redundancy of Codes On the Stoppng Dstance and the Stoppng Redundancy of Codes Moshe Schwartz Unversty of Calforna San Dego La Jolla, CA 92093, U.S.A. moosh@everest.ucsd.edu Abstract It s now well known that the performance

More information

Refined Coding Bounds for Network Error Correction

Refined Coding Bounds for Network Error Correction Refned Codng Bounds for Network Error Correcton Shenghao Yang Department of Informaton Engneerng The Chnese Unversty of Hong Kong Shatn, N.T., Hong Kong shyang5@e.cuhk.edu.hk Raymond W. Yeung Department

More information

Probability-Theoretic Junction Trees

Probability-Theoretic Junction Trees Probablty-Theoretc Juncton Trees Payam Pakzad, (wth Venkat Anantharam, EECS Dept, U.C. Berkeley EPFL, ALGO/LMA Semnar 2/2/2004 Margnalzaton Problem Gven an arbtrary functon of many varables, fnd (some

More information

LINEAR REGRESSION ANALYSIS. MODULE IX Lecture Multicollinearity

LINEAR REGRESSION ANALYSIS. MODULE IX Lecture Multicollinearity LINEAR REGRESSION ANALYSIS MODULE IX Lecture - 30 Multcollnearty Dr. Shalabh Department of Mathematcs and Statstcs Indan Insttute of Technology Kanpur 2 Remedes for multcollnearty Varous technques have

More information

CSci 6974 and ECSE 6966 Math. Tech. for Vision, Graphics and Robotics Lecture 21, April 17, 2006 Estimating A Plane Homography

CSci 6974 and ECSE 6966 Math. Tech. for Vision, Graphics and Robotics Lecture 21, April 17, 2006 Estimating A Plane Homography CSc 6974 and ECSE 6966 Math. Tech. for Vson, Graphcs and Robotcs Lecture 21, Aprl 17, 2006 Estmatng A Plane Homography Overvew We contnue wth a dscusson of the major ssues, usng estmaton of plane projectve

More information

A Robust Method for Calculating the Correlation Coefficient

A Robust Method for Calculating the Correlation Coefficient A Robust Method for Calculatng the Correlaton Coeffcent E.B. Nven and C. V. Deutsch Relatonshps between prmary and secondary data are frequently quantfed usng the correlaton coeffcent; however, the tradtonal

More information

Negative Binomial Regression

Negative Binomial Regression STATGRAPHICS Rev. 9/16/2013 Negatve Bnomal Regresson Summary... 1 Data Input... 3 Statstcal Model... 3 Analyss Summary... 4 Analyss Optons... 7 Plot of Ftted Model... 8 Observed Versus Predcted... 10 Predctons...

More information

Generalized Linear Methods

Generalized Linear Methods Generalzed Lnear Methods 1 Introducton In the Ensemble Methods the general dea s that usng a combnaton of several weak learner one could make a better learner. More formally, assume that we have a set

More information

Lecture 4. Instructor: Haipeng Luo

Lecture 4. Instructor: Haipeng Luo Lecture 4 Instructor: Hapeng Luo In the followng lectures, we focus on the expert problem and study more adaptve algorthms. Although Hedge s proven to be worst-case optmal, one may wonder how well t would

More information

Speeding up Computation of Scalar Multiplication in Elliptic Curve Cryptosystem

Speeding up Computation of Scalar Multiplication in Elliptic Curve Cryptosystem H.K. Pathak et. al. / (IJCSE) Internatonal Journal on Computer Scence and Engneerng Speedng up Computaton of Scalar Multplcaton n Ellptc Curve Cryptosystem H. K. Pathak Manju Sangh S.o.S n Computer scence

More information

5 The Rational Canonical Form

5 The Rational Canonical Form 5 The Ratonal Canoncal Form Here p s a monc rreducble factor of the mnmum polynomal m T and s not necessarly of degree one Let F p denote the feld constructed earler n the course, consstng of all matrces

More information

Module 9. Lecture 6. Duality in Assignment Problems

Module 9. Lecture 6. Duality in Assignment Problems Module 9 1 Lecture 6 Dualty n Assgnment Problems In ths lecture we attempt to answer few other mportant questons posed n earler lecture for (AP) and see how some of them can be explaned through the concept

More information

A linear imaging system with white additive Gaussian noise on the observed data is modeled as follows:

A linear imaging system with white additive Gaussian noise on the observed data is modeled as follows: Supplementary Note Mathematcal bacground A lnear magng system wth whte addtve Gaussan nose on the observed data s modeled as follows: X = R ϕ V + G, () where X R are the expermental, two-dmensonal proecton

More information

Time-Varying Systems and Computations Lecture 6

Time-Varying Systems and Computations Lecture 6 Tme-Varyng Systems and Computatons Lecture 6 Klaus Depold 14. Januar 2014 The Kalman Flter The Kalman estmaton flter attempts to estmate the actual state of an unknown dscrete dynamcal system, gven nosy

More information

Iterative Multiuser Receiver Utilizing Soft Decoding Information

Iterative Multiuser Receiver Utilizing Soft Decoding Information teratve Multuser Recever Utlzng Soft Decodng nformaton Kmmo Kettunen and Tmo Laaso Helsn Unversty of Technology Laboratory of Telecommuncatons Technology emal: Kmmo.Kettunen@hut.f, Tmo.Laaso@hut.f Abstract

More information

Lecture 4: Universal Hash Functions/Streaming Cont d

Lecture 4: Universal Hash Functions/Streaming Cont d CSE 5: Desgn and Analyss of Algorthms I Sprng 06 Lecture 4: Unversal Hash Functons/Streamng Cont d Lecturer: Shayan Oves Gharan Aprl 6th Scrbe: Jacob Schreber Dsclamer: These notes have not been subjected

More information

Chapter 6. BCH Codes

Chapter 6. BCH Codes Wreless Informaton Transmsson System Lab Chapter 6 BCH Codes Insttute of Communcatons Engneerng Natonal Sun Yat-sen Unversty Outlne Bnary Prmtve BCH Codes Decodng of the BCH Codes Implementaton of Galos

More information

Dynamic Programming. Preview. Dynamic Programming. Dynamic Programming. Dynamic Programming (Example: Fibonacci Sequence)

Dynamic Programming. Preview. Dynamic Programming. Dynamic Programming. Dynamic Programming (Example: Fibonacci Sequence) /24/27 Prevew Fbonacc Sequence Longest Common Subsequence Dynamc programmng s a method for solvng complex problems by breakng them down nto smpler sub-problems. It s applcable to problems exhbtng the propertes

More information

Psychology 282 Lecture #24 Outline Regression Diagnostics: Outliers

Psychology 282 Lecture #24 Outline Regression Diagnostics: Outliers Psychology 282 Lecture #24 Outlne Regresson Dagnostcs: Outlers In an earler lecture we studed the statstcal assumptons underlyng the regresson model, ncludng the followng ponts: Formal statement of assumptons.

More information

Annexes. EC.1. Cycle-base move illustration. EC.2. Problem Instances

Annexes. EC.1. Cycle-base move illustration. EC.2. Problem Instances ec Annexes Ths Annex frst llustrates a cycle-based move n the dynamc-block generaton tabu search. It then dsplays the characterstcs of the nstance sets, followed by detaled results of the parametercalbraton

More information

Singular Value Decomposition: Theory and Applications

Singular Value Decomposition: Theory and Applications Sngular Value Decomposton: Theory and Applcatons Danel Khashab Sprng 2015 Last Update: March 2, 2015 1 Introducton A = UDV where columns of U and V are orthonormal and matrx D s dagonal wth postve real

More information

STAT 309: MATHEMATICAL COMPUTATIONS I FALL 2018 LECTURE 16

STAT 309: MATHEMATICAL COMPUTATIONS I FALL 2018 LECTURE 16 STAT 39: MATHEMATICAL COMPUTATIONS I FALL 218 LECTURE 16 1 why teratve methods f we have a lnear system Ax = b where A s very, very large but s ether sparse or structured (eg, banded, Toepltz, banded plus

More information

4DVAR, according to the name, is a four-dimensional variational method.

4DVAR, according to the name, is a four-dimensional variational method. 4D-Varatonal Data Assmlaton (4D-Var) 4DVAR, accordng to the name, s a four-dmensonal varatonal method. 4D-Var s actually a drect generalzaton of 3D-Var to handle observatons that are dstrbuted n tme. The

More information

= z 20 z n. (k 20) + 4 z k = 4

= z 20 z n. (k 20) + 4 z k = 4 Problem Set #7 solutons 7.2.. (a Fnd the coeffcent of z k n (z + z 5 + z 6 + z 7 + 5, k 20. We use the known seres expanson ( n+l ( z l l z n below: (z + z 5 + z 6 + z 7 + 5 (z 5 ( + z + z 2 + z + 5 5

More information

Split alignment. Martin C. Frith April 13, 2012

Split alignment. Martin C. Frith April 13, 2012 Splt algnment Martn C. Frth Aprl 13, 2012 1 Introducton Ths document s about algnng a query sequence to a genome, allowng dfferent parts of the query to match dfferent parts of the genome. Here are some

More information

Finite Mixture Models and Expectation Maximization. Most slides are from: Dr. Mario Figueiredo, Dr. Anil Jain and Dr. Rong Jin

Finite Mixture Models and Expectation Maximization. Most slides are from: Dr. Mario Figueiredo, Dr. Anil Jain and Dr. Rong Jin Fnte Mxture Models and Expectaton Maxmzaton Most sldes are from: Dr. Maro Fgueredo, Dr. Anl Jan and Dr. Rong Jn Recall: The Supervsed Learnng Problem Gven a set of n samples X {(x, y )},,,n Chapter 3 of

More information

Lecture 2: Prelude to the big shrink

Lecture 2: Prelude to the big shrink Lecture 2: Prelude to the bg shrnk Last tme A slght detour wth vsualzaton tools (hey, t was the frst day... why not start out wth somethng pretty to look at?) Then, we consdered a smple 120a-style regresson

More information

ELASTIC WAVE PROPAGATION IN A CONTINUOUS MEDIUM

ELASTIC WAVE PROPAGATION IN A CONTINUOUS MEDIUM ELASTIC WAVE PROPAGATION IN A CONTINUOUS MEDIUM An elastc wave s a deformaton of the body that travels throughout the body n all drectons. We can examne the deformaton over a perod of tme by fxng our look

More information

18.1 Introduction and Recap

18.1 Introduction and Recap CS787: Advanced Algorthms Scrbe: Pryananda Shenoy and Shjn Kong Lecturer: Shuch Chawla Topc: Streamng Algorthmscontnued) Date: 0/26/2007 We contnue talng about streamng algorthms n ths lecture, ncludng

More information

Error Probability for M Signals

Error Probability for M Signals Chapter 3 rror Probablty for M Sgnals In ths chapter we dscuss the error probablty n decdng whch of M sgnals was transmtted over an arbtrary channel. We assume the sgnals are represented by a set of orthonormal

More information

Entropy Coding. A complete entropy codec, which is an encoder/decoder. pair, consists of the process of encoding or

Entropy Coding. A complete entropy codec, which is an encoder/decoder. pair, consists of the process of encoding or Sgnal Compresson Sgnal Compresson Entropy Codng Entropy codng s also known as zero-error codng, data compresson or lossless compresson. Entropy codng s wdely used n vrtually all popular nternatonal multmeda

More information

COS 521: Advanced Algorithms Game Theory and Linear Programming

COS 521: Advanced Algorithms Game Theory and Linear Programming COS 521: Advanced Algorthms Game Theory and Lnear Programmng Moses Charkar February 27, 2013 In these notes, we ntroduce some basc concepts n game theory and lnear programmng (LP). We show a connecton

More information

Numerical Heat and Mass Transfer

Numerical Heat and Mass Transfer Master degree n Mechancal Engneerng Numercal Heat and Mass Transfer 06-Fnte-Dfference Method (One-dmensonal, steady state heat conducton) Fausto Arpno f.arpno@uncas.t Introducton Why we use models and

More information

SIO 224. m(r) =(ρ(r),k s (r),µ(r))

SIO 224. m(r) =(ρ(r),k s (r),µ(r)) SIO 224 1. A bref look at resoluton analyss Here s some background for the Masters and Gubbns resoluton paper. Global Earth models are usually found teratvely by assumng a startng model and fndng small

More information

The Second Anti-Mathima on Game Theory

The Second Anti-Mathima on Game Theory The Second Ant-Mathma on Game Theory Ath. Kehagas December 1 2006 1 Introducton In ths note we wll examne the noton of game equlbrum for three types of games 1. 2-player 2-acton zero-sum games 2. 2-player

More information

Chapter 7 Channel Capacity and Coding

Chapter 7 Channel Capacity and Coding Chapter 7 Channel Capacty and Codng Contents 7. Channel models and channel capacty 7.. Channel models Bnary symmetrc channel Dscrete memoryless channels Dscrete-nput, contnuous-output channel Waveform

More information

Lecture Space-Bounded Derandomization

Lecture Space-Bounded Derandomization Notes on Complexty Theory Last updated: October, 2008 Jonathan Katz Lecture Space-Bounded Derandomzaton 1 Space-Bounded Derandomzaton We now dscuss derandomzaton of space-bounded algorthms. Here non-trval

More information

ORIGIN 1. PTC_CE_BSD_3.2_us_mp.mcdx. Mathcad Enabled Content 2011 Knovel Corp.

ORIGIN 1. PTC_CE_BSD_3.2_us_mp.mcdx. Mathcad Enabled Content 2011 Knovel Corp. Clck to Vew Mathcad Document 2011 Knovel Corp. Buldng Structural Desgn. homas P. Magner, P.E. 2011 Parametrc echnology Corp. Chapter 3: Renforced Concrete Slabs and Beams 3.2 Renforced Concrete Beams -

More information

Computing Correlated Equilibria in Multi-Player Games

Computing Correlated Equilibria in Multi-Player Games Computng Correlated Equlbra n Mult-Player Games Chrstos H. Papadmtrou Presented by Zhanxang Huang December 7th, 2005 1 The Author Dr. Chrstos H. Papadmtrou CS professor at UC Berkley (taught at Harvard,

More information

The Synchronous 8th-Order Differential Attack on 12 Rounds of the Block Cipher HyRAL

The Synchronous 8th-Order Differential Attack on 12 Rounds of the Block Cipher HyRAL The Synchronous 8th-Order Dfferental Attack on 12 Rounds of the Block Cpher HyRAL Yasutaka Igarash, Sej Fukushma, and Tomohro Hachno Kagoshma Unversty, Kagoshma, Japan Emal: {garash, fukushma, hachno}@eee.kagoshma-u.ac.jp

More information