On the Construction of Polar Codes

Ramtin Pedarsani, School of Computer and Communication Systems, Lausanne, Switzerland. ramtin.pedarsani@epfl.ch
S. Hamed Hassani, School of Computer and Communication Systems, Lausanne, Switzerland.
Ido Tal, Information Theory and Applications, UCSD, La Jolla, CA, USA.
Emre Telatar, School of Computer and Communication Systems, Lausanne, Switzerland.

arXiv: v1 [cs.IT] 20 Sep 2012

Abstract -- We consider the problem of efficiently constructing polar codes over binary memoryless symmetric (BMS) channels. The complexity of designing polar codes via an exact evaluation of the polarized channels, to find which ones are "good", appears to be exponential in the block length. In [3], Tal and Vardy show that if instead the evaluation is performed approximately, the construction has only linear complexity. In this paper, we follow this approach and present a framework where the algorithms of [3] and new related algorithms can be analyzed for complexity and accuracy. We provide numerical and analytical results on the efficiency of such algorithms; in particular, we show that one can find all the good channels (except a vanishing fraction) with complexity that is almost linear in the block-length (up to a polylogarithmic factor).

I. INTRODUCTION

A. Polar Codes

Polar coding, introduced by Arıkan in [1], is an encoding/decoding scheme that provably achieves the capacity of the class of BMS channels. Let W be a BMS channel. Given the rate R < I(W), polar coding is based on choosing a set of 2^n R rows of the matrix

  G_n = [1 0; 1 1]^{(x) n}   (the n-fold Kronecker product)

to form a (2^n R) x 2^n matrix which is used as the generator matrix in the encoding procedure.(1) The way this set is chosen depends on the channel W and uses a phenomenon called channel polarization: consider an infinite binary tree, place the underlying channel W on the root node, and continue recursively as follows.
Having the channel P : {0,1} -> Y on a node of the tree, define the channels P^- : {0,1} -> Y^2 and P^+ : {0,1} -> {0,1} x Y^2 by

  P^-(y1, y2 | x1) = sum_{x2 in {0,1}} (1/2) P(y1 | x1 + x2) P(y2 | x2),    (1)
  P^+(y1, y2, x1 | x2) = (1/2) P(y1 | x1 + x2) P(y2 | x2),                  (2)

(with + denoting modulo-2 addition) and place P^- and P^+ as the left and right children of this node. As a result, at level n there are N = 2^n channels, which we denote from left to right by W_N^(1) to W_N^(N). In [1], Arıkan proved that as n -> infinity, a fraction approaching I(W) of the channels at level n have capacity close to 1 (call them the noiseless channels) and a fraction approaching 1 - I(W) have capacity close to 0 (call them the completely noisy channels). Given the rate R, the indices of the matrix G_n are chosen as follows: choose a subset of the channels {W_N^(i)}_{1 <= i <= N} with the most mutual information, and choose the rows of G_n with the same indices as these channels. For example, if the channel W_N^(j) is chosen, then the j-th row of G_n is selected, up to the bit-reversal permutation. In the following, given n, we call the set of indices of the NR channels with the most mutual information the set of good indices.

We can equivalently say that as n -> infinity, the fraction of channels with Bhattacharyya constant near 0 approaches I(W) and the fraction of channels with Bhattacharyya constant near 1 approaches 1 - I(W). The Bhattacharyya constant of a channel P : {0,1} -> Y is given by

  Z(P) = sum_{y in Y} sqrt( P(y|0) P(y|1) ).    (3)

Therefore, we can alternatively call the set of indices of the NR channels with the least Bhattacharyya parameters the set of good indices. It is also interesting to mention that the sum of the Bhattacharyya parameters of the chosen channels is an upper bound on the block error probability of polar codes under successive cancellation decoding.

(1) There are extensions of polar codes, given in [2], which use different kinds of matrices.

B. Problem Formulation

Designing a polar code is equivalent to finding the set of good indices.
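For a channel with a small output alphabet, the transforms (1)-(2) and the Bhattacharyya constant (3) can be computed directly. The sketch below is illustrative and not from the paper; it assumes a hypothetical representation of a channel as a dict mapping each output symbol y to the pair (P(y|0), P(y|1)), and the function names are our own.

```python
from itertools import product

def polar_minus(P):
    """The minus transform (1): output (y1, y2), input x1."""
    Q = {}
    for (y1, (a1, b1)), (y2, (a2, b2)) in product(P.items(), P.items()):
        # sum over x2 of (1/2) P(y1 | x1 + x2) P(y2 | x2)
        p0 = 0.5 * (a1 * a2 + b1 * b2)  # x1 = 0
        p1 = 0.5 * (b1 * a2 + a1 * b2)  # x1 = 1
        Q[(y1, y2)] = (p0, p1)
    return Q

def polar_plus(P):
    """The plus transform (2): output (y1, y2, x1), input x2."""
    Q = {}
    for (y1, (a1, b1)), (y2, (a2, b2)) in product(P.items(), P.items()):
        for x1 in (0, 1):
            # (1/2) P(y1 | x1 + x2) P(y2 | x2) for x2 = 0 and x2 = 1
            p0 = 0.5 * (a1 if x1 == 0 else b1) * a2
            p1 = 0.5 * (b1 if x1 == 0 else a1) * b2
            Q[(y1, y2, x1)] = (p0, p1)
    return Q

def bhattacharyya(P):
    """Z(P) of eq. (3)."""
    return sum((p0 * p1) ** 0.5 for p0, p1 in P.values())
```

For a BSC this reproduces the well-known identities: the plus transform satisfies Z(W^+) = Z(W)^2, and the minus transform of BSC(p) behaves like BSC(2p(1-p)). Note that the output alphabet squares at every application, which is exactly the blow-up that the quantization algorithms below are designed to control.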
The main difficulty in this task is that, since the output alphabet of W_N^(i) is Y^N x {0,1}^(i-1), the cardinality of the output alphabets of the channels at level n of the binary tree is doubly exponential in n, i.e., exponential in the block-length. So computing the exact transition probabilities of these channels seems intractable, and hence we need efficient methods to approximate them. In [1], it is suggested to use a Monte-Carlo method for estimating the Bhattacharyya parameters. Another method in this regard is quantization [3], [4], [5], [6, Appendix B]: approximating the given channel with a channel that has fewer output symbols. More precisely, given a number k, the task is to come up with efficient methods to replace channels that have more than k outputs with "close" channels that have at most k outputs. A few comments in this regard are the following.

The term "close" above depends on the definition of the quantization error, which can differ depending on the context. In our problem, in its most general setting, we can define the quantization error as the difference between the true set of good indices and the approximate set of good indices. However, it seems that analyzing this type of error may be difficult, and in the sequel we consider types of errors that are easier to analyze. Thus, as a compromise, we will intuitively think of two channels as being close if they are close with respect to some given metric, typically mutual information but sometimes probability of error. Moreover, we require that this closeness is in the right direction: the approximated channel must be a pessimistic version of the true channel. Thus, the approximated set of good channels will be a subset of the true set.

Intuitively, we expect that as k increases, the overall error due to quantization decreases; the main art in designing quantization methods is to have a small error while using relatively small values of k. However, an important property for any quantization algorithm is that as k grows large, the approximate set of good indices obtained with k fixed approaches the true set of good indices. We give a precise mathematical definition in the sequel.

Taking the above-mentioned factors into account, a suitable formulation of the quantization problem is to find procedures that replace each channel P at each level of the binary tree with another symmetric channel P~ whose number of output symbols is limited to k, such that, firstly, the set of good indices obtained with this procedure is a subset of the true good indices obtained from channel polarization (i.e., the channel P~ is polar degraded with respect to P) and, secondly, the ratio of these good indices is maximized. More precisely, we start from the channel W at the root node of the binary tree, quantize it to W~, and obtain W~^- and W~^+ according to (1) and (2). Then we quantize the two new channels and continue the procedure to complete the tree. To state things mathematically, let Q_k be a quantization procedure that assigns to each channel P a symmetric channel P~ such that the output alphabet of P~ is limited to a constant k. We call Q_k admissible if for any i and n

  I(W~_N^(i)) <= I(W_N^(i)).    (4)
One can alternatively call Q_k admissible if for any i and n

  Z(W~_N^(i)) >= Z(W_N^(i)).    (5)

Note that (4) and (5) are essentially equivalent as N grows large. Given an admissible procedure Q_k and a BMS channel W, let(2)

  rho(Q_k, W) = lim_{n -> infinity} |{ i : I(W~_N^(i)) > 1/2 }| / N.    (6)

So the quantization problem is the following: given a number k in N and a channel W, how can we find admissible procedures Q_k such that rho(Q_k, W) is maximized and is close to the capacity of W? Can we reach the capacity of W as k goes to infinity? Are such schemes universal, in the sense that they work well for all BMS channels? It is worth mentioning that if we first let k tend to infinity and then n to infinity, then the limit is indeed the capacity; but we are addressing a different question here, namely, we first let n tend to infinity and then k (or perhaps couple k to n). In Section IV, we indeed prove that such schemes exist.

(2) Instead of 1/2 in (6) we can use any number in (0,1).

II. ALGORITHMS FOR QUANTIZATION

A. Preliminaries

Any discrete BMS channel can be represented as a collection of binary symmetric channels (BSCs). The binary input is given to one of these BSCs at random, such that the i-th BSC is chosen with probability p_i. The output of this BSC, together with its crossover probability x_i, is considered as the output of the channel. Therefore, a discrete BMS channel W can be completely described by a random variable chi in [0, 1/2]. The pdf of chi will be of the form

  P_chi(x) = sum_{i=1}^{m} p_i delta(x - x_i),    (7)

such that sum_{i=1}^{m} p_i = 1 and 0 <= x_i <= 1/2. Note that Z(W) and 1 - I(W) are the expectations of the functions f(x) = 2 sqrt(x(1-x)) and g(x) = -x log(x) - (1-x) log(1-x) over the distribution P_chi, respectively. Therefore, in the quantization problem we want to replace the mass distribution P_chi with another mass distribution P_chi~ such that the number of output symbols of chi~ is at most k, and the channel W~ is polar degraded with respect to W. We know that the following two operations imply polar degradation:

  - Stochastically degrading the channel.
  - Replacing the channel with a BEC with the same Bhattacharyya parameter.
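As an illustration (not from the paper), the two expectations above can be evaluated directly from the mass representation (7); here `masses` is assumed to be a list of (p_i, x_i) pairs, and the function names are our own.

```python
import math

def bhattacharyya_from_masses(masses):
    """Z(W) = E[f(chi)] with f(x) = 2 sqrt(x(1-x))."""
    return sum(p * 2.0 * math.sqrt(x * (1.0 - x)) for p, x in masses)

def complementary_capacity(masses):
    """1 - I(W) = E[g(chi)], g the binary entropy function."""
    def g(x):
        if x <= 0.0 or x >= 1.0:
            return 0.0
        return -x * math.log2(x) - (1.0 - x) * math.log2(1.0 - x)
    return sum(p * g(x) for p, x in masses)
```

For a single mass at x these reduce to the familiar BSC values 2 sqrt(x(1-x)) and h(x); a half/half mixture of a perfect BSC (x = 0) and a useless one (x = 1/2) gives Z(W) = 1 - I(W) = 1/2, as for a BEC with erasure probability 1/2.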
Furthermore, note that stochastic dominance of the random variable chi~ with respect to chi implies that W~ is stochastically degraded with respect to W (but the reverse is not true). In the following, we propose different algorithms based on different methods of polar degradation of the channel. The first is a naive algorithm, called the mass transportation algorithm, based on the stochastic dominance of the random variable chi~; the second, which outperforms the first, is called the greedy mass merging algorithm. For both algorithms, the quantized channel is stochastically degraded with respect to the original one.

B. Greedy Mass Transportation Algorithm

In its most general form, this algorithm basically treats the problem as a mass transport problem. In fact, we have non-negative masses p_i at locations x_i, i = 1, ..., m, with x_1 < ... < x_m. What is required is to move the masses, by moves to the right only, so as to concentrate them on k < m locations, while trying to minimize sum_i p_i d_i, where d_i = x_{i+1} - x_i is the amount the i-th mass has moved. Later, we will show that this method is not optimal, but it is useful in the theoretical analysis of the algorithms that follow.

Algorithm 1 Mass Transportation Algorithm
1: Start from the list (p_1, x_1), ..., (p_m, x_m).
2: Repeat m - k times:
3:   Find j = argmin{ p_i d_i : 1 <= i <= m-1 }.
4:   Add p_j to p_{j+1} (i.e., move p_j to x_{j+1}).
5:   Delete (p_j, x_j) from the list.

Note that Algorithm 1 is based on the stochastic dominance of the random variable chi~ with respect to chi. Furthermore, in general we can let d_i = f(x_{i+1}) - f(x_i) for an arbitrary increasing function f.

C. Mass Merging Algorithm

The second algorithm merges the masses. Two masses p_1 and p_2 at positions x_1 and x_2 are merged into one mass p_1 + p_2 at position x~ = (p_1 x_1 + p_2 x_2) / (p_1 + p_2). This algorithm is based on the stochastic degradation of the channel, but the random variable chi~ is not stochastically dominated by chi. The greedy algorithm for merging the masses is the following:

Algorithm 2 Merging Masses Algorithm
1: Start from the list (p_1, x_1), ..., (p_m, x_m).
2: Repeat m - k times:
3:   Find j = argmin{ p_i (f(x~_i) - f(x_i)) - p_{i+1} (f(x_{i+1}) - f(x~_i)) : 1 <= i <= m-1 },
     where x~_i = (p_i x_i + p_{i+1} x_{i+1}) / (p_i + p_{i+1}).
4:   Replace the two masses (p_j, x_j) and (p_{j+1}, x_{j+1}) with the single mass (p_j + p_{j+1}, x~_j).

Note that in practice, the function f can be any increasing concave function, for example the entropy function or the Bhattacharyya function. In fact, since the algorithm is greedy and suboptimal, it is hard to investigate explicitly how changing the function f affects the total error of the algorithm in the end (i.e., how far W~ is from W).

III. BOUNDS ON THE APPROXIMATION LOSS

In this section, we provide bounds on the maximum approximation loss incurred by the algorithms. We define the approximation loss to be the difference between the expectations of the function f under the true distribution P_chi and the approximated distribution P_chi~. Note that the kind of error analyzed in this section is different from what was defined in Section I-B. The connection of the approximation loss with the quantization error is made clear in Theorem 1. For convenience, we will simply stick to the word error instead of approximation loss from now on.
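The two greedy procedures above can be transcribed directly. The sketch below is illustrative, with our own function names; f defaults to the binary entropy function (an increasing concave function on [0, 1/2]), and Algorithm 1 uses the generalized cost d_i = f(x_{i+1}) - f(x_i).

```python
import math

def h(x):
    """Binary entropy: increasing and concave on [0, 1/2]."""
    if x <= 0.0 or x >= 1.0:
        return 0.0
    return -x * math.log2(x) - (1.0 - x) * math.log2(1.0 - x)

def mass_transport(masses, k, f=h):
    """Algorithm 1: repeatedly move the cheapest mass onto its right
    neighbour until only k locations remain (channel degrading)."""
    masses = sorted(masses, key=lambda t: t[1])  # (p, x) with x increasing
    while len(masses) > k:
        j = min(range(len(masses) - 1),
                key=lambda i: masses[i][0] * (f(masses[i + 1][1]) - f(masses[i][1])))
        p_j = masses[j][0]
        p_next, x_next = masses[j + 1]
        masses[j + 1] = (p_j + p_next, x_next)
        del masses[j]
    return masses

def mass_merge(masses, k, f=h):
    """Algorithm 2: greedily merge the adjacent pair whose merge
    changes E[f] the least (channel degrading)."""
    masses = sorted(masses, key=lambda t: t[1])

    def loss(i):
        p1, x1 = masses[i]
        p2, x2 = masses[i + 1]
        xb = (p1 * x1 + p2 * x2) / (p1 + p2)
        # increase of E[f]; non-negative for concave f
        return p1 * (f(xb) - f(x1)) - p2 * (f(x2) - f(xb))

    while len(masses) > k:
        j = min(range(len(masses) - 1), key=loss)
        p1, x1 = masses[j]
        p2, x2 = masses[j + 1]
        masses[j:j + 2] = [(p1 + p2, (p1 * x1 + p2 * x2) / (p1 + p2))]
    return masses
```

Both procedures preserve the total mass and can only increase E[f] (i.e., increase the entropy and decrease the mutual information), which is the degradation direction required for admissibility. Merging additionally preserves the mean of each merged pair.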
We first find an upper bound on the error made in Algorithms 1 and 2, and then use it to provide bounds on the error made while performing operations (1) and (2).

Lemma 1. The maximum error made by Algorithms 1 and 2 is upper bounded by O(1/k).

Proof: First, we derive an upper bound on the error of Algorithms 1 and 2 in each iteration, and therefore a bound on the error of the whole process. Let us consider Algorithm 1. The problem can be reduced to the following optimization problem:

  e_m = max_{p_i, x_i} min_i ( p_i d_i )    (8)

such that

  sum_i p_i = 1,  sum_i d_i <= 1,    (9)

where d_i = f(x_{i+1}) - f(x_i), and f(1/2) - f(0) = 1 is assumed w.l.o.g. We prove the lemma by the Cauchy-Schwarz inequality. We have

  min_i ( p_i d_i ) = ( min_i sqrt(p_i d_i) )^2.    (10)

Now, by applying Cauchy-Schwarz,

  sum_{i=1}^{m} sqrt(p_i d_i) <= ( sum_{i=1}^{m} p_i )^{1/2} ( sum_{i=1}^{m} d_i )^{1/2} <= 1.    (11)

Since the sum of the m terms sqrt(p_i d_i) is at most 1, the minimum of the terms is certainly at most 1/m. Therefore,

  e_m = ( min_i sqrt(p_i d_i) )^2 <= 1/m^2.    (12)

For Algorithm 2, achieving the same bound as Algorithm 1 is trivial. Denote by e^(1) the error made in Algorithm 1 and by e^(2) the error made in Algorithm 2. Then,

  e^(2) = p_i (f(x~_i) - f(x_i)) - p_{i+1} (f(x_{i+1}) - f(x~_i))    (13)
        <= p_i (f(x~_i) - f(x_i))    (14)
        <= p_i (f(x_{i+1}) - f(x_i)) = e^(1).    (15)

Consequently, the error generated by running the whole algorithm can be upper bounded by sum_{m=k}^{infinity} 1/m^2, which is O(1/k).

What is stated in Lemma 1 is a loose upper bound on the error of Algorithm 2. To achieve better bounds, we upper bound the error made in each iteration of Algorithm 2 as follows:

  e_m = p_i (f(x~_i) - f(x_i)) - p_{i+1} (f(x_{i+1}) - f(x~_i))    (16)
      <= p_i (p_{i+1}/(p_i + p_{i+1})) Delta_x_i f'(x_i) - p_{i+1} (p_i/(p_i + p_{i+1})) Delta_x_i f'(x_{i+1})    (17)
      = ( p_i p_{i+1} / (p_i + p_{i+1}) ) Delta_x_i ( f'(x_i) - f'(x_{i+1}) )    (18)
      <= ( (p_i + p_{i+1}) / 4 ) (Delta_x_i)^2 ( -f''(c_i) ),    (19)

where Delta_x_i = x_{i+1} - x_i, (17) is due to the concavity of the function f, and (19) is by the mean value theorem, with x_i <= c_i <= x_{i+1}. If f''(x) is bounded for x in (0,1), then we can prove that min_i e_i = O(1/m^3), similarly to Lemma 1. Therefore, the error of the whole algorithm would be O(1/k^2). Unfortunately, this is not the case for either the entropy function or the Bhattacharyya function. However, we can still achieve a better upper bound for the error of Algorithm 2.

Lemma 2. The maximum error made by Algorithm 2 for the entropy function h(x) can be upper bounded by O( log(k) / k^1.5 ).

Proof: See the Appendix.

We can see that the error is improved by a factor of log(k)/sqrt(k) in comparison with Algorithm 1. Now we use the result of Lemma 1 to provide bounds on the total error made in estimating the mutual information of a channel after n levels of operations (1) and (2).

Theorem 1. Assume W is a BMS channel and that, using Algorithm 1 or 2, we quantize the channel W to a channel W~. Taking k = n^2 is sufficient to give an approximation error that decays to zero.

Proof: First notice that for any two BMS channels W and V, under the polarization operations (1) and (2) the following holds:

  ( I(W^-) - I(V^-) ) + ( I(W^+) - I(V^+) ) = 2 ( I(W) - I(V) ).    (20)

Replacing V with W~ in (20) and using the result of Lemma 1, we conclude that after n levels of polarization, the sum of the errors in approximating the mutual information of the 2^n channels is upper bounded by O(n 2^n / k). In particular, taking k = n^2, the average approximation error of the 2^n channels at level n is upper bounded by O(1/n). Therefore, at least a fraction 1 - 1/sqrt(n) of the channels are distorted by at most 1/sqrt(n); i.e., except for a negligible fraction of the channels, the error in approximating the mutual information decays to zero. As a result, since the overall complexity of the encoder construction is O(k^2 N), this leads to almost-linear algorithms for encoder construction with arbitrary accuracy in identifying good channels.

IV. EXCHANGE OF LIMITS

In this section, we show that there are admissible schemes such that as k -> infinity, the limit in (6) approaches I(W) for any BMS channel W. We use the definition stated in (5) for the admissibility of the quantization procedure.

Theorem 2. Given a BMS channel W and for large enough k, there exist admissible quantization schemes Q_k such that rho(Q_k, W) is arbitrarily close to I(W).
Proof: Consider the following algorithm. The algorithm starts with a quantized version of W and performs the normal channel splitting transformation followed by quantization according to Algorithm 1 or 2; but once a sub-channel is sufficiently good, in the sense that its Bhattacharyya parameter is less than an appropriately chosen parameter delta, the algorithm replaces the sub-channel with a binary erasure channel which is degraded (polar degradation) with respect to it. (As the operations (1) and (2) applied to an erasure channel also yield erasure channels, no further quantization is needed for the children of this sub-channel.) Since the ratio of the total good indices of BEC(Z(P)) is 1 - Z(P), the total error that we make by replacing P with BEC(Z(P)) is at most Z(P), which in the above algorithm is less than the parameter delta. Now, for a fixed level n, according to Theorem 1, if we make k large enough, the ratio of the quantized sub-channels whose Bhattacharyya value is less than delta approaches its original value (with no quantization); and for these sub-channels, as explained above, the total error made by the algorithm is at most delta. From the polarization theorem, by sending delta to zero, we deduce that as k -> infinity the number of good indices approaches the capacity of the original channel.

V. SIMULATION RESULTS

In order to evaluate the performance of our quantization algorithm, similarly to [3], we compare the performance of the degraded quantized channel with the performance of an upgraded quantized channel. An algorithm similar to Algorithm 2 for upgrading a channel is the following. Consider three neighboring masses at positions (x_{i-1}, x_i, x_{i+1}) with probabilities (p_{i-1}, p_i, p_{i+1}). Let t = (x_i - x_{i-1}) / (x_{i+1} - x_{i-1}). Then, we split the middle mass at x_i between the other two masses, such that the final probabilities will be (p_{i-1} + (1-t) p_i, p_{i+1} + t p_i) at positions (x_{i-1}, x_{i+1}). The greedy channel upgrading procedure is described in Algorithm 3.

Algorithm 3 Splitting Masses Algorithm
1: Start from the list (p_1, x_1), ..., (p_m, x_m).
2: Repeat m - k times:
3:   Find j = argmin{ p_i ( f(x_i) - t f(x_{i+1}) - (1-t) f(x_{i-1}) ) : i != 1, m }.
4:   Add (1-t) p_j to p_{j-1} and t p_j to p_{j+1}.
5:   Delete (p_j, x_j) from the list.
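Algorithm 3 can be sketched in the same style (again illustrative, with our own names). The choice of t makes each split preserve the mean of the triple, and for a concave f the split can only decrease E[f], i.e., upgrade the channel.

```python
import math

def h(x):
    """Binary entropy function."""
    if x <= 0.0 or x >= 1.0:
        return 0.0
    return -x * math.log2(x) - (1.0 - x) * math.log2(1.0 - x)

def mass_split(masses, k, f=h):
    """Algorithm 3: repeatedly split the cheapest interior mass onto
    its two neighbours (channel upgrading)."""
    masses = sorted(masses, key=lambda t: t[1])

    def cost(i):
        # decrease of E[f] caused by splitting mass i; >= 0 for concave f
        p, x = masses[i]
        xl, xr = masses[i - 1][1], masses[i + 1][1]
        t = (x - xl) / (xr - xl)
        return p * (f(x) - t * f(xr) - (1.0 - t) * f(xl))

    while len(masses) > k:
        j = min(range(1, len(masses) - 1), key=cost)
        p, x = masses[j]
        (pl, xl), (pr, xr) = masses[j - 1], masses[j + 1]
        t = (x - xl) / (xr - xl)
        masses[j - 1] = (pl + (1.0 - t) * p, xl)
        masses[j + 1] = (pr + t * p, xr)
        del masses[j]
    return masses
```

Running the degrading (Algorithm 2) and upgrading (Algorithm 3) procedures side by side brackets the true channel, which is how the gap between the "degrade" and "upgrade" rates in the tables below is obtained.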
The same upper bounds on the error of this algorithm can be proved similarly to Section III, with slight modifications.

In the simulations, we measure the maximum achievable rate while keeping the probability of error below 10^-3, by finding the maximum possible number of channels with the smallest Bhattacharyya parameters such that the sum of their Bhattacharyya parameters is upper bounded by 10^-3. The channel is a binary symmetric channel with capacity 0.5. Using Algorithms 2 and 3 for degrading and upgrading the channels with the Bhattacharyya function f(x) = 2 sqrt(x(1-x)), we obtain the following results. It is worth restating that the algorithm runs with complexity O(k^2 N).

TABLE I: Achievable rate with error probability at most 10^-3 vs. maximum number of output symbols k, for block-length N = 2^15.

Table I shows the achievable rates for Algorithms 2 and 3 when the block-length is fixed to N = 2^15 and k ranges from 2 to 64. It can be seen from Table I that the difference between the achievable rates of the upgraded and degraded versions of the scheme is as small as 10^-4 for k = 64. We expect that for a fixed k, as the block-length increases, the difference will also increase (see Table II).

TABLE II: Achievable rate with error probability at most 10^-3 vs. block-length N = 2^n, for k = 16.

However, in our scheme this difference remains small even as N grows arbitrarily large, as predicted by Theorem 2 (see Table III).

TABLE III: Achievable rate with error probability at most 10^-3 vs. block-length N = 2^n, for k = 16.

We see that the difference between the rates achievable on the degraded and upgraded channels stays essentially constant even after 25 levels of polarization for k = 16.

APPENDIX

A. Proof of Lemma 2

Proof: Let us first find an upper bound on the second derivative of the entropy function. Suppose that h(x) = -x log(x) - (1-x) log(1-x). Then, for 0 < x <= 1/2, we have

  -h''(x) = 1 / ( x (1-x) ln(2) ) <= 2 / ( x ln(2) ).    (21)

Using (21), the minimum error can further be upper bounded by

  min_i e_i <= min_i (p_i + p_{i+1}) (Delta_x_i)^2 / ( x_i ln(4) ).    (22)

Now suppose that we have l mass points with x_i <= 1/m and m - l mass points with x_i >= 1/m. For the first l mass points we use the upper bound obtained for Algorithm 1. Hence, for i <= l we have

  min_i e_i <= min_i p_i h(x_i)    (23)
            <= O( log(m) / l^2 ),    (24)

where (23) is due to (15) and (24) can be derived, again, by applying the Cauchy-Schwarz inequality; note that this time

  sum_{i=1}^{l} h(x_i) <= l h(1/m) <= O( log(m) ).    (25)

For the remaining m - l mass points, one can write

  min_i e_i <= min_i (p_i + p_{i+1}) (Delta_x_i)^2 / ( x_i ln(4) )    (26)
            <= min_i m (p_i + p_{i+1}) (Delta_x_i)^2 / ln(4)    (27)
            <= O( m / (m - l)^3 ),    (28)

where (28) follows from Hölder's inequality as follows. Let q_i = p_i + p_{i+1}, so that sum_i q_i <= 2 and sum_i Delta_x_i <= 1/2. Then

  min_i q_i (Delta_x_i)^2 = ( min_i ( q_i (Delta_x_i)^2 )^{1/3} )^3,    (29)

and by applying Hölder's inequality,

  sum_i ( q_i (Delta_x_i)^2 )^{1/3} <= ( sum_i q_i )^{1/3} ( sum_i Delta_x_i )^{2/3} = O(1).    (30)

Therefore,

  min_i e_i <= m ( min_i ( q_i (Delta_x_i)^2 )^{1/3} )^3 <= O( m / (m - l)^3 ).    (31)

Overall, the error made in the first step of the algorithm is

  min_i e_i <= min{ O( log(m) / l^2 ), O( m / (m - l)^3 ) }    (32)
            <= O( log(m) / m^2.5 ).    (33)

Thus, the error generated by running the whole algorithm can be upper bounded by sum_{m=k+1}^{infinity} O( log(m) / m^2.5 ) = O( log(k) / k^1.5 ).

ACKNOWLEDGMENTS

The authors are grateful to Rüdiger Urbanke for helpful discussions. This work was supported in part by a grant of the Swiss National Science Foundation.

REFERENCES

[1] E. Arıkan, "Channel polarization: A method for constructing capacity-achieving codes for symmetric binary-input memoryless channels," IEEE Trans. Inf. Theory, vol. 55, no. 7, pp. 3051-3073, Jul. 2009.
[2] S. B. Korada, "Polar codes for channel and source coding," Ph.D. dissertation, EPFL, Lausanne, Switzerland, Jul. 2009.
[3] I. Tal and A. Vardy, "How to construct polar codes," [Online].
[4] S. H. Hassani, S. B. Korada, and R. Urbanke, "The compound capacity of polar codes," in Proc. Allerton Conf. on Communication, Control, and Computing, Allerton, Sep. 2009.
[5] R. Mori and T. Tanaka, "Performance and construction of polar codes on symmetric binary-input memoryless channels," in Proc. IEEE ISIT, Seoul, South Korea, Jul. 2009.
[6] T. Richardson and R. Urbanke, Modern Coding Theory. Cambridge University Press, 2008.


More information

Lecture 4: Universal Hash Functions/Streaming Cont d

Lecture 4: Universal Hash Functions/Streaming Cont d CSE 5: Desgn and Analyss of Algorthms I Sprng 06 Lecture 4: Unversal Hash Functons/Streamng Cont d Lecturer: Shayan Oves Gharan Aprl 6th Scrbe: Jacob Schreber Dsclamer: These notes have not been subjected

More information

LECTURE :FACTOR ANALYSIS

LECTURE :FACTOR ANALYSIS LCUR :FACOR ANALYSIS Rta Osadchy Based on Lecture Notes by A. Ng Motvaton Dstrbuton coes fro MoG Have suffcent aount of data: >>n denson Use M to ft Mture of Gaussans nu. of tranng ponts If

More information

Lecture 3: Shannon s Theorem

Lecture 3: Shannon s Theorem CSE 533: Error-Correctng Codes (Autumn 006 Lecture 3: Shannon s Theorem October 9, 006 Lecturer: Venkatesan Guruswam Scrbe: Wdad Machmouch 1 Communcaton Model The communcaton model we are usng conssts

More information

CHAPTER 6 CONSTRAINED OPTIMIZATION 1: K-T CONDITIONS

CHAPTER 6 CONSTRAINED OPTIMIZATION 1: K-T CONDITIONS Chapter 6: Constraned Optzaton CHAPER 6 CONSRAINED OPIMIZAION : K- CONDIIONS Introducton We now begn our dscusson of gradent-based constraned optzaton. Recall that n Chapter 3 we looked at gradent-based

More information

AN ANALYSIS OF A FRACTAL KINETICS CURVE OF SAVAGEAU

AN ANALYSIS OF A FRACTAL KINETICS CURVE OF SAVAGEAU AN ANALYI OF A FRACTAL KINETIC CURE OF AAGEAU by John Maloney and Jack Hedel Departent of Matheatcs Unversty of Nebraska at Oaha Oaha, Nebraska 688 Eal addresses: aloney@unoaha.edu, jhedel@unoaha.edu Runnng

More information

04 - Treaps. Dr. Alexander Souza

04 - Treaps. Dr. Alexander Souza Algorths Theory 04 - Treaps Dr. Alexander Souza The dctonary proble Gven: Unverse (U,

More information

Randomness and Computation

Randomness and Computation Randomness and Computaton or, Randomzed Algorthms Mary Cryan School of Informatcs Unversty of Ednburgh RC 208/9) Lecture 0 slde Balls n Bns m balls, n bns, and balls thrown unformly at random nto bns usually

More information

CHAPTER 7 CONSTRAINED OPTIMIZATION 1: THE KARUSH-KUHN-TUCKER CONDITIONS

CHAPTER 7 CONSTRAINED OPTIMIZATION 1: THE KARUSH-KUHN-TUCKER CONDITIONS CHAPER 7 CONSRAINED OPIMIZAION : HE KARUSH-KUHN-UCKER CONDIIONS 7. Introducton We now begn our dscusson of gradent-based constraned optzaton. Recall that n Chapter 3 we looked at gradent-based unconstraned

More information

ECE 534: Elements of Information Theory. Solutions to Midterm Exam (Spring 2006)

ECE 534: Elements of Information Theory. Solutions to Midterm Exam (Spring 2006) ECE 534: Elements of Informaton Theory Solutons to Mdterm Eam (Sprng 6) Problem [ pts.] A dscrete memoryless source has an alphabet of three letters,, =,, 3, wth probabltes.4,.4, and., respectvely. (a)

More information

The Impact of the Earth s Movement through the Space on Measuring the Velocity of Light

The Impact of the Earth s Movement through the Space on Measuring the Velocity of Light Journal of Appled Matheatcs and Physcs, 6, 4, 68-78 Publshed Onlne June 6 n ScRes http://wwwscrporg/journal/jap http://dxdoorg/436/jap646 The Ipact of the Earth s Moeent through the Space on Measurng the

More information

VERIFICATION OF FE MODELS FOR MODEL UPDATING

VERIFICATION OF FE MODELS FOR MODEL UPDATING VERIFICATION OF FE MODELS FOR MODEL UPDATING G. Chen and D. J. Ewns Dynacs Secton, Mechancal Engneerng Departent Iperal College of Scence, Technology and Medcne London SW7 AZ, Unted Kngdo Eal: g.chen@c.ac.uk

More information

THE ARIMOTO-BLAHUT ALGORITHM FOR COMPUTATION OF CHANNEL CAPACITY. William A. Pearlman. References: S. Arimoto - IEEE Trans. Inform. Thy., Jan.

THE ARIMOTO-BLAHUT ALGORITHM FOR COMPUTATION OF CHANNEL CAPACITY. William A. Pearlman. References: S. Arimoto - IEEE Trans. Inform. Thy., Jan. THE ARIMOTO-BLAHUT ALGORITHM FOR COMPUTATION OF CHANNEL CAPACITY Wllam A. Pearlman 2002 References: S. Armoto - IEEE Trans. Inform. Thy., Jan. 1972 R. Blahut - IEEE Trans. Inform. Thy., July 1972 Recall

More information

Universal communication part II: channels with memory

Universal communication part II: channels with memory Unversal councaton part II: channels wth eory Yuval Lontz, Mer Feder Tel Avv Unversty, Dept. of EE-Systes Eal: {yuvall,er@eng.tau.ac.l arxv:202.047v2 [cs.it] 20 Mar 203 Abstract Consder councaton over

More information

Elastic Collisions. Definition: two point masses on which no external forces act collide without losing any energy.

Elastic Collisions. Definition: two point masses on which no external forces act collide without losing any energy. Elastc Collsons Defnton: to pont asses on hch no external forces act collde thout losng any energy v Prerequstes: θ θ collsons n one denson conservaton of oentu and energy occurs frequently n everyday

More information

ANSWERS. Problem 1. and the moment generating function (mgf) by. defined for any real t. Use this to show that E( U) var( U)

ANSWERS. Problem 1. and the moment generating function (mgf) by. defined for any real t. Use this to show that E( U) var( U) Econ 413 Exam 13 H ANSWERS Settet er nndelt 9 deloppgaver, A,B,C, som alle anbefales å telle lkt for å gøre det ltt lettere å stå. Svar er gtt . Unfortunately, there s a prntng error n the hnt of

More information

Several generation methods of multinomial distributed random number Tian Lei 1, a,linxihe 1,b,Zhigang Zhang 1,c

Several generation methods of multinomial distributed random number Tian Lei 1, a,linxihe 1,b,Zhigang Zhang 1,c Internatonal Conference on Appled Scence and Engneerng Innovaton (ASEI 205) Several generaton ethods of ultnoal dstrbuted rando nuber Tan Le, a,lnhe,b,zhgang Zhang,c School of Matheatcs and Physcs, USTB,

More information

PROBABILITY AND STATISTICS Vol. III - Analysis of Variance and Analysis of Covariance - V. Nollau ANALYSIS OF VARIANCE AND ANALYSIS OF COVARIANCE

PROBABILITY AND STATISTICS Vol. III - Analysis of Variance and Analysis of Covariance - V. Nollau ANALYSIS OF VARIANCE AND ANALYSIS OF COVARIANCE ANALYSIS OF VARIANCE AND ANALYSIS OF COVARIANCE V. Nollau Insttute of Matheatcal Stochastcs, Techncal Unversty of Dresden, Gerany Keywords: Analyss of varance, least squares ethod, odels wth fxed effects,

More information

1. Statement of the problem

1. Statement of the problem Volue 14, 010 15 ON THE ITERATIVE SOUTION OF A SYSTEM OF DISCRETE TIMOSHENKO EQUATIONS Peradze J. and Tsklaur Z. I. Javakhshvl Tbls State Uversty,, Uversty St., Tbls 0186, Georga Georgan Techcal Uversty,

More information

FREQUENCY DISTRIBUTIONS Page 1 of The idea of a frequency distribution for sets of observations will be introduced,

FREQUENCY DISTRIBUTIONS Page 1 of The idea of a frequency distribution for sets of observations will be introduced, FREQUENCY DISTRIBUTIONS Page 1 of 6 I. Introducton 1. The dea of a frequency dstrbuton for sets of observatons wll be ntroduced, together wth some of the mechancs for constructng dstrbutons of data. Then

More information

Revision: December 13, E Main Suite D Pullman, WA (509) Voice and Fax

Revision: December 13, E Main Suite D Pullman, WA (509) Voice and Fax .9.1: AC power analyss Reson: Deceber 13, 010 15 E Man Sute D Pullan, WA 99163 (509 334 6306 Voce and Fax Oerew n chapter.9.0, we ntroduced soe basc quanttes relate to delery of power usng snusodal sgnals.

More information

Chapter 7 Channel Capacity and Coding

Chapter 7 Channel Capacity and Coding Wreless Informaton Transmsson System Lab. Chapter 7 Channel Capacty and Codng Insttute of Communcatons Engneerng atonal Sun Yat-sen Unversty Contents 7. Channel models and channel capacty 7.. Channel models

More information

One-Shot Quantum Information Theory I: Entropic Quantities. Nilanjana Datta University of Cambridge,U.K.

One-Shot Quantum Information Theory I: Entropic Quantities. Nilanjana Datta University of Cambridge,U.K. One-Shot Quantu Inforaton Theory I: Entropc Quanttes Nlanjana Datta Unversty of Cabrdge,U.K. In Quantu nforaton theory, ntally one evaluated: optal rates of nfo-processng tasks, e.g., data copresson, transsson

More information

ITERATIVE ESTIMATION PROCEDURE FOR GEOSTATISTICAL REGRESSION AND GEOSTATISTICAL KRIGING

ITERATIVE ESTIMATION PROCEDURE FOR GEOSTATISTICAL REGRESSION AND GEOSTATISTICAL KRIGING ESE 5 ITERATIVE ESTIMATION PROCEDURE FOR GEOSTATISTICAL REGRESSION AND GEOSTATISTICAL KRIGING Gven a geostatstcal regresson odel: k Y () s x () s () s x () s () s, s R wth () unknown () E[ ( s)], s R ()

More information

Generalized Linear Methods

Generalized Linear Methods Generalzed Lnear Methods 1 Introducton In the Ensemble Methods the general dea s that usng a combnaton of several weak learner one could make a better learner. More formally, assume that we have a set

More information

The Feynman path integral

The Feynman path integral The Feynman path ntegral Aprl 3, 205 Hesenberg and Schrödnger pctures The Schrödnger wave functon places the tme dependence of a physcal system n the state, ψ, t, where the state s a vector n Hlbert space

More information

Stanford University CS359G: Graph Partitioning and Expanders Handout 4 Luca Trevisan January 13, 2011

Stanford University CS359G: Graph Partitioning and Expanders Handout 4 Luca Trevisan January 13, 2011 Stanford Unversty CS359G: Graph Parttonng and Expanders Handout 4 Luca Trevsan January 3, 0 Lecture 4 In whch we prove the dffcult drecton of Cheeger s nequalty. As n the past lectures, consder an undrected

More information

CALCULUS CLASSROOM CAPSULES

CALCULUS CLASSROOM CAPSULES CALCULUS CLASSROOM CAPSULES SESSION S86 Dr. Sham Alfred Rartan Valley Communty College salfred@rartanval.edu 38th AMATYC Annual Conference Jacksonvlle, Florda November 8-, 202 2 Calculus Classroom Capsules

More information

Source-Channel-Sink Some questions

Source-Channel-Sink Some questions Source-Channel-Snk Soe questons Source Channel Snk Aount of Inforaton avalable Source Entro Generall nos and a be te varng Introduces error and lts the rate at whch data can be transferred ow uch nforaton

More information

Chapter 1. Theory of Gravitation

Chapter 1. Theory of Gravitation Chapter 1 Theory of Gravtaton In ths chapter a theory of gravtaton n flat space-te s studed whch was consdered n several artcles by the author. Let us assue a flat space-te etrc. Denote by x the co-ordnates

More information

Lecture Notes on Linear Regression

Lecture Notes on Linear Regression Lecture Notes on Lnear Regresson Feng L fl@sdueducn Shandong Unversty, Chna Lnear Regresson Problem In regresson problem, we am at predct a contnuous target value gven an nput feature vector We assume

More information

Lecture 5 Decoding Binary BCH Codes

Lecture 5 Decoding Binary BCH Codes Lecture 5 Decodng Bnary BCH Codes In ths class, we wll ntroduce dfferent methods for decodng BCH codes 51 Decodng the [15, 7, 5] 2 -BCH Code Consder the [15, 7, 5] 2 -code C we ntroduced n the last lecture

More information

Final Exam Solutions, 1998

Final Exam Solutions, 1998 58.439 Fnal Exa Solutons, 1998 roble 1 art a: Equlbru eans that the therodynac potental of a consttuent s the sae everywhere n a syste. An exaple s the Nernst potental. If the potental across a ebrane

More information

Week 5: Neural Networks

Week 5: Neural Networks Week 5: Neural Networks Instructor: Sergey Levne Neural Networks Summary In the prevous lecture, we saw how we can construct neural networks by extendng logstc regresson. Neural networks consst of multple

More information

Chapter 7 Channel Capacity and Coding

Chapter 7 Channel Capacity and Coding Chapter 7 Channel Capacty and Codng Contents 7. Channel models and channel capacty 7.. Channel models Bnary symmetrc channel Dscrete memoryless channels Dscrete-nput, contnuous-output channel Waveform

More information

find (x): given element x, return the canonical element of the set containing x;

find (x): given element x, return the canonical element of the set containing x; COS 43 Sprng, 009 Dsjont Set Unon Problem: Mantan a collecton of dsjont sets. Two operatons: fnd the set contanng a gven element; unte two sets nto one (destructvely). Approach: Canoncal element method:

More information

Near Optimal Online Algorithms and Fast Approximation Algorithms for Resource Allocation Problems

Near Optimal Online Algorithms and Fast Approximation Algorithms for Resource Allocation Problems Near Optal Onlne Algorths and Fast Approxaton Algorths for Resource Allocaton Probles Nkhl R Devanur Kaal Jan Balasubraanan Svan Chrstopher A Wlkens Abstract We present algorths for a class of resource

More information

Determination of the Confidence Level of PSD Estimation with Given D.O.F. Based on WELCH Algorithm

Determination of the Confidence Level of PSD Estimation with Given D.O.F. Based on WELCH Algorithm Internatonal Conference on Inforaton Technology and Manageent Innovaton (ICITMI 05) Deternaton of the Confdence Level of PSD Estaton wth Gven D.O.F. Based on WELCH Algorth Xue-wang Zhu, *, S-jan Zhang

More information

Difference Equations

Difference Equations Dfference Equatons c Jan Vrbk 1 Bascs Suppose a sequence of numbers, say a 0,a 1,a,a 3,... s defned by a certan general relatonshp between, say, three consecutve values of the sequence, e.g. a + +3a +1

More information

Lecture 14 (03/27/18). Channels. Decoding. Preview of the Capacity Theorem.

Lecture 14 (03/27/18). Channels. Decoding. Preview of the Capacity Theorem. Lecture 14 (03/27/18). Channels. Decodng. Prevew of the Capacty Theorem. A. Barg The concept of a communcaton channel n nformaton theory s an abstracton for transmttng dgtal (and analog) nformaton from

More information

Entropy Coding. A complete entropy codec, which is an encoder/decoder. pair, consists of the process of encoding or

Entropy Coding. A complete entropy codec, which is an encoder/decoder. pair, consists of the process of encoding or Sgnal Compresson Sgnal Compresson Entropy Codng Entropy codng s also known as zero-error codng, data compresson or lossless compresson. Entropy codng s wdely used n vrtually all popular nternatonal multmeda

More information

On the Finite-Length Performance of Universal Coding for k-ary Memoryless Sources

On the Finite-Length Performance of Universal Coding for k-ary Memoryless Sources Forty-ghth Annual Allerton Conference Allerton House, UIUC, Illnos, USA Septeber 9 - October, 00 On the Fnte-Length Perforance of Unversal Codng for -ary Meoryless Sources Ahad Bera and Faraarz Fer School

More information

Assortment Optimization under MNL

Assortment Optimization under MNL Assortment Optmzaton under MNL Haotan Song Aprl 30, 2017 1 Introducton The assortment optmzaton problem ams to fnd the revenue-maxmzng assortment of products to offer when the prces of products are fxed.

More information

MAXIMUM A POSTERIORI TRANSDUCTION

MAXIMUM A POSTERIORI TRANSDUCTION MAXIMUM A POSTERIORI TRANSDUCTION LI-WEI WANG, JU-FU FENG School of Mathematcal Scences, Peng Unversty, Bejng, 0087, Chna Center for Informaton Scences, Peng Unversty, Bejng, 0087, Chna E-MIAL: {wanglw,

More information

Modified parallel multisplitting iterative methods for non-hermitian positive definite systems

Modified parallel multisplitting iterative methods for non-hermitian positive definite systems Adv Coput ath DOI 0.007/s0444-0-9262-8 odfed parallel ultsplttng teratve ethods for non-hertan postve defnte systes Chuan-Long Wang Guo-Yan eng Xue-Rong Yong Receved: Septeber 20 / Accepted: 4 Noveber

More information

Expected Value and Variance

Expected Value and Variance MATH 38 Expected Value and Varance Dr. Neal, WKU We now shall dscuss how to fnd the average and standard devaton of a random varable X. Expected Value Defnton. The expected value (or average value, or

More information

NP-Completeness : Proofs

NP-Completeness : Proofs NP-Completeness : Proofs Proof Methods A method to show a decson problem Π NP-complete s as follows. (1) Show Π NP. (2) Choose an NP-complete problem Π. (3) Show Π Π. A method to show an optmzaton problem

More information

FUZZY MODEL FOR FORECASTING INTEREST RATE OF BANK INDONESIA CERTIFICATE

FUZZY MODEL FOR FORECASTING INTEREST RATE OF BANK INDONESIA CERTIFICATE he 3 rd Internatonal Conference on Quanttatve ethods ISBN 979-989 Used n Econoc and Busness. June 6-8, 00 FUZZY ODEL FOR FORECASING INERES RAE OF BANK INDONESIA CERIFICAE Agus aan Abad, Subanar, Wdodo

More information

EGR 544 Communication Theory

EGR 544 Communication Theory EGR 544 Communcaton Theory. Informaton Sources Z. Alyazcoglu Electrcal and Computer Engneerng Department Cal Poly Pomona Introducton Informaton Source x n Informaton sources Analog sources Dscrete sources

More information

3.1 Expectation of Functions of Several Random Variables. )' be a k-dimensional discrete or continuous random vector, with joint PMF p (, E X E X1 E X

3.1 Expectation of Functions of Several Random Variables. )' be a k-dimensional discrete or continuous random vector, with joint PMF p (, E X E X1 E X Statstcs 1: Probablty Theory II 37 3 EPECTATION OF SEVERAL RANDOM VARIABLES As n Probablty Theory I, the nterest n most stuatons les not on the actual dstrbuton of a random vector, but rather on a number

More information

MASSACHUSETTS INSTITUTE OF TECHNOLOGY 6.265/15.070J Fall 2013 Lecture 12 10/21/2013. Martingale Concentration Inequalities and Applications

MASSACHUSETTS INSTITUTE OF TECHNOLOGY 6.265/15.070J Fall 2013 Lecture 12 10/21/2013. Martingale Concentration Inequalities and Applications MASSACHUSETTS INSTITUTE OF TECHNOLOGY 6.65/15.070J Fall 013 Lecture 1 10/1/013 Martngale Concentraton Inequaltes and Applcatons Content. 1. Exponental concentraton for martngales wth bounded ncrements.

More information

Pulse Coded Modulation

Pulse Coded Modulation Pulse Coded Modulaton PCM (Pulse Coded Modulaton) s a voce codng technque defned by the ITU-T G.711 standard and t s used n dgtal telephony to encode the voce sgnal. The frst step n the analog to dgtal

More information

Numerical Heat and Mass Transfer

Numerical Heat and Mass Transfer Master degree n Mechancal Engneerng Numercal Heat and Mass Transfer 06-Fnte-Dfference Method (One-dmensonal, steady state heat conducton) Fausto Arpno f.arpno@uncas.t Introducton Why we use models and

More information

A Radon-Nikodym Theorem for Completely Positive Maps

A Radon-Nikodym Theorem for Completely Positive Maps A Radon-Nody Theore for Copletely Postve Maps V P Belavn School of Matheatcal Scences, Unversty of Nottngha, Nottngha NG7 RD E-al: vpb@aths.nott.ac.u and P Staszews Insttute of Physcs, Ncholas Coperncus

More information

Block-error performance of root-ldpc codes. Author(s): Andriyanova, Iryna; Boutros, Joseph J.; Biglieri, Ezio; Declercq, David

Block-error performance of root-ldpc codes. Author(s): Andriyanova, Iryna; Boutros, Joseph J.; Biglieri, Ezio; Declercq, David Research Collecton Conference Paper Bloc-error perforance of root-ldpc codes Authors: Andryanova, Iryna; Boutros, Joseph J.; Bgler, Ezo; Declercq, Davd Publcaton Date: 00 Peranent Ln: https://do.org/0.399/ethz-a-00600396

More information

International Journal of Mathematical Archive-9(3), 2018, Available online through ISSN

International Journal of Mathematical Archive-9(3), 2018, Available online through   ISSN Internatonal Journal of Matheatcal Archve-9(3), 208, 20-24 Avalable onlne through www.ja.nfo ISSN 2229 5046 CONSTRUCTION OF BALANCED INCOMPLETE BLOCK DESIGNS T. SHEKAR GOUD, JAGAN MOHAN RAO M AND N.CH.

More information

Feature Selection: Part 1

Feature Selection: Part 1 CSE 546: Machne Learnng Lecture 5 Feature Selecton: Part 1 Instructor: Sham Kakade 1 Regresson n the hgh dmensonal settng How do we learn when the number of features d s greater than the sample sze n?

More information

On the Calderón-Zygmund lemma for Sobolev functions

On the Calderón-Zygmund lemma for Sobolev functions arxv:0810.5029v1 [ath.ca] 28 Oct 2008 On the Calderón-Zygund lea for Sobolev functons Pascal Auscher october 16, 2008 Abstract We correct an naccuracy n the proof of a result n [Aus1]. 2000 MSC: 42B20,

More information

Discrete Memoryless Channels

Discrete Memoryless Channels Dscrete Meorless Channels Source Channel Snk Aount of Inforaton avalable Source Entro Generall nos, dstorted and a be te varng ow uch nforaton s receved? ow uch s lost? Introduces error and lts the rate

More information

Centroid Uncertainty Bounds for Interval Type-2 Fuzzy Sets: Forward and Inverse Problems

Centroid Uncertainty Bounds for Interval Type-2 Fuzzy Sets: Forward and Inverse Problems Centrod Uncertanty Bounds for Interval Type-2 Fuzzy Sets: Forward and Inverse Probles Jerry M. Mendel and Hongwe Wu Sgnal and Iage Processng Insttute Departent of Electrcal Engneerng Unversty of Southern

More information

P exp(tx) = 1 + t 2k M 2k. k N

P exp(tx) = 1 + t 2k M 2k. k N 1. Subgaussan tals Defnton. Say that a random varable X has a subgaussan dstrbuton wth scale factor σ< f P exp(tx) exp(σ 2 t 2 /2) for all real t. For example, f X s dstrbuted N(,σ 2 ) then t s subgaussan.

More information