On the Construction of Polar Codes
Ramtin Pedarsani, School of Computer and Communication Systems, EPFL, Lausanne, Switzerland.
S. Hamed Hassani, School of Computer and Communication Systems, EPFL, Lausanne, Switzerland.
Ido Tal, Information Theory and Applications, UCSD, La Jolla, CA, USA.
Emre Telatar, School of Computer and Communication Systems, EPFL, Lausanne, Switzerland.

Abstract—We consider the problem of efficiently constructing polar codes over BMS channels. The complexity of designing polar codes via an exact evaluation of the polarized channels, to find which ones are "good", appears to be exponential in the block length. In [3], Tal and Vardy show that if instead the evaluation is performed approximately, the construction has only linear complexity. In this paper, we follow this approach and present a framework where the algorithms of [3] and new related algorithms can be analyzed for complexity and accuracy. We provide numerical and analytical results on the efficiency of such algorithms; in particular, we show that one can find all the good channels (except a vanishing fraction) with complexity that is almost linear in the block-length (up to a polylogarithmic factor).

I. INTRODUCTION

A. Polar Codes

Polar coding, introduced by Arıkan in [1], is an encoding/decoding scheme that provably achieves the capacity of the class of binary memoryless symmetric (BMS) channels. Let W be a BMS channel. Given the rate R < I(W), polar coding is based on choosing a set of 2^n R rows of the matrix G_n = [1 0; 1 1]^{⊗n} to form a 2^n R × 2^n matrix which is used as the generator matrix in the encoding procedure. The way this set is chosen depends on the channel W and uses a phenomenon called channel polarization: consider an infinite binary tree, place the underlying channel W on the root node, and continue recursively as follows. Having the channel P : {0,1} → Y on a node of the tree, define the channels P⁻ : {0,1} → Y² and P⁺ : {0,1} → {0,1} × Y² by

  P⁻(y₁, y₂ | x₁) = Σ_{x₂ ∈ {0,1}} (1/2) P(y₁ | x₁ ⊕ x₂) P(y₂ | x₂),   (1)
  P⁺(y₁, y₂, x₁ | x₂) = (1/2) P(y₁ | x₁ ⊕ x₂) P(y₂ | x₂),   (2)

and place P⁻ and P⁺ as the left and right children of this node.
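As an illustrative sketch (not part of the paper), the two transforms in (1) and (2) can be computed directly for a small finite-alphabet channel stored as transition probabilities; the function and variable names below are our own.

```python
# Hedged sketch: channel splitting per (1) and (2) for a binary-input
# channel given as a dict mapping (y, x) -> P(y|x). Names are illustrative.
from collections import defaultdict

def polar_minus(P):
    """P-: {0,1} -> Y^2, as in equation (1)."""
    ys = {y for (y, _) in P}
    Pm = defaultdict(float)
    for y1 in ys:
        for y2 in ys:
            for x1 in (0, 1):
                Pm[((y1, y2), x1)] = 0.5 * sum(
                    P[(y1, x1 ^ x2)] * P[(y2, x2)] for x2 in (0, 1))
    return dict(Pm)

def polar_plus(P):
    """P+: {0,1} -> {0,1} x Y^2, as in equation (2)."""
    ys = {y for (y, _) in P}
    Pp = {}
    for y1 in ys:
        for y2 in ys:
            for x1 in (0, 1):
                for x2 in (0, 1):
                    Pp[((y1, y2, x1), x2)] = 0.5 * P[(y1, x1 ^ x2)] * P[(y2, x2)]
    return Pp

# Example: BSC(0.1) as the root channel W; its children form level 1 of the tree.
W = {(0, 0): 0.9, (1, 0): 0.1, (0, 1): 0.1, (1, 1): 0.9}
Wm, Wp = polar_minus(W), polar_plus(W)
```

Note how the output alphabet squares (or worse) at each level; this is exactly the blow-up that motivates the quantization studied in this paper.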
As a result, at level n there are N = 2^n channels, which we denote from left to right by W_N^(1) to W_N^(N). In [1], Arıkan proved that as n → ∞, a fraction approaching I(W) of the channels at level n have capacity close to 1 (call them the noiseless channels) and a fraction approaching 1 − I(W) have capacity close to 0 (call them the completely noisy channels). Given the rate R, the indices of the matrix G_n are chosen as follows: given a parameter ε > 0, choose the subset of channels {W_N^(i)} with mutual information more than 1 − ε, and choose the rows of G_n with the same indices as these channels. For example, if the channel W_N^(j) is chosen, then the j-th row of G_n is selected, up to the bit-reversal permutation. (There are extensions of polar codes, given in [2], which use different kinds of matrices.) In the following, given n, we call the set of indices of the NR channels with the most mutual information the set of good indices.

We can equivalently say that as n → ∞ the fraction of channels with Bhattacharyya constant near 0 approaches I(W), and the fraction of channels with Bhattacharyya constant near 1 approaches 1 − I(W). The Bhattacharyya constant of a channel P : {0,1} → Y is given by

  Z(P) = Σ_{y ∈ Y} √(P(y|0) P(y|1)).   (3)

Therefore, we can alternatively call the set of indices of the NR channels with the least Bhattacharyya parameters the set of good indices. It is also worth mentioning that the sum of the Bhattacharyya parameters of the chosen channels is an upper bound on the block error probability of polar codes under successive cancellation decoding.

B. Problem Formulation

Designing a polar code is equivalent to finding the set of good indices. The main difficulty in this task is that the cardinality of the output alphabet of the channels at level n of the binary tree is doubly exponential in n, i.e., exponential in the block-length. Computing the exact transition probabilities of these channels therefore seems intractable, and we need efficient methods to approximate these channels. In [1], a Monte-Carlo method is suggested for estimating the Bhattacharyya parameters.
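As a quick hedged illustration (names and channel encoding are ours, not the paper's), the Bhattacharyya constant in (3) can be evaluated for any binary-input channel given by its transition probabilities:

```python
# Illustrative sketch of equation (3); not from the paper.
from math import sqrt

def bhattacharyya(P):
    """Z(P) = sum over y of sqrt(P(y|0) P(y|1)), with P a dict (y, x) -> P(y|x)."""
    ys = {y for (y, _) in P}
    return sum(sqrt(P.get((y, 0), 0.0) * P.get((y, 1), 0.0)) for y in ys)

# Sanity examples: Z(BSC(p)) = 2*sqrt(p*(1-p)) and Z(BEC(e)) = e.
bsc = {(0, 0): 0.9, (1, 0): 0.1, (0, 1): 0.1, (1, 1): 0.9}
bec = {(0, 0): 0.5, ('?', 0): 0.5, (1, 1): 0.5, ('?', 1): 0.5}
```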
Another method in this regard is quantization [3], [4], [5], [6, Appendix B]: approximating the given channel by a channel that has fewer output symbols. More precisely, given a number k, the task is to come up with efficient methods to replace channels that have more than k outputs with "close" channels that have at most k outputs. A few comments are in order. The term "close" above depends on the definition of the quantization error, which can differ with the context. In our problem, in its most general setting, we can define the quantization error as the difference between the true set of good indices and the approximate set of good indices. However, analyzing this type of error seems difficult, and in the sequel we consider types of errors that are easier to analyze. Thus, as a compromise, we will intuitively think of two channels as being close if they are close with respect to some given metric; typically mutual information, but sometimes probability of error. Moreover, we require that this closeness is in the right direction: the approximated channel must be a pessimistic version of the true channel, so that the approximated set of good channels is a subset of the true set. Intuitively, we expect that as k increases, the overall error due to quantization decreases; the main art in designing quantization methods is to achieve a small error while using relatively small values of k. However, an important property for any quantization algorithm is that, as k grows large, the approximate set of good indices obtained with the algorithm (run with fixed k) approaches the true set of good indices. We give a precise mathematical definition in the sequel.

Taking the above factors into account, a suitable formulation of the quantization problem is to find procedures that replace each channel P at each level of the binary tree with another symmetric channel P̃ whose number of output symbols is limited to k, such that, firstly, the set of good indices obtained with this procedure is a subset of the true good indices obtained from channel polarization (i.e., the channel P̃ is "polar degraded" with respect to P), and, secondly, the ratio of these good indices is maximized. More precisely, we start from the channel W at the root node of the binary tree, quantize it to W̃, and obtain W̃⁻ and W̃⁺ according to (1) and (2). Then we quantize the two new channels and continue the procedure to complete the tree. To state things mathematically, let Q be a quantization procedure that assigns to each channel P a binary symmetric channel P̃ = Q(P) such that the output alphabet of P̃ is limited to a constant k. We call Q admissible if for any i and n

  I(W̃_N^(i)) ≤ I(W_N^(i)).   (4)
Given an admissible procedure Q and a BMS channel W, let

  ρ(Q, W) = lim_{n→∞} |{i : I(W̃_N^(i)) > 1/2}| / N.   (5)

(Instead of 1/2 in (5), we can use any number in (0, 1).) So the quantization problem is: given a number k and a channel W, how can we find admissible procedures Q such that ρ(Q, W) is maximized and is close to the capacity of W? Can we reach the capacity of W as k goes to infinity? Are such schemes universal, in the sense that they work well for all BMS channels? It is worth mentioning that if we first let k tend to infinity and then n to infinity, the limit is indeed the capacity; but we are addressing a different question here, namely, we first let n tend to infinity and then k. In Section IV, we prove that such schemes indeed exist.

II. ALGORITHMS FOR QUANTIZATION

A. Preliminaries

Any discrete BMS channel can be represented as a collection of binary symmetric channels (BSCs). The binary input is given to one of these BSCs at random, such that the i-th BSC is chosen with probability p_i. The output of this BSC, together with its crossover probability x_i, is considered as the output of the channel. Therefore, a discrete BMS channel W can be completely described by a random variable χ ∈ [0, 1/2]. The pdf of χ is of the form

  P_χ(x) = Σ_{i=1}^{m} p_i δ(x − x_i),   (6)

where Σ_{i=1}^{m} p_i = 1 and 0 ≤ x_i ≤ 1/2. Note that Z(W) and I(W) are the expectations of the functions f(x) = 2√(x(1−x)) and g(x) = 1 + x log₂(x) + (1−x) log₂(1−x) under the distribution P_χ, respectively. Therefore, in the quantization problem we want to replace the mass distribution P_χ with another mass distribution P_χ̃ such that the number of output symbols of χ̃ is at most k and the channel W̃ is polar degraded with respect to W. We know that the following two operations imply polar degradation:

- Stochastically degrading the channel.
- Replacing the channel with a BEC having the same Bhattacharyya parameter.

Furthermore, note that stochastic dominance of the random variable χ̃ over χ implies that W̃ is stochastically degraded with respect to W (but the reverse is not true). In the following, we propose different algorithms based on different methods of polar degradation of the channel.
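To make the representation (6) concrete, here is a hedged sketch (our own names and helper functions, not the paper's) that stores a BMS channel as a list of (p_i, x_i) masses, evaluates I(W) and Z(W) as the expectations of g and f, and applies the transforms (1)–(2) directly in this representation; the plus-channel decomposition into two BSC clusters is a standard fact we assume here.

```python
# Hedged sketch of the (p_i, x_i) mixture-of-BSCs representation in (6).
from math import log2, sqrt

def g(x):
    """Capacity 1 + x log2(x) + (1-x) log2(1-x) of BSC(x)."""
    if x <= 0.0 or x >= 1.0:
        return 1.0
    return 1.0 + x * log2(x) + (1 - x) * log2(1 - x)

def f(x):
    """Bhattacharyya parameter 2 sqrt(x(1-x)) of BSC(x)."""
    return 2.0 * sqrt(x * (1.0 - x))

def I(masses):
    return sum(p * g(x) for p, x in masses)

def Z(masses):
    return sum(p * f(x) for p, x in masses)

def polar_transform(masses):
    """Children of a BSC mixture under (1)-(2), again as BSC mixtures."""
    minus, plus = [], []
    for p1, a in masses:
        for p2, b in masses:
            # (1): the minus channel of BSC(a), BSC(b) is BSC(a(1-b) + b(1-a)).
            minus.append((p1 * p2, a * (1 - b) + b * (1 - a)))
            # (2): the plus channel splits into two BSCs according to
            # whether the two independent noise flips agree.
            q = a * b + (1 - a) * (1 - b)
            if q > 0:
                plus.append((p1 * p2 * q, min(a * b, (1 - a) * (1 - b)) / q))
            if q < 1:
                plus.append((p1 * p2 * (1 - q),
                             min(a * (1 - b), b * (1 - a)) / (1 - q)))
    return minus, plus
```

A useful numerical check is the chain-rule identity I(W⁻) + I(W⁺) = 2 I(W), which holds exactly in this representation; note also that the number of masses grows quadratically per level, which is why the merging algorithms below are needed.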
The first algorithm, a naive one called the mass transportation algorithm, is based on the stochastic dominance of the random variable χ̃; the second one, which outperforms the first, is called the greedy mass merging algorithm. For both algorithms, the quantized channel is stochastically degraded with respect to the original one.

B. Greedy Mass Transportation Algorithm

In the most general form of this algorithm, we basically look at the problem as a mass transport problem. In fact, we have non-negative masses p_i at locations x_i, i = 1, …, m, with x_1 < … < x_m. What is required is to move the masses, by moves to the right only, so as to concentrate them on k < m locations, while trying to minimize Σ_i p_i d_i, where d_i = x_{i+1} − x_i is the amount the i-th mass has moved. Later, we will show that this method is not optimal, but it is useful in the theoretical analysis of the algorithms that follow. Note that Algorithm 1 is based on the stochastic dominance of the random variable χ̃ over χ. Furthermore, in general, we can let d_i = f(x_{i+1}) − f(x_i) for an arbitrary increasing function f.
Algorithm 1 Mass Transportation Algorithm
1: Start from the list (p_1, x_1), …, (p_m, x_m).
2: Repeat m − k times
3:   Find j = argmin{p_i d_i : 1 ≤ i ≤ m − 1}
4:   Add p_j to p_{j+1} (i.e., move p_j to x_{j+1})
5:   Delete (p_j, x_j) from the list.

C. Mass Merging Algorithm

The second algorithm merges the masses. Two masses p_i and p_{i+1} at positions x_i and x_{i+1} are merged into one mass p_i + p_{i+1} at position x̄_i = (p_i x_i + p_{i+1} x_{i+1}) / (p_i + p_{i+1}). This algorithm is based on stochastic degradation of the channel, but the random variable χ̃ is not stochastically dominated by χ. The greedy algorithm for merging the masses is the following:

Algorithm 2 Merging Masses Algorithm
1: Start from the list (p_1, x_1), …, (p_m, x_m).
2: Repeat m − k times
3:   Find j = argmin{p_i (f(x̄_i) − f(x_i)) − p_{i+1} (f(x_{i+1}) − f(x̄_i)) : 1 ≤ i ≤ m − 1}, where x̄_i = (p_i x_i + p_{i+1} x_{i+1}) / (p_i + p_{i+1})
4:   Replace the two masses (p_j, x_j) and (p_{j+1}, x_{j+1}) with the single mass (p_j + p_{j+1}, x̄_j).

Note that in practice the function f can be any increasing concave function, for example the entropy function or the Bhattacharyya function. In fact, since the algorithm is greedy and suboptimal, it is hard to investigate explicitly how changing the function f affects the total error of the algorithm in the end (i.e., how far W̃ is from W).

III. BOUNDS ON THE APPROXIMATION LOSS

In this section, we provide bounds on the maximum approximation loss incurred by the algorithms. We define the approximation loss to be the difference between the expectation of the function f under the true distribution P_χ and under the approximated distribution P_χ̃. Note that the kind of error analyzed in this section is different from the one defined in Section I-B; the connection between the approximation loss and the quantization error is made clear in Theorem 1. For convenience, we will simply use the word "error" instead of "approximation loss" from now on. We first find an upper bound on the error made by Algorithms 1 and 2, and then use it to provide bounds on the error made while performing operations (1) and (2).

Lemma 1. The maximum error made by Algorithms 1 and 2 is upper bounded by O(1/k).

Proof: First, we derive an upper bound on the error of Algorithms 1 and 2 in each iteration, and therefore a bound on the error of the whole process.
Let us consder Algorth. The proble can be reduced to the followng optzaton proble: such that e = ax p,x n p d ) 7) p =, d, 8) where d = fx + ) fx ), and f/) f0) = s assued w.l.o.g. We prove the lea by Cauchy-Schwarz nequalty. n ) ) n p d = p d = n p d 9) ow by applyng Cauchy-Schwarz we have ) / ) / p d p d 0) = = Snce the su of ters p d s less than, the nu of the ters wll certanly be less than. Therefore, e = n ) p d. ) For Algorth, achevng the sae bound as Algorth s trval. Denote e ) the error ade n Algorth and e ) the error ade n Algorth. Then, = e ) = p f x ) fx )) p + fx + ) f x )) ) p f x ) fx )) 3) p fx + ) fx )) = e ). 4) Consequently, the error generated by runnng the whole algorth can be upper bounded by =+ whch s O ). What s stated n Lea s a loose upper bound on the error of Algorth. To acheve better bounds, we upper bound the error ade n each teraton of the Algorth as the followng: e = p f x ) fx )) p + fx + ) f x )) 5) p + p x f x ) p + x f x + ) p + p + p + p + p 6) = p p + p + p + x f x ) f x + )) 7) p + p + x f c ), 8) 4 where x = x + x and 6) s due to concavty of functon f. Furtherore, 8) s by the ean value theore, where x c x +. If f x) s bounded for x 0, ), then we can prove that n e ) slarly to Lea. Therefore the error 3 of the whole algorth would be O ). Unfortunately, ths s not the case for ether of entropy functon or Bhattacharyya functon. However, we can stll acheve a better upper bound for the error of Algorth.
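The greedy merging step of Algorithm 2 analyzed above can be written as a minimal runnable sketch (our own naming; a plain quadratic scan rather than an optimized priority-queue implementation):

```python
# Hedged sketch of Algorithm 2 (greedy mass merging). Input: (p_i, x_i)
# pairs; f an increasing concave function; k the target number of masses.
def merge_masses(masses, k, f):
    masses = sorted(masses, key=lambda m: m[1])
    while len(masses) > k:
        best_j, best_err = None, float('inf')
        for i in range(len(masses) - 1):
            (p1, x1), (p2, x2) = masses[i], masses[i + 1]
            xbar = (p1 * x1 + p2 * x2) / (p1 + p2)
            # Per-step approximation loss (the Jensen gap of the pair),
            # equal to p1*(f(xbar)-f(x1)) - p2*(f(x2)-f(xbar)).
            err = (p1 + p2) * f(xbar) - p1 * f(x1) - p2 * f(x2)
            if err < best_err:
                best_j, best_err = i, err
        (p1, x1), (p2, x2) = masses[best_j], masses[best_j + 1]
        masses[best_j:best_j + 2] = [(p1 + p2, (p1 * x1 + p2 * x2) / (p1 + p2))]
    return masses

# Example with the Bhattacharyya function f(x) = 2 sqrt(x(1-x)):
quantized = merge_masses([(0.2, 0.05), (0.3, 0.1), (0.3, 0.25), (0.2, 0.4)],
                         2, lambda x: 2 * (x * (1 - x)) ** 0.5)
```

Merging to the weighted mean preserves total probability and the mean of χ, and for concave f the per-step loss is non-negative, matching the degradation direction required in Section I-B.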
Lemma 2. The maximum error made by Algorithm 2 for the entropy function h(x) can be upper bounded by O((log k / k)^{1.5}).

Proof: See the Appendix. ∎

We see that the error is improved, roughly by a factor of √k (up to logarithmic factors), in comparison with Algorithm 1. We now use the result of Lemma 1 to provide bounds on the total error made in estimating the mutual information of a channel after n levels of operations (1) and (2).

Theorem 1. Assume W is a BMS channel, and suppose that using Algorithm 1 or 2 we quantize W to a channel W̃. Taking k = n² is sufficient for an approximation error that decays to zero.

Proof: First notice that for any two BMS channels W and V, under the polarization operations (1) and (2) the following holds:

  (I(W⁻) − I(V⁻)) + (I(W⁺) − I(V⁺)) = 2 (I(W) − I(V)).   (19)

Replacing V with W̃ in (19) and using the result of Lemma 1, we conclude that after n levels of polarization, the sum of the errors in approximating the mutual information of the 2^n channels is upper bounded by O(2^n n / k). In particular, taking k = n², the average approximation error of the 2^n channels at level n is upper bounded by O(1/n). Therefore, at least a fraction 1 − 1/√n of the channels are distorted by at most 1/√n; i.e., except for a negligible fraction of the channels, the error in approximating the mutual information decays to zero. ∎

As a result, since the overall complexity of the encoder construction is O(N · poly(k)), with k = n² = (log N)² this leads to almost-linear-complexity algorithms for encoder construction with arbitrary accuracy in identifying the good channels.

IV. EXCHANGE OF LIMITS

In this section, we show that there are admissible schemes such that as k → ∞, the limit in (5) approaches I(W) for any BMS channel W.

Theorem 2. Given a BMS channel W and for k large enough, there exist admissible quantization schemes Q such that ρ(Q, W) is arbitrarily close to I(W).
Proof: Consider the following algorithm. The algorithm starts with a quantized version of W and performs the normal channel splitting transformation followed by quantization according to Algorithm 1 or 2; but once a sub-channel is sufficiently good, in the sense that its Bhattacharyya parameter is less than an appropriately chosen parameter δ, the algorithm replaces the sub-channel with a binary erasure channel which is degraded (polar degradation) with respect to it. (As the operations (1) and (2) applied to an erasure channel again yield erasure channels, no further quantization is needed for the children of this sub-channel.) Since the ratio of the total good indices of BEC(Z(P)) is 1 − Z(P), the total error that we make by replacing P with BEC(Z(P)) is at most Z(P), which in the above algorithm is less than the parameter δ. Now, for a fixed level n, according to Theorem 1, if we make k large enough, the ratio of the quantized sub-channels whose Bhattacharyya value is less than δ approaches its original value (with no quantization), and for these sub-channels, as explained above, the total error made by the algorithm is at most δ. Now, from the polarization theorem and by sending δ to zero, we deduce that as k → ∞ the number of good indices approaches the capacity of the original channel. ∎

V. SIMULATION RESULTS

In order to evaluate the performance of our quantization algorithm, similarly to [3], we compare the performance of the degraded quantized channel with the performance of an upgraded quantized channel. An algorithm similar to Algorithm 2 for upgrading a channel is the following. Consider three neighboring masses at positions (x_{i−1}, x_i, x_{i+1}) with probabilities (p_{i−1}, p_i, p_{i+1}). Let t = (x_i − x_{i−1}) / (x_{i+1} − x_{i−1}). Then we split the middle mass at x_i between the other two masses, so that the final probabilities are (p_{i−1} + (1 − t) p_i, p_{i+1} + t p_i) at positions (x_{i−1}, x_{i+1}). The greedy channel upgrading procedure is described in Algorithm 3.

Algorithm 3 Splitting Masses Algorithm
1: Start from the list (p_1, x_1), …, (p_m, x_m).
2: Repeat m − k times
3:   Find j = argmin{p_i (f(x_i) − t f(x_{i+1}) − (1 − t) f(x_{i−1})) : 2 ≤ i ≤ m − 1}
4:   Add (1 − t) p_j to p_{j−1} and t p_j to p_{j+1}.
5:   Delete (p_j, x_j) from the list.
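Algorithm 3 can likewise be sketched in a few lines (again with our own naming, and an unoptimized linear scan per iteration; we assume 2 ≤ k < m and distinct sorted positions):

```python
# Hedged sketch of Algorithm 3 (greedy mass splitting / channel upgrading):
# the middle mass of the cheapest triple is redistributed to its neighbors.
def split_masses(masses, k, f):
    masses = sorted(masses, key=lambda m: m[1])
    while len(masses) > k:
        best_j, best_err = None, float('inf')
        for i in range(1, len(masses) - 1):
            (pl, xl), (p, x), (pr, xr) = masses[i - 1], masses[i], masses[i + 1]
            t = (x - xl) / (xr - xl)
            # Upgrading loss: f(x) minus its chord interpolation,
            # non-negative for concave f.
            err = p * (f(x) - t * f(xr) - (1 - t) * f(xl))
            if err < best_err:
                best_j, best_err = i, err
        (pl, xl), (p, x), (pr, xr) = (masses[best_j - 1], masses[best_j],
                                      masses[best_j + 1])
        t = (x - xl) / (xr - xl)
        masses[best_j - 1] = (pl + (1 - t) * p, xl)
        masses[best_j + 1] = (pr + t * p, xr)
        del masses[best_j]
    return masses
```

Splitting with weight t to the right neighbor preserves total probability and the mean of χ, and the two end masses are never removed, which keeps the support inside the original range.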
The same upper bounds on the error of this algorithm can be proved similarly to Section III, with slight modifications. In the simulations, we measure the maximum achievable rate while keeping the probability of error less than 10⁻³, by finding the maximum possible number of channels with the smallest Bhattacharyya parameters such that the sum of their Bhattacharyya parameters is upper bounded by 10⁻³. The channel is a binary symmetric channel with capacity 0.5. We use Algorithms 2 and 3 for degrading and upgrading the channels, with the Bhattacharyya function f(x) = 2√(x(1−x)).

TABLE I: Achievable rate with error probability at most 10⁻³ vs. maximum number of output symbols k, for block-length N = 2^15.

It is worth restating that the algorithm runs with complexity O(N · poly(k)). Table I shows the achievable rates for Algorithms 2 and 3 when the block-length is fixed to N = 2^15 and k ranges from 2 to 64. It can be seen from Table I that the difference between the achievable rates of the upgraded and degraded versions of the scheme is as small as 10⁻⁴ for k = 64. We expect that, for a fixed k, this difference increases as the block-length increases (see Table II).

TABLE II: Achievable rate with error probability at most 10⁻³ vs. block-length N = 2^n, for k = 16.

However, in our scheme this difference remains small even as N grows arbitrarily large, as predicted by Theorem 2 (see Table III).

TABLE III: Achievable rate with error probability at most 10⁻³ vs. block-length N = 2^n, for k = 16.

We see that the difference between the rates achievable on the degraded and the upgraded channel remains essentially constant, at about 10⁻³ for k = 16, even as the number of polarization levels grows.

APPENDIX

A. Proof of Lemma 2

Proof: Let us first find an upper bound on the second derivative of the entropy function h(x) = −x log₂(x) − (1−x) log₂(1−x). We have

  h″(x) = −1 / (x(1−x) ln 2).   (20)

For 0 ≤ x ≤ 1/2 we have

  |h″(x)| ≤ 2 / (x ln 2).   (21)

Now we are ready to prove the lemma. Using (21) in (18), the minimum error can be further upper bounded by

  min_i e_i ≤ min_i (p_i + p_{i+1}) (x_{i+1} − x_i)² / (x_i ln 4).   (22)

Now suppose that we have l mass points with x_i ≤ θ and m − l mass points with x_i > θ, for a threshold θ ∈ (0, 1/2) to be chosen. For the first l mass points we use the upper bound obtained for Algorithm 1. Hence, for 1 ≤ i ≤ l,

  min_i e_i ≤ min_i p_i (h(x_{i+1}) − h(x_i))   (23)
            ≤ h(θ) / l²,   (24)

where (23) is due to (14) and (24) can be derived by again applying the Cauchy-Schwarz inequality; note that this time the telescoping sum satisfies

  Σ_{i=1}^{l} (h(x_{i+1}) − h(x_i)) ≤ h(θ).   (25)

For the m − l remaining mass points one can write

  min_i e_i ≤ min_i (p_i + p_{i+1}) (x_{i+1} − x_i)² / (x_i ln 4)   (26)
            ≤ min_i (p_i + p_{i+1}) (x_{i+1} − x_i)² / (θ ln 4)   (27)
            = O(1 / (θ (m − l)³)),   (28)

where (28) is due to Hölder's inequality, as follows. Let q_i = p_i + p_{i+1}, so that Σ_i q_i ≤ 2 and Σ_i (x_{i+1} − x_i) ≤ 1/2. Then

  min_i q_i (x_{i+1} − x_i)² = (min_i (q_i (x_{i+1} − x_i)²)^{1/3})³ ≤ ((1/(m−l)) Σ_i q_i^{1/3} (x_{i+1} − x_i)^{2/3})³,   (29)

and by applying Hölder's inequality,

  Σ_i q_i^{1/3} (x_{i+1} − x_i)^{2/3} ≤ (Σ_i q_i)^{1/3} (Σ_i (x_{i+1} − x_i))^{2/3} ≤ 2^{1/3} (1/2)^{2/3}.   (30)

Therefore,

  min_i e_i ≤ O(1 / (θ (m − l)³)).   (31)

Overall, the error made in the first step of the algorithm is

  min_i e_i ≤ min{ O(h(θ)/l²), O(1/(θ (m − l)³)) }   (32)
            ≤ O((log m)^{1.5} / m^{2.5}),   (33)

since we must have l ≥ m/2 or m − l ≥ m/2, and the choice θ = √(log m / m) balances the two bounds.
Thus, the error generated by running the whole algorithm can be upper bounded by

  Σ_{m=k+1}^{∞} (log m)^{1.5} / m^{2.5} = O((log k / k)^{1.5}). ∎

ACKNOWLEDGMENTS

The authors are grateful to Rüdiger Urbanke for helpful discussions. This work was supported in part by a grant of the Swiss National Science Foundation.

REFERENCES

[1] E. Arıkan, "Channel polarization: A method for constructing capacity-achieving codes for symmetric binary-input memoryless channels," IEEE Trans. Inf. Theory, vol. 55, no. 7, pp. 3051-3073, Jul. 2009.
[2] S. B. Korada, "Polar codes for channel and source coding," Ph.D. dissertation, EPFL, Lausanne, Switzerland, Jul. 2009.
[3] I. Tal and A. Vardy, "How to construct polar codes," talk given at the Information Theory Workshop, Dublin, Aug. 2010.
[4] S. H. Hassani, S. B. Korada, and R. Urbanke, "The compound capacity of polar codes," in Proc. Allerton Conference on Communication, Control and Computing, Allerton, Sep. 2009.
[5] R. Mori and T. Tanaka, "Performance and construction of polar codes on symmetric binary-input memoryless channels," in Proc. IEEE ISIT, Seoul, South Korea, Jul. 2009.
[6] T. Richardson and R. Urbanke, Modern Coding Theory. Cambridge University Press, 2008.
More information04 - Treaps. Dr. Alexander Souza
Algorths Theory 04 - Treaps Dr. Alexander Souza The dctonary proble Gven: Unverse (U,
More informationOne-Shot Quantum Information Theory I: Entropic Quantities. Nilanjana Datta University of Cambridge,U.K.
One-Shot Quantu Inforaton Theory I: Entropc Quanttes Nlanjana Datta Unversty of Cabrdge,U.K. In Quantu nforaton theory, ntally one evaluated: optal rates of nfo-processng tasks, e.g., data copresson, transsson
More informationValuated Binary Tree: A New Approach in Study of Integers
Internatonal Journal of Scentfc Innovatve Mathematcal Research (IJSIMR) Volume 4, Issue 3, March 6, PP 63-67 ISS 347-37X (Prnt) & ISS 347-34 (Onlne) wwwarcournalsorg Valuated Bnary Tree: A ew Approach
More informationRandomness and Computation
Randomness and Computaton or, Randomzed Algorthms Mary Cryan School of Informatcs Unversty of Ednburgh RC 208/9) Lecture 0 slde Balls n Bns m balls, n bns, and balls thrown unformly at random nto bns usually
More informationfind (x): given element x, return the canonical element of the set containing x;
COS 43 Sprng, 009 Dsjont Set Unon Problem: Mantan a collecton of dsjont sets. Two operatons: fnd the set contanng a gven element; unte two sets nto one (destructvely). Approach: Canoncal element method:
More informationThe Impact of the Earth s Movement through the Space on Measuring the Velocity of Light
Journal of Appled Matheatcs and Physcs, 6, 4, 68-78 Publshed Onlne June 6 n ScRes http://wwwscrporg/journal/jap http://dxdoorg/436/jap646 The Ipact of the Earth s Moeent through the Space on Measurng the
More informationOn the Finite-Length Performance of Universal Coding for k-ary Memoryless Sources
Forty-ghth Annual Allerton Conference Allerton House, UIUC, Illnos, USA Septeber 9 - October, 00 On the Fnte-Length Perforance of Unversal Codng for -ary Meoryless Sources Ahad Bera and Faraarz Fer School
More informationSource-Channel-Sink Some questions
Source-Channel-Snk Soe questons Source Channel Snk Aount of Inforaton avalable Source Entro Generall nos and a be te varng Introduces error and lts the rate at whch data can be transferred ow uch nforaton
More informationChapter 7 Channel Capacity and Coding
Wreless Informaton Transmsson System Lab. Chapter 7 Channel Capacty and Codng Insttute of Communcatons Engneerng atonal Sun Yat-sen Unversty Contents 7. Channel models and channel capacty 7.. Channel models
More informationLecture 3. Ax x i a i. i i
18.409 The Behavor of Algorthms n Practce 2/14/2 Lecturer: Dan Spelman Lecture 3 Scrbe: Arvnd Sankar 1 Largest sngular value In order to bound the condton number, we need an upper bound on the largest
More informationANSWERS. Problem 1. and the moment generating function (mgf) by. defined for any real t. Use this to show that E( U) var( U)
Econ 413 Exam 13 H ANSWERS Settet er nndelt 9 deloppgaver, A,B,C, som alle anbefales å telle lkt for å gøre det ltt lettere å stå. Svar er gtt . Unfortunately, there s a prntng error n the hnt of
More informationSeveral generation methods of multinomial distributed random number Tian Lei 1, a,linxihe 1,b,Zhigang Zhang 1,c
Internatonal Conference on Appled Scence and Engneerng Innovaton (ASEI 205) Several generaton ethods of ultnoal dstrbuted rando nuber Tan Le, a,lnhe,b,zhgang Zhang,c School of Matheatcs and Physcs, USTB,
More informationElastic Collisions. Definition: two point masses on which no external forces act collide without losing any energy.
Elastc Collsons Defnton: to pont asses on hch no external forces act collde thout losng any energy v Prerequstes: θ θ collsons n one denson conservaton of oentu and energy occurs frequently n everyday
More informationA Radon-Nikodym Theorem for Completely Positive Maps
A Radon-Nody Theore for Copletely Postve Maps V P Belavn School of Matheatcal Scences, Unversty of Nottngha, Nottngha NG7 RD E-al: vpb@aths.nott.ac.u and P Staszews Insttute of Physcs, Ncholas Coperncus
More information1. Statement of the problem
Volue 14, 010 15 ON THE ITERATIVE SOUTION OF A SYSTEM OF DISCRETE TIMOSHENKO EQUATIONS Peradze J. and Tsklaur Z. I. Javakhshvl Tbls State Uversty,, Uversty St., Tbls 0186, Georga Georgan Techcal Uversty,
More informationLecture Notes on Linear Regression
Lecture Notes on Lnear Regresson Feng L fl@sdueducn Shandong Unversty, Chna Lnear Regresson Problem In regresson problem, we am at predct a contnuous target value gven an nput feature vector We assume
More informationPROBABILITY AND STATISTICS Vol. III - Analysis of Variance and Analysis of Covariance - V. Nollau ANALYSIS OF VARIANCE AND ANALYSIS OF COVARIANCE
ANALYSIS OF VARIANCE AND ANALYSIS OF COVARIANCE V. Nollau Insttute of Matheatcal Stochastcs, Techncal Unversty of Dresden, Gerany Keywords: Analyss of varance, least squares ethod, odels wth fxed effects,
More information3.1 Expectation of Functions of Several Random Variables. )' be a k-dimensional discrete or continuous random vector, with joint PMF p (, E X E X1 E X
Statstcs 1: Probablty Theory II 37 3 EPECTATION OF SEVERAL RANDOM VARIABLES As n Probablty Theory I, the nterest n most stuatons les not on the actual dstrbuton of a random vector, but rather on a number
More informationBlock-error performance of root-ldpc codes. Author(s): Andriyanova, Iryna; Boutros, Joseph J.; Biglieri, Ezio; Declercq, David
Research Collecton Conference Paper Bloc-error perforance of root-ldpc codes Authors: Andryanova, Iryna; Boutros, Joseph J.; Bgler, Ezo; Declercq, Davd Publcaton Date: 00 Peranent Ln: https://do.org/0.399/ethz-a-00600396
More informationEntropy Coding. A complete entropy codec, which is an encoder/decoder. pair, consists of the process of encoding or
Sgnal Compresson Sgnal Compresson Entropy Codng Entropy codng s also known as zero-error codng, data compresson or lossless compresson. Entropy codng s wdely used n vrtually all popular nternatonal multmeda
More informationCentroid Uncertainty Bounds for Interval Type-2 Fuzzy Sets: Forward and Inverse Problems
Centrod Uncertanty Bounds for Interval Type-2 Fuzzy Sets: Forward and Inverse Probles Jerry M. Mendel and Hongwe Wu Sgnal and Iage Processng Insttute Departent of Electrcal Engneerng Unversty of Southern
More informationRevision: December 13, E Main Suite D Pullman, WA (509) Voice and Fax
.9.1: AC power analyss Reson: Deceber 13, 010 15 E Man Sute D Pullan, WA 99163 (509 334 6306 Voce and Fax Oerew n chapter.9.0, we ntroduced soe basc quanttes relate to delery of power usng snusodal sgnals.
More informationChapter 1. Theory of Gravitation
Chapter 1 Theory of Gravtaton In ths chapter a theory of gravtaton n flat space-te s studed whch was consdered n several artcles by the author. Let us assue a flat space-te etrc. Denote by x the co-ordnates
More informationStanford University CS359G: Graph Partitioning and Expanders Handout 4 Luca Trevisan January 13, 2011
Stanford Unversty CS359G: Graph Parttonng and Expanders Handout 4 Luca Trevsan January 3, 0 Lecture 4 In whch we prove the dffcult drecton of Cheeger s nequalty. As n the past lectures, consder an undrected
More informationFinal Exam Solutions, 1998
58.439 Fnal Exa Solutons, 1998 roble 1 art a: Equlbru eans that the therodynac potental of a consttuent s the sae everywhere n a syste. An exaple s the Nernst potental. If the potental across a ebrane
More informationMultiplicative Functions and Möbius Inversion Formula
Multplcatve Functons and Möbus Inverson Forula Zvezdelna Stanova Bereley Math Crcle Drector Mlls College and UC Bereley 1. Multplcatve Functons. Overvew Defnton 1. A functon f : N C s sad to be arthetc.
More informationLinear Approximation with Regularization and Moving Least Squares
Lnear Approxmaton wth Regularzaton and Movng Least Squares Igor Grešovn May 007 Revson 4.6 (Revson : March 004). 5 4 3 0.5 3 3.5 4 Contents: Lnear Fttng...4. Weghted Least Squares n Functon Approxmaton...
More informationFREQUENCY DISTRIBUTIONS Page 1 of The idea of a frequency distribution for sets of observations will be introduced,
FREQUENCY DISTRIBUTIONS Page 1 of 6 I. Introducton 1. The dea of a frequency dstrbuton for sets of observatons wll be ntroduced, together wth some of the mechancs for constructng dstrbutons of data. Then
More informationScattering by a perfectly conducting infinite cylinder
Scatterng by a perfectly conductng nfnte cylnder Reeber that ths s the full soluton everywhere. We are actually nterested n the scatterng n the far feld lt. We agan use the asyptotc relatonshp exp exp
More informationFUZZY MODEL FOR FORECASTING INTEREST RATE OF BANK INDONESIA CERTIFICATE
he 3 rd Internatonal Conference on Quanttatve ethods ISBN 979-989 Used n Econoc and Busness. June 6-8, 00 FUZZY ODEL FOR FORECASING INERES RAE OF BANK INDONESIA CERIFICAE Agus aan Abad, Subanar, Wdodo
More informationITERATIVE ESTIMATION PROCEDURE FOR GEOSTATISTICAL REGRESSION AND GEOSTATISTICAL KRIGING
ESE 5 ITERATIVE ESTIMATION PROCEDURE FOR GEOSTATISTICAL REGRESSION AND GEOSTATISTICAL KRIGING Gven a geostatstcal regresson odel: k Y () s x () s () s x () s () s, s R wth () unknown () E[ ( s)], s R ()
More informationDiscrete Memoryless Channels
Dscrete Meorless Channels Source Channel Snk Aount of Inforaton avalable Source Entro Generall nos, dstorted and a be te varng ow uch nforaton s receved? ow uch s lost? Introduces error and lts the rate
More informationPulse Coded Modulation
Pulse Coded Modulaton PCM (Pulse Coded Modulaton) s a voce codng technque defned by the ITU-T G.711 standard and t s used n dgtal telephony to encode the voce sgnal. The frst step n the analog to dgtal
More informationThe Feynman path integral
The Feynman path ntegral Aprl 3, 205 Hesenberg and Schrödnger pctures The Schrödnger wave functon places the tme dependence of a physcal system n the state, ψ, t, where the state s a vector n Hlbert space
More informationDifference Equations
Dfference Equatons c Jan Vrbk 1 Bascs Suppose a sequence of numbers, say a 0,a 1,a,a 3,... s defned by a certan general relatonshp between, say, three consecutve values of the sequence, e.g. a + +3a +1
More informationFoundations of Arithmetic
Foundatons of Arthmetc Notaton We shall denote the sum and product of numbers n the usual notaton as a 2 + a 2 + a 3 + + a = a, a 1 a 2 a 3 a = a The notaton a b means a dvdes b,.e. ac = b where c s an
More informationFeature Selection: Part 1
CSE 546: Machne Learnng Lecture 5 Feature Selecton: Part 1 Instructor: Sham Kakade 1 Regresson n the hgh dmensonal settng How do we learn when the number of features d s greater than the sample sze n?
More informationModified parallel multisplitting iterative methods for non-hermitian positive definite systems
Adv Coput ath DOI 0.007/s0444-0-9262-8 odfed parallel ultsplttng teratve ethods for non-hertan postve defnte systes Chuan-Long Wang Guo-Yan eng Xue-Rong Yong Receved: Septeber 20 / Accepted: 4 Noveber
More informationEPR Paradox and the Physical Meaning of an Experiment in Quantum Mechanics. Vesselin C. Noninski
EPR Paradox and the Physcal Meanng of an Experment n Quantum Mechancs Vesseln C Nonnsk vesselnnonnsk@verzonnet Abstract It s shown that there s one purely determnstc outcome when measurement s made on
More informationNear Optimal Online Algorithms and Fast Approximation Algorithms for Resource Allocation Problems
Near Optal Onlne Algorths and Fast Approxaton Algorths for Resource Allocaton Probles Nkhl R Devanur Kaal Jan Balasubraanan Svan Chrstopher A Wlkens Abstract We present algorths for a class of resource
More informationDetermination of the Confidence Level of PSD Estimation with Given D.O.F. Based on WELCH Algorithm
Internatonal Conference on Inforaton Technology and Manageent Innovaton (ICITMI 05) Deternaton of the Confdence Level of PSD Estaton wth Gven D.O.F. Based on WELCH Algorth Xue-wang Zhu, *, S-jan Zhang
More informationEstimation: Part 2. Chapter GREG estimation
Chapter 9 Estmaton: Part 2 9. GREG estmaton In Chapter 8, we have seen that the regresson estmator s an effcent estmator when there s a lnear relatonshp between y and x. In ths chapter, we generalzed the
More informationNumerical Heat and Mass Transfer
Master degree n Mechancal Engneerng Numercal Heat and Mass Transfer 06-Fnte-Dfference Method (One-dmensonal, steady state heat conducton) Fausto Arpno f.arpno@uncas.t Introducton Why we use models and
More informationNP-Completeness : Proofs
NP-Completeness : Proofs Proof Methods A method to show a decson problem Π NP-complete s as follows. (1) Show Π NP. (2) Choose an NP-complete problem Π. (3) Show Π Π. A method to show an optmzaton problem
More informationImplicit scaling of linear least squares problems
RAL-TR-98-07 1 Iplct scalng of lnear least squares probles by J. K. Red Abstract We consder the soluton of weghted lnear least squares probles by Householder transforatons wth plct scalng, that s, wth
More informationLecture 14 (03/27/18). Channels. Decoding. Preview of the Capacity Theorem.
Lecture 14 (03/27/18). Channels. Decodng. Prevew of the Capacty Theorem. A. Barg The concept of a communcaton channel n nformaton theory s an abstracton for transmttng dgtal (and analog) nformaton from
More informationChapter 7 Channel Capacity and Coding
Chapter 7 Channel Capacty and Codng Contents 7. Channel models and channel capacty 7.. Channel models Bnary symmetrc channel Dscrete memoryless channels Dscrete-nput, contnuous-output channel Waveform
More informationFall 2012 Analysis of Experimental Measurements B. Eisenstein/rev. S. Errede. ) with a symmetric Pcovariance matrix of the y( x ) measurements V
Fall Analyss o Experental Measureents B Esensten/rev S Errede General Least Squares wth General Constrants: Suppose we have easureents y( x ( y( x, y( x,, y( x wth a syetrc covarance atrx o the y( x easureents
More informationPerceptual Organization (IV)
Perceptual Organzaton IV Introducton to Coputatonal and Bologcal Vson CS 0--56 Coputer Scence Departent BGU Ohad Ben-Shahar Segentaton Segentaton as parttonng Gven: I - a set of age pxels H a regon hoogenety
More information