Lossy Compression via Sparse Linear Regression: Computationally Efficient Encoding and Decoding

Ramji Venkataramanan (Dept. of Engineering, Univ. of Cambridge), Tuhin Sarkar (Dept. of EE, IIT Bombay), Sekhar Tatikonda (Dept. of EE, Yale University)

Abstract—We propose computationally efficient encoders and decoders for lossy compression using a Sparse Regression Code. Codewords are structured linear combinations of columns of a design matrix. The proposed encoding algorithm sequentially chooses columns of the design matrix to successively approximate the source sequence. It is shown to achieve the optimal distortion-rate function for i.i.d. Gaussian sources with squared-error distortion. For a given rate, the parameters of the design matrix can be varied to trade off distortion performance with encoding complexity. An example of such a trade-off is: computational resource (space or time) per source sample of O((n/log n)^2) and probability of excess distortion decaying exponentially in n/log n, where n is the block length. The Sparse Regression Code is robust in the following sense: for any ergodic source, the proposed encoder achieves the optimal distortion-rate function of an i.i.d. Gaussian source with the same variance. Simulations show that the encoder has very good empirical performance, especially at low and moderate rates.

I. INTRODUCTION

Developing efficient codes for lossy compression at rates approaching the Shannon rate-distortion limit has long been an important goal of information theory. Efficiency is measured in terms of the storage complexity of the codebook as well as the computational complexity of encoding and decoding. The Shannon-style i.i.d. random codebook achieves the optimal distortion-rate trade-off, but its storage and computational complexities grow exponentially with the block length.

In this paper, we study a class of codes called Sparse Superposition or Sparse Regression Codes (SPARCs) for lossy compression with a squared-error distortion criterion. We present computationally efficient encoding and decoding algorithms that attain the optimal rate-distortion function for i.i.d. Gaussian sources. Sparse Regression codes were recently introduced by Barron and Joseph for communication over the AWGN channel and shown to approach the Shannon capacity with feasible decoding [1], [2], [3]. The codebook construction is based on the statistical framework of high-dimensional linear regression. The codewords are sparse linear combinations of columns of an n × N design matrix or dictionary, where n is the block length and N is a low-order polynomial in n. This structure enables the design of computationally efficient compression encoders based on sparse approximation ideas (e.g., [4], [5]). We propose one such encoder and analyze its performance. SPARCs for lossy compression were first considered in [6], where some preliminary results were presented. The rate-distortion and error-exponent performance of these codes under minimum-distance (optimal) encoding was characterized in [7].

The main contributions of this paper are the following.

- We propose a computationally efficient encoding algorithm for SPARCs which achieves the optimal distortion-rate function for i.i.d. Gaussian sources with growing block length. The algorithm is based on successive approximation of the source sequence by columns of the design matrix. The parameters of the design matrix can be chosen to trade off performance with complexity. For example, one choice of parameters discussed in Section IV yields an O(n^2)-size design matrix, per-sample encoding complexity proportional to (n/log n)^2, and probability of excess distortion decaying exponentially in n/log n. To the best of our knowledge, this is the fastest known rate of decay among lossy compression codes with feasible encoding and decoding.

- With the proposed encoder, SPARCs share the following robustness property of random i.i.d. Gaussian codebooks [8], [9]: for a given rate R, any ergodic source with variance σ² can be compressed with distortion close to the i.i.d. Gaussian distortion-rate function σ² e^{−2R}.

We briefly review related work in developing computationally efficient codes for lossy compression. It was shown in [10] that the optimal rate-distortion function of memoryless sources can be approached by concatenating optimal codes over sub-blocks of length much smaller than the overall block length. Nearest-neighbor encoding is used over each of these sub-blocks, which is feasible due to their short length. For this scheme, it is not known how rapidly the probability of excess distortion decays to zero with the overall block length. For sources with finite alphabet, various coding techniques have been proposed recently to approach the rate-distortion bound with computationally feasible encoding and decoding, e.g., [11], [12], [13]. The rates of decay of the probability of excess distortion for these schemes vary, but in general they are slower than exponential in the block length. The survey paper by Gray and Neuhoff [14] contains an extensive discussion of various compression techniques and their performance versus complexity trade-offs. These include scalar quantization with entropy coding, tree-structured vector quantization, multi-stage vector quantization, and trellis-coded quantization. Though these techniques have good empirical performance, they have not been proven to attain the optimal rate-distortion trade-off with computationally feasible encoders and decoders.

Fig. 1. A is an n × ML matrix and β is an ML × 1 vector. The positions of the non-zeros in β correspond to the gray columns of A, which combine to form the codeword Aβ.

Notation: Upper-case letters are used to denote random variables, lower-case for their realizations, and bold-face letters for random vectors and matrices. All vectors have length n. The source sequence is denoted by S = (S_1, …, S_n), and the reconstruction sequence by Ŝ = (Ŝ_1, …, Ŝ_n). ‖X‖ is the l2-norm of vector X, and |X| = ‖X‖/√n is the normalized version. N(µ, σ²) denotes the Gaussian distribution with mean µ and variance σ². ⟨a, b⟩ denotes the inner product Σ_i a_i b_i. All logarithms are with base e.

II. THE SPARSE REGRESSION CODEBOOK

A sparse regression code (SPARC) is defined in terms of a design matrix A of dimension n × ML whose entries are i.i.d. N(0, 1). Here n is the block length, and M and L are integers whose values will be specified shortly in terms of n and the rate R. As shown in Figure 1, one can think of the matrix A as composed of L sections with M columns each. Each codeword is a linear combination of L columns, with one column from each section. Formally, a codeword can be expressed as Aβ, where β is an ML × 1 vector (β_1, …, β_{ML}) with the following property: there is exactly one non-zero β_i for i ∈ {1, …, M}, one non-zero β_i for i ∈ {M+1, …, 2M}, and so forth. The non-zero value of β in section i is set to c_i, where the value of c_i will be specified in the next section. Denote the set of all β's that satisfy this property by B_{M,L}. Since there are M columns in each of the L sections, the total number of codewords is M^L. To obtain a compression rate of R nats/sample, we need

M^L = e^{nR}, or L log M = nR. (1)

Encoder: This is defined by a mapping g : R^n → B_{M,L}. Given the source sequence S and target distortion D, the encoder attempts to find a β̂ ∈ B_{M,L} such that |S − Aβ̂|² ≤ D. If such a codeword is not found, an error is declared.

Decoder: This is a mapping h : B_{M,L} → R^n. On receiving β̂ ∈ B_{M,L} from the encoder, the decoder produces the reconstruction h(β̂) = Aβ̂.

Storage Complexity: The storage complexity of the dictionary is proportional to ML. There are several choices for the pair (M, L) which satisfy (1). For example, L = 1 and M = e^{nR} recovers the Shannon-style random codebook, in which the number of columns in A is e^{nR}, i.e., the storage complexity is exponential in n. For our constructions, we choose M to be a low-order polynomial in L. Then L is Θ(n/log n), and the number of columns ML in the dictionary is a low-order polynomial in n. This reduction in storage complexity can be harnessed to develop computationally efficient encoders for the SPARC. The results in Section IV show that this choice of (M, L) offers a good trade-off between complexity and error performance.

III. COMPUTATIONALLY EFFICIENT ENCODER

The source sequence S is generated by an ergodic source with mean 0 and variance σ². The SPARC is defined by the n × ML design matrix A. The jth column of A is denoted A_j, 1 ≤ j ≤ ML. The non-zero value of β in section i is chosen to be

c_i = √( (2Rσ²/L) (1 − 2R/L)^{i−1} ), i = 1, …, L. (2)

Given source sequence S, the encoder determines β̂ ∈ B_{M,L} according to the following algorithm.

Step 0: Set R_0 = S.

Step i, i = 1, …, L: Pick

m_i = argmax_{j : (i−1)M+1 ≤ j ≤ iM} ⟨R_{i−1}, A_j⟩, (3)

and set

R_i = R_{i−1} − c_i A_{m_i}, (4)

where c_i is given by (2).

Step L+1: The codeword β̂ has non-zero values in positions m_i, 1 ≤ i ≤ L. The value of the non-zero in section i is given by c_i.

In summary, the algorithm sequentially chooses the m_i's, section by section, to minimize the residue at each step.

A. Computational Complexity

Each of the L stages of the encoding algorithm involves computing M inner products and finding the maximum among them. Therefore the number of operations per source sample is proportional to ML. If we choose M = L^b for some b > 0, (1) implies L = Θ(n/log n), and the number of operations per source sample is of the order (n/log n)^{b+1}. We note that due to the sequential nature of the algorithm, only one section of the design matrix needs to be kept in memory at each step. When we have several source sequences to be encoded in succession, the encoder can have a pipelined architecture which requires computational space (memory) of the order nM and has constant computation time per source symbol.

The code structure automatically yields low decoding complexity. The encoder can represent the chosen β with L binary sequences of log₂ M bits each. The ith binary sequence indicates the position of the non-zero element in section i. Hence the decoder complexity corresponding to locating the

non-zero elements using the received bits is L log₂ M, which is O(1) per source sample. Reconstructing the codeword then requires L additions per source sample.

IV. MAIN RESULT

Theorem 1: Consider a length-n source sequence S generated by an ergodic source having mean 0 and variance σ². Let δ_0, δ_1, δ_2 be any positive constants such that

Δ ≜ δ_0 + 5R(δ_1 + δ_2) < 0.5. (5)

Let A be an n × ML design matrix with i.i.d. N(0, 1) entries, with (M, L) satisfying (1). On the SPARC defined by A, the proposed encoding algorithm produces a codeword Aβ̂ that satisfies the following for sufficiently large M, L:

P( |S − Aβ̂|² > σ² e^{−2R} (1 + Δe^R)² ) < p_0 + p_1 + p_2, (6)

where

p_0 = P( | |S|/σ − 1 | > δ_0 ), p_1 = 2ML exp(−nδ_1²/8), p_2 = ( 8 log M / M^{2δ_2} )^L. (7)

Proof: A sketch of the proof is given in Section V. The full version can be found in [15].

Corollary 1: If the source sequence S is generated according to an i.i.d. N(0, σ²) distribution, then p_0 < 2 exp(−3nδ_0²/4), and the SPARC with the proposed encoder attains the optimal distortion-rate function σ² e^{−2R}, with probability of excess distortion decaying exponentially in L.

Remarks:

1) The probability measure in (6) is over the space of source sequences and design matrices.

2) Ergodicity of the source is only needed to ensure that p_0 → 0 as n → ∞.

3) For an i.i.d. N(0, σ²) source, Corollary 1 says that with the choice M = L^b (b > 0) we can achieve a distortion within any constant gap of the optimal distortion-rate function σ² e^{−2R}, with the probability of excess distortion falling exponentially in L = Θ(n/log n).

4) For a given rate R, Theorem 1 guarantees that the proposed encoder achieves a squared-error distortion close to the Gaussian D*(R) for all ergodic sources with variance σ². Lapidoth [8] also shows that for any ergodic source of a given variance, one cannot attain a squared-error distortion smaller than this using an i.i.d. Gaussian codebook with minimum-distance encoding.

Gap from D*(R): To achieve distortions close to the Gaussian D*(R) with high probability, we need p_0, p_1, p_2 to all go to 0. In particular, for p_2 → 0 with growing L, from (7) we require that M^{2δ_2} > 8 log M, or

δ_2 > log log M / (2 log M) + log 8 / (2 log M). (8)

To approach D*(R), note that we need L, n, M to all go to ∞ while satisfying (1): L, n → ∞ for the probability of error in (7) to be small, and M → ∞ in order to allow δ_2 to be small according to (8). When L, n, M are sufficiently large, (8) dictates how small δ_2 can be: the distortion is approximately a factor log log M / log M higher than the optimal value D*(R) = σ² e^{−2R}.

Performance versus Complexity Trade-off: Recall that the encoding complexity is O(ML) operations per source sample. The performance of the encoder improves as M, L increase, both in terms of the gap from the optimal distortion (8) and the probability of error (7). Choosing M = L^b for b > 0 yields L = Θ(n/log n), and the resulting encoding complexity is Θ((n/log n)^{b+1}); the gap from D*(R) governed by (8) is approximately log log L / (b log L). At the other extreme, the Shannon codebook has L = 1, M = e^{nR}. Here the SPARC consists of only one section, and the proposed algorithm essentially performs minimum-distance encoding. The encoding complexity is O(e^{nR}) (exponential). From (8), δ_2 is approximately log(nR)/(2nR). The gap from D*(R) is now dominated by δ_0 and δ_1, whose typical values for the i.i.d. Gaussian case are Θ(1/√n) (from (7) and Corollary 1).

Successive Refinement Interpretation: The proposed encoder may be interpreted in terms of successive refinement [16]. We can think of each section of the design matrix A as a codebook of rate R/L. For step i, i = 1, …, L, the residue R_{i−1} acts as the source sequence, and the algorithm attempts to find the column within Section i that minimizes the distortion. The distortion after step i is the variance of the new residue R_i. The minimum mean-squared distortion with a Gaussian codebook [8] at rate R/L is

D_i* = σ_{i−1}² exp(−2R/L) ≈ σ_{i−1}² (1 − 2R/L) (9)

for R/L ≪ 1, where σ_{i−1}² is the variance of the residue R_{i−1}. The typical value of the distortion in Section i is close to D_i*, since the algorithm is equivalent to maximum-likelihood encoding within each section (see (10) in Section V). Since the rate R/L is infinitesimal, the deviations from D_i* in each section can be quite large. However, since the number of sections L is very large, the final distortion |R_L|² is close to the typical value σ² e^{−2R}, with excess distortion probability that falls exponentially in L. We emphasize that the successive refinement interpretation is only true for the proposed encoder, and is not an inherent feature of the sparse regression codebook.

Figure 2 shows the performance of the proposed encoder on a unit-variance i.i.d. Gaussian source. The dimension of A is n × ML with M = L^b. The curves show the average distortion obtained at various rates for b = 2 and b = 3. The value of L was increased with rate in order to keep the total computational

complexity (proportional to L^{b+1}) similar across different rates. (Recall that the block length n is determined by (1).) The reduction in distortion obtained by increasing b from 2 to 3 comes at the expense of an increase in computational complexity by a factor of L. Simulations were also performed for a unit-variance Laplacian source. The resulting distortion-rate curve was virtually identical to Figure 2, which is consistent with Theorem 1.

Fig. 2. Average distortion of the proposed encoder for an i.i.d. N(0, 1) source at various rates. With M = L^b, distortion-rate curves are shown for b = 2 and b = 3, along with D*(R) = e^{−2R}.

V. PROOF OF THEOREM 1

We first present a non-rigorous analysis of the proposed encoding algorithm, based on the following observations.

1) |A_j|² is approximately equal to 1 when n is large, for 1 ≤ j ≤ ML. This is because |A_j|² is the normalized sum of squares of n i.i.d. N(0, 1) random variables.

2) Similarly, |S|² is approximately equal to σ² for large n.

3) If X_1, X_2, …, X_M are i.i.d. N(0, 1) random variables, then max{X_1, …, X_M} is approximately equal to √(2 log M) for large M [17].

Step i, i = 1, …, L: We show that if

|R_{i−1}|² ≈ σ² (1 − 2R/L)^{i−1}, then |R_i|² ≈ σ² (1 − 2R/L)^i. (10)

(10) is true for i = 0 (the second observation above). For each j ∈ {(i−1)M + 1, …, iM}, the statistic

T_j^{(i)} = ⟨R_{i−1}, A_j⟩ / ‖R_{i−1}‖ (11)

is a N(0, 1) random variable. This is because it is the projection of the i.i.d. N(0, 1) random vector A_j in the direction of R_{i−1}/‖R_{i−1}‖, and R_{i−1} is independent of A_j. This independence holds because R_{i−1} is a function of the source sequence S and the columns {A_j}_{1 ≤ j ≤ (i−1)M}, which are all independent of A_j for (i−1)M + 1 ≤ j ≤ iM. Further, the T_j^{(i)}'s are mutually independent for (i−1)M + 1 ≤ j ≤ iM. This can be seen by conditioning on the realization of R_{i−1}/‖R_{i−1}‖. We therefore have

max_{(i−1)M+1 ≤ j ≤ iM} T_j^{(i)} ≈ √(2 log M). (12)

From (4), we have

|R_i|² = |R_{i−1}|² + c_i² |A_{m_i}|² − (2c_i/n) ⟨R_{i−1}, A_{m_i}⟩
 (a)≈ σ² (1 − 2R/L)^{i−1} + c_i² − 2c_i σ (1 − 2R/L)^{(i−1)/2} √(2 log M / n)
 (b)= σ² (1 − 2R/L)^i. (13)

(a) follows from (12) and the induction hypothesis; (b) is obtained by substituting for c_i from (2) and for n from (1). Therefore, the final residue after Step L is

|R_L|² = |S − Aβ̂|² ≈ σ² (1 − 2R/L)^L ≤ σ² e^{−2R}, (14)

where we have used (1 + x) ≤ e^x for x ∈ R.

Sketch of Formal Proof: The essence of the proof is in analyzing the deviation from the typical values of the residual distortion at each step of the algorithm. These deviations arise from atypicality concerning the source, the design matrix, and the maximum computed in each step. We introduce some notation to capture the deviations. The norm of the residue at stage i is expressed as

|R_i|² = σ² (1 − 2R/L)^i (1 + Δ_i)², i = 0, …, L. (15)

Δ_i ∈ [−1, ∞) measures the deviation of the residual distortion |R_i|² from its typical value given in (13). The norm of the column of A chosen in step i is written as

|A_{m_i}|² = 1 + γ_i, i = 1, …, L. (16)

We express the maximum of the statistic T_j^{(i)} in Step i as

max_{(i−1)M+1 ≤ j ≤ iM} T_j^{(i)} = √(2 log M) (1 + ε_i). (17)

ε_i measures the deviation of the maximum computed in step i from √(2 log M). Armed with this notation, we have from (4)

|R_i|² = |R_{i−1}|² + c_i² |A_{m_i}|² − (2c_i/n) ⟨R_{i−1}, A_{m_i}⟩
 = σ² (1 − 2R/L)^i [ (1 + Δ_{i−1})² + (2R/L)/(1 − 2R/L) ( Δ_{i−1}² + γ_i − 2ε_i (1 + Δ_{i−1}) ) ]. (18)

From (18) and (15), we obtain

(1 + Δ_i)² = (1 + Δ_{i−1})² + (2R/L)/(1 − 2R/L) ( Δ_{i−1}² + γ_i − 2ε_i (1 + Δ_{i−1}) ) (19)

for i = 1, …, L. The goal is to bound the final distortion

|R_L|² = σ² (1 − 2R/L)^L (1 + Δ_L)². (20)
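The typical-value analysis above lends itself to a quick numerical check. The following sketch is our own illustration (names, parameter choices, and sizes are ours, and far smaller than the asymptotic regime of Theorem 1): it runs the successive-approximation encoder of Section III on an i.i.d. N(0, 1) source and compares the resulting distortion |S − Aβ̂|² with the target σ² e^{−2R}.

```python
import numpy as np

# Minimal numerical sketch (our illustration, not the authors' code):
# run the successive-approximation SPARC encoder on an i.i.d. Gaussian
# source and compare the distortion with sigma^2 * e^{-2R}.
rng = np.random.default_rng(0)

R = 1.0                                  # rate in nats per sample
sigma2 = 1.0                             # source variance
M, L = 64, 24                            # M columns per section, L sections
n = int(round(L * np.log(M) / R))        # block length from L log M = n R

S = rng.normal(0.0, np.sqrt(sigma2), n)  # i.i.d. N(0, sigma^2) source
A = rng.normal(0.0, 1.0, (n, M * L))     # design matrix, i.i.d. N(0, 1) entries

# Coefficients c_i = sqrt((2 R sigma^2 / L) (1 - 2R/L)^(i-1)), as in (2).
c = np.sqrt(2 * R * sigma2 / L * (1 - 2 * R / L) ** np.arange(L))

residue = S.copy()                       # Step 0: R_0 = S
beta = np.zeros(M * L)
for i in range(L):
    section = A[:, i * M:(i + 1) * M]
    j = int(np.argmax(section.T @ residue))   # (3): maximize <R_{i-1}, A_j>
    beta[i * M + j] = c[i]
    residue = residue - c[i] * section[:, j]  # (4): R_i = R_{i-1} - c_i A_{m_i}

distortion = np.mean((S - A @ beta) ** 2)     # normalized |S - A beta|^2
print(distortion, sigma2 * np.exp(-2 * R))    # empirical vs. target
```

At these small dimensions the distortion typically lands within a modest factor of σ² e^{−2R} ≈ 0.135; the concentration promised by Theorem 1 only sets in as M and L grow.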

We find an upper bound for (1 + Δ_L)² that holds under an event whose probability is close to 1. Accordingly, define A as the event where all of the following hold:

|Δ_0| < δ_0, (1/L) Σ_{i=1}^L |γ_i| < δ_1, (1/L) Σ_{i=1}^L |ε_i| < δ_2,

for δ_0, δ_1, δ_2 as specified in the statement of Theorem 1. We upper bound the probability of the event A^c using the following lemmas (proofs in [15]).

Lemma 1: P( (1/L) Σ_{i=1}^L |γ_i| > δ ) < 2ML exp(−nδ²/8) for δ ∈ (0, 1).

Lemma 2: For δ > 0, P( (1/L) Σ_{i=1}^L |ε_i| > δ ) < ( κ log M / M^{2δ} )^L.

Using these lemmas, we have P(A^c) < p_0 + p_1 + p_2, where p_0, p_1, p_2 are given by (7). The remainder of the proof consists of obtaining a bound for (1 + Δ_L)² under the condition that A holds. This is done via the following lemma, proved in [15].

Lemma 3: When A is true and L is sufficiently large,

Δ_i ≤ Δ_0 w^i + (4R/L)/(1 − 2R/L) Σ_{j=1}^i w^{i−j} (γ_j + ε_j), 1 ≤ i ≤ L, (21)

where w = 1 + (R/L)/(1 − 2R/L).

Lemma 3 implies that when A holds and L is sufficiently large,

Δ_L ≤ w^L Δ_0 + (4R/L)/(1 − 2R/L) w^L ( Σ_{j=1}^L γ_j + Σ_{j=1}^L ε_j )
 (a)≤ w^L [ δ_0 + (4R/(1 − 2R/L)) (δ_1 + δ_2) ]
 (b)≤ e^R ( δ_0 + 5R(δ_1 + δ_2) ) = e^R Δ, (22)

where Δ is defined in the statement of the theorem. (a) is true because A holds, and (b) is obtained using 1 + x ≤ e^x with x = (R/L)/(1 − 2R/L). The distortion can then be bounded as

|R_L|² = σ² (1 − 2R/L)^L (1 + Δ_L)² ≤ σ² e^{−2R} (1 + e^R Δ)². (23)

VI. CONCLUSION

We showed that Sparse Regression codes achieve the i.i.d. Gaussian distortion-rate function with a successive-approximation encoder. In terms of the block length n, the encoding complexity is a low-order polynomial in n, and the probability of excess distortion decays exponentially in n/log n. The gap from the distortion-rate function D*(R) is O(log log M / log M), as given in (8). An important direction for future work is designing feasible encoders for SPARCs with faster convergence to D*(R) as the design matrix dimension (or block length) increases. The results of [18], [19] show that the optimal gap from D*(R) (among all codes) is Θ(1/√n). The fact that SPARCs achieve the optimal error exponent with minimum-distance encoding [20] suggests that it is possible to design encoders with faster convergence to D*(R) at the expense of slightly higher computational complexity.

The results of this paper, together with those in [2], [3], show that SPARCs with computationally efficient encoding and decoding achieve rates close to the Shannon-theoretic limits for both lossy compression and communication. Further, [21] demonstrates how source and channel coding SPARCs can be nested to effect binning and superposition, which are key ingredients of multi-terminal source and channel coding schemes. Sparse regression codes therefore offer a promising framework to develop fast, rate-optimal codes for a variety of models in network information theory.

REFERENCES

[1] A. Barron and A. Joseph, "Least squares superposition codes of moderate dictionary size are reliable at rates up to capacity," IEEE Trans. on Inf. Theory, vol. 58, Feb. 2012.
[2] A. Barron and A. Joseph, "Toward fast reliable communication at rates near capacity with Gaussian noise," in Proc. 2010 IEEE ISIT.
[3] A. Joseph and A. Barron, "Fast sparse superposition codes have exponentially small error probability for R < C," submitted to IEEE Trans. Inf. Theory.
[4] S. Mallat and Z. Zhang, "Matching pursuits with time-frequency dictionaries," IEEE Trans. Signal Processing, vol. 41, Dec. 1993.
[5] A. R. Barron, A. Cohen, W. Dahmen, and R. A. DeVore, "Approximation and learning by greedy algorithms," Annals of Statistics, vol. 36, pp. 64–94, 2008.
[6] I. Kontoyiannis, K. Rad, and S. Gitzenis, "Sparse superposition codes for Gaussian vector quantization," in 2010 IEEE Inf. Theory Workshop, p. 1, Jan. 2010.
[7] R. Venkataramanan, A. Joseph, and S. Tatikonda, "Gaussian rate-distortion via sparse linear regression over compact dictionaries," in Proc. 2012 IEEE ISIT.
[8] A. Lapidoth, "On the role of mismatch in rate distortion theory," IEEE Trans. Inf. Theory, vol. 43, pp. 38–47, Jan. 1997.
[9] D. Sakrison, "The rate of a class of random processes," IEEE Trans. Inf. Theory, vol. 16, pp. 10–16, Jan. 1970.
[10] A. Gupta, S. Verdú, and T. Weissman, "Rate-distortion in near-linear time," in Proc. 2008 IEEE ISIT.
[11] I. Kontoyiannis and C. Gioran, "Efficient random codebooks and databases for lossy compression in near-linear time," in IEEE Inf. Theory Workshop on Networking and Inf. Theory, June 2009.
[12] M. Wainwright, E. Maneva, and E. Martinian, "Lossy source compression using low-density generator matrix codes: Analysis and algorithms," IEEE Trans. Inf. Theory, vol. 56, no. 3, 2010.
[13] S. Korada and R. Urbanke, "Polar codes are optimal for lossy source coding," IEEE Trans. Inf. Theory, vol. 56, April 2010.
[14] R. Gray and D. Neuhoff, "Quantization," IEEE Trans. Inf. Theory, vol. 44, Oct. 1998.
[15] R. Venkataramanan, T. Sarkar, and S. Tatikonda, "Lossy compression via sparse linear regression: Computationally efficient encoders and decoders."
[16] W. Equitz and T. Cover, "Successive refinement of information," IEEE Trans. Inf. Theory, vol. 37, Mar. 1991.
[17] H. David and H. Nagaraja, Order Statistics. John Wiley & Sons, 2003.
[18] A. Ingber and Y. Kochman, "The dispersion of lossy source coding," in Data Compression Conference (DCC), pp. 53–62, March 2011.
[19] V. Kostina and S. Verdú, "Fixed-length lossy compression in the finite blocklength regime," IEEE Trans. on Inf. Theory, vol. 58, no. 6, 2012.
[20] R. Venkataramanan, A. Joseph, and S. Tatikonda, "Lossy compression via sparse linear regression: Performance under minimum-distance encoding."
[21] R. Venkataramanan and S. Tatikonda, "Sparse regression codes for multi-terminal source and channel coding," in 50th Allerton Conf. on Commun., Control, and Computing, 2012.


More information

EECS564 Estimation, Filtering, and Detection Hwk 2 Solns. Winter p θ (z) = (2θz + 1 θ), 0 z 1

EECS564 Estimation, Filtering, and Detection Hwk 2 Solns. Winter p θ (z) = (2θz + 1 θ), 0 z 1 EECS564 Estimatio, Filterig, ad Detectio Hwk 2 Sols. Witer 25 4. Let Z be a sigle observatio havig desity fuctio where. p (z) = (2z + ), z (a) Assumig that is a oradom parameter, fid ad plot the maximum

More information

Recursive Algorithm for Generating Partitions of an Integer. 1 Preliminary

Recursive Algorithm for Generating Partitions of an Integer. 1 Preliminary Recursive Algorithm for Geeratig Partitios of a Iteger Sug-Hyuk Cha Computer Sciece Departmet, Pace Uiversity 1 Pace Plaza, New York, NY 10038 USA scha@pace.edu Abstract. This article first reviews the

More information

62. Power series Definition 16. (Power series) Given a sequence {c n }, the series. c n x n = c 0 + c 1 x + c 2 x 2 + c 3 x 3 +

62. Power series Definition 16. (Power series) Given a sequence {c n }, the series. c n x n = c 0 + c 1 x + c 2 x 2 + c 3 x 3 + 62. Power series Defiitio 16. (Power series) Give a sequece {c }, the series c x = c 0 + c 1 x + c 2 x 2 + c 3 x 3 + is called a power series i the variable x. The umbers c are called the coefficiets of

More information

Algebra of Least Squares

Algebra of Least Squares October 19, 2018 Algebra of Least Squares Geometry of Least Squares Recall that out data is like a table [Y X] where Y collects observatios o the depedet variable Y ad X collects observatios o the k-dimesioal

More information

MATH 320: Probability and Statistics 9. Estimation and Testing of Parameters. Readings: Pruim, Chapter 4

MATH 320: Probability and Statistics 9. Estimation and Testing of Parameters. Readings: Pruim, Chapter 4 MATH 30: Probability ad Statistics 9. Estimatio ad Testig of Parameters Estimatio ad Testig of Parameters We have bee dealig situatios i which we have full kowledge of the distributio of a radom variable.

More information

On Random Line Segments in the Unit Square

On Random Line Segments in the Unit Square O Radom Lie Segmets i the Uit Square Thomas A. Courtade Departmet of Electrical Egieerig Uiversity of Califoria Los Ageles, Califoria 90095 Email: tacourta@ee.ucla.edu I. INTRODUCTION Let Q = [0, 1] [0,

More information

Channel coding, linear block codes, Hamming and cyclic codes Lecture - 8

Channel coding, linear block codes, Hamming and cyclic codes Lecture - 8 Digital Commuicatio Chael codig, liear block codes, Hammig ad cyclic codes Lecture - 8 Ir. Muhamad Asial, MSc., PhD Ceter for Iformatio ad Commuicatio Egieerig Research (CICER) Electrical Egieerig Departmet

More information

Vector Quantization: a Limiting Case of EM

Vector Quantization: a Limiting Case of EM . Itroductio & defiitios Assume that you are give a data set X = { x j }, j { 2,,, }, of d -dimesioal vectors. The vector quatizatio (VQ) problem requires that we fid a set of prototype vectors Z = { z

More information

Resampling Methods. X (1/2), i.e., Pr (X i m) = 1/2. We order the data: X (1) X (2) X (n). Define the sample median: ( n.

Resampling Methods. X (1/2), i.e., Pr (X i m) = 1/2. We order the data: X (1) X (2) X (n). Define the sample median: ( n. Jauary 1, 2019 Resamplig Methods Motivatio We have so may estimators with the property θ θ d N 0, σ 2 We ca also write θ a N θ, σ 2 /, where a meas approximately distributed as Oce we have a cosistet estimator

More information

Finite Block-Length Gains in Distributed Source Coding

Finite Block-Length Gains in Distributed Source Coding Decoder Fiite Block-Legth Gais i Distributed Source Codig Farhad Shirai EECS Departmet Uiversity of Michiga A Arbor,USA Email: fshirai@umichedu S Sadeep Pradha EECS Departmet Uiversity of Michiga A Arbor,USA

More information

Definition 4.2. (a) A sequence {x n } in a Banach space X is a basis for X if. unique scalars a n (x) such that x = n. a n (x) x n. (4.

Definition 4.2. (a) A sequence {x n } in a Banach space X is a basis for X if. unique scalars a n (x) such that x = n. a n (x) x n. (4. 4. BASES I BAACH SPACES 39 4. BASES I BAACH SPACES Sice a Baach space X is a vector space, it must possess a Hamel, or vector space, basis, i.e., a subset {x γ } γ Γ whose fiite liear spa is all of X ad

More information

Lecture 10: Universal coding and prediction

Lecture 10: Universal coding and prediction 0-704: Iformatio Processig ad Learig Sprig 0 Lecture 0: Uiversal codig ad predictio Lecturer: Aarti Sigh Scribes: Georg M. Goerg Disclaimer: These otes have ot bee subjected to the usual scrutiy reserved

More information

REGRESSION WITH QUADRATIC LOSS

REGRESSION WITH QUADRATIC LOSS REGRESSION WITH QUADRATIC LOSS MAXIM RAGINSKY Regressio with quadratic loss is aother basic problem studied i statistical learig theory. We have a radom couple Z = X, Y ), where, as before, X is a R d

More information

A statistical method to determine sample size to estimate characteristic value of soil parameters

A statistical method to determine sample size to estimate characteristic value of soil parameters A statistical method to determie sample size to estimate characteristic value of soil parameters Y. Hojo, B. Setiawa 2 ad M. Suzuki 3 Abstract Sample size is a importat factor to be cosidered i determiig

More information

Are Slepian-Wolf Rates Necessary for Distributed Parameter Estimation?

Are Slepian-Wolf Rates Necessary for Distributed Parameter Estimation? Are Slepia-Wolf Rates Necessary for Distributed Parameter Estimatio? Mostafa El Gamal ad Lifeg Lai Departmet of Electrical ad Computer Egieerig Worcester Polytechic Istitute {melgamal, llai}@wpi.edu arxiv:1508.02765v2

More information

10-701/ Machine Learning Mid-term Exam Solution

10-701/ Machine Learning Mid-term Exam Solution 0-70/5-78 Machie Learig Mid-term Exam Solutio Your Name: Your Adrew ID: True or False (Give oe setece explaatio) (20%). (F) For a cotiuous radom variable x ad its probability distributio fuctio p(x), it

More information

Sieve Estimators: Consistency and Rates of Convergence

Sieve Estimators: Consistency and Rates of Convergence EECS 598: Statistical Learig Theory, Witer 2014 Topic 6 Sieve Estimators: Cosistecy ad Rates of Covergece Lecturer: Clayto Scott Scribe: Julia Katz-Samuels, Brado Oselio, Pi-Yu Che Disclaimer: These otes

More information

Rademacher Complexity

Rademacher Complexity EECS 598: Statistical Learig Theory, Witer 204 Topic 0 Rademacher Complexity Lecturer: Clayto Scott Scribe: Ya Deg, Kevi Moo Disclaimer: These otes have ot bee subjected to the usual scrutiy reserved for

More information

Entropy and Ergodic Theory Lecture 5: Joint typicality and conditional AEP

Entropy and Ergodic Theory Lecture 5: Joint typicality and conditional AEP Etropy ad Ergodic Theory Lecture 5: Joit typicality ad coditioal AEP 1 Notatio: from RVs back to distributios Let (Ω, F, P) be a probability space, ad let X ad Y be A- ad B-valued discrete RVs, respectively.

More information

Binary classification, Part 1

Binary classification, Part 1 Biary classificatio, Part 1 Maxim Ragisky September 25, 2014 The problem of biary classificatio ca be stated as follows. We have a radom couple Z = (X,Y ), where X R d is called the feature vector ad Y

More information

An Introduction to Randomized Algorithms

An Introduction to Randomized Algorithms A Itroductio to Radomized Algorithms The focus of this lecture is to study a radomized algorithm for quick sort, aalyze it usig probabilistic recurrece relatios, ad also provide more geeral tools for aalysis

More information

Slide Set 13 Linear Model with Endogenous Regressors and the GMM estimator

Slide Set 13 Linear Model with Endogenous Regressors and the GMM estimator Slide Set 13 Liear Model with Edogeous Regressors ad the GMM estimator Pietro Coretto pcoretto@uisa.it Ecoometrics Master i Ecoomics ad Fiace (MEF) Uiversità degli Studi di Napoli Federico II Versio: Friday

More information

Information-based Feature Selection

Information-based Feature Selection Iformatio-based Feature Selectio Farza Faria, Abbas Kazeroui, Afshi Babveyh Email: {faria,abbask,afshib}@staford.edu 1 Itroductio Feature selectio is a topic of great iterest i applicatios dealig with

More information

Lecture 19: Convergence

Lecture 19: Convergence Lecture 19: Covergece Asymptotic approach I statistical aalysis or iferece, a key to the success of fidig a good procedure is beig able to fid some momets ad/or distributios of various statistics. I may

More information

Topic 9: Sampling Distributions of Estimators

Topic 9: Sampling Distributions of Estimators Topic 9: Samplig Distributios of Estimators Course 003, 2016 Page 0 Samplig distributios of estimators Sice our estimators are statistics (particular fuctios of radom variables), their distributio ca be

More information

Multiterminal source coding with complementary delivery

Multiterminal source coding with complementary delivery Iteratioal Symposium o Iformatio Theory ad its Applicatios, ISITA2006 Seoul, Korea, October 29 November 1, 2006 Multitermial source codig with complemetary delivery Akisato Kimura ad Tomohiko Uyematsu

More information

Increasing timing capacity using packet coloring

Increasing timing capacity using packet coloring 003 Coferece o Iformatio Scieces ad Systems, The Johs Hopkis Uiversity, March 4, 003 Icreasig timig capacity usig packet colorig Xi Liu ad R Srikat[] Coordiated Sciece Laboratory Uiversity of Illiois e-mail:

More information

Optimally Sparse SVMs

Optimally Sparse SVMs A. Proof of Lemma 3. We here prove a lower boud o the umber of support vectors to achieve geeralizatio bouds of the form which we cosider. Importatly, this result holds ot oly for liear classifiers, but

More information

Confidence interval for the two-parameter exponentiated Gumbel distribution based on record values

Confidence interval for the two-parameter exponentiated Gumbel distribution based on record values Iteratioal Joural of Applied Operatioal Research Vol. 4 No. 1 pp. 61-68 Witer 2014 Joural homepage: www.ijorlu.ir Cofidece iterval for the two-parameter expoetiated Gumbel distributio based o record values

More information

4.3 Growth Rates of Solutions to Recurrences

4.3 Growth Rates of Solutions to Recurrences 4.3. GROWTH RATES OF SOLUTIONS TO RECURRENCES 81 4.3 Growth Rates of Solutios to Recurreces 4.3.1 Divide ad Coquer Algorithms Oe of the most basic ad powerful algorithmic techiques is divide ad coquer.

More information

Summary and Discussion on Simultaneous Analysis of Lasso and Dantzig Selector

Summary and Discussion on Simultaneous Analysis of Lasso and Dantzig Selector Summary ad Discussio o Simultaeous Aalysis of Lasso ad Datzig Selector STAT732, Sprig 28 Duzhe Wag May 4, 28 Abstract This is a discussio o the work i Bickel, Ritov ad Tsybakov (29). We begi with a short

More information

A New Achievability Scheme for the Relay Channel

A New Achievability Scheme for the Relay Channel A New Achievability Scheme for the Relay Chael Wei Kag Seur Ulukus Departmet of Electrical ad Computer Egieerig Uiversity of Marylad, College Park, MD 20742 wkag@umd.edu ulukus@umd.edu October 4, 2007

More information

Linear Regression Demystified

Linear Regression Demystified Liear Regressio Demystified Liear regressio is a importat subject i statistics. I elemetary statistics courses, formulae related to liear regressio are ofte stated without derivatio. This ote iteds to

More information

A Hybrid Random-Structured Coding Scheme for the Gaussian Two-Terminal Source Coding Problem Under a Covariance Matrix Distortion Constraint

A Hybrid Random-Structured Coding Scheme for the Gaussian Two-Terminal Source Coding Problem Under a Covariance Matrix Distortion Constraint A Hybrid Radom-Structured Codig Scheme for the Gaussia Two-Termial Source Codig Problem Uder a Covariace Matrix Distortio Costrait Yag Yag ad Zixiag Xiog Dept of Electrical ad Computer Egieerig Texas A&M

More information

Lecture 7: Properties of Random Samples

Lecture 7: Properties of Random Samples Lecture 7: Properties of Radom Samples 1 Cotiued From Last Class Theorem 1.1. Let X 1, X,...X be a radom sample from a populatio with mea µ ad variace σ

More information

Lecture 9: Hierarchy Theorems

Lecture 9: Hierarchy Theorems IAS/PCMI Summer Sessio 2000 Clay Mathematics Udergraduate Program Basic Course o Computatioal Complexity Lecture 9: Hierarchy Theorems David Mix Barrigto ad Alexis Maciel July 27, 2000 Most of this lecture

More information

Problem Set 2 Solutions

Problem Set 2 Solutions CS271 Radomess & Computatio, Sprig 2018 Problem Set 2 Solutios Poit totals are i the margi; the maximum total umber of poits was 52. 1. Probabilistic method for domiatig sets 6pts Pick a radom subset S

More information

5.1 A mutual information bound based on metric entropy

5.1 A mutual information bound based on metric entropy Chapter 5 Global Fao Method I this chapter, we exted the techiques of Chapter 2.4 o Fao s method the local Fao method) to a more global costructio. I particular, we show that, rather tha costructig a local

More information

THE ASYMPTOTIC COMPLEXITY OF MATRIX REDUCTION OVER FINITE FIELDS

THE ASYMPTOTIC COMPLEXITY OF MATRIX REDUCTION OVER FINITE FIELDS THE ASYMPTOTIC COMPLEXITY OF MATRIX REDUCTION OVER FINITE FIELDS DEMETRES CHRISTOFIDES Abstract. Cosider a ivertible matrix over some field. The Gauss-Jorda elimiatio reduces this matrix to the idetity

More information

The Likelihood Encoder with Applications to Lossy Compression and Secrecy

The Likelihood Encoder with Applications to Lossy Compression and Secrecy The Likelihood Ecoder with Applicatios to Lossy Compressio ad Secrecy Eva C. Sog Paul Cuff H. Vicet Poor Dept. of Electrical Eg., Priceto Uiversity, NJ 8544 {csog, cuff, poor}@priceto.edu Abstract A likelihood

More information

Complexity Analysis of Highly Improved Hybrid Turbo Codes

Complexity Analysis of Highly Improved Hybrid Turbo Codes I J C T A, 8(5, 015, pp. 433-439 Iteratioal Sciece Press Complexity Aalysis of Highly Improved Hybrid Turbo Codes M. Jose Ra* ad Sharmii Eoch** Abstract: Moder digital commuicatio systems eed efficiet

More information

The method of types. PhD short course Information Theory and Statistics Siena, September, Mauro Barni University of Siena

The method of types. PhD short course Information Theory and Statistics Siena, September, Mauro Barni University of Siena PhD short course Iformatio Theory ad Statistics Siea, 15-19 September, 2014 The method of types Mauro Bari Uiversity of Siea Outlie of the course Part 1: Iformatio theory i a utshell Part 2: The method

More information

Kinetics of Complex Reactions

Kinetics of Complex Reactions Kietics of Complex Reactios by Flick Colema Departmet of Chemistry Wellesley College Wellesley MA 28 wcolema@wellesley.edu Copyright Flick Colema 996. All rights reserved. You are welcome to use this documet

More information

6. Kalman filter implementation for linear algebraic equations. Karhunen-Loeve decomposition

6. Kalman filter implementation for linear algebraic equations. Karhunen-Loeve decomposition 6. Kalma filter implemetatio for liear algebraic equatios. Karhue-Loeve decompositio 6.1. Solvable liear algebraic systems. Probabilistic iterpretatio. Let A be a quadratic matrix (ot obligatory osigular.

More information

Chapter 3. Strong convergence. 3.1 Definition of almost sure convergence

Chapter 3. Strong convergence. 3.1 Definition of almost sure convergence Chapter 3 Strog covergece As poited out i the Chapter 2, there are multiple ways to defie the otio of covergece of a sequece of radom variables. That chapter defied covergece i probability, covergece i

More information

Non-Asymptotic Achievable Rates for Gaussian Energy-Harvesting Channels: Best-Effort and Save-and-Transmit

Non-Asymptotic Achievable Rates for Gaussian Energy-Harvesting Channels: Best-Effort and Save-and-Transmit No-Asymptotic Achievable Rates for Gaussia Eergy-Harvestig Chaels: Best-Effort ad Save-ad-Trasmit Silas L. Fog, Jig Yag, ad Ayli Yeer arxiv:805.089v [cs.it] 30 May 08 Abstract A additive white Gaussia

More information

Random Variables, Sampling and Estimation

Random Variables, Sampling and Estimation Chapter 1 Radom Variables, Samplig ad Estimatio 1.1 Itroductio This chapter will cover the most importat basic statistical theory you eed i order to uderstad the ecoometric material that will be comig

More information

6.895 Essential Coding Theory October 20, Lecture 11. This lecture is focused in comparisons of the following properties/parameters of a code:

6.895 Essential Coding Theory October 20, Lecture 11. This lecture is focused in comparisons of the following properties/parameters of a code: 6.895 Essetial Codig Theory October 0, 004 Lecture 11 Lecturer: Madhu Suda Scribe: Aastasios Sidiropoulos 1 Overview This lecture is focused i comparisos of the followig properties/parameters of a code:

More information

Lecture 9: Expanders Part 2, Extractors

Lecture 9: Expanders Part 2, Extractors Lecture 9: Expaders Part, Extractors Topics i Complexity Theory ad Pseudoradomess Sprig 013 Rutgers Uiversity Swastik Kopparty Scribes: Jaso Perry, Joh Kim I this lecture, we will discuss further the pseudoradomess

More information

Double Stage Shrinkage Estimator of Two Parameters. Generalized Exponential Distribution

Double Stage Shrinkage Estimator of Two Parameters. Generalized Exponential Distribution Iteratioal Mathematical Forum, Vol., 3, o. 3, 3-53 HIKARI Ltd, www.m-hikari.com http://dx.doi.org/.9/imf.3.335 Double Stage Shrikage Estimator of Two Parameters Geeralized Expoetial Distributio Alaa M.

More information

Geometry of LS. LECTURE 3 GEOMETRY OF LS, PROPERTIES OF σ 2, PARTITIONED REGRESSION, GOODNESS OF FIT

Geometry of LS. LECTURE 3 GEOMETRY OF LS, PROPERTIES OF σ 2, PARTITIONED REGRESSION, GOODNESS OF FIT OCTOBER 7, 2016 LECTURE 3 GEOMETRY OF LS, PROPERTIES OF σ 2, PARTITIONED REGRESSION, GOODNESS OF FIT Geometry of LS We ca thik of y ad the colums of X as members of the -dimesioal Euclidea space R Oe ca

More information

Linear regression. Daniel Hsu (COMS 4771) (y i x T i β)2 2πσ. 2 2σ 2. 1 n. (x T i β y i ) 2. 1 ˆβ arg min. β R n d

Linear regression. Daniel Hsu (COMS 4771) (y i x T i β)2 2πσ. 2 2σ 2. 1 n. (x T i β y i ) 2. 1 ˆβ arg min. β R n d Liear regressio Daiel Hsu (COMS 477) Maximum likelihood estimatio Oe of the simplest liear regressio models is the followig: (X, Y ),..., (X, Y ), (X, Y ) are iid radom pairs takig values i R d R, ad Y

More information

Entropy Rates and Asymptotic Equipartition

Entropy Rates and Asymptotic Equipartition Chapter 29 Etropy Rates ad Asymptotic Equipartitio Sectio 29. itroduces the etropy rate the asymptotic etropy per time-step of a stochastic process ad shows that it is well-defied; ad similarly for iformatio,

More information

Lecture 14: Graph Entropy

Lecture 14: Graph Entropy 15-859: Iformatio Theory ad Applicatios i TCS Sprig 2013 Lecture 14: Graph Etropy March 19, 2013 Lecturer: Mahdi Cheraghchi Scribe: Euiwoog Lee 1 Recap Bergma s boud o the permaet Shearer s Lemma Number

More information

DISTRIBUTION LAW Okunev I.V.

DISTRIBUTION LAW Okunev I.V. 1 DISTRIBUTION LAW Okuev I.V. Distributio law belogs to a umber of the most complicated theoretical laws of mathematics. But it is also a very importat practical law. Nothig ca help uderstad complicated

More information

Entropies & Information Theory

Entropies & Information Theory Etropies & Iformatio Theory LECTURE I Nilajaa Datta Uiversity of Cambridge,U.K. For more details: see lecture otes (Lecture 1- Lecture 5) o http://www.qi.damtp.cam.ac.uk/ode/223 Quatum Iformatio Theory

More information

Expectation and Variance of a random variable

Expectation and Variance of a random variable Chapter 11 Expectatio ad Variace of a radom variable The aim of this lecture is to defie ad itroduce mathematical Expectatio ad variace of a fuctio of discrete & cotiuous radom variables ad the distributio

More information

Complexity Bounds of LDPC Codes for Parallel Channels

Complexity Bounds of LDPC Codes for Parallel Channels Complexity Bouds of DPC Codes for Parallel Chaels Youjia (Eugee) iu, eugeeliu@ieee.org Departmet of Electrical ad Computer Egieerig Uiversity of Colorado at Boulder Jilei Hou, jhou@qualcomm.com Qualcomm

More information

Riesz-Fischer Sequences and Lower Frame Bounds

Riesz-Fischer Sequences and Lower Frame Bounds Zeitschrift für Aalysis ud ihre Aweduge Joural for Aalysis ad its Applicatios Volume 1 (00), No., 305 314 Riesz-Fischer Sequeces ad Lower Frame Bouds P. Casazza, O. Christese, S. Li ad A. Lider Abstract.

More information

Convergence of random variables. (telegram style notes) P.J.C. Spreij

Convergence of random variables. (telegram style notes) P.J.C. Spreij Covergece of radom variables (telegram style otes).j.c. Spreij this versio: September 6, 2005 Itroductio As we kow, radom variables are by defiitio measurable fuctios o some uderlyig measurable space

More information

4. Partial Sums and the Central Limit Theorem

4. Partial Sums and the Central Limit Theorem 1 of 10 7/16/2009 6:05 AM Virtual Laboratories > 6. Radom Samples > 1 2 3 4 5 6 7 4. Partial Sums ad the Cetral Limit Theorem The cetral limit theorem ad the law of large umbers are the two fudametal theorems

More information

The Growth of Functions. Theoretical Supplement

The Growth of Functions. Theoretical Supplement The Growth of Fuctios Theoretical Supplemet The Triagle Iequality The triagle iequality is a algebraic tool that is ofte useful i maipulatig absolute values of fuctios. The triagle iequality says that

More information

Fixed-length lossy compression in the finite blocklength regime: discrete memoryless sources

Fixed-length lossy compression in the finite blocklength regime: discrete memoryless sources 20 IEEE Iteratioal Symposium o Iformatio Theory Proceedigs Fixed-legth lossy compressio i the fiite blocklegth regime: discrete memoryless sources Victoria Kostia Dept. of Electrical Egieerig Priceto Uiversity

More information

INFINITE SEQUENCES AND SERIES

INFINITE SEQUENCES AND SERIES 11 INFINITE SEQUENCES AND SERIES INFINITE SEQUENCES AND SERIES 11.4 The Compariso Tests I this sectio, we will lear: How to fid the value of a series by comparig it with a kow series. COMPARISON TESTS

More information

5.1 Review of Singular Value Decomposition (SVD)

5.1 Review of Singular Value Decomposition (SVD) MGMT 69000: Topics i High-dimesioal Data Aalysis Falll 06 Lecture 5: Spectral Clusterig: Overview (cotd) ad Aalysis Lecturer: Jiamig Xu Scribe: Adarsh Barik, Taotao He, September 3, 06 Outlie Review of

More information

Lecture 20. Brief Review of Gram-Schmidt and Gauss s Algorithm

Lecture 20. Brief Review of Gram-Schmidt and Gauss s Algorithm 8.409 A Algorithmist s Toolkit Nov. 9, 2009 Lecturer: Joatha Keler Lecture 20 Brief Review of Gram-Schmidt ad Gauss s Algorithm Our mai task of this lecture is to show a polyomial time algorithm which

More information