Approximately achieving Gaussian relay network capacity with lattice codes


Ayfer Özgür (EPFL, Lausanne, Switzerland)    Suhas Diggavi (UCLA, USA and EPFL, Switzerland)

Abstract—Recently, it has been shown that a quantize-map-and-forward scheme approximately achieves (within a constant number of bits) the Gaussian relay network capacity for arbitrary topologies [1]. This was established using Gaussian codebooks for transmission and random mappings at the relays. In this paper, we show that the same approximation result can be established by using lattices for transmission and quantization, along with structured mappings at the relays.

I. INTRODUCTION

Characterizing the capacity of relay networks has been a long-standing open question in network information theory. The seminal work of Cover and El Gamal [2] established the basic achievability schemes for relay channels. More recently, these techniques have been extended to larger networks; see [3] and references therein. In [1], motivated by a deterministic model of wireless communication, it was shown that the quantize-map-and-forward scheme achieves rates within a constant number of bits of the information-theoretic cut-set upper bound. This constant is universal in the sense that it is independent of the channel gains and the operating SNR, though it can depend on the network topology, such as the number of nodes. In the quantize-map-and-forward scheme analyzed in [1], each relay node first quantizes its received signal at the noise level, then randomly maps it to a Gaussian codeword and transmits it.

A natural question, which we address in this paper, is whether lattice codes retain the approximate optimality of the above scheme. This is motivated in part because lattice codes, along with lattice decoding, could enable computationally tractable encoding and decoding methods. For example, lattice codes were used for linear function computation over multiple-access networks [4] and for communication over multiple-access relay networks with orthogonal broadcast in [5]. The main result of this paper is to show that the quantize-map-and-forward scheme, using nested lattice codes for transmission and quantization, still achieves the Gaussian relay network capacity within a constant gap. This result is summarized in Theorem 2.1. The use of structured codes allows us to specify a structured mapping between the quantization and transmission codebooks at each relay. The nested lattice codebooks considered in this paper are based on the random construction in [6], where they are shown to achieve the capacity of the AWGN channel.

This paper is organized as follows. In Section II, we state the network model and our main result. In Section III, we summarize the construction of the nested lattice ensemble. In Section IV, we describe the network operation; in particular, we specify how we use the nested lattice codes of Section III for encoding at the source, quantization, mapping and transmission at the relay nodes, and decoding at the destination node. In Section V, we analyze the performance achieved by the scheme. The detailed proofs can be found in [10].

II. MAIN RESULT

We consider a Gaussian relay network where a source node s wants to communicate to a destination node d with the help of N relay nodes, denoted $\mathcal{N}$. The signal received by node $j \in \{s, d\} \cup \mathcal{N}$ is given by

$$y_j = \sum_{i} H_{ij}\, x_i + z_j, \qquad (1)$$

where $H_{ij}$ is the $N_j \times M_i$ channel matrix from node $i$, which has $M_i$ transmit antennas, to node $j$, which has $N_j$ receive antennas. Each element of $H_{ij}$ represents the complex channel gain from a transmit antenna of node $i$ to a receive antenna of node $j$. The noise $z_j$ is a complex circularly-symmetric Gaussian vector, $\mathcal{CN}(0, \sigma^2 I)$, and is i.i.d. across nodes. The transmitted signals $x_i$ are subject to an average power constraint $P$.
The following theorem is the main result of this paper.

Theorem 2.1: Using nested lattice codes for transmission and quantization, along with structured mappings at the relays, we can achieve all rates

$$R < \min_{\Omega}\; I(X_\Omega;\, Y_{\Omega^c} \mid X_{\Omega^c})$$

between s and d, where Ω is a source-destination cut of the network (i.e., $\Omega = \{s\} \cup \mathcal{N}_\Omega$ with $\mathcal{N}_\Omega \subseteq \mathcal{N}$ and $d \in \Omega^c$) and $X_\Omega = \{X_i,\, i \in \Omega\}$ are i.i.d. $\mathcal{CN}(0, (P/M_i)\, I)$.

It has been shown in [1] that the restriction to i.i.d. Gaussian input distributions costs at most a constant number of bits/s/Hz (of the order of the total number of receive antennas $\sum_{i \in \mathcal{N} \cup \{d\}} N_i$) relative to the cut-set upper bound. Therefore, the rate achieved using lattice codes in the above theorem is within the same constant of the cut-set upper bound of the network.
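To make the quantity in Theorem 2.1 concrete, the sketch below (a minimal illustration with made-up gains, power, and noise values, not taken from the paper) enumerates all source-destination cuts Ω of a small single-antenna network and evaluates $I(X_\Omega; Y_{\Omega^c} \mid X_{\Omega^c})$ for i.i.d. Gaussian inputs, which for each fixed cut reduces to the familiar MIMO log-det expression.

```python
import itertools
import numpy as np

def cut_value(H, P, sigma2, omega, omega_c):
    """I(X_Omega; Y_Omega^c | X_Omega^c) for i.i.d. CN(0,P) inputs over the cut:
    log2 det(I + (P/sigma2) * H_cut H_cut^H) bits per channel use.
    (For the real-valued layered model a factor 1/2 would apply.)"""
    # H[i, j] = gain from node i to node j, so the cut channel matrix has rows
    # indexed by receivers in Omega^c and columns by transmitters in Omega.
    H_cut = H[np.ix_(omega, omega_c)].T
    G = np.eye(len(omega_c)) + (P / sigma2) * H_cut @ H_cut.conj().T
    return float(np.log2(np.linalg.det(G)).real)

def min_cut_rate(H, P, sigma2, s, d, relays):
    """Minimize the cut value over all 2^|relays| source-destination cuts."""
    best = np.inf
    for r in range(len(relays) + 1):
        for subset in itertools.combinations(relays, r):
            omega = [s] + list(subset)
            omega_c = [v for v in [s, d] + relays if v not in omega]
            best = min(best, cut_value(H, P, sigma2, omega, omega_c))
    return best

# Hypothetical diamond network: source 0, relays 1 and 2, destination 3.
H = np.zeros((4, 4))
H[0, 1], H[0, 2], H[1, 3], H[2, 3] = 1.0, 0.8, 1.2, 0.9
print("min-cut i.i.d. Gaussian rate: %.2f bits/channel use"
      % min_cut_rate(H, P=1.0, sigma2=0.1, s=0, d=3, relays=[1, 2]))
```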

For simplicity of presentation, in the rest of the paper we concentrate on a layered network where every node has a single transmit and a single receive antenna. More precisely, the signal received by node $j$ in layer $l$, $0 < l \le l_d$, denoted $\mathcal{N}_l$, is given by

$$y_j = \sum_{i \in \mathcal{N}_{l-1}} h_{ij}\, x_i + z_j, \qquad (2)$$

where $h_{ij}$ is the real scalar channel coefficient from node $i$ in layer $l-1$ to node $j$; the source $s \in \mathcal{N}_0$ and the destination $d \in \mathcal{N}_{l_d}$. The analysis can be extended to non-layered networks by following the time-expansion argument of [1], to multicast traffic with multiple destination nodes, as well as to multiple multicast where several source nodes each multicast to a group of destination nodes.

III. CONSTRUCTION OF THE NESTED LATTICE ENSEMBLE

Consider a lattice Λ (more precisely, a sequence of lattices Λ(n) indexed by the lattice dimension n), with V denoting the Voronoi region of Λ. Define the second moment per dimension of Λ as

$$\sigma^2(\Lambda) = \frac{1}{n\,|V|} \int_{V} \|x\|^2\, dx,$$

where $|V|$ denotes the volume of V, and let the $n \times n$ full-rank generator matrix of Λ be denoted by $G_\Lambda$, i.e., $\Lambda = G_\Lambda \mathbb{Z}^n$. We assume that Λ (more precisely, the sequence Λ(n)) is both Rogers-good and Poltyrev-good. The existence of such lattices has been shown in [7], where the reader can also find the precise definitions of Rogers and Poltyrev goodness. This fixed lattice Λ serves as the coarse lattice for all our nested lattice constructions.

The fine lattice $\Lambda_1$ is constructed using Loeliger's type-A construction [8]. Let k, n, p be integers such that $k \le n$ and p is prime. The fine lattice is constructed in the following steps:
- Draw an $n \times k$ matrix G such that each of its entries is i.i.d. uniform over $\mathbb{Z}_p = \{0, 1, \ldots, p-1\}$.
- Form the linear code $C = \{c : c = G \odot w,\; w \in \mathbb{Z}_p^k\}$, where ⊙ denotes modulo-p matrix multiplication.
- Lift C to $\mathbb{R}^n$ to form $\Lambda_1' = p^{-1} C + \mathbb{Z}^n$.
- $\Lambda_1 = G_\Lambda \Lambda_1'$ is the desired fine lattice. Note that since $\mathbb{Z}^n \subseteq \Lambda_1'$, we have $\Lambda \subseteq \Lambda_1$.
- Draw v uniformly over $p^{-1}\Lambda \cap V$ and translate the lattice $\Lambda_1$ by v. The nested lattice codebook consists of all points of the translated fine lattice inside the Voronoi region of the coarse lattice,

$$\bar\Lambda_1 = (v + \Lambda_1) \bmod \Lambda = (v + \Lambda_1) \cap V.$$

In the above equation, we define $x \bmod \Lambda$ as the quantization error of $x \in \mathbb{R}^n$ with respect to the lattice Λ, i.e.,

$$x \bmod \Lambda = x - Q_\Lambda(x), \qquad (3)$$

where the lattice quantizer $Q_\Lambda : \mathbb{R}^n \to \Lambda$ is given by

$$Q_\Lambda(x) = \arg\min_{\lambda \in \Lambda} \|x - \lambda\|.$$

Note that the quantization and mod operations with respect to a lattice can be defined in different ways. The mod operation in (3) maps $x \in \mathbb{R}^n$ to the Voronoi region V of the lattice. More generally, one can define a mod or quantization operation with respect to any fundamental region of the lattice. In particular, when we consider the integer lattice $\mathbb{Z}^n$ in the sequel, or more generally its scaled versions $p^{-1}\mathbb{Z}^n$ where p is a positive integer, we assume that $x \bmod p^{-1}\mathbb{Z}^n = x - \lfloor x \rfloor_{p^{-1}}$, where $\lfloor x \rfloor_{p^{-1}}$ denotes component-wise rounding down to the nearest integer multiple of $p^{-1}$. In other words, the mod operation with respect to $p^{-1}\mathbb{Z}^n$ maps the point $x \in \mathbb{R}^n$ to the region $p^{-1}[0,1)^n$.

The above construction yields a random ensemble of nested lattice codes with the following desired properties. There is a bijection between

$$\mathbb{Z}_p^n \;\leftrightarrow\; p^{-1}\mathbb{Z}^n \cap [0,1)^n \;\leftrightarrow\; p^{-1}\Lambda \cap G_\Lambda[0,1)^n \;\leftrightarrow\; p^{-1}\Lambda \cap V.$$

The last correspondence follows simply from the fact that both $G_\Lambda[0,1)^n$ and V are fundamental regions of the lattice Λ, i.e., they both tile $\mathbb{R}^n$. Since $C \subseteq \mathbb{Z}_p^n$, the above bijection restricted to C yields

$$C \;\leftrightarrow\; p^{-1}C = \Lambda_1' \cap [0,1)^n \;\leftrightarrow\; \Lambda_1 \cap G_\Lambda[0,1)^n \;\leftrightarrow\; \Lambda_1 \cap V \;\leftrightarrow\; \bar\Lambda_1. \qquad (4)$$

Note that $\bar\Lambda_1 \subseteq p^{-1}\Lambda \cap V$. The bijections above can be specified explicitly in both directions, and we will make use of this fact in the next section. The random codebook $\bar\Lambda_1$ has the following statistical properties. Let $\lambda \in p^{-1}\Lambda \cap V$; then for any fixed codeword $\bar\lambda_1$ of the random codebook,

$$\Pr\big(\bar\lambda_1 = \lambda\big) = \frac{1}{|p^{-1}\Lambda \cap V|} = p^{-n}.$$

Let $\lambda, \lambda' \in p^{-1}\Lambda \cap V$; then for any two distinct codewords $\bar\lambda_1, \bar\lambda_1'$,

$$\Pr\big(\bar\lambda_1 = \lambda,\, \bar\lambda_1' = \lambda'\big) = \frac{1}{|p^{-1}\Lambda \cap V|^2} = p^{-2n}. \qquad (5)$$

In other words, the construction in this section yields an ensemble of nested lattice codes such that each codeword of the random codebook $\bar\Lambda_1$ is uniformly distributed over $p^{-1}\Lambda \cap V$ and the codewords of $\bar\Lambda_1$ are pairwise independent. These two properties suffice to prove the random coding result of this paper.
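The construction above translates almost line by line into code. The following sketch is illustrative only: it uses a scaled integer lattice as the coarse lattice Λ (whereas the paper requires a Rogers- and Poltyrev-good Λ), toy values for n, k, p, and component-wise rounding as the lattice quantizer, which is exact only for such rectangular lattices.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k, p = 4, 2, 5                       # toy parameters; the paper takes n large, p prime, k <= n

G_lattice = 3.0 * np.eye(n)             # coarse lattice Lambda = G_lattice * Z^n (integer-lattice stand-in)

def mod_lattice(x, G):
    """x mod Lambda = x - Q_Lambda(x). For Lambda = c*Z^n the nearest point is
    component-wise rounding; a general Lambda would need a closest-point search."""
    return x - G @ np.round(np.linalg.solve(G, x))

# Construction A: random generator over Z_p, linear code C, lift to R^n (fine lattice Lambda_1).
G = rng.integers(0, p, size=(n, k))
messages = [np.array(w) for w in np.ndindex(*([p] * k))]     # all w in Z_p^k
C = [(G @ w) % p for w in messages]                           # c = G w  (mod p)
fine_reps = [G_lattice @ (c / p) for c in C]                  # one representative of each coset of Lambda in Lambda_1

# Dither v: uniform over p^{-1} Lambda intersected with V, then fold codewords into V.
v = mod_lattice(G_lattice @ (rng.integers(0, p, size=n) / p), G_lattice)
codebook = [mod_lattice(v + x, G_lattice) for x in fine_reps]

print("rate R = (k/n) log p = %.3f nats/dim, %d messages" % (k / n * np.log(p), p ** k))
```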
IV. ENCODING, MAPPING AND DECODING

The above construction yields a random ensemble of nested lattice pairs $\Lambda \subseteq \Lambda_1$ with coding rate

$$R = \frac{1}{n}\log|\bar\Lambda_1|,$$

which can be tuned by choosing the precise magnitudes of k and p. In this ensemble, the coarse lattice Λ is fixed and the fine lattice $\Lambda_1$ is randomized. It has been shown in [9] that, with high probability, a nested lattice pair $(\Lambda, \Lambda_1)$ in this ensemble is such that both Λ and $\Lambda_1$ are Rogers- and Poltyrev-good. For quantization, we fix one such good member of the ensemble and use it at all the relay nodes. For transmission, we draw a random nested lattice codebook from this ensemble, independently at each relay.

The mapping between the quantization and transmission codebooks at each relay is specified below.

Source: The source has $p^k$ messages, where p is prime and $k \le n$. The messages are represented as length-k vectors over the finite field $\mathbb{Z}_p$ and mapped to a random nested lattice codebook $\bar\Lambda_1$ following the construction in Section III. In the construction, the coarse lattice Λ is scaled such that its second moment is $\sigma^2(\Lambda^T) = P - \epsilon(\Lambda)$, where $\Lambda^T$ denotes the scaled version of the lattice Λ used to satisfy the power constraint. Here $\epsilon(\Lambda) \to 0$ as n increases, and by choosing it carefully we can ensure that every codeword of $\bar\Lambda_1$ satisfies the power constraint. The information transmission rate is $R = \frac{1}{n}\log p^k$. Let us denote by $x_s(w)$, $w \in \{1, \ldots, e^{nR}\}$, the random transmit codeword corresponding to message w of the source node.

Relays: Relay node j receives the signal $y_j$. The signal $y_j$ is first quantized using a nested lattice codebook generated by the construction in Section III. It is shown in [9] that this construction yields nested lattices whose fine lattice is Rogers-good with high probability when k grows like $\log n$; the coarse lattice is both Rogers- and Poltyrev-good by construction. We fix one such good nested lattice pair $(\Lambda^Q, \Lambda_1^Q)$ and use the corresponding codebook $\bar\Lambda^Q = \Lambda_1^Q \bmod \Lambda^Q$ at all the relay nodes for quantization. Therefore our quantization codebook is not random but fixed, and moreover it is the same at all relay nodes. We assume that the nested lattice pair $(\Lambda^Q, \Lambda_1^Q)$ has been generated using the following parameters. Let

$$D_s = \max_j \sum_i h_{ij}^2\, P \qquad (6)$$

denote the maximum received signal power over the nodes of the network. The coarse lattice $\Lambda^Q$ is a scaled version of the lattice Λ such that

$$\sigma^2(\Lambda^Q) = \mu D_s + \sigma^2 \qquad (7)$$

for a constant μ > 0. Recall that σ² is the noise variance. We denote the generator matrix of the scaled coarse lattice $\Lambda^Q$ by $G_{\Lambda^Q}$. The parameters $k_r$ and $p_r$ are chosen such that $k_r = \log n$ and $p_r$ is the prime number¹ such that $p_r^{k_r} = e^{nR_r}$, where

$$R_r = \log\frac{\sigma^2(\Lambda^Q)}{\sigma^2}. \qquad (8)$$

Note that since $R_r$ is independent of n, $p_r = e^{nR_r/\log n}$, i.e., $p_r \to \infty$ as $n \to \infty$. It can be shown that with the choice (8) for $R_r$, the second moment of $\Lambda_1^Q$ satisfies $\sigma^2(\Lambda_1^Q) \to \sigma^2$ as n increases. This is a consequence of the fact that both $\Lambda^Q$ and $\Lambda_1^Q$ are Rogers-good. Therefore, we are effectively quantizing at the noise level. The quantized signal is given by

$$\hat y_j = Q_{\Lambda_1^Q}(y_j + u_j) \bmod \Lambda^Q,$$

where $u_j$ is a random dither, known at the destination node and uniformly distributed over the Voronoi region of the fine lattice $\Lambda_1^Q$. The dithers $u_j$ are independent across nodes.

¹More precisely, one should take $p_r$ to be the largest prime number such that $p_r \le e^{nR_r/k_r}$. When n is large, the difference becomes negligible.

Map and forward: Let us scale the coarse lattice Λ such that its second moment is $\sigma^2(\Lambda^T) = P - \epsilon(\Lambda)$, and let $G_{\Lambda^T}$ denote the generator matrix of this scaled coarse lattice. The quantized signal $\hat y_j$ at relay j is mapped to the transmitted signal $x_j$ by the mapping

$$x_j = \Big[\, G_{\Lambda^T}\, \frac{1}{p_r}\Big( G_j\, \big(p_r\big(G_{\Lambda^Q}^{-1}\hat y_j \bmod \mathbb{Z}^n\big)\big) \bmod p_r\mathbb{Z}^n \Big) + v_j \,\Big] \bmod \Lambda^T, \qquad (9)$$

where $G_j$ is an $n \times n$ random matrix with entries uniformly and independently distributed over $\{0, 1, \ldots, p_r - 1\}$, and $v_j$ is a random vector uniformly distributed over $p_r^{-1}\Lambda^T \cap V^T$, where $V^T$ is the Voronoi region of $\Lambda^T$. The $G_j$ and $v_j$ are independent across relay nodes. We index the $e^{nR_r}$ codewords of $\bar\Lambda^Q$ as $\hat y^{(k)}$, $k \in \{1, \ldots, e^{nR_r}\}$; the sequence that the codeword $\hat y^{(k)}$ is mapped to at relay j in (9) is denoted by $x_j^{(k)}$.
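As an illustration of the relay's quantization step in isolation (before the mapping (9) is applied), the sketch below uses rectangular stand-ins for the nested pair $(\Lambda^Q, \Lambda_1^Q)$; the paper's lattices are Rogers/Poltyrev-good and would need a true closest-point decoder, so this is only a toy model of $\hat y_j = Q_{\Lambda_1^Q}(y_j + u_j) \bmod \Lambda^Q$.

```python
import numpy as np

rng = np.random.default_rng(1)
n, sigma = 4, 0.1

def nearest_point(x, G):
    """Q_Lambda(x) for a rectangular lattice Lambda = G * Z^n (round in lattice coordinates);
    a Rogers/Poltyrev-good lattice would need a genuine closest-point decoder here."""
    return G @ np.round(np.linalg.solve(G, x))

def mod_lattice(x, G):
    return x - nearest_point(x, G)

# Toy stand-ins for the nested quantization pair: the fine lattice is scaled to roughly
# the noise level (quantizing at the noise level, cf. (8)), the coarse lattice to cover
# the received-signal power (cf. (6)-(7)); 0.1*Z^n contains 4*Z^n, so the pair nests.
G_fine   = sigma * np.eye(n)            # Lambda_1^Q
G_coarse = 4.0   * np.eye(n)            # Lambda^Q

y = rng.normal(size=n)                                        # received signal at the relay
u = G_fine @ rng.uniform(-0.5, 0.5, size=n)                   # dither, uniform over the fine Voronoi region
y_hat = mod_lattice(nearest_point(y + u, G_fine), G_coarse)   # hat{y}_j = Q(y_j + u_j) mod Lambda^Q
print(y_hat)
```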
Proposition 4.1: The above mapping has the following properties:
1) At each relay j, the transmitted sequences satisfy $x_j \in \bar\Lambda_1^{(j)}$, where $\bar\Lambda_1^{(j)}$ is a nested lattice codebook.
2) The mapping induces a pairwise independent and uniform distribution over $p_r^{-1}\Lambda^T \cap V^T$. Formally, each quantization codeword $\hat y^{(k)} \in \bar\Lambda^Q$ is mapped uniformly at random to the set $p_r^{-1}\Lambda^T \cap V^T$, and two codewords $\hat y^{(k)}, \hat y^{(k')} \in \bar\Lambda^Q$ with $k \ne k'$ are mapped independently.
3) The mapping induces an independent distribution across the relays.

The proposition says that the quantization codebook at each relay is independently mapped to a random nested lattice codebook from the ensemble constructed in Section III. The proof is based on the bijection given in (4): there is a one-to-one correspondence between the codebook $\bar\Lambda^Q$ and its underlying finite-field codebook $C^Q$. The mapping $p_r\big(G_{\Lambda^Q}^{-1}\hat y_j \bmod \mathbb{Z}^n\big)$ takes the codeword $\hat y_j \in \bar\Lambda^Q$ to its corresponding codeword in $C^Q$. This codeword in $C^Q$ is then mapped to a random finite-field codebook $C_j = \{\tilde c : \tilde c = G_j c,\; c \in C^Q\}$. We then form the nested lattice codebook $\bar\Lambda_1^{(j)}$ corresponding to $C_j$, following again the construction of Section III. The second property follows by observing that the random matrix $G_j$ maps every nonzero vector $c \in C^Q$ uniformly at random to a finite-field vector in $\mathbb{Z}_{p_r}^n$. The third property follows from the independence of the $G_j$'s and $v_j$'s for different nodes.

The mapping in (9) can be simplified to the form

$$x_j = \big[\, G_{\Lambda^T}\, G_j\, G_{\Lambda^Q}^{-1}\, \hat y_j + v_j \,\big] \bmod \Lambda^T.$$

Effectively, it takes the quantization codebook $\bar\Lambda^Q$, expands it by multiplying with a random matrix whose entries are large, of the order of $p_r$, and then folds the result into the Voronoi region of $\Lambda^T$. Since the entries of $G_j$ are potentially very large, even if two codewords are close in $\bar\Lambda^Q$, they are mapped independently to codewords of the transmit codebook. Note that the complexity of the mapping is polynomial in n, while the random mapping of [1] has exponential complexity in n.
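The following sketch mirrors the finite-field route through (9) with made-up rectangular lattices and toy parameters (again not the paper's Rogers/Poltyrev-good lattices): recover the $\mathbb{Z}_{p_r}^n$ representative of a quantization codeword, multiply it by the random matrix $G_j$ modulo $p_r$, lift the result back with $G_{\Lambda^T}$, add the dither $v_j$, and fold into $V^T$. The cost is a single $n \times n$ modular matrix multiplication, which is the polynomial-complexity point made above.

```python
import numpy as np

rng = np.random.default_rng(2)
n, p_r = 4, 7                           # toy values; in the paper p_r grows with n (cf. (8))

def mod_lattice(x, G):
    """x mod Lambda for a rectangular lattice Lambda = G * Z^n."""
    return x - G @ np.round(np.linalg.solve(G, x))

G_Q = 4.0 * np.eye(n)                   # coarse quantization lattice Lambda^Q (stand-in)
G_T = 2.0 * np.eye(n)                   # coarse transmission lattice Lambda^T (stand-in)

# A quantization codeword y_hat lies in p_r^{-1} Lambda^Q intersected with V^Q.
c = rng.integers(0, p_r, size=n)        # its finite-field representative in Z_{p_r}^n
y_hat = mod_lattice(G_Q @ (c / p_r), G_Q)

# Structured map (9): recover c from y_hat, multiply by the random matrix G_j over Z_{p_r},
# lift back with G_T, add the dither v_j, fold into V^T.
G_j = rng.integers(0, p_r, size=(n, n))
v_j = mod_lattice(G_T @ (rng.integers(0, p_r, size=n) / p_r), G_T)

c_rec = np.round(p_r * (np.linalg.solve(G_Q, y_hat) % 1.0)).astype(int) % p_r   # c from y_hat
x_j = mod_lattice(G_T @ ((G_j @ c_rec) % p_r / p_r) + v_j, G_T)
print(x_j)
```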

Destination: Given its received signal $y_d$, together with the knowledge of all codebooks, mappings, dithers and channel gains, the decoder performs a consistency check to recover the transmitted message. For each relay j and quantization codeword $\hat y^{(k_j)}$, it first forms the signal

$$\tilde y_j^{(k_j)} = \big[\hat y^{(k_j)} - u_j\big] \bmod \Lambda^Q. \qquad (10)$$

Note that for $j \in \mathcal{N}_l$,

$$\tilde y_j = \big[\hat y_j - u_j\big] \bmod \Lambda^Q = \big[Q_{\Lambda_1^Q}(y_j + u_j) - u_j\big] \bmod \Lambda^Q = \big[y_j - \big((y_j + u_j) \bmod \Lambda_1^Q\big)\big] \bmod \Lambda^Q = \Big[\sum_{i \in \mathcal{N}_{l-1}} h_{ij}\, x_i + z_j - \tilde u_j\Big] \bmod \Lambda^Q,$$

where $\tilde u_j = (y_j + u_j) \bmod \Lambda_1^Q$. The vector $\tilde u_j$ is independent of $y_j$ and is uniform over the Voronoi region of $\Lambda_1^Q$ (Crypto Lemma, see [6]).

The decoder then forms the set $\hat W$ of messages $\hat w$ for which there exist some indices $\{k_j\}$ such that $\big(x_s(\hat w),\, y_d,\, \{\tilde y_j^{(k_j)}, x_j^{(k_j)}\}_{j \in \mathcal{N}}\big) \in \tilde A_\epsilon$, where $\tilde A_\epsilon$ denotes consistency and $\mathcal{N}$ denotes the set of relays. We define consistency as follows: for a given set of indices $\{k_j\}_{j \in \mathcal{N}}$, we say $\big(x_s(\hat w), y_d, \{\tilde y_j^{(k_j)}, x_j^{(k_j)}\}_{j \in \mathcal{N}}\big) \in \tilde A_\epsilon$ if

$$\Big\| \Big[\, \tilde y_j^{(k_j)} - \sum_{i \in \mathcal{N}_{l-1}} h_{ij}\, x_i^{(k_i)} \,\Big] \bmod \Lambda^Q \Big\|^2 \le n\,\sigma_c^2 \quad \text{for all } j \in \mathcal{N}_l,\; 1 \le l \le l_d, \qquad (11)$$

where for convenience of notation we have denoted $x_i^{(k_i)} = x_s(\hat w)$ for $i \in \mathcal{N}_0$ and $\tilde y_j^{(k_j)} = y_d$ for $j \in \mathcal{N}_{l_d}$. We choose $\sigma_c^2 = (1+\epsilon)\, 2\sigma^2$ for a constant ε > 0 that can be taken arbitrarily small; this is slightly larger than the combined variance of the channel noise and the quantization error.

We can interpret the consistency check as follows. For each layer $l = 1, \ldots, l_d$, the decoder picks a set of potential quantized received sequences $\{\hat y^{(k_j)}\}_{j \in \mathcal{N}_l}$ and the transmit sequences corresponding to them, $\{x_j^{(k_j)}\}_{j \in \mathcal{N}_l}$. It then checks, for each layer l, whether the inputs and outputs are consistent, i.e., whether the examined inputs of layer l (transmitted by the nodes in $\mathcal{N}_{l-1}$) could have generated the examined outputs $\{\hat y^{(k_j)}\}_{j \in \mathcal{N}_l}$. Note that the termination conditions are known: $x_s$ is known for the message being tested, and $y_d$ is the observed sequence at the destination. Therefore, effectively the decoder checks whether there exists a plausible set of input and output sequences at each relay that, under the message $\hat w$, yields the observation $y_d$. Note also that the definition of consistency in (11) is closely related to weak typicality; indeed, it is a variant of the weak typicality condition for Gaussian vectors. Therefore, effectively our decoder is a typicality decoder.
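A minimal sketch of the consistency test (11) at a single node, with made-up gains and a made-up effective disturbance standing in for $z_j - \tilde u_j$ (the mod-$\Lambda^Q$ fold is omitted for simplicity): the true layer inputs pass the test with high probability, while an unrelated input fails it.

```python
import numpy as np

def consistent(y_tilde, xs, h, sigma_c2):
    """Consistency test (11) at one node: do the examined layer inputs xs, through the
    gains h, explain the dither-removed quantized output y_tilde within n*sigma_c^2?"""
    n = len(y_tilde)
    residual = y_tilde - sum(h_i * x_i for h_i, x_i in zip(h, xs))
    return float(residual @ residual) <= n * sigma_c2

# Made-up single-node check: two in-layer inputs, disturbance standing in for z_j - u~_j.
rng = np.random.default_rng(3)
n, sigma2 = 1000, 0.1
x1, x2 = rng.normal(size=n), rng.normal(size=n)
h = [0.7, -1.1]
y_tilde = h[0] * x1 + h[1] * x2 + rng.normal(scale=np.sqrt(2 * sigma2), size=n)
sigma_c2 = 1.1 * 2 * sigma2                # threshold slightly above the 2*sigma^2 disturbance level
print(consistent(y_tilde, [x1, x2], h, sigma_c2))                       # true inputs: passes w.h.p.
print(consistent(y_tilde, [rng.normal(size=n), x2], h, sigma_c2))       # wrong input: fails w.h.p.
```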
V. ERROR ANALYSIS

An error occurs if the transmitted message w is not in the list, i.e., $w \notin \hat W$, or if some $w' \ne w$ is also in the list $\hat W$. It is easy to show that the correct message w is in the list with high probability, so we concentrate on the probability of an error due to w not being the unique message in $\hat W$. This probability can be upper bounded through the pairwise error probabilities, i.e., $P_e \le e^{nR}\, \Pr(w \to w')$, where $\Pr(w \to w')$ is given by

$$\Pr\Big( \exists\, \{k_j'\}_{j \in \mathcal{N}} \;\text{s.t.}\; \big(x_s(w'),\, y_d,\, \{\tilde y_j^{(k_j')}, x_j^{(k_j')}\}_{j \in \mathcal{N}}\big) \in \tilde A_\epsilon \Big) \le \sum_{k_1', \ldots, k_N'} \Pr\Big( \big(x_s(w'),\, y_d,\, \{\tilde y_j^{(k_j')}, x_j^{(k_j')}\}_{j \in \mathcal{N}}\big) \in \tilde A_\epsilon \Big).$$

We can condition on the event that the correct message produces indices $\{k_j\}$, and since this is a generic set of indices, we can carry out the entire calculation conditioned on it and then average over it. The summation over the N indices $k_1', \ldots, k_N'$ above can be rearranged to yield

$$\sum_{\Omega}\; \sum_{\substack{k_j',\, j \in \mathcal{N}_\Omega \\ k_j' \ne k_j}} \underbrace{\Pr\Big( \big(x_s(w'),\, y_d,\, \{\tilde y_j^{(k_j')}, x_j^{(k_j')}\}\big) \in \tilde A_\epsilon \;\text{ with }\; k_j' = k_j,\; j \in \mathcal{N}_{\Omega^c} \Big)}_{P_1}, \qquad (12)$$

where Ω is a source-destination cut of the network, i.e., $\Omega = \{s\} \cup \mathcal{N}_\Omega$ where $\mathcal{N}_\Omega$ is a subset of the relaying nodes $\mathcal{N}$. The first summation runs over all possible source-destination cuts Ω of the network, or equivalently over all subsets $\mathcal{N}_\Omega$ of the relaying nodes $\mathcal{N}$. Following [1], the rearrangement of the summation above can be interpreted as introducing a notion of distinguishability. The relay nodes in Ω are the ones that can distinguish between w and w' because $\tilde y_j^{(k_j')} \ne \tilde y_j^{(k_j)}$, while the relay nodes in Ω^c cannot distinguish between w and w' because $\tilde y_j^{(k_j')} = \tilde y_j^{(k_j)}$. The source node s is naturally in the distinguishability set Ω, and the destination node is in Ω^c. Thus, we sum over all possible cases for the distinguishability set Ω.

Now, let us examine the probability denoted by $P_1$. For a given set of indices $\{k_j'\}_{j \in \mathcal{N}}$ such that $k_j' = k_j$ for $j \in \mathcal{N}_{\Omega^c}$ and $k_j' \ne k_j$ for $j \in \mathcal{N}_\Omega$, the consistency condition in (11) takes two different forms depending on whether $j \in \mathcal{N}_\Omega$ or $j \in \Omega^c$.

For nodes $j \in \Omega^c$ in layer l, the condition is equivalent to

$$\Big\| \Big[ \sum_{i \in \Omega_{l-1}} h_{ij}\big(x_i^{(k_i')} - x_i^{(k_i)}\big) + z_j - \tilde u_j \Big] \bmod \Lambda^Q \Big\|^2 \le n\,\sigma_c^2, \qquad (13)$$

where $\Omega_l = \Omega \cap \mathcal{N}_l$; we denote this event by $A_j$. For nodes $j \in \mathcal{N}_\Omega$ in layer l, the condition yields

$$\Big\| \Big[\, \tilde y_j^{(k_j')} - \sum_{i \in \Omega^c_{l-1}} h_{ij}\, x_i^{(k_i)} - \sum_{i \in \Omega_{l-1}} h_{ij}\, x_i^{(k_i')} \,\Big] \bmod \Lambda^Q \Big\|^2 \le n\,\sigma_c^2, \qquad (14)$$

where $\Omega^c_l = \Omega^c \cap \mathcal{N}_l$; we denote this event by $B_j$. We have

$$P_1 = \Pr\big( \{A_j,\, j \in \Omega^c\},\, \{B_j,\, j \in \mathcal{N}_\Omega\} \big) = \Pr\big( B_j,\, j \in \mathcal{N}_\Omega \,\big|\, A_j,\, j \in \Omega^c \big)\, \Pr\big( A_j,\, j \in \Omega^c \big).$$

Note that due to Proposition 4.1, the sequences $x_i^{(k_i)}, x_i^{(k_i')}$, $i \in \{s\} \cup \mathcal{N}$, appearing in (13) and (14) are a set of independent random variables, uniformly distributed over $p_r^{-1}\Lambda^T \cap V^T$. Due to the dithering in (10), $\tilde y_j^{(k_j')}$ in (14) is uniformly distributed over the Voronoi region of the fine lattice $\Lambda_1^Q$ around the quantization lattice point $\hat y^{(k_j')}$.
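The uniformity claims above rest on the Crypto Lemma of [6]: adding a dither that is uniform over a fundamental region and reducing modulo the lattice yields an output that is uniform over that region and whose distribution does not depend on the other term. The toy experiment below (one-dimensional lattice, made-up parameters) illustrates this numerically.

```python
import numpy as np

rng = np.random.default_rng(4)

def mod_lattice(x, c):
    """x mod Lambda for the 1-D lattice Lambda = c*Z: fold into the Voronoi cell [-c/2, c/2)."""
    return x - c * np.round(x / c)

# Crypto Lemma (cf. [6]) illustration: for ANY fixed x, (x + U) mod Lambda is uniform over
# the Voronoi region when the dither U is, so the folded result reveals nothing about x.
c, trials = 1.0, 200_000
U = rng.uniform(-c / 2, c / 2, size=trials)
for x in (0.0, 0.37, 123.456):                    # arbitrary fixed offsets
    folded = mod_lattice(x + U, c)
    print(x, np.histogram(folded, bins=4, range=(-c / 2, c / 2))[0] / trials)
# Each line prints roughly [0.25 0.25 0.25 0.25], independently of x, which is what makes
# the folded dither u~_j in (10) independent of y_j and uniform over the Voronoi region.
```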

We will first bound the probability $\Pr(A_j,\, j \in \Omega^c)$ by conditioning on the event defined in the following lemma.

Lemma 5.1: Define E to be the event

$$E = \Big\{ \exists\, j \in \mathcal{N} \cup \{d\},\; \exists\, \{k_i, k_i'\} \;\text{s.t.}\; \sum_{i} h_{ij}\big(x_i^{(k_i')} - x_i^{(k_i)}\big) + z_j - \tilde u_j \notin V^Q \Big\}.$$

Then $\Pr(E) \to 0$ as $n \to \infty$.

When E occurs, we declare an error; by the above lemma this adds only a vanishing term to the decoding error probability. Conditioning on the complement of E allows us to remove the mod operation with respect to $\Lambda^Q$ in (13): given $E^c$, the condition $A_j$ is equivalent to

$$A_j' = \Big\{ \Big\| \sum_{i \in \Omega_{l-1}} h_{ij}\big(x_i^{(k_i')} - x_i^{(k_i)}\big) + z_j - \tilde u_j \Big\|^2 \le n\,\sigma_c^2 \Big\}.$$

Therefore,

$$\Pr\big(A_j,\, j \in \Omega^c \mid E^c\big) = \Pr\big(A_j',\, j \in \Omega^c \mid E^c\big) \le \frac{\Pr\big(A_j',\, j \in \Omega^c\big)}{\Pr(E^c)}.$$

We upper bound the last probability in the following lemma.

Lemma 5.2:

$$\Pr\Big( \Big\| \sum_{i \in \Omega_{l-1}} h_{ij}\big(x_i^{(k_i')} - x_i^{(k_i)}\big) + z_j - \tilde u_j \Big\|^2 \le n\,\sigma_c^2,\;\; \forall\, j \in \Omega^c \Big) \le e^{-n\big( I(X_\Omega;\, H X_\Omega + Z_{\Omega^c}) - |\Omega^c| \log(1+\epsilon)\big) + o(n)},$$

where $X_i$, $i \in \Omega$, are i.i.d. Gaussian random variables $\mathcal{N}(0, P)$, $Z_j$, $j \in \Omega^c$, are i.i.d. Gaussian random variables whose variance is the combined noise-plus-quantization variance $2\sigma^2$, and H is the channel transfer matrix from the nodes in Ω to the nodes in Ω^c.

The proof of the lemma involves two main steps. Recall that $x_i^{(k_i)}, x_i^{(k_i')}$, $i \in \Omega$, are discrete random variables, independently and uniformly distributed over $p_r^{-1}\Lambda^T \cap V^T$. We first show that the probability in the lemma is upper bounded by

$$e^{n\epsilon'}\; \Pr\Big( \Big\| \sum_{i \in \Omega_{l-1}} h_{ij}\big(x_i' - x_i''\big) + z_j' \Big\|^2 \le n\,\sigma_c^2,\;\; \forall\, j \in \Omega^c \Big), \qquad (15)$$

where $x_i', x_i''$, $i \in \Omega$, and $z_j'$, $j \in \Omega^c$, are all independent Gaussian random vectors with $x_i', x_i'' \sim \mathcal{N}(0, \sigma_x^2 I_n)$ and $z_j' \sim \mathcal{N}(0, \sigma_{z'}^2 I_n)$, and, as n increases, $\sigma_x^2 \to \sigma^2(\Lambda^T)$ if $\Lambda^T$ is Rogers-good and $\sigma_{z'}^2 \to \sigma^2(\Lambda_1^Q) + \sigma^2$ if $\Lambda_1^Q$ is Rogers-good, which is the case here; moreover, $\epsilon' \to 0$ as $n \to \infty$, again provided $\Lambda^T$ and $\Lambda_1^Q$ are Rogers-good. Given this translation to Gaussian distributions, the problem becomes very similar to the one for Gaussian codebooks in [1]. The second step is to bound the probability in (15) by following an approach similar to [1].

We now upper bound the term²

$$\sum_{\substack{k_j',\, j \in \mathcal{N}_\Omega \\ k_j' \ne k_j}} \Pr\big( B_j,\, j \in \mathcal{N}_\Omega \,\big|\, A_j,\, j \in \Omega^c \big) \qquad (16)$$

by first removing the condition $k_j' \ne k_j$ in the summation and then noting that this term equals $e^{|\mathcal{N}_\Omega| n R_r}$ times the probability $\Pr(B_j,\, j \in \mathcal{N}_\Omega \mid A_j,\, j \in \Omega^c)$ evaluated for a randomly and independently chosen set of indices $\{k_j'\}_{j \in \mathcal{N}_\Omega}$. When each of the indices $\{k_j'\}_{j \in \mathcal{N}_\Omega}$ is chosen uniformly at random, $\tilde y_j^{(k_j')}$ in (14) is a random variable uniformly distributed over $V^Q$. This is due to the dithering over the Voronoi region of the fine lattice and the mod operation with respect to the coarse lattice $\Lambda^Q$ in (10). Moreover, by the Crypto Lemma,

$$\nu_j = \Big[\, \tilde y_j^{(k_j')} - \sum_{i \in \Omega^c_{l-1}} h_{ij}\, x_i^{(k_i)} - \sum_{i \in \Omega_{l-1}} h_{ij}\, x_i^{(k_i')} \,\Big] \bmod \Lambda^Q$$

is also uniformly distributed over $V^Q$ and is independent of $\sum_{i \in \Omega^c_{l-1}} h_{ij}\, x_i^{(k_i)} + \sum_{i \in \Omega_{l-1}} h_{ij}\, x_i^{(k_i')}$. This is because $\tilde y_j^{(k_j')}$ is independent of that term, which in turn holds because the index $k_j'$ and the dither $u_j$ are chosen independently of everything else. Therefore (16) is upper bounded by

$$\sum_{k_j',\, j \in \mathcal{N}_\Omega} \Pr\big( B_j,\, j \in \mathcal{N}_\Omega \,\big|\, A_j,\, j \in \Omega^c \big) = e^{|\mathcal{N}_\Omega| n R_r}\; \Pr\big(\|\nu_j\|^2 \le n\,\sigma_c^2\big)^{|\mathcal{N}_\Omega|} \le e^{|\mathcal{N}_\Omega|\, n\,(\log(1+\epsilon)\, +\, \log 2\, +\, o(1))}, \qquad (17)$$

where the last inequality follows from the lemma below.

Lemma 5.3: Let ν be uniformly distributed over $V^Q$. Then

$$\Pr\big(\|\nu\|^2 \le n\,\sigma_c^2\big) \le e^{-n\big(\log\big(1 + \frac{\sigma^2(\Lambda^Q)}{\sigma_c^2}\big) - o(1)\big)}.$$

Combining the results of Lemma 5.2 and (17), and carrying out the summation over all possible source-destination cuts, proves the main result of this paper stated in Theorem 2.1.
REFERENCES

[1] A. S. Avestimehr, S. N. Diggavi and D. N. C. Tse, "Wireless network information flow: a deterministic approach," e-print, arxiv.org.
[2] T. M. Cover and A. El Gamal, "Capacity theorems for the relay channel," IEEE Trans. on Information Theory, vol. 25, no. 5, pp. 572-584, September 1979.
[3] G. Kramer, I. Marić, and R. Yates, Cooperative Communications, Foundations and Trends in Networking, 2006.
[4] B. Nazer and M. Gastpar, "Computation over multiple-access channels," IEEE Trans. on Information Theory, vol. 53, no. 10, October 2007.
[5] W. Nam and S.-Y. Chung, "Relay networks with orthogonal components," in Proc. 46th Annual Allerton Conference, Sept. 2008.
[6] U. Erez and R. Zamir, "Achieving 1/2 log(1+SNR) on the AWGN channel with lattice encoding and decoding," IEEE Trans. on Information Theory, vol. 50, no. 10, Oct. 2004.
[7] U. Erez, S. Litsyn and R. Zamir, "Lattices which are good for (almost) everything," IEEE Trans. on Information Theory, vol. 51, no. 10, Oct. 2005.
[8] H.-A. Loeliger, "Averaging bounds for lattices and linear codes," IEEE Trans. on Information Theory, vol. 43, Nov. 1997.
[9] D. Krithivasan and S. S. Pradhan, "A proof of the existence of good lattices," tech. rep., University of Michigan, July 2007.
[10] A. Özgür and S. Diggavi, "Approximately achieving Gaussian relay network capacity with lattice codes," e-print, arxiv.org, May 2010.

²An alternative way to upper bound (16) is to randomly choose the quantization lattices at each relay instead of using a fixed lattice.
