7402 IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 62, NO. 12, DECEMBER 2016

On the Optimal Fronthaul Compression and Decoding Strategies for Uplink Cloud Radio Access Networks

Yuhan Zhou, Member, IEEE, Yinfei Xu, Student Member, IEEE, Wei Yu, Fellow, IEEE, and Jun Chen, Senior Member, IEEE

Abstract—This paper investigates the compress-and-forward scheme for an uplink cloud radio access network (C-RAN) model, where multi-antenna base stations (BSs) are connected to a cloud-computing-based central processor (CP) via capacity-limited fronthaul links. The BSs compress the received signals with Wyner-Ziv coding and send the representation bits to the CP; the CP performs the decoding of all the users' messages. Under this setup, this paper makes progress toward the optimal structure of the fronthaul compression and CP decoding strategies for the compress-and-forward scheme in the C-RAN. On the CP decoding strategy design, this paper shows that under a sum fronthaul capacity constraint, a generalized successive decoding strategy of the quantization and user message codewords that allows arbitrary interleaved order at the CP achieves the same rate region as the optimal joint decoding. Furthermore, it is shown that a practical strategy of successively decoding the quantization codewords first, then the user messages, achieves the same maximum sum rate as joint decoding under individual fronthaul constraints. On the joint optimization of user transmission and BS quantization strategies, this paper shows that if the input distributions are assumed to be Gaussian, then under joint decoding the optimal quantization scheme for maximizing the achievable rate region is Gaussian. Moreover, Gaussian input and Gaussian quantization with joint decoding achieve to within a constant gap of the capacity region of the Gaussian multiple-input multiple-output (MIMO) uplink C-RAN model.
Finally, this paper addresses the computational aspect of optimizing uplink MIMO C-RAN by showing that under fixed Gaussian input, the sum rate maximization problem over the Gaussian quantization noise covariance matrices can be formulated as a convex optimization problem, thereby facilitating its efficient solution.

Manuscript received January 19, 2016; revised August 19, 2016; accepted October 2016. Date of publication October 13, 2016; date of current version November 2016. This work was supported by the Natural Sciences and Engineering Research Council of Canada. Y. Xu was supported in part by a Grant from the University Grants Committee of the Hong Kong Special Administrative Region, China, under Project AoE/E-02/08 and in part by the National Natural Science Foundation of China. This paper was presented in part at the 2015 IEEE International Symposium on Information Theory.

Y. Zhou was with The Edward S. Rogers Sr. Department of Electrical and Computer Engineering, University of Toronto, Toronto, ON M5S 3G4, Canada. He is now with Qualcomm Technologies Inc., San Diego, CA, USA (e-mail: yzhou@ece.utoronto.ca).

Y. Xu was with the School of Information Science and Engineering, Southeast University, Nanjing, China. He is now with the Department of Information Engineering, Institute of Network Coding, The Chinese University of Hong Kong, Hong Kong (e-mail: yinfeixu@inc.cuhk.edu.hk).

W. Yu is with The Edward S. Rogers Sr. Department of Electrical and Computer Engineering, University of Toronto, Toronto, ON M5S 3G4, Canada (e-mail: weiyu@ece.utoronto.ca).

J. Chen is with the Department of Electrical and Computer Engineering, McMaster University, Hamilton, ON L8S 4L8, Canada (e-mail: junchen@ece.mcmaster.ca).

Communicated by O. Simeone, Associate Editor for Communications.

Digital Object Identifier 10.1109/TIT

Index Terms—Cloud radio access network, multiple-access relay channel, compress-and-forward, fronthaul compression, joint decoding, generalized successive decoding.
I. INTRODUCTION

CLOUD Radio Access Network (C-RAN) is an emerging mobile network architecture in which base stations (BSs) in multiple cells are connected to a cloud-computing based central processor (CP) through wired/wireless fronthaul links. In the deployment of a C-RAN system, the BSs degenerate into remote antenna heads, implementing only radio functionalities such as frequency up/down conversion, sampling, filtering, and power amplification. The baseband operations at the BSs are migrated to the CP. The C-RAN model effectively virtualizes radio-access operations, such as the encoding and decoding of user information and the optimization of radio resources [1]. Advanced joint multicell processing techniques, such as coordinated multi-point (CoMP) and network multiple-input multiple-output (MIMO), can be efficiently supported by the C-RAN architecture, potentially enabling significantly higher data rates than conventional cellular networks [2].

This paper considers the uplink of a MIMO C-RAN system under finite-capacity fronthaul constraints, as shown in Fig. 1, which consists of multiple remote users sending independent messages to the CP through multiple BSs serving as relay nodes. Both the user terminals and the BSs are equipped with multiple antennas. The BSs and the CP are connected via noiseless fronthaul links with finite capacity. This channel model can be thought of as a two-hop relay network with an interference channel between the users and the BSs, followed by a noiseless multiple-access channel between the BSs and the CP. This paper assumes that a compress-and-forward relaying strategy is employed, in which the relaying BSs perform distributed lossy source coding to compress the received signals and forward the representation bits to the CP through digital fronthaul links, and all the user messages are eventually decoded at the CP.
The lossy source coding implemented at the BSs involves Wyner-Ziv coding, typically consisting of quantization followed by binning, in order to achieve high compression efficiency by leveraging the correlation between the received signals across different BSs; this is different from the point-to-point fronthaul compression implemented in today's conventional C-RAN systems. A key question in the design of the compress-and-forward strategy in uplink C-RAN is the optimal input coding strategy

© 2016 IEEE. Personal use is permitted, but republication/redistribution requires IEEE permission.

ZHOU et al.: ON OPTIMAL FRONTHAUL COMPRESSION AND DECODING STRATEGIES FOR UPLINK C-RANs 7403

Fig. 1. The uplink C-RAN model under finite-capacity fronthaul constraints.

at the user terminals, the optimal relaying strategy at the BSs, and the optimal decoding strategy at the CP. Toward this end, this paper restricts attention to the strategy of compressing the received signals at the BSs, followed at the CP by either joint decoding of the quantization and message codewords simultaneously, or generalized successive decoding of the quantization and message codewords in some arbitrary order. Under this assumption, this paper makes the following contributions toward revealing the structure of the optimal compress-and-forward strategy.

First, motivated by the fact that successive decoding is much easier to implement than joint decoding, we seek to understand whether successive decoding at the CP can perform as well as joint decoding. Toward this end, this paper shows that generalized successive decoding indeed achieves the same rate region as joint decoding for an uplink C-RAN model under a sum fronthaul constraint. Further, although not necessarily so for the general rate region, if one focuses on maximizing the sum rate, the particular strategy of successively decoding the quantization codewords first, then the user messages, achieves the optimal sum rate.

Second, we seek to understand the optimal input distribution and quantization schemes in uplink C-RAN. Although it is well known that joint Gaussian strategies are not necessarily optimal, this paper shows that if we fix the input distribution to be Gaussian, then the optimal quantization scheme is Gaussian under joint decoding, and vice versa. Moreover, joint Gaussian signaling can be shown to achieve the capacity region of the Gaussian multiple-input multiple-output (MIMO) uplink C-RAN model to within a constant gap. Finally, this paper makes progress on the computational front by showing that under the joint Gaussian assumption, the optimization of the quantization covariance matrices for maximizing the sum rate can be formulated as a convex optimization problem.
These results suggest that joint Gaussian input signaling and Gaussian quantization is a suitable strategy for the uplink C-RAN.

A. Related Work

The achievable rate region of compress-and-forward with joint decoding for the uplink C-RAN model was first characterized in [3] for a single-transmitter model, then in [4] for the multi-transmitter case. However, the number of rate constraints in the joint decoding rate region grows exponentially with the size of the network [3, Proposition IV.1], which makes the evaluation of the achievable rate computationally prohibitive. The achievable rate region of the compress-and-forward strategy with practical successive decoding, in which the quantization codewords are decoded first, then the user messages are decoded based on the recovered quantization codewords, has also been studied for the uplink C-RAN model [5, Theorem 1]. One of the objectives of this paper is to illustrate the relationship between joint decoding and successive decoding. In the existing literature, the equivalence between these two decoding schemes is first demonstrated for single-source, single-destination, single-relay networks [6, Appendix 16C], then shown for single-source, single-destination, multiple-relay networks [7] under either block-by-block forward decoding or block-by-block backward decoding. This paper further demonstrates that in the case of uplink C-RAN, which is a multiple-source, single-destination, multiple-relay network, the optimality of successive decoding still holds under suitable conditions.

In general, it is challenging to find the optimal joint input and quantization noise distributions that maximize the achievable rate of the compress-and-forward scheme for uplink C-RAN. Gaussian signaling is not necessarily optimal; in particular, in a simple example of uplink C-RAN with one user and two BSs shown in [5], binary input is shown to outperform Gaussian input for a broad range of signal-to-noise ratios (SNRs). However, Gaussian input and Gaussian quantization can be shown to be approximately optimal.
In fact, the uplink C-RAN model is an example of a general Gaussian relay network with multiple sources and a single destination, for which a generalization of compress-and-forward with joint decoding, referred to as the noisy network coding scheme [8]-[11], with Gaussian input and Gaussian quantization can be shown to achieve to within a constant gap of the information theoretic capacity of the overall network. Instead of using noisy network coding, our previous work [12] shows that successive decoding can achieve the sum capacity of uplink C-RAN to within a constant gap if the fronthaul links are subject to a sum capacity constraint. In this work, we further demonstrate that the compress-and-forward scheme with joint decoding can achieve to within a constant gap of the entire capacity region of the uplink C-RAN model with individual fronthaul constraints; the same is true for successive decoding under suitable conditions.

An important theoretical result obtained in this paper is that if the input distributions of the uplink C-RAN model are fixed to be Gaussian, then the Gaussian quantizer is in fact optimal under joint decoding. Finding the optimal quantization for the C-RAN model is related to the mutual information constraint problem [13], for which the entropy power inequality is used to show that Gaussian quantization is optimal for a three-node relay network with Gaussian input. However, it is challenging to extend this approach to the uplink C-RAN model, which has multiple sources. This paper provides a novel proof of the optimality of Gaussian quantization based on the de Bruijn identity and the Fisher information inequality. The idea of the

proof is inspired by the connection between the C-RAN model and the CEO problem in source coding [14], where a source is described to a central unit by remote agents with noisy observations. The solution to the CEO problem is known for the scalar Gaussian case [15], [16]; significant recent progress has been made in the vector case, e.g., [17]. The similarity between the uplink C-RAN model and the CEO problem has been noted in [5], based on which a capacity upper bound for the uplink C-RAN model is established. In this paper, we use techniques for establishing the outer bound for the Gaussian vector CEO problem [18] to prove the optimality of Gaussian quantization. We also remark on the connection between this quantization optimization problem and the information bottleneck method [19], for which the joint Gaussian distribution is shown to be Pareto optimal. The technique used in this paper is a significantly simpler alternative to the enhancement technique given in [20].

This paper also makes progress in observing that the optimization of Gaussian quantization noise covariance matrices for maximizing the weighted sum rate of uplink C-RAN can be reformulated as a convex optimization problem. The quantization noise covariance optimization problem for uplink C-RAN has been considered extensively in the literature. Various optimization algorithms have been developed to maximize the achievable rates of the compress-and-forward scheme for the case of either successive decoding of the quantization codewords followed by the user messages [21], [22], or joint decoding of the quantization codewords and user messages simultaneously [23]. In particular, a zero-duality gap result has been shown for the weighted sum rate maximization problem under a sum fronthaul capacity constraint in [21], based on a time-sharing argument, to facilitate the algorithm design for searching for optimal quantization noise covariance matrices. However, the optimization problems formulated in these works, i.e., [21]-[23], are inherently nonconvex; hence only locally convergent algorithms are obtained.
Instead, this paper provides a convex formulation of the problem that allows globally optimal Gaussian quantization noise covariance matrices to be found. Note that here the optimization of the quantization noise covariance matrix is performed under fixed Gaussian input. The joint optimization of the input signal and quantization noise covariance matrices remains a computationally difficult problem [24].

B. Main Contributions

This paper establishes several information theoretic results on the compress-and-forward scheme for the uplink MIMO C-RAN model with finite-capacity fronthaul links. A summary of our main contributions is as follows.

This paper demonstrates that generalized successive decoding for compress-and-forward, which allows the decoding of the quantization and user message codewords in an arbitrary order, can achieve the same rate region as joint decoding for compress-and-forward under a sum fronthaul capacity constraint. Further, successive decoding of the quantization codewords first, then the user message codewords, can achieve the same maximum sum rate as joint decoding under individual fronthaul constraints.

This paper shows that under Gaussian input and Gaussian quantization, compress-and-forward with joint decoding achieves to within a constant gap of the capacity region of the uplink MIMO C-RAN model. Combining with the result above, the same constant-gap result also holds for generalized successive decoding under a sum fronthaul constraint and for successive decoding for sum rate maximization.

This paper shows that under fixed Gaussian input, Gaussian quantization maximizes the achievable rate region under joint decoding. Combining with the optimality result for successive decoding, this also implies that under fixed Gaussian input, Gaussian quantization is optimal for generalized successive decoding under a sum fronthaul constraint and for successive decoding for sum rate maximization.
Under joint Gaussian signaling and Gaussian quantization, the optimization of quantization noise covariance matrices for maximizing the weighted sum rate under joint decoding and for maximizing the sum rate under successive decoding can be formulated as convex optimization problems, which facilitates their efficient solution.

C. Paper Organization and Notation

The rest of the paper is organized as follows. Section II introduces the channel model for the uplink MIMO C-RAN and characterizes the achievable rate regions for compress-and-forward schemes with joint decoding and generalized successive decoding, respectively. Section III demonstrates the rate-region optimality of generalized successive decoding under a sum fronthaul constraint and the sum-rate optimality of successive decoding. Section IV focuses on establishing the optimality of Gaussian quantizers with joint decoding under Gaussian input. In addition, Section IV also establishes the approximate capacity of the uplink MIMO C-RAN model to within a constant gap and shows the convex formulation of the weighted sum rate maximization problems over the quantization noise covariance matrices. Section V concludes the paper.

Notation: Boldface letters denote vectors or matrices, where context should make the distinction clear. Superscripts T, †, and −1 denote the transpose, Hermitian transpose, and matrix inverse operators; E[·] and Tr(·) denote the expectation and matrix trace operators; co(·) denotes the convex closure operation; p(·) denotes the probability distribution function in this paper. We use X_i^j = (X_i, X_{i+1}, ..., X_j) to denote a matrix with j − i + 1 columns, for i ≤ j. For a vector/matrix X, X_S denotes a vector/matrix with elements whose indices are elements of S. Given matrices {X_1, ..., X_L}, diag({X_l}_{l=1}^L) denotes the block diagonal matrix formed with X_l on the diagonal. For random vectors X and Y, J(X|Y) denotes the Fisher information matrix of X conditional on Y; cov(X|Y) denotes the covariance matrix of X conditional on Y.

II. ACHIEVABLE RATE REGIONS FOR UPLINK C-RAN

A. Channel Model

This paper considers an uplink C-RAN model in which K mobile users communicate with a CP through L BSs, as shown in Fig. 1. The noiseless digital fronthaul link connecting BS l to the CP has a capacity of C_l bits per complex dimension. The fronthaul capacity C_l is the maximum long-term average throughput of the lth fronthaul link, i.e.,

lim_{n→∞} (1/n) Σ_{i=1}^n C_l^(i) ≤ C_l,

where C_l^(i) represents the instantaneous transmission rate of the lth fronthaul link at the ith time slot. Each user terminal is equipped with M antennas; each BS is equipped with N antennas. Perfect channel state information (CSI) is assumed to be available to all the BSs and to the CP. For notational simplicity, we denote K = {1, ..., K} and L = {1, ..., L} in this paper.

Let X_k ∈ C^M be the signal transmitted by the kth user, which is subject to a per-user transmit power constraint of P_k, i.e., E[X_k† X_k] ≤ P_k. The signal received at the lth BS can be expressed as

Y_l = Σ_{k=1}^K H_{lk} X_k + Z_l,  l = 1, 2, ..., L,  (1)

where Z_l ~ CN(0, Σ_l) represents the additive Gaussian noise for BS l, which is independent across different BSs, and H_{lk} denotes the complex channel matrix from user k to BS l.

We consider the compress-and-forward scheme [25], [26] applied to the uplink C-RAN system, in which the BSs compress the received signals Y_l and forward the quantization bits to the CP for decoding. At the CP, the user messages are decoded using either joint decoding or some form of successive decoding. In joint decoding, the quantization codewords and the message codewords are decoded simultaneously, whereas in successive decoding, the quantization codewords and messages are decoded successively in some prescribed order. Different orderings can potentially result in different achievable rates.

B. Achievable Rate-Fronthaul Regions for Joint Decoding, Successive Decoding, and Generalized Successive Decoding

In the following, we present the achievable rate-fronthaul regions of compress-and-forward with joint decoding and different forms of successive decoding.
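Before stating the rate regions, the channel model (1) can be sketched numerically. The following minimal simulation, with hypothetical dimensions and unit-variance noise, draws Gaussian inputs satisfying the per-user power constraint E[X_k† X_k] ≤ P_k and forms the received signals Y_l at the L BSs.

```python
import numpy as np

rng = np.random.default_rng(0)
K, L, M, N = 2, 3, 2, 4          # users, BSs, tx/rx antennas (example values)
P = [1.0, 1.0]                   # per-user power budgets (assumed)

# Complex Rayleigh-fading channel matrices H[l][k] of size N x M
H = [[(rng.standard_normal((N, M)) + 1j * rng.standard_normal((N, M))) / np.sqrt(2)
      for k in range(K)] for l in range(L)]

# Gaussian inputs X_k ~ CN(0, (P_k/M) I), so that E[X_k^H X_k] = P_k
X = [np.sqrt(P[k] / (2 * M)) * (rng.standard_normal(M) + 1j * rng.standard_normal(M))
     for k in range(K)]

# Received signal at BS l: Y_l = sum_k H_lk X_k + Z_l, with Z_l ~ CN(0, I)
Y = []
for l in range(L):
    Z = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)
    Y.append(sum(H[l][k] @ X[k] for k in range(K)) + Z)

print(len(Y), Y[0].shape)  # L received vectors, each of dimension N
```

In a compress-and-forward simulation, each Y[l] would then be quantized and the quantization bits forwarded over the fronthaul link of capacity C_l.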
Proposition 1 [3, Proposition IV.1]: For the uplink C-RAN model shown in Fig. 1, the achievable rate-fronthaul region of compress-and-forward with joint decoding, P_JD, is the closure of the convex hull of all (R_1, ..., R_K, C_1, ..., C_L) ∈ R_+^{K+L} satisfying

Σ_{k∈T} R_k < Σ_{l∈S} [ C_l − I(Y_l; Ŷ_l | X_K) ] + I(X_T; Ŷ_{S^c} | X_{T^c})   (2)

for all T ⊆ K and S ⊆ L, for some product distribution Π_{k=1}^K p(x_k) Π_{l=1}^L p(ŷ_l | y_l) such that E[X_k† X_k] ≤ P_k for k = 1, ..., K.

Note that for the uplink C-RAN model, the rate region (2) given by compress-and-forward with joint decoding is identical to the rate region of the noisy network coding scheme [9], which is an extension of the compress-and-forward scheme to the general multiple-access relay network obtained by using joint decoding at the receiver and block Markov coding at the transmitters. As a more practical decoding strategy, successive decoding of the quantization codewords first and then the user messages at the CP can also be used in uplink C-RAN. The following proposition states the rate-fronthaul region achieved by successive decoding.

Proposition 2 [5, Theorem 1]: For the uplink C-RAN model shown in Fig. 1, the achievable rate-fronthaul region of compress-and-forward with successive decoding, P_SD, is the closure of the convex hull of all (R_1, ..., R_K, C_1, ..., C_L) ∈ R_+^{K+L} satisfying

Σ_{k∈T} R_k < I(X_T; Ŷ_L | X_{T^c}),  for all T ⊆ K   (3)

and

I(Y_S; Ŷ_S | Ŷ_{S^c}) < Σ_{l∈S} C_l,  for all S ⊆ L   (4)

for some product distribution Π_{k=1}^K p(x_k) Π_{l=1}^L p(ŷ_l | y_l) such that E[X_k† X_k] ≤ P_k for k = 1, ..., K.

Note that (3) is the multiple-access rate region and (4) represents the Berger-Tung rate region for distributed lossy compression [6, Theorem 12.1], while (2) incorporates the joint decoding of the quantization codewords and the user messages. Because of its lower decoding complexity, successive decoding is usually preferred for practical implementation of uplink C-RAN systems [21], [22]. Note that in the above strategy, successive decoding applies only to the vector X_k (user message codewords) and the vector Ŷ_l (quantization codewords); the elements within the vectors X_k and Ŷ_l are still decoded jointly.
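For intuition, the scalar special case of Proposition 2 (one user, one single-antenna BS) can be evaluated in closed form: the fronthaul constraint (4) becomes log2((P + σ² + q)/q) ≤ C, which is met with equality by the quantization noise variance q = (P + σ²)/(2^C − 1), and (3) then gives R = log2(1 + P/(σ² + q)). The following sketch, with assumed parameter values, evaluates this rate.

```python
import math

def successive_decoding_rate(P, sigma2, C):
    """Scalar single-user, single-BS case of Proposition 2.

    Fronthaul constraint (4): I(Y; Yhat) = log2((P + sigma2 + q)/q) <= C,
    met with equality by q = (P + sigma2) / (2**C - 1).
    Achievable rate (3):      R = log2(1 + P / (sigma2 + q)).
    """
    q = (P + sigma2) / (2**C - 1)          # quantization noise variance
    return math.log2(1 + P / (sigma2 + q))

# As C grows, q -> 0 and the rate approaches the fronthaul-free log2(1 + SNR)
for C in [1, 2, 4, 8, 100]:
    print(C, round(successive_decoding_rate(P=10.0, sigma2=1.0, C=C), 3))
```

The printout illustrates the basic tradeoff captured by (3)-(4): a larger fronthaul capacity permits finer quantization, and the achievable rate saturates at the interference-free point-to-point rate.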
It is possible to improve upon the successive decoding scheme by allowing arbitrary interleaved decoding orders between quantization codewords and user message codewords. We call this the generalized successive decoding scheme in this paper. The generalized successive decoding scheme was first suggested in [27] under the name of the joint base-station successive interference cancellation scheme. In such a successive decoding strategy, the set of potential decoding orders includes all the permutations of quantization and user message codewords. Denote by π a permutation on the set of quantization and user message codewords {Ŷ_1, Ŷ_2, ..., Ŷ_L, X_1, X_2, ..., X_K}. For a given permutation π, the decoding order is given by the index of the elements in π, i.e., π(1), π(2), ..., π(L+K). For example, consider an uplink C-RAN model as shown in Fig. 1 with 2 BSs and 2 users. If π = (Ŷ_1, X_1, Ŷ_2, X_2), then the decoding of Ŷ_2 and X_2 can use both previously decoded user messages and quantization codewords as side information. The resulting rate region is

characterized as

R_1 < I(X_1; Ŷ_1)
R_2 < I(X_2; Ŷ_1, Ŷ_2 | X_1)   (5)

for some product distribution p(x_1) p(x_2) p(ŷ_1 | y_1) p(ŷ_2 | y_2) that satisfies

C_1 > I(Y_1; Ŷ_1)
C_2 > I(Y_2; Ŷ_2 | Ŷ_1, X_1).   (6)

Let I_{X_k} and I_{Y_l} denote the indices of the user messages that are decoded before X_k and Ŷ_l under the permutation π, respectively. Likewise, let J_{X_k} and J_{Y_l} denote the indices of the quantization codewords that are decoded before X_k and Ŷ_l under the permutation π, respectively. The rate-fronthaul region of generalized successive decoding for uplink C-RAN is stated in the following proposition.

Proposition 3: For the uplink C-RAN model shown in Fig. 1, the achievable rate-fronthaul region of generalized successive decoding with decoding order π, P_GSD(π), is the closure of the convex hull of all (R_1, ..., R_K, C_1, ..., C_L) ∈ R_+^{K+L} satisfying

R_k < I(X_k; Ŷ_{J_{X_k}} | X_{I_{X_k}}),  k ∈ K   (7)

and

C_l > I(Y_l; Ŷ_l | Ŷ_{J_{Y_l}}, X_{I_{Y_l}}),  l ∈ L   (8)

for some product distribution Π_{k=1}^K p(x_k) Π_{l=1}^L p(ŷ_l | y_l) such that E[X_k† X_k] ≤ P_k for k = 1, ..., K. The generalized successive decoding region P_GSD is defined to be the closure of the convex hull of the union of the regions P_GSD(π) over all possible permutations π, i.e.,

P_GSD = co( ∪_π P_GSD(π) ).   (9)

III. OPTIMALITY OF SUCCESSIVE DECODING

In general, we have P_SD ⊆ P_GSD ⊆ P_JD. However, successive decoding is more desirable than joint decoding, not only because of its lower complexity, but also due to the fact that its rate region can be more easily evaluated. Thus, there is a tradeoff between complexity and performance in designing decoding strategies for uplink C-RAN. To further understand this tradeoff, this section establishes that: 1) by allowing arbitrary decoding orders of quantization and message codewords, generalized successive decoding actually achieves the same rate region as joint decoding under a sum fronthaul constraint; 2) the practical successive decoding strategy, in which the CP decodes the quantization codewords first, then the user messages, actually achieves the same maximum sum rate as joint decoding under individual fronthaul constraints.
A. Optimality of Generalized Successive Decoding Under a Sum Fronthaul Constraint

This section shows that in the special case where the fronthaul links are subject to a sum capacity constraint, generalized successive decoding achieves the same rate region as joint decoding. In this model, the fronthaul capacities are constrained by Σ_{l=1}^L C_l ≤ C and C_l ≥ 0, which is justifiable in situations where the fronthaul is implemented in a shared medium, e.g., wireless fronthaul links, as has been considered in [12] and [21]. Under the sum fronthaul capacity constraint C, the rate region achieved by joint decoding, R_JD^sum, is defined as

R_JD^sum = { (R_1, ..., R_K) : (R_1, ..., R_K, C_1, ..., C_L) ∈ P_JD, Σ_{l=1}^L C_l ≤ C, C_l ≥ 0 }.   (10)

Likewise, the rate region achieved with generalized successive decoding, R_GSD^sum, is given by

R_GSD^sum = { (R_1, ..., R_K) : (R_1, ..., R_K, C_1, ..., C_L) ∈ P_GSD, Σ_{l=1}^L C_l ≤ C, C_l ≥ 0 }.   (11)

The following theorem states the main result of this section.

Theorem 1: For the uplink C-RAN model with the sum fronthaul capacity constraint Σ_{l=1}^L C_l ≤ C and C_l ≥ 0, the rate regions achieved by generalized successive decoding and joint decoding are identical, i.e., R_GSD^sum = R_JD^sum.

Proof: See Appendix A.

The roadmap for the proof of Theorem 1 shares the same idea as the characterization of the rate-distortion region for the CEO problem under logarithmic loss [28] and of the capacity region for the multiple-access channel [29], which uses the properties of the submodular polyhedron (see Appendix B). Specifically, in order to show R_GSD^sum = R_JD^sum, we show that under a fixed product distribution Π_{k=1}^K p(x_k) Π_{l=1}^L p(ŷ_l | y_l), every extreme point of the polyhedron R_JD^sum(C) is dominated by the points in the polyhedron defined by R_GSD^sum(C). We conjecture that Theorem 1 holds also for the case of individual fronthaul capacity constraints. However, in that case, finding the dominant faces of the polyhedron P_JD becomes much more difficult; it appears non-trivial to extend the current proof to the case of individual fronthaul constraints.

B. Optimality of Successive Decoding for Maximizing Sum Rate

As a special instance of generalized successive decoding, successive decoding reconstructs the quantization codewords first, then the user message codewords, in a sequential order.
In what follows, we show that the optimal sum rate achieved by this special successive decoding is the same as that achieved by joint decoding.

Under a fixed input distribution and fixed fronthaul capacities C_l, for l = 1, ..., L, the maximum sum rate achieved by joint decoding, R_JD,SUM, is defined as

R_JD,SUM = max Σ_{k=1}^K R_k   (12)
s.t. (R_1, ..., R_K, C_1, ..., C_L) ∈ P_JD.

Likewise, the maximum sum rate for successive decoding, R_SD,SUM, is given by

R_SD,SUM = max Σ_{k=1}^K R_k   (13)
s.t. (R_1, ..., R_K, C_1, ..., C_L) ∈ P_SD.

The following theorem demonstrates the optimality of successive decoding for maximizing the sum rate of uplink C-RAN under individual fronthaul constraints.

Theorem 2: For the uplink C-RAN model with fronthaul capacities C_l shown in Fig. 1, the maximum sum rates achieved by successive decoding and joint decoding are the same, i.e., R_SD,SUM = R_JD,SUM.

Proof: See Appendix C.

We remark that Theorem 2 can be thought of as a generalization of a result in [7], which shows that under block-by-block forward decoding, the compress-and-forward scheme with compression-message successive decoding achieves the same maximum rate as that with compression-message joint decoding for a single-source, single-destination relay network. The uplink C-RAN is a multiple-source, single-destination relay network. If all the user terminals are regarded as one super transmitter, then it follows from [7] that successive decoding and joint decoding achieve the same maximum sum rate. However, the proof in [7] is quite complicated. In this paper, we provide an alternative proof technique for showing the optimality of successive decoding for sum rate maximization in uplink C-RAN. The new proof utilizes the properties of submodular optimization and is simpler than the proof provided in [7]. The proofs of Theorem 2 and Theorem 1 illustrate the usefulness of submodular optimization in establishing this type of result.

It is remarked that while successive decoding and joint decoding achieve the same sum rate, they do not achieve the same rate region. The achievable rate region of generalized successive decoding is in general larger than that of successive decoding. For example, consider the compress-and-forward scheme for maximizing the rate of user 1 only.
The optimal decoding order should be (X_{K\{1}}, Ŷ_L, X_1). With this decoding order, user 1 can achieve a larger rate than with the decoding order (Ŷ_L, X_K), because the decoded user messages X_2, X_3, ..., X_K can serve as side information for the decoding of Ŷ_L. In general, to maximize a weighted sum rate, one needs to optimize over (L+K)! orderings for generalized successive decoding. The main result of this section shows, however, that for maximizing the sum rate in uplink C-RAN, successive decoding of the quantization codewords first and then the user messages is optimal; this reduces the search space considerably, to L!K! decoding orders.

IV. UPLINK C-RAN WITH GAUSSIAN INPUT AND GAUSSIAN QUANTIZATION

In this section, we specialize to the compress-and-forward scheme for uplink C-RAN with Gaussian input signals at the users and Gaussian quantization at the BSs. Although it is known that the joint Gaussian distribution is suboptimal for uplink C-RAN [5], Gaussian input is desirable because it leads to achievable rate regions that can be easily evaluated. In the following section, it is shown that with Gaussian input and Gaussian quantization, compress-and-forward with joint decoding can achieve the capacity region of uplink C-RAN to within a constant gap. The gap depends on the network size, but is independent of the channel gain matrix and the SNR. We further establish the optimality of Gaussian compression at the relaying BSs for joint decoding if the input is Gaussian. These results can be further extended to generalized successive decoding under a sum fronthaul constraint and to successive decoding for the maximum sum rate. Additionally, under Gaussian signaling, the optimization of the quantization noise covariance matrices for weighted sum-rate maximization under joint decoding and for sum rate maximization under practical successive decoding can be cast as convex optimization problems, thereby facilitating their efficient numerical solution. Throughout this section, we focus on the achievable rates under fixed Gaussian input and fixed fronthaul capacity constraints C_l, for l = 1, ..., L.
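The reduction of the search space from (L+K)! interleaved orderings to L!K! quantization-first orderings can be made concrete with a small enumeration; the following sketch (for the example sizes L = K = 2) counts both sets of decoding orders.

```python
from itertools import permutations
from math import factorial

L, K = 2, 2  # example: 2 BSs, 2 users

# Generalized successive decoding: any interleaving of the L quantization
# codewords and K message codewords -> (L+K)! orders.
symbols = [f"Yhat{l}" for l in range(1, L + 1)] + [f"X{k}" for k in range(1, K + 1)]
all_orders = list(permutations(symbols))
assert len(all_orders) == factorial(L + K)

# Plain successive decoding: all quantization codewords decoded before all
# user messages, in any internal order -> L! * K! orders.
sd_orders = [p for p in all_orders
             if max(p.index(s) for s in symbols[:L]) <
                min(p.index(s) for s in symbols[L:])]
print(len(all_orders), len(sd_orders))  # (L+K)! = 24 versus L!K! = 4
```

For larger networks the gap is dramatic (e.g., L = K = 4 gives 40320 versus 576 orderings), which is why the sum-rate optimality of quantization-first decoding matters computationally.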
A. Achievable Rate Regions Under Gaussian Input and Gaussian Quantization

We let the input distribution be Gaussian, i.e., X_k ~ CN(0, K_k), and then evaluate the rate regions for the compress-and-forward scheme with joint decoding and successive decoding under Gaussian quantization, denoted as R_JD^G and R_SD^G, respectively. Set p(ŷ_l | y_l) ~ CN(y_l, Q_l) for l = 1, ..., L, where Q_l is the Gaussian quantization noise covariance matrix at the lth BS. With Gaussian input and Gaussian quantization, we have

I(Y_l; Ŷ_l | X_K) = log ( |Σ_l + Q_l| / |Q_l| )   (14)

and

I(X_T; Ŷ_{S^c} | X_{T^c}) = log ( |H_{S^c,T} K_T H†_{S^c,T} + diag({Σ_l + Q_l}_{l∈S^c})| / |diag({Σ_l + Q_l}_{l∈S^c})| ).   (15)

The achievable rate region (2) for joint decoding can be evaluated as

Σ_{k∈T} R_k < Σ_{l∈S} [ C_l − log ( |Σ_l + Q_l| / |Q_l| ) ] + log ( |H_{S^c,T} K_T H†_{S^c,T} + diag({Σ_l + Q_l}_{l∈S^c})| / |diag({Σ_l + Q_l}_{l∈S^c})| )   (16)

for all T ⊆ K and S ⊆ L.
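The log-det expression (14) is straightforward to evaluate numerically. A minimal sketch, with assumed 2×2 covariance matrices, computes I(Y_l; Ŷ_l | X_K) in bits:

```python
import numpy as np

def gaussian_test_channel_rate(Sigma, Q):
    """Evaluate I(Y_l; Yhat_l | X_K) = log det(Sigma_l + Q_l) - log det(Q_l),
    i.e., equation (14), in bits (base-2 logarithm)."""
    sign1, ld1 = np.linalg.slogdet(Sigma + Q)
    sign2, ld2 = np.linalg.slogdet(Q)
    assert sign1 > 0 and sign2 > 0, "covariances must be positive definite"
    return (ld1 - ld2) / np.log(2)

# Example with white receiver noise and white quantization noise
Sigma = np.eye(2)          # noise covariance Sigma_l at BS l
Q = 0.5 * np.eye(2)        # quantization noise covariance Q_l
print(gaussian_test_channel_rate(Sigma, Q))  # equals 2 * log2(1.5/0.5)
```

This is the fronthaul rate spent on describing the noise part of Y_l; in (16) it is subtracted from C_l, so a coarser quantizer (larger Q_l) frees up fronthaul capacity at the expense of the second, achievable-rate term.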

Likewise, the achievable rate expression (3) for successive decoding becomes

Σ_{k∈T} R_k < log ( |H_{L,T} K_T H†_{L,T} + diag({Σ_l + Q_l}_{l∈L})| / |diag({Σ_l + Q_l}_{l∈L})| )   (17)

for all T ⊆ K. In deriving the fronthaul constraint (4), we start by evaluating the mutual information

I(Y_S; Ŷ_S | Ŷ_{S^c})
= I(X_K, Y_S; Ŷ_S | Ŷ_{S^c}) − I(X_K; Ŷ_S | Y_S, Ŷ_{S^c})
= I(X_K; Ŷ_S | Ŷ_{S^c}) + I(Y_S; Ŷ_S | X_K, Ŷ_{S^c}) − I(X_K; Ŷ_S | Y_S, Ŷ_{S^c})
(a)= I(X_K; Ŷ_S | Ŷ_{S^c}) + I(Y_S; Ŷ_S | X_K)
(b)= I(X_K; Ŷ_L) − I(X_K; Ŷ_{S^c}) + Σ_{l∈S} I(Y_l; Ŷ_l | X_K)   (18)

for all S ⊆ L, where equality (a) follows from the facts that

I(Y_S; Ŷ_S | X_K, Ŷ_{S^c}) = I(Y_S; Ŷ_S | X_K)   (19)

and

I(X_K; Ŷ_S | Y_S, Ŷ_{S^c}) = 0,   (20)

and equality (b) follows from the fact that

I(Y_S; Ŷ_S | X_K) = Σ_{l∈S} I(Y_l; Ŷ_l | X_K).   (21)

The above equations (19)-(21) follow from the Markov chain Ŷ_i − Y_i − (X_K, Y_j, Ŷ_j), i ≠ j. We further evaluate the mutual information expression (18) with Gaussian input and Gaussian quantization, which yields

I(Y_S; Ŷ_S | Ŷ_{S^c})
= log ( |H_{L,K} K_K H†_{L,K} + diag({Σ_l + Q_l}_{l∈L})| / |diag({Σ_l + Q_l}_{l∈L})| )
− log ( |H_{S^c,K} K_K H†_{S^c,K} + diag({Σ_l + Q_l}_{l∈S^c})| / |diag({Σ_l + Q_l}_{l∈S^c})| )
+ Σ_{l∈S} log ( |Σ_l + Q_l| / |Q_l| )
= log ( |H_{L,K} K_K H†_{L,K} + diag({Σ_l + Q_l}_{l∈L})| / |H_{S^c,K} K_K H†_{S^c,K} + diag({Σ_l + Q_l}_{l∈S^c})| ) − Σ_{l∈S} log |Q_l| < Σ_{l∈S} C_l.

Instead of parameterizing the rate expressions over Q_l as above, in this section we introduce the following reparameterization, which is crucial for proving our main results. Define

B_l = (Σ_l + Q_l)^{-1}.   (22)

We represent the rate regions of joint decoding and successive decoding in terms of B_l in the following.

Proposition 4: For the uplink C-RAN model shown in Fig. 1, under fixed Gaussian input X_K ~ CN(0, K_K) with K_K = diag({K_k}_{k∈K}), the rate-fronthaul region for joint decoding under Gaussian quantization, P_JD^G, is the closure of the convex hull of all (R_1, ..., R_K, C_1, ..., C_L) satisfying

Σ_{k∈T} R_k < Σ_{l∈S} [ C_l + log |I − Σ_l B_l| ] + log ( |Σ_{l∈S^c} H†_{lT} B_l H_{lT} + K_T^{-1}| / |K_T^{-1}| )   (23)

for all T ⊆ K and S ⊆ L, for some 0 ≼ B_l ≼ Σ_l^{-1}, where K_T = E[X_T X_T†] is the covariance matrix of X_T, and H_{lT} denotes the channel matrix from X_T to Y_l.
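The key algebraic fact behind the reparameterization (22) is that the fronthaul term C_l − I(Y_l; Ŷ_l | X_K) in (2) becomes C_l + log|I − Σ_l B_l|, which is concave in B_l. This identity can be checked numerically; the sketch below draws random positive definite Σ_l and Q_l and compares the two forms.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 3

def rand_psd(n):
    A = rng.standard_normal((n, n))
    return A @ A.T + n * np.eye(n)   # well-conditioned positive definite

Sigma = rand_psd(N)                  # receiver noise covariance Sigma_l
Q = rand_psd(N)                      # quantization noise covariance Q_l
B = np.linalg.inv(Sigma + Q)         # reparameterization (22)

logdet = lambda M: np.linalg.slogdet(M)[1]

# I(Y_l; Yhat_l | X_K) in the Q-parameterization, equation (14)
lhs = logdet(Sigma + Q) - logdet(Q)
# The same quantity expressed through B_l: -log det(I - Sigma_l B_l)
rhs = -logdet(np.eye(N) - Sigma @ B)

assert np.isclose(lhs, rhs), (lhs, rhs)
print("identity verified, I(Y;Yhat|X) in nats:", lhs)
```

The identity follows from |I − Σ_l B_l| = |B_l| · |B_l^{-1} − Σ_l| = |Q_l| / |Σ_l + Q_l|, and the feasibility condition Q_l ≽ 0 translates exactly into 0 ≼ B_l ≼ Σ_l^{-1} in Proposition 4.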
Furthermore, under the fixed fronthaul capacity constraints $C_l$ for $l = 1, \ldots, L$, the rate region achieved by joint decoding, $\mathscr{R}^G_{JD,Gin}$, is defined as

$$\mathscr{R}^G_{JD,Gin} = \left\{ (R_1, \ldots, R_K) : (R_1, \ldots, R_K, C_1, \ldots, C_L) \in \mathscr{P}^G_{JD,Gin} \right\}. \quad (24)$$

Proposition 5: For the uplink C-RAN model shown in Fig. 1 and under fixed Gaussian input $X_{\mathcal{K}} \sim \mathcal{CN}(0, K_{\mathcal{K}})$ with $K_{\mathcal{K}} = \mathrm{diag}\{K_k\}_{k\in\mathcal{K}}$, the rate-fronthaul region for successive decoding $\mathscr{P}^G_{SD,Gin}$ is the closure of the convex hull of all $(R_1, \ldots, R_K, C_1, \ldots, C_L)$ satisfying

$$\sum_{k\in T} R_k < \log\frac{\left|\sum_{l=1}^{L} H^H_{lT} B_l H_{lT} + K_T^{-1}\right|}{\left|K_T^{-1}\right|}, \quad \forall T \subseteq \mathcal{K}, \quad (25)$$

and

$$\log\frac{\left|\sum_{l=1}^{L} H^H_{l\mathcal{K}} B_l H_{l\mathcal{K}} + K_{\mathcal{K}}^{-1}\right|}{\left|\sum_{l\in S^c} H^H_{l\mathcal{K}} B_l H_{l\mathcal{K}} + K_{\mathcal{K}}^{-1}\right|} - \sum_{l\in S} \log\left|I - \Sigma_l B_l\right| < \sum_{l\in S} C_l, \quad \forall S \subseteq \mathcal{L}, \quad (26)$$

for some $0 \preceq B_l \preceq \Sigma_l^{-1}$, where $K_T = \mathbb{E}\left[X_T X_T^H\right]$ is the covariance matrix of $X_T$, and $H_{lT}$ denotes the channel matrix from $X_T$ to $Y_l$. Moreover, under the fixed fronthaul capacity constraints $C_l$ for $l = 1, \ldots, L$, the rate region achieved by

successive decoding, $\mathscr{R}^G_{SD,Gin}$, is defined as

$$\mathscr{R}^G_{SD,Gin} = \left\{ (R_1, \ldots, R_K) : (R_1, \ldots, R_K, C_1, \ldots, C_L) \in \mathscr{P}^G_{SD,Gin} \right\}. \quad (27)$$

B. Gaussian Input and Gaussian Quantization Achieve Capacity to Within a Constant Gap

With Gaussian input and Gaussian quantization, the rate region of joint decoding (23) can be shown to be within a constant gap to the capacity region of uplink C-RAN. This constant-gap result is stated in the following theorem.

Theorem 3: For any rate tuple $(R_1, R_2, \ldots, R_K)$ within the cut-set bound for uplink C-RAN with fixed fronthaul capacities $C_l$, shown in Fig. 1, the rate tuple $(R_1 - \eta, R_2 - \eta, \ldots, R_K - \eta)$ with $\eta = NL + M$ is achievable for compress-and-forward with Gaussian input, Gaussian quantization, and joint decoding, where $L$ is the number of BSs in the network, $M$ is the number of transmit antennas at each user, and $N$ is the number of receive antennas at each BS, i.e., $(R_1 - \eta, R_2 - \eta, \ldots, R_K - \eta) \in \mathscr{R}^G_{JD,Gin}$.

Proof: See Appendix D.

Although the uplink C-RAN model is an example of a relay network for which the noisy network coding approach applies, and it is known that compress-and-forward with joint decoding achieves the same rate region as noisy network coding for uplink C-RAN, we remark that Theorem 3 does not immediately follow from the constant-gap optimality result of noisy network coding [9]. The constant-gap optimality of noisy network coding is proven for Gaussian relay networks, whereas the uplink C-RAN model contains fronthaul links, which are digital connections and not Gaussian channels. Combining with our earlier results on the optimality of successive decoding, constant-gap optimality results can also be obtained for compress-and-forward with generalized successive decoding and successive decoding. These results are summarized in the following corollary.

Corollary 1: For the uplink C-RAN model as shown in Fig. 1, compress-and-forward with generalized successive decoding under Gaussian input and Gaussian quantization achieves the capacity region to within $NL + M$ bits per complex dimension if the fronthaul links are subject to a sum capacity constraint $\sum_{l=1}^{L} C_l \le C$.
Furthermore, compress-and-forward with successive decoding under Gaussian input and Gaussian quantization achieves the sum capacity of an uplink C-RAN model with individual fronthaul constraints to within $NL + MK$ bits per complex dimension.

C. Optimality of Gaussian Quantization Under Joint Decoding

For the Gaussian uplink MIMO C-RAN model, it is known that Gaussian input and Gaussian quantization are not jointly optimal [5]. However, if the quantization noise is fixed as Gaussian, then the optimal input distribution must be Gaussian, because the channel reduces to a conventional Gaussian multiple-access channel in this case. The main result of this section is that the converse is also true, i.e., under fixed Gaussian input, Gaussian quantization actually maximizes the achievable rate region of the uplink C-RAN model under joint decoding.

Under fixed fronthaul capacity constraints $C_l$ for $l = 1, \ldots, L$, we let $\mathscr{R}_{JD,Gin}$ denote the rate region of joint decoding under Gaussian input and optimal quantization. In the following, we first define Fisher information and state the two main tools for proving this result: the de Bruijn identity and the Fisher information inequality. We then present the main theorem on the optimality of Gaussian quantization for joint decoding, i.e., $\mathscr{R}^G_{JD,Gin} = \mathscr{R}_{JD,Gin}$.

Definition 1: Let $(X, Y)$ be a pair of random vectors with joint probability distribution function $p(x, y)$. The Fisher information matrix of $X$ is defined as

$$J(X) = \mathbb{E}\left[ \nabla \log p(X)\, \nabla \log p(X)^H \right]. \quad (28)$$

Likewise, the Fisher information matrix of $X$ conditional on $Y$ is defined as

$$J(X|Y) = \mathbb{E}\left[ \nabla \log p(X|Y)\, \nabla \log p(X|Y)^H \right]. \quad (29)$$

Lemma 1 (Fisher information inequality [30], [8, Lemma 2]): Let $(U, X)$ be an arbitrary pair of complex random vectors, where the conditional Fisher information of $X$ conditioned on $U$ exists. We have

$$\log\left| \pi e\, J^{-1}(X|U) \right| \le h(X|U). \quad (30)$$

Lemma 2 (de Bruijn identity [3], [8, Lemma 3]): Let $(V_1, V_2)$ be an arbitrary pair of random vectors with finite second moments, and let $N$ be a zero-mean Gaussian random vector with covariance $\Sigma_N$. Assume $(V_1, V_2)$ and $N$ are independent. We have

$$\mathrm{Cov}(V_2 \,|\, V_1, V_2 + N) = \Sigma_N - \Sigma_N\, J(V_2 + N \,|\, V_1)\, \Sigma_N. \quad (31)$$
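Lemma 2 can be verified directly in the special case where all vectors are jointly Gaussian, since then the conditional Fisher information is the inverse of the conditional covariance. The following is a small numerical sketch; the covariance values are assumed for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 3
# Assumed toy setup: conditioned on V1, let V2 be Gaussian with Cov(V2 | V1) = Kw.
Kw = 0.7 * np.eye(n)
Kn = rng.normal(size=(n, n)); Kn = Kn @ Kn.T + np.eye(n)   # Cov(N), independent Gaussian noise

# For jointly Gaussian vectors the conditional Fisher information is the inverse
# conditional covariance: J(V2 + N | V1) = Cov(V2 + N | V1)^{-1} = (Kw + Kn)^{-1}.
J = np.linalg.inv(Kw + Kn)

# Left side of (31): Cov(V2 | V1, V2 + N), the MMSE covariance of estimating V2 from V2 + N.
lhs = Kw - Kw @ np.linalg.inv(Kw + Kn) @ Kw
# Right side of (31): Sigma_N - Sigma_N J(V2 + N | V1) Sigma_N.
rhs = Kn - Kn @ J @ Kn
assert np.allclose(lhs, rhs)
print("de Bruijn identity verified in the Gaussian case")
```

Of course, the force of Lemma 2 is that it holds for arbitrary (not necessarily Gaussian) $(V_1, V_2)$; the Gaussian case merely makes both sides computable in closed form.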
Theorem 4: For the uplink C-RAN under fixed Gaussian input distribution and assuming joint decoding, Gaussian quantization is optimal, i.e., $\mathscr{R}^G_{JD,Gin} = \mathscr{R}_{JD,Gin}$.

Proof: Recall that the achievable rate region of the compress-and-forward scheme under joint decoding is given by the set of $(R_1, \ldots, R_K)$ derived from (12) under the joint distribution

$$p(x_1, \ldots, x_K, y_1, \ldots, y_L, \hat{y}_1, \ldots, \hat{y}_L) = \prod_{k=1}^{K} p(x_k) \prod_{l=1}^{L} p(y_l | x_1, \ldots, x_K) \prod_{l=1}^{L} p(\hat{y}_l | y_l). \quad (32)$$

For fixed Gaussian input $X_{\mathcal{K}} \sim \mathcal{CN}(0, K_{\mathcal{K}})$ and fixed $\prod_{l=1}^{L} p(\hat{y}_l|y_l)$, choose $B_l$ with $0 \preceq B_l \preceq \Sigma_l^{-1}$ such that

$$\mathrm{Cov}(Y_l \,|\, X_{\mathcal{K}}, \hat{Y}_l) = \Sigma_l - \Sigma_l B_l \Sigma_l, \quad l = 1, \ldots, L.$$

We proceed to show that the achievable rate region as given by (23) with a Gaussian $\prod_{l=1}^{L} p(\hat{y}_l|y_l) \sim \mathcal{CN}(Y_l, Q_l)$, where $Q_l = B_l^{-1} - \Sigma_l$, is as large as that of (12) under Gaussian input.

First, note that

$$\begin{aligned}
I(Y_l; \hat{Y}_l | X_{\mathcal{K}}) &= \log\left|\pi e\, \Sigma_l\right| - h(Y_l | X_{\mathcal{K}}, \hat{Y}_l) \\
&\ge \log\left|\pi e\, \Sigma_l\right| - \log\left|\pi e\, \mathrm{Cov}(Y_l | X_{\mathcal{K}}, \hat{Y}_l)\right| \\
&= \log\frac{|\Sigma_l|}{|\Sigma_l - \Sigma_l B_l \Sigma_l|} = -\log\left|I - \Sigma_l B_l\right|, \quad l = 1, \ldots, L, \quad (33)
\end{aligned}$$

where we use the fact that the Gaussian distribution maximizes differential entropy. Moreover, we have

$$I(X_T; \hat{Y}_{S^c} | X_{T^c}) = h(X_T) - h(X_T | X_{T^c}, \hat{Y}_{S^c}) \le \log\left|\pi e\, K_T\right| - \log\left|\pi e\, J^{-1}(X_T | X_{T^c}, \hat{Y}_{S^c})\right|,$$

where the inequality is due to Lemma 1. Since $Y_{S^c} = H_{S^c T} X_T + H_{S^c T^c} X_{T^c} + Z_{S^c}$, it follows from the MMSE estimation of Gaussian random vectors that

$$X_T = \mathbb{E}[X_T \,|\, X_{T^c}, Y_{S^c}] + N_{T,S^c} = \sum_{l\in S^c} G_{T,l}\left(Y_l - H_{lT^c} X_{T^c}\right) + N_{T,S^c},$$

where $G_{T,l} = \big(K_T^{-1} + \sum_{j\in S^c} H^H_{jT} \Sigma_j^{-1} H_{jT}\big)^{-1} H^H_{lT} \Sigma_l^{-1}$ and $N_{T,S^c} \sim \mathcal{CN}(0, \Sigma_N)$ with covariance matrix

$$\Sigma_N = \Big( K_T^{-1} + \sum_{l\in S^c} H^H_{lT} \Sigma_l^{-1} H_{lT} \Big)^{-1}. \quad (34)$$

Here $\mathbb{E}[X_T \,|\, X_{T^c}, Y_{S^c}]$ is the MMSE estimate of $X_T$ from $(X_{T^c}, Y_{S^c})$, the estimation error is $N_{T,S^c}$, and the MMSE matrix is $\Sigma_N$. By the matrix complementary identity between the Fisher information matrix and the MMSE matrix in Lemma 2, we have

$$\begin{aligned}
J(X_T | X_{T^c}, \hat{Y}_{S^c}) &= \Sigma_N^{-1} - \Sigma_N^{-1}\, \mathrm{Cov}\Big( \sum_{l\in S^c} G_{T,l}\left(Y_l - H_{lT^c} X_{T^c}\right) \,\Big|\, X_{\mathcal{K}}, \hat{Y}_{S^c} \Big)\, \Sigma_N^{-1} \\
&= \Sigma_N^{-1} - \Sigma_N^{-1}\, \mathrm{Cov}\Big( \sum_{l\in S^c} G_{T,l} Y_l \,\Big|\, X_{\mathcal{K}}, \hat{Y}_{S^c} \Big)\, \Sigma_N^{-1} \\
&= \Sigma_N^{-1} - \Sigma_N^{-1} \Big[ \sum_{l\in S^c} G_{T,l}\, \mathrm{Cov}(Y_l | X_{\mathcal{K}}, \hat{Y}_l)\, G^H_{T,l} \Big] \Sigma_N^{-1} \\
&= \Sigma_N^{-1} - \sum_{l\in S^c} H^H_{lT}\left(\Sigma_l^{-1} - B_l\right) H_{lT} \\
&= K_T^{-1} + \sum_{l\in S^c} H^H_{lT} B_l H_{lT}.
\end{aligned}$$

Therefore,

$$I(X_T; \hat{Y}_{S^c} | X_{T^c}) \le \log\frac{|K_T|}{\left|J^{-1}(X_T | X_{T^c}, \hat{Y}_{S^c})\right|} = \log\frac{\left| \sum_{l\in S^c} H^H_{lT} B_l H_{lT} + K_T^{-1} \right|}{\left|K_T^{-1}\right|} \quad (35)$$

for all $T \subseteq \mathcal{K}$ and $S \subseteq \mathcal{L}$. Combining (33) and (35), we conclude that $\mathscr{R}^G_{JD,Gin}$ as derived from (23) is as large as $\mathscr{R}_{JD,Gin}$. Therefore, $\mathscr{R}^G_{JD,Gin} = \mathscr{R}_{JD,Gin}$.

D. Optimization of Gaussian Input and Gaussian Quantization Noise Covariance Matrices

This section addresses the numerical optimization of the Gaussian input and quantization noise covariance matrices for uplink MIMO C-RAN under given fronthaul capacity constraints. First, we note that even when restricting to Gaussian input and Gaussian quantization, the joint optimization of the input and quantization noise covariance matrices is still a challenging problem for the uplink MIMO C-RAN. However, if we fix the quantization noise covariance, then the input optimization reduces to that of optimizing a conventional Gaussian multiple-access channel. In particular, the problem of maximizing the weighted sum rate can be formulated as a convex optimization, which can be readily solved [32].
Conversely, if we fix the transmit covariance matrices, the optimization of the quantization noise covariances can in some cases be formulated as a convex optimization. The key enabling fact is the reparameterization in terms of $B_l$ in (22), instead of direct optimization over $Q_l$. Consider first the case of joint decoding. Using (23), under fixed $C_l$ for $l = 1, \ldots, L$, the weighted sum rate maximization problem can be formulated over $\{R_k, B_l\}$ as follows:

$$\begin{aligned}
\max_{\{R_k, B_l\}} \quad & \sum_{k=1}^{K} \mu_k R_k \\
\text{s.t.} \quad & \sum_{k\in T} R_k \le \log\frac{\left|\sum_{l\in S^c} H^H_{lT} B_l H_{lT} + K_T^{-1}\right|}{\left|K_T^{-1}\right|} + \sum_{l\in S}\Big[ C_l + \log\left|I - \Sigma_l B_l\right| \Big], \quad \forall T \subseteq \mathcal{K},\ \forall S \subseteq \mathcal{L}, \\
& 0 \preceq B_l \preceq \Sigma_l^{-1}, \quad l \in \mathcal{L}, \quad (36)
\end{aligned}$$

where $\mu_k$ represents the weight associated with user $k$, which is typically determined by upper-layer protocols. The key observation is that the above problem is convex in $\{R_k, B_l\}$. However, we also note that, because of joint decoding, the number of constraints is exponential in the size of the network. Consequently, the above optimization problem can only be solved for small networks in practice. Note that the above formulation considers the optimization of instantaneous achievable rates $R_k$ under instantaneous fronthaul capacity constraints $C_l$ in a fixed time slot. The solution obtained, however, also applies to the more general case

of optimizing the weighted sum rate under a weighted sum fronthaul constraint, e.g., $\sum_{l=1}^{L} \nu_l C_l \le C$. This is because we can consider the slightly more general formulation with the objective

$$\max_{\{R_k, B_l, C_l\}} \quad \sum_{k=1}^{K} \mu_k R_k - \gamma \sum_{l=1}^{L} \nu_l C_l \quad (37)$$

under the same constraints as in (36) together with $\sum_{l=1}^{L} \nu_l C_l \le C$. Such an optimization problem is convex, so time-sharing is not needed. For this reason, the rest of this section considers the formulation with instantaneous rates only.

We now consider the weighted sum-rate maximization problem for the case of successive decoding of the quantization codewords followed by the user messages. The direct characterization of the successive decoding rates, however, does not give rise to a convex formulation. Nevertheless, for the special case of maximizing the sum rate, i.e., with $\mu_1 = \cdots = \mu_K = 1$, using Theorem 2, which shows that successive decoding achieves the same maximum sum rate as joint decoding, the sum-rate maximization problem with successive decoding can be equivalently formulated as follows:

Theorem 5: For the uplink C-RAN model with individual fronthaul capacity constraints $C_l$, as shown in Fig. 1, the sum rate maximization problem under successive decoding can be formulated as the following convex problem:

$$\begin{aligned}
\max_{R, \{B_l\}} \quad & R \\
\text{s.t.} \quad & R \le \sum_{l\in S}\Big[ C_l + \log\left|I - \Sigma_l B_l\right| \Big] + \log\frac{\left|\sum_{l\in S^c} H^H_{l\mathcal{K}} B_l H_{l\mathcal{K}} + K_{\mathcal{K}}^{-1}\right|}{\left|K_{\mathcal{K}}^{-1}\right|}, \quad \forall S \subseteq \mathcal{L}, \\
& 0 \preceq B_l \preceq \Sigma_l^{-1}, \quad l \in \mathcal{L}. \quad (38)
\end{aligned}$$

Further, if the fronthaul links are subject to a sum capacity constraint $C$, the sum rate maximization problem can be formulated as the following convex problem:

$$\begin{aligned}
\max_{R, \{B_l\}} \quad & R \\
\text{s.t.} \quad & R \le \log\frac{\left|\sum_{l=1}^{L} H^H_{l\mathcal{K}} B_l H_{l\mathcal{K}} + K_{\mathcal{K}}^{-1}\right|}{\left|K_{\mathcal{K}}^{-1}\right|}, \\
& R \le C + \sum_{l=1}^{L} \log\left|I - \Sigma_l B_l\right|, \\
& 0 \preceq B_l \preceq \Sigma_l^{-1}, \quad l \in \mathcal{L}. \quad (39)
\end{aligned}$$

We remark that the formulation for uplink C-RAN with individual fronthaul capacities (38) has an exponential number of constraints, because the CP in effect needs to search over the $L!$ different decoding orders of the quantization codewords at the BSs. In practical implementations, a heuristic method can be used to determine the decoding order of the quantization codewords, thereby avoiding the exponential search [24], [33].
Alternatively, if the C-RAN has a sum fronthaul constraint, then the number of constraints is linear in the network size, because we only need to consider the cases $S = \mathcal{L}$ and $S = \emptyset$ in (38). Consequently, the resulting quantization noise covariance optimization problem (39) can be solved in polynomial time. Note that convexity is a key advantage of the above problem formulations as compared to previous approaches in the literature, e.g., [2], [22], which parameterize the optimization problem over the quantization noise covariances $Q_l$, leading to a nonconvex formulation.

We emphasize the importance of Gaussian input for the convex formulation in Theorem 5. Suppose that both the input signal $X_{\mathcal{K}}$ and the compressed signals $\hat{Y}_l$ are discrete random vectors with finite alphabets. For a fixed input distribution, the sum-rate maximization problem under the sum fronthaul constraint can be written as

$$\begin{aligned}
\max_{\prod_l p(\hat{y}_l | y_l)} \quad & I(X_{\mathcal{K}}; \hat{Y}_{\mathcal{L}}) \\
\text{s.t.} \quad & I(Y_{\mathcal{L}}; \hat{Y}_{\mathcal{L}}) \le C, \\
& p(\hat{y}_l | y_l) \ge 0, \quad \sum_{\hat{y}_l} p(\hat{y}_l | y_l) = 1, \quad l \in \mathcal{L}. \quad (40)
\end{aligned}$$

The above problem can be thought of as a variant of the information bottleneck method [9], which can be solved by a generalized Blahut-Arimoto (BA) algorithm [34], [35]. However, due to the nonconvex nature of problem (40), the generalized BA algorithm can only converge to a local optimum.

V. CONCLUSION

This paper provides a number of information-theoretic results on the optimal compress-and-forward scheme for the uplink MIMO C-RAN model, where the BSs are connected to a CP through noiseless fronthaul links of limited capacities. It is shown that the generalized successive decoding scheme, which allows arbitrary decoding orders between quantization and user message codewords, can achieve the same rate region as joint decoding under a sum fronthaul constraint. Moreover, the practical successive decoding of the quantization codewords followed by the user messages is shown to achieve the same maximum sum rate as joint decoding under individual fronthaul constraints. In addition, if the input distribution is assumed to be Gaussian, it is shown that Gaussian quantization maximizes the achievable rate region of joint decoding.
With Gaussian input signaling, the optimization of Gaussian quantization for maximizing the weighted sum rate under joint decoding, and the sum rate under successive decoding, can be cast as convex optimization problems, which facilitates efficient numerical solution. Finally, Gaussian input and Gaussian quantization achieve the capacity region of the uplink C-RAN model to within a constant gap. Collectively, these results provide justification for the practical choices of using Gaussian-like input signals at the user terminals, Gaussian-like quantization at the relaying BSs, and successive decoding of the quantization codewords followed by the user messages at the CP for implementing uplink MIMO C-RAN.

APPENDIX A
OPTIMALITY OF GENERALIZED SUCCESSIVE DECODING

In this appendix, we prove Theorem 1, which states the equivalence between generalized successive decoding and joint decoding under a sum-capacity fronthaul constraint. We begin by introducing an outer bound on the achievable rate region of joint decoding under a sum fronthaul constraint. Under the sum fronthaul capacity constraint, define the rate-fronthaul region for joint decoding $\mathscr{P}^o_{JD,s}$ as the closure of the convex hull of all $(R_1, R_2, \ldots, R_K, C)$ satisfying

$$\sum_{k\in T} R_k < \min\left\{ C - \sum_{l=1}^{L} I(Y_l; \hat{Y}_l | X_{\mathcal{K}}),\ I(X_T; \hat{Y}_{\mathcal{L}} | X_{T^c}) \right\}, \quad \forall T \subseteq \mathcal{K}, \quad (41)$$

and $C > \sum_{l=1}^{L} I(Y_l; \hat{Y}_l | X_{\mathcal{K}})$, for some product distribution $\prod_{k=1}^{K} p(x_k) \prod_{l=1}^{L} p(\hat{y}_l | y_l)$. Under the fixed sum fronthaul constraint $C$, define the region $\mathscr{R}^o_{JD,s}$ as

$$\mathscr{R}^o_{JD,s} = \left\{ (R_1, \ldots, R_K) : (R_1, \ldots, R_K, C) \in \mathscr{P}^o_{JD,s} \right\}. \quad (42)$$

Note that the rate region $\mathscr{R}^o_{JD,s}$ is an outer bound on the joint decoding rate region (10), because only the constraints corresponding to $S = \emptyset$ and $S = \mathcal{L}$ are included. These constraints turn out to be the only active ones under the sum fronthaul constraint $\sum_{l=1}^{L} C_l \le C$ with $C_l \ge 0$.

Under the sum fronthaul constraint, the generalized successive decoding region $\mathscr{P}_{GSD,s}(\pi)$ for decoding order $\pi$ can be derived from the individual-fronthaul region by letting $\sum_{l=1}^{L} C_l = C$. More specifically, $\mathscr{P}_{GSD,s}(\pi)$ is the closure of the convex hull of all $(R_1, R_2, \ldots, R_K, C)$ satisfying

$$R_k < I\big(X_k; \hat{Y}_{J_{X_k}} \big| X_{\Omega_{X_k}}\big), \quad k \in \mathcal{K}, \quad (43)$$

and

$$C > \sum_{l=1}^{L} I\big(Y_l; \hat{Y}_l \big| \hat{Y}_{J_{Y_l}}, X_{\Omega_{Y_l}}\big),$$

for some product distribution $\prod_{k=1}^{K} p(x_k) \prod_{l=1}^{L} p(\hat{y}_l | y_l)$, where $\Omega_{X_k}$ and $\Omega_{Y_l}$ are the index sets of the user messages decoded before $X_k$ and $Y_l$ under the permutation $\pi$, and $J_{X_k}$ and $J_{Y_l}$ are the index sets of the quantization codewords decoded before $X_k$ and $Y_l$ under the decoding order $\pi$. Define $\mathscr{P}^*_{GSD,s}$ to be the closure of the convex hull of all $\mathscr{P}_{GSD,s}(\pi)$'s over the decoding orders $\pi$, i.e.,

$$\mathscr{P}^*_{GSD,s} = \mathrm{co}\Big( \bigcup_{\pi} \mathscr{P}_{GSD,s}(\pi) \Big).$$

We say a point $(R_1, \ldots, R_K, C)$ is dominated by a point in $\mathscr{P}^*_{GSD,s}$ if there exists some $(R'_1, \ldots, R'_K, C') \in \mathscr{P}^*_{GSD,s}$ for which $R'_k \ge R_k$ for $k = 1, 2, \ldots, K$ and $C' \le C$. Given the definitions of $\mathscr{R}_{GSD,s}$, $\mathscr{R}_{JD,s}$, and $\mathscr{R}^o_{JD,s}$, it is easy to see that $\mathscr{R}_{GSD,s} \subseteq \mathscr{R}_{JD,s} \subseteq \mathscr{R}^o_{JD,s}$.
To show $\mathscr{R}_{GSD,s} = \mathscr{R}_{JD,s}$, it suffices to show $\mathscr{R}^o_{JD,s} \subseteq \mathscr{R}_{GSD,s}$, which is equivalent to showing that if a point $(R_1, R_2, \ldots, R_K, C) \in \mathscr{P}^o_{JD,s}$, then the same point $(R_1, R_2, \ldots, R_K, C) \in \mathscr{P}^*_{GSD,s}$ also. To show this, it suffices to show that for any fixed product distribution $\prod_{k=1}^{K} p(x_k) \prod_{l=1}^{L} p(\hat{y}_l | y_l)$ and fixed $C$, each extreme point $(R_1, \ldots, R_K, C)$ as defined by (41) is dominated by a point in $\mathscr{P}^*_{GSD,s}$ with an average sum fronthaul capacity requirement of at most $C$. To this end, define a set function $f : 2^{\mathcal{K}} \to \mathbb{R}$ as follows:

$$f(T) := \min\left\{ C - \sum_{l=1}^{L} I(Y_l; \hat{Y}_l | X_{\mathcal{K}}),\ I(X_T; \hat{Y}_{\mathcal{L}} | X_{T^c}) \right\}$$

for each $T \subseteq \mathcal{K}$. It can be verified that $f$ is a submodular function (Appendix B, Lemma 3). By construction, $(R_1, R_2, \ldots, R_K)$ as defined by (41) satisfies

$$\sum_{k\in T} R_k \le f(T),$$

which describes the submodular polyhedron associated with $f$. It follows from basic results in submodular optimization (see Proposition 6 in the Appendix) that for a linear ordering $\pi_1, \pi_2, \ldots, \pi_K$ on the set $\mathcal{K}$, an extreme point of $\mathscr{R}^o_{JD,s}$ can be computed as

$$R_{\pi_j} = f(\{\pi_1, \ldots, \pi_j\}) - f(\{\pi_1, \ldots, \pi_{j-1}\}).$$

Furthermore, the extreme points of $\mathscr{R}^o_{JD,s}$ can be enumerated over all the orderings of $\mathcal{K}$. Each ordering of $\mathcal{K}$ is analyzed in the same manner; hence, for notational simplicity, we only consider the natural ordering $\pi_j = j$ in the following proof. By construction,

$$R_j = \min\left\{ C - \sum_{l=1}^{L} I(Y_l; \hat{Y}_l | X_{\mathcal{K}}),\ I\big(X_1^j; \hat{Y}_{\mathcal{L}} \big| X_{j+1}^K\big) \right\} - \min\left\{ C - \sum_{l=1}^{L} I(Y_l; \hat{Y}_l | X_{\mathcal{K}}),\ I\big(X_1^{j-1}; \hat{Y}_{\mathcal{L}} \big| X_j^K\big) \right\}, \quad (44)$$

where $X_i^j$ denotes $(X_i, \ldots, X_j)$. Due to the fact that $I(X_1^j; \hat{Y}_{\mathcal{L}} | X_{j+1}^K) \ge I(X_1^{j-1}; \hat{Y}_{\mathcal{L}} | X_j^K)$, equation (44) can yield two different results. Case 1: the first term $C - \sum_{l=1}^{L} I(Y_l; \hat{Y}_l | X_{\mathcal{K}})$ in the minima in (44) is not active for any $j$; Case 2: the term $C - \sum_{l=1}^{L} I(Y_l; \hat{Y}_l | X_{\mathcal{K}})$ is active starting with some index $j$.

Case 1 holds if $C \ge I(X_{\mathcal{K}}; \hat{Y}_{\mathcal{L}}) + \sum_{l=1}^{L} I(Y_l; \hat{Y}_l | X_{\mathcal{K}})$. In this case, the resulting extreme point $r_{JD} = (R_1, R_2, \ldots, R_K, C)$ satisfies

$$R_j = I\big(X_j; \hat{Y}_{\mathcal{L}} \big| X_{j+1}^K\big) \ \text{for } j = 1, 2, \ldots, K-1, \quad R_K = I(X_K; \hat{Y}_{\mathcal{L}}), \quad C = I(X_{\mathcal{K}}; \hat{Y}_{\mathcal{L}}) + \sum_{l=1}^{L} I(Y_l; \hat{Y}_l | X_{\mathcal{K}}).$$

Consider successive decoding with the decoding order $\hat{Y}_{\mathcal{L}} \to X_K \to \cdots \to X_1$. The extreme point $(R'_1, \ldots, R'_K, C') \in \mathscr{P}^*_{GSD,s}$ corresponding to this decoding order is

$$R'_j = I\big(X_j; \hat{Y}_{\mathcal{L}} \big| X_{j+1}^K\big) \ \text{for } j = 1, 2, \ldots, K-1, \quad R'_K = I(X_K; \hat{Y}_{\mathcal{L}}), \quad C' = I(Y_{\mathcal{L}}; \hat{Y}_{\mathcal{L}}).$$

Following the Markov chain $\hat{Y}_i \leftrightarrow Y_i \leftrightarrow X_{\mathcal{K}} \leftrightarrow Y_j \leftrightarrow \hat{Y}_j$, $i \neq j$, it can be shown that

$$\sum_{l=1}^{L} I(Y_l; \hat{Y}_l | X_{\mathcal{K}}) + I(X_{\mathcal{K}}; \hat{Y}_{\mathcal{L}}) = I(Y_{\mathcal{L}}; \hat{Y}_{\mathcal{L}}).$$

Clearly, $r_{JD}$ can be achieved with the decoding order $\hat{Y}_{\mathcal{L}} \to X_K \to \cdots \to X_1$. Thus, $r_{JD}$ is dominated by a point in $\mathscr{P}^*_{GSD,s}$.

Case 2 holds if $C \le I(X_{\mathcal{K}}; \hat{Y}_{\mathcal{L}}) + \sum_{l=1}^{L} I(Y_l; \hat{Y}_l | X_{\mathcal{K}})$. We assume that

$$I\big(X_1^{j-1}; \hat{Y}_{\mathcal{L}} \big| X_j^K\big) \le C - \sum_{l=1}^{L} I(Y_l; \hat{Y}_l | X_{\mathcal{K}}) \le I\big(X_1^j; \hat{Y}_{\mathcal{L}} \big| X_{j+1}^K\big)$$

for some $j \in \mathcal{K}$. The resulting extreme point $r^2_{JD} = (R_1, R_2, \ldots, R_K, C)$ satisfies

$$\begin{aligned}
R_i &= I\big(X_i; \hat{Y}_{\mathcal{L}} \big| X_{i+1}^K\big) \quad \text{for } i < j, \\
R_i &= \left[ C - \sum_{l=1}^{L} I(Y_l; \hat{Y}_l | X_{\mathcal{K}}) - I\big(X_1^{j-1}; \hat{Y}_{\mathcal{L}} \big| X_j^K\big) \right]^+ \quad \text{for } i = j, \\
R_i &= 0 \quad \text{for } i > j, \\
C &\le I\big(X_1^j; \hat{Y}_{\mathcal{L}} \big| X_{j+1}^K\big) + \sum_{l=1}^{L} I(Y_l; \hat{Y}_l | X_{\mathcal{K}}),
\end{aligned}$$

where $[\,\cdot\,]^+$ denotes $\max\{\cdot, 0\}$. Note that users with index $i > j$ are inactive and are essentially removed from the network. In this case, the rate-fronthaul tuple does not correspond to a specific corner point obtained with a specific generalized successive decoding order; rather, it lies on the convex hull of two corner points of two different generalized successive decoding orders. To obtain a visualization of Case 2, the rate-fronthaul region for a two-user C-RAN model under a fixed joint distribution $p(x_1, x_2, y_1, y_2, \hat{y}_1, \hat{y}_2)$ is illustrated in Fig. 2. In the case of $K = j = 2$, it is shown that the rate-fronthaul tuple $r^2_{JD}$ lies on the convex hull of two corner points $r^1$ and $r^2$. To prove the statement mathematically, we consider generalized successive decoding with the following two different decoding orders.

Decoding order 1 is $X_K \to \cdots \to X_{j+1} \to \hat{Y}_{\mathcal{L}} \to X_j \to \cdots \to X_1$. The extreme point $r^1_{GSD} = (R^1_1, \ldots, R^1_K, C^1)$ of $\mathscr{P}^*_{GSD,s}$ corresponding to Decoding order 1 satisfies

$$R^1_i = I\big(X_i; \hat{Y}_{\mathcal{L}} \big| X_{i+1}^K\big) \ \text{for } i \le j, \quad R^1_i = 0 \ \text{for } i > j, \quad C^1 = I\big(Y_{\mathcal{L}}; \hat{Y}_{\mathcal{L}} \big| X_{j+1}^K\big),$$

where $C^1$ represents the fronthaul capacity required to achieve the rate tuple $(R^1_1, \ldots, R^1_K)$ with Decoding order 1.

[Fig. 2: An illustration of the rate-fronthaul tuple in Case 2 in Appendix A, with a two-user C-RAN model under a fixed joint distribution $p(x_1, x_2, y_1, y_2, \hat{y}_1, \hat{y}_2)$.]

Decoding order 2 is $X_K \to \cdots \to X_j \to \hat{Y}_{\mathcal{L}} \to X_{j-1} \to \cdots \to X_1$.
The extreme point $r^2_{GSD} = (R^2_1, \ldots, R^2_K, C^2)$ of $\mathscr{P}^*_{GSD,s}$ corresponding to Decoding order 2 satisfies

$$R^2_i = I\big(X_i; \hat{Y}_{\mathcal{L}} \big| X_{i+1}^K\big) \ \text{for } i < j, \quad R^2_i = 0 \ \text{for } i \ge j, \quad C^2 = I\big(Y_{\mathcal{L}}; \hat{Y}_{\mathcal{L}} \big| X_j^K\big),$$

where $C^2$ represents the fronthaul capacity required to achieve the rate tuple $(R^2_1, \ldots, R^2_K)$ with Decoding order 2. Observe that the rate tuples $(R^1_1, \ldots, R^1_K)$ and $(R^2_1, \ldots, R^2_K)$ given by the above two decoding orders differ only in the $j$th component, where $R^1_j = I(X_j; \hat{Y}_{\mathcal{L}} | X_{j+1}^K)$ and $R^2_j = 0$, while $R^1_i = R^2_i = R_i$ for all $i < j$. Now choose a parameter $\theta$ such that

$$\theta = \frac{C - \sum_{l=1}^{L} I(Y_l; \hat{Y}_l | X_{\mathcal{K}}) - I\big(X_1^{j-1}; \hat{Y}_{\mathcal{L}} \big| X_j^K\big)}{I\big(X_j; \hat{Y}_{\mathcal{L}} \big| X_{j+1}^K\big)}. \quad (45)$$
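The greedy construction used in this appendix, evaluating $R_{\pi_j} = f(\{\pi_1, \ldots, \pi_j\}) - f(\{\pi_1, \ldots, \pi_{j-1}\})$ along an ordering of $\mathcal{K}$, can be illustrated on a toy submodular function. The function below (a truncated modular function, standing in for $f(T) = \min\{C', I(X_T; \hat{Y}_{\mathcal{L}} | X_{T^c})\}$) is an assumed example, not derived from the channel model:

```python
import itertools

# Toy submodular set function: min(c, sum of per-user values) is submodular,
# mirroring the truncation by the fronthaul term in f(T).
a = {1: 1.0, 2: 2.0, 3: 1.5}   # assumed per-user "rates"
c = 3.0                         # assumed truncation level

def f(T):
    return min(c, sum(a[k] for k in T))

ground = [1, 2, 3]
for pi in itertools.permutations(ground):
    # Greedy (Edmonds) construction: R_{pi_j} = f({pi_1..pi_j}) - f({pi_1..pi_{j-1}})
    R = {}
    for j in range(1, len(pi) + 1):
        R[pi[j - 1]] = f(pi[:j]) - f(pi[:j - 1])
    # Each such point lies in the submodular polyhedron {sum_{k in T} R_k <= f(T)}:
    for r in range(1, len(ground) + 1):
        for T in itertools.combinations(ground, r):
            assert sum(R[k] for k in T) <= f(T) + 1e-12
print("all greedy points feasible")
```

By telescoping, every greedy point meets the full-set constraint with equality, which is exactly the extreme-point property invoked in the proof.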


More information

Resource Allocation with a Budget Constraint for Computing Independent Tasks in the Cloud

Resource Allocation with a Budget Constraint for Computing Independent Tasks in the Cloud Resource Allocaton wth a Budget Constrant for Computng Independent Tasks n the Cloud Wemng Sh and Bo Hong School of Electrcal and Computer Engneerng Georga Insttute of Technology, USA 2nd IEEE Internatonal

More information

Error Probability for M Signals

Error Probability for M Signals Chapter 3 rror Probablty for M Sgnals In ths chapter we dscuss the error probablty n decdng whch of M sgnals was transmtted over an arbtrary channel. We assume the sgnals are represented by a set of orthonormal

More information

Hidden Markov Models

Hidden Markov Models Hdden Markov Models Namrata Vaswan, Iowa State Unversty Aprl 24, 204 Hdden Markov Model Defntons and Examples Defntons:. A hdden Markov model (HMM) refers to a set of hdden states X 0, X,..., X t,...,

More information

A new construction of 3-separable matrices via an improved decoding of Macula s construction

A new construction of 3-separable matrices via an improved decoding of Macula s construction Dscrete Optmzaton 5 008 700 704 Contents lsts avalable at ScenceDrect Dscrete Optmzaton journal homepage: wwwelsevercom/locate/dsopt A new constructon of 3-separable matrces va an mproved decodng of Macula

More information

The Multiple Classical Linear Regression Model (CLRM): Specification and Assumptions. 1. Introduction

The Multiple Classical Linear Regression Model (CLRM): Specification and Assumptions. 1. Introduction ECONOMICS 5* -- NOTE (Summary) ECON 5* -- NOTE The Multple Classcal Lnear Regresson Model (CLRM): Specfcaton and Assumptons. Introducton CLRM stands for the Classcal Lnear Regresson Model. The CLRM s also

More information

COMPARISON OF SOME RELIABILITY CHARACTERISTICS BETWEEN REDUNDANT SYSTEMS REQUIRING SUPPORTING UNITS FOR THEIR OPERATIONS

COMPARISON OF SOME RELIABILITY CHARACTERISTICS BETWEEN REDUNDANT SYSTEMS REQUIRING SUPPORTING UNITS FOR THEIR OPERATIONS Avalable onlne at http://sck.org J. Math. Comput. Sc. 3 (3), No., 6-3 ISSN: 97-537 COMPARISON OF SOME RELIABILITY CHARACTERISTICS BETWEEN REDUNDANT SYSTEMS REQUIRING SUPPORTING UNITS FOR THEIR OPERATIONS

More information

Computing Correlated Equilibria in Multi-Player Games

Computing Correlated Equilibria in Multi-Player Games Computng Correlated Equlbra n Mult-Player Games Chrstos H. Papadmtrou Presented by Zhanxang Huang December 7th, 2005 1 The Author Dr. Chrstos H. Papadmtrou CS professor at UC Berkley (taught at Harvard,

More information

NUMERICAL DIFFERENTIATION

NUMERICAL DIFFERENTIATION NUMERICAL DIFFERENTIATION 1 Introducton Dfferentaton s a method to compute the rate at whch a dependent output y changes wth respect to the change n the ndependent nput x. Ths rate of change s called the

More information

Bézier curves. Michael S. Floater. September 10, These notes provide an introduction to Bézier curves. i=0

Bézier curves. Michael S. Floater. September 10, These notes provide an introduction to Bézier curves. i=0 Bézer curves Mchael S. Floater September 1, 215 These notes provde an ntroducton to Bézer curves. 1 Bernsten polynomals Recall that a real polynomal of a real varable x R, wth degree n, s a functon of

More information

Linear Approximation with Regularization and Moving Least Squares

Linear Approximation with Regularization and Moving Least Squares Lnear Approxmaton wth Regularzaton and Movng Least Squares Igor Grešovn May 007 Revson 4.6 (Revson : March 004). 5 4 3 0.5 3 3.5 4 Contents: Lnear Fttng...4. Weghted Least Squares n Functon Approxmaton...

More information

LINEAR REGRESSION ANALYSIS. MODULE IX Lecture Multicollinearity

LINEAR REGRESSION ANALYSIS. MODULE IX Lecture Multicollinearity LINEAR REGRESSION ANALYSIS MODULE IX Lecture - 30 Multcollnearty Dr. Shalabh Department of Mathematcs and Statstcs Indan Insttute of Technology Kanpur 2 Remedes for multcollnearty Varous technques have

More information

Week 5: Neural Networks

Week 5: Neural Networks Week 5: Neural Networks Instructor: Sergey Levne Neural Networks Summary In the prevous lecture, we saw how we can construct neural networks by extendng logstc regresson. Neural networks consst of multple

More information

Simultaneous Optimization of Berth Allocation, Quay Crane Assignment and Quay Crane Scheduling Problems in Container Terminals

Simultaneous Optimization of Berth Allocation, Quay Crane Assignment and Quay Crane Scheduling Problems in Container Terminals Smultaneous Optmzaton of Berth Allocaton, Quay Crane Assgnment and Quay Crane Schedulng Problems n Contaner Termnals Necat Aras, Yavuz Türkoğulları, Z. Caner Taşkın, Kuban Altınel Abstract In ths work,

More information

Games of Threats. Elon Kohlberg Abraham Neyman. Working Paper

Games of Threats. Elon Kohlberg Abraham Neyman. Working Paper Games of Threats Elon Kohlberg Abraham Neyman Workng Paper 18-023 Games of Threats Elon Kohlberg Harvard Busness School Abraham Neyman The Hebrew Unversty of Jerusalem Workng Paper 18-023 Copyrght 2017

More information

Winter 2008 CS567 Stochastic Linear/Integer Programming Guest Lecturer: Xu, Huan

Winter 2008 CS567 Stochastic Linear/Integer Programming Guest Lecturer: Xu, Huan Wnter 2008 CS567 Stochastc Lnear/Integer Programmng Guest Lecturer: Xu, Huan Class 2: More Modelng Examples 1 Capacty Expanson Capacty expanson models optmal choces of the tmng and levels of nvestments

More information

EGR 544 Communication Theory

EGR 544 Communication Theory EGR 544 Communcaton Theory. Informaton Sources Z. Alyazcoglu Electrcal and Computer Engneerng Department Cal Poly Pomona Introducton Informaton Source x n Informaton sources Analog sources Dscrete sources

More information

Speeding up Computation of Scalar Multiplication in Elliptic Curve Cryptosystem

Speeding up Computation of Scalar Multiplication in Elliptic Curve Cryptosystem H.K. Pathak et. al. / (IJCSE) Internatonal Journal on Computer Scence and Engneerng Speedng up Computaton of Scalar Multplcaton n Ellptc Curve Cryptosystem H. K. Pathak Manju Sangh S.o.S n Computer scence

More information

Consider the following passband digital communication system model. c t. modulator. t r a n s m i t t e r. signal decoder.

Consider the following passband digital communication system model. c t. modulator. t r a n s m i t t e r. signal decoder. PASSBAND DIGITAL MODULATION TECHNIQUES Consder the followng passband dgtal communcaton system model. cos( ω + φ ) c t message source m sgnal encoder s modulator s () t communcaton xt () channel t r a n

More information

The Minimum Universal Cost Flow in an Infeasible Flow Network

The Minimum Universal Cost Flow in an Infeasible Flow Network Journal of Scences, Islamc Republc of Iran 17(2): 175-180 (2006) Unversty of Tehran, ISSN 1016-1104 http://jscencesutacr The Mnmum Unversal Cost Flow n an Infeasble Flow Network H Saleh Fathabad * M Bagheran

More information

Chapter - 2. Distribution System Power Flow Analysis

Chapter - 2. Distribution System Power Flow Analysis Chapter - 2 Dstrbuton System Power Flow Analyss CHAPTER - 2 Radal Dstrbuton System Load Flow 2.1 Introducton Load flow s an mportant tool [66] for analyzng electrcal power system network performance. Load

More information

The lower and upper bounds on Perron root of nonnegative irreducible matrices

The lower and upper bounds on Perron root of nonnegative irreducible matrices Journal of Computatonal Appled Mathematcs 217 (2008) 259 267 wwwelsevercom/locate/cam The lower upper bounds on Perron root of nonnegatve rreducble matrces Guang-Xn Huang a,, Feng Yn b,keguo a a College

More information

Bezier curves. Michael S. Floater. August 25, These notes provide an introduction to Bezier curves. i=0

Bezier curves. Michael S. Floater. August 25, These notes provide an introduction to Bezier curves. i=0 Bezer curves Mchael S. Floater August 25, 211 These notes provde an ntroducton to Bezer curves. 1 Bernsten polynomals Recall that a real polynomal of a real varable x R, wth degree n, s a functon of the

More information

Refined Coding Bounds for Network Error Correction

Refined Coding Bounds for Network Error Correction Refned Codng Bounds for Network Error Correcton Shenghao Yang Department of Informaton Engneerng The Chnese Unversty of Hong Kong Shatn, N.T., Hong Kong shyang5@e.cuhk.edu.hk Raymond W. Yeung Department

More information

Convexity preserving interpolation by splines of arbitrary degree

Convexity preserving interpolation by splines of arbitrary degree Computer Scence Journal of Moldova, vol.18, no.1(52), 2010 Convexty preservng nterpolaton by splnes of arbtrary degree Igor Verlan Abstract In the present paper an algorthm of C 2 nterpolaton of dscrete

More information

Maximizing the number of nonnegative subsets

Maximizing the number of nonnegative subsets Maxmzng the number of nonnegatve subsets Noga Alon Hao Huang December 1, 213 Abstract Gven a set of n real numbers, f the sum of elements of every subset of sze larger than k s negatve, what s the maxmum

More information

Tornado and Luby Transform Codes. Ashish Khisti Presentation October 22, 2003

Tornado and Luby Transform Codes. Ashish Khisti Presentation October 22, 2003 Tornado and Luby Transform Codes Ashsh Khst 6.454 Presentaton October 22, 2003 Background: Erasure Channel Elas[956] studed the Erasure Channel β x x β β x 2 m x 2 k? Capacty of Noseless Erasure Channel

More information

On the Multicriteria Integer Network Flow Problem

On the Multicriteria Integer Network Flow Problem BULGARIAN ACADEMY OF SCIENCES CYBERNETICS AND INFORMATION TECHNOLOGIES Volume 5, No 2 Sofa 2005 On the Multcrtera Integer Network Flow Problem Vassl Vasslev, Marana Nkolova, Maryana Vassleva Insttute of

More information

Power law and dimension of the maximum value for belief distribution with the max Deng entropy

Power law and dimension of the maximum value for belief distribution with the max Deng entropy Power law and dmenson of the maxmum value for belef dstrbuton wth the max Deng entropy Bngy Kang a, a College of Informaton Engneerng, Northwest A&F Unversty, Yanglng, Shaanx, 712100, Chna. Abstract Deng

More information

State Amplification and State Masking for the Binary Energy Harvesting Channel

State Amplification and State Masking for the Binary Energy Harvesting Channel State Amplfcaton and State Maskng for the Bnary Energy Harvestng Channel Kaya Tutuncuoglu, Omur Ozel 2, Ayln Yener, and Sennur Ulukus 2 Department of Electrcal Engneerng, The Pennsylvana State Unversty,

More information

Online Appendix. t=1 (p t w)q t. Then the first order condition shows that

Online Appendix. t=1 (p t w)q t. Then the first order condition shows that Artcle forthcomng to ; manuscrpt no (Please, provde the manuscrpt number!) 1 Onlne Appendx Appendx E: Proofs Proof of Proposton 1 Frst we derve the equlbrum when the manufacturer does not vertcally ntegrate

More information

Lecture 12: Classification

Lecture 12: Classification Lecture : Classfcaton g Dscrmnant functons g The optmal Bayes classfer g Quadratc classfers g Eucldean and Mahalanobs metrcs g K Nearest Neghbor Classfers Intellgent Sensor Systems Rcardo Guterrez-Osuna

More information

Linear Regression Analysis: Terminology and Notation

Linear Regression Analysis: Terminology and Notation ECON 35* -- Secton : Basc Concepts of Regresson Analyss (Page ) Lnear Regresson Analyss: Termnology and Notaton Consder the generc verson of the smple (two-varable) lnear regresson model. It s represented

More information

On balancing multiple video streams with distributed QoS control in mobile communications

On balancing multiple video streams with distributed QoS control in mobile communications On balancng multple vdeo streams wth dstrbuted QoS control n moble communcatons Arjen van der Schaaf, José Angel Lso Arellano, and R. (Inald) L. Lagendjk TU Delft, Mekelweg 4, 68 CD Delft, The Netherlands

More information

University of Alberta. Library Release Form. Title of Thesis: Joint Bandwidth and Power Allocation in Wireless Communication Networks

University of Alberta. Library Release Form. Title of Thesis: Joint Bandwidth and Power Allocation in Wireless Communication Networks Unversty of Alberta Lbrary Release Form Name of Author: Xaowen Gong Ttle of Thess: Jont Bandwdth and Power Allocaton n Wreless Communcaton Networks Degree: Master of Scence Year ths Degree Granted: 2010

More information

On Network Coding of Independent and Dependent Sources in Line Networks

On Network Coding of Independent and Dependent Sources in Line Networks On Network Codng of Independent and Dependent Sources n Lne Networks Mayank Baksh, Mchelle Effros, WeHsn Gu, Ralf Koetter Department of Electrcal Engneerng Department of Electrcal Engneerng Calforna Insttute

More information

Approximately achieving Gaussian relay network capacity with lattice codes

Approximately achieving Gaussian relay network capacity with lattice codes Approxmately achevng Gaussan relay network capacty wth lattce codes Ayfer Özgür EFL, Lausanne, Swtzerland ayfer.ozgur@epfl.ch Suhas Dggav UCLA, USA and EFL, Swtzerland suhas@ee.ucla.edu Abstract Recently,

More information

Salmon: Lectures on partial differential equations. Consider the general linear, second-order PDE in the form. ,x 2

Salmon: Lectures on partial differential equations. Consider the general linear, second-order PDE in the form. ,x 2 Salmon: Lectures on partal dfferental equatons 5. Classfcaton of second-order equatons There are general methods for classfyng hgher-order partal dfferental equatons. One s very general (applyng even to

More information

2E Pattern Recognition Solutions to Introduction to Pattern Recognition, Chapter 2: Bayesian pattern classification

2E Pattern Recognition Solutions to Introduction to Pattern Recognition, Chapter 2: Bayesian pattern classification E395 - Pattern Recognton Solutons to Introducton to Pattern Recognton, Chapter : Bayesan pattern classfcaton Preface Ths document s a soluton manual for selected exercses from Introducton to Pattern Recognton

More information

Generalized Linear Methods

Generalized Linear Methods Generalzed Lnear Methods 1 Introducton In the Ensemble Methods the general dea s that usng a combnaton of several weak learner one could make a better learner. More formally, assume that we have a set

More information

Global Sensitivity. Tuesday 20 th February, 2018

Global Sensitivity. Tuesday 20 th February, 2018 Global Senstvty Tuesday 2 th February, 28 ) Local Senstvty Most senstvty analyses [] are based on local estmates of senstvty, typcally by expandng the response n a Taylor seres about some specfc values

More information

Affine transformations and convexity

Affine transformations and convexity Affne transformatons and convexty The purpose of ths document s to prove some basc propertes of affne transformatons nvolvng convex sets. Here are a few onlne references for background nformaton: http://math.ucr.edu/

More information

Application of Nonbinary LDPC Codes for Communication over Fading Channels Using Higher Order Modulations

Application of Nonbinary LDPC Codes for Communication over Fading Channels Using Higher Order Modulations Applcaton of Nonbnary LDPC Codes for Communcaton over Fadng Channels Usng Hgher Order Modulatons Rong-Hu Peng and Rong-Rong Chen Department of Electrcal and Computer Engneerng Unversty of Utah Ths work

More information

Lecture 12: Discrete Laplacian

Lecture 12: Discrete Laplacian Lecture 12: Dscrete Laplacan Scrbe: Tanye Lu Our goal s to come up wth a dscrete verson of Laplacan operator for trangulated surfaces, so that we can use t n practce to solve related problems We are mostly

More information

Rate-Memory Trade-off for the Two-User Broadcast Caching Network with Correlated Sources

Rate-Memory Trade-off for the Two-User Broadcast Caching Network with Correlated Sources Rate-Memory Trade-off for the Two-User Broadcast Cachng Network wth Correlated Sources Parsa Hassanzadeh, Antona M. Tulno, Jame Llorca, Elza Erkp arxv:705.0466v [cs.it] May 07 Abstract Ths paper studes

More information

Stanford University CS254: Computational Complexity Notes 7 Luca Trevisan January 29, Notes for Lecture 7

Stanford University CS254: Computational Complexity Notes 7 Luca Trevisan January 29, Notes for Lecture 7 Stanford Unversty CS54: Computatonal Complexty Notes 7 Luca Trevsan January 9, 014 Notes for Lecture 7 1 Approxmate Countng wt an N oracle We complete te proof of te followng result: Teorem 1 For every

More information

NP-Completeness : Proofs

NP-Completeness : Proofs NP-Completeness : Proofs Proof Methods A method to show a decson problem Π NP-complete s as follows. (1) Show Π NP. (2) Choose an NP-complete problem Π. (3) Show Π Π. A method to show an optmzaton problem

More information

Power Allocation for Distributed BLUE Estimation with Full and Limited Feedback of CSI

Power Allocation for Distributed BLUE Estimation with Full and Limited Feedback of CSI Power Allocaton for Dstrbuted BLUE Estmaton wth Full and Lmted Feedback of CSI Mohammad Fanae, Matthew C. Valent, and Natala A. Schmd Lane Department of Computer Scence and Electrcal Engneerng West Vrgna

More information

= z 20 z n. (k 20) + 4 z k = 4

= z 20 z n. (k 20) + 4 z k = 4 Problem Set #7 solutons 7.2.. (a Fnd the coeffcent of z k n (z + z 5 + z 6 + z 7 + 5, k 20. We use the known seres expanson ( n+l ( z l l z n below: (z + z 5 + z 6 + z 7 + 5 (z 5 ( + z + z 2 + z + 5 5

More information

Edge Isoperimetric Inequalities

Edge Isoperimetric Inequalities November 7, 2005 Ross M. Rchardson Edge Isopermetrc Inequaltes 1 Four Questons Recall that n the last lecture we looked at the problem of sopermetrc nequaltes n the hypercube, Q n. Our noton of boundary

More information

Solutions HW #2. minimize. Ax = b. Give the dual problem, and make the implicit equality constraints explicit. Solution.

Solutions HW #2. minimize. Ax = b. Give the dual problem, and make the implicit equality constraints explicit. Solution. Solutons HW #2 Dual of general LP. Fnd the dual functon of the LP mnmze subject to c T x Gx h Ax = b. Gve the dual problem, and make the mplct equalty constrants explct. Soluton. 1. The Lagrangan s L(x,

More information

THE ARIMOTO-BLAHUT ALGORITHM FOR COMPUTATION OF CHANNEL CAPACITY. William A. Pearlman. References: S. Arimoto - IEEE Trans. Inform. Thy., Jan.

THE ARIMOTO-BLAHUT ALGORITHM FOR COMPUTATION OF CHANNEL CAPACITY. William A. Pearlman. References: S. Arimoto - IEEE Trans. Inform. Thy., Jan. THE ARIMOTO-BLAHUT ALGORITHM FOR COMPUTATION OF CHANNEL CAPACITY Wllam A. Pearlman 2002 References: S. Armoto - IEEE Trans. Inform. Thy., Jan. 1972 R. Blahut - IEEE Trans. Inform. Thy., July 1972 Recall

More information

Formulas for the Determinant

Formulas for the Determinant page 224 224 CHAPTER 3 Determnants e t te t e 2t 38 A = e t 2te t e 2t e t te t 2e 2t 39 If 123 A = 345, 456 compute the matrx product A adj(a) What can you conclude about det(a)? For Problems 40 43, use

More information

Econ107 Applied Econometrics Topic 3: Classical Model (Studenmund, Chapter 4)

Econ107 Applied Econometrics Topic 3: Classical Model (Studenmund, Chapter 4) I. Classcal Assumptons Econ7 Appled Econometrcs Topc 3: Classcal Model (Studenmund, Chapter 4) We have defned OLS and studed some algebrac propertes of OLS. In ths topc we wll study statstcal propertes

More information

Structure and Drive Paul A. Jensen Copyright July 20, 2003

Structure and Drive Paul A. Jensen Copyright July 20, 2003 Structure and Drve Paul A. Jensen Copyrght July 20, 2003 A system s made up of several operatons wth flow passng between them. The structure of the system descrbes the flow paths from nputs to outputs.

More information

On an Extension of Stochastic Approximation EM Algorithm for Incomplete Data Problems. Vahid Tadayon 1

On an Extension of Stochastic Approximation EM Algorithm for Incomplete Data Problems. Vahid Tadayon 1 On an Extenson of Stochastc Approxmaton EM Algorthm for Incomplete Data Problems Vahd Tadayon Abstract: The Stochastc Approxmaton EM (SAEM algorthm, a varant stochastc approxmaton of EM, s a versatle tool

More information

OPTIMAL COMBINATION OF FOURTH ORDER STATISTICS FOR NON-CIRCULAR SOURCE SEPARATION. Christophe De Luigi and Eric Moreau

OPTIMAL COMBINATION OF FOURTH ORDER STATISTICS FOR NON-CIRCULAR SOURCE SEPARATION. Christophe De Luigi and Eric Moreau OPTIMAL COMBINATION OF FOURTH ORDER STATISTICS FOR NON-CIRCULAR SOURCE SEPARATION Chrstophe De Lug and Erc Moreau Unversty of Toulon LSEET UMR CNRS 607 av. G. Pompdou BP56 F-8362 La Valette du Var Cedex

More information

Lecture 4: November 17, Part 1 Single Buffer Management

Lecture 4: November 17, Part 1 Single Buffer Management Lecturer: Ad Rosén Algorthms for the anagement of Networs Fall 2003-2004 Lecture 4: November 7, 2003 Scrbe: Guy Grebla Part Sngle Buffer anagement In the prevous lecture we taled about the Combned Input

More information