On MIMO Channel Capacity with Output Quantization Constraints
Abbas Khalili (NYU, New York, USA), Stefano Rini (NCTU, Hsinchu, Taiwan), Luca Barletta (Politecnico di Milano, Milano, Italy; luca.barletta@polimi.it), Elza Erkip (NYU, New York, USA; elza@nyu.edu), Yonina C. Eldar (Technion, Haifa, Israel; yonina@ee.technion.ac.il)

arXiv: v1 [cs.IT] 5 Jun 2018

Abstract—The capacity of a Multiple-Input Multiple-Output (MIMO) channel in which the antenna outputs are processed by an analog linear combining network and quantized by a set of threshold quantizers is studied. The linear combining weights and quantization thresholds are selected from a set of possible configurations as a function of the channel matrix. The possible configurations of the combining network model specific analog receiver architectures, such as single antenna selection, sign quantization of the antenna outputs, or linear processing of the outputs. An interesting connection between the capacity of this channel and a constrained sphere packing problem, in which unit spheres are packed in a hyperplane arrangement, is shown. From a high-level perspective, this follows from the fact that each threshold quantizer can be viewed as a hyperplane partitioning the transmitter signal space. Accordingly, the output of the set of quantizers corresponds to the possible regions induced by the hyperplane arrangement corresponding to the channel realization and receiver configuration. This connection provides a number of important insights into the design of quantization architectures for MIMO receivers; for instance, it shows that for a given number of quantizers, choosing configurations which induce a larger number of partitions can lead to higher rates.

Index Terms—MIMO, capacity, one-bit quantization, sphere packing, hybrid analog-digital receiver.

I. INTRODUCTION

As the coupling of multiple antennas and low-resolution quantization holds the promise of enabling millimeter-wave communication, the effect of finite-precision output quantization on the performance of MIMO systems has been widely investigated in recent literature.
In [1], the authors propose a general framework to study the capacity of MIMO channels with various output quantization constraints and derive some initial results on the scaling of capacity in the number of available quantization levels. In this paper, we further our understanding of output quantization constraints in MIMO channels by drawing a connection between a constrained sphere packing problem and the formulation in [1]. This connection suggests a rather insightful geometric-combinatorial approach to the design of receiver quantization strategies for MIMO channels with output quantization.

Literature Review: In [2], low-resolution output quantization for MIMO channels is investigated through numerical evaluations. The authors are perhaps the first to note that the loss due to quantization can be relatively small. Quantization for the SISO channel is studied in detail, where it is shown that, if the output is quantized using M bits, then the optimal input distribution need not have more than 2^M + 1 points in its support. A cutting-plane algorithm is employed to compute this capacity and to generate the optimum input support. In [3], the authors study the capacity of MIMO channels with sign quantization of the outputs and reveal a connection between a geometric-combinatorial problem and the capacity of this model at high SNR.

Contributions: In the model of [1], the output of a MIMO channel is processed by an analog combining network before being fed to N_tq threshold quantizers. The combining network is chosen among a set of possible configurations as a function of the channel matrix: these configurations represent analog operations that can be performed by the receiver analog front-end. Through the problem formulation in [1], it is possible to study the performance of different receiver architectures as a function of the available quantization bits N_tq and transmit power. In this paper, we show that the capacity of the model in [1] can be approximately characterized using the solution of a geometric-combinatorial problem.

1) This work has been supported in part by NSF Grant #.
Each threshold quantizer in effect observes a linear combination of the noisy channel inputs and can thus be viewed as partitioning the transmit signal space with a hyperplane. The output of the set of quantizers identifies a region among those induced by the hyperplane arrangement corresponding to the channel matrix and receiver configuration. Transmitted points can be reliably distinguished at the receiver when they are separated by a hyperplane in the transmit space. Our result generalizes those of [1], [3] and provides an intuitive approach to designing effective, and sometimes surprising, quantization strategies. For example, one would expect that, for a receiver able to perform linear combining before quantization, the optimal transmission strategy is to perform a Singular Value Decomposition (SVD) followed by multilevel quantization of each sub-channel. We show that this scheme is actually sub-optimal at high SNR, as receiver configurations which induce a larger number of partitions may lead to higher transmission rates.

Organization: The channel model is introduced in Sec. II. Combinatorial notions are presented in Sec. III. Prior results and a motivating example are given in Sec. IV. The main result is presented in Sec. V. Sec. VI concludes the paper.
Fig. 1: System model for N_t = 2, N_r = 3, and N_tq = 4: the channel outputs W_1, W_2, W_3 are processed by the analog combiner V and compared against the thresholds t_1, ..., t_4 to produce Y_1, ..., Y_4.

Notation: All logarithms are taken in base two. The vector diag{M} is the diagonal of the matrix M, while λ(M) is the vector of eigenvalues of M. The identity matrix of size n is I_n; the all-zero/all-one matrix of size n × m is 0_{n×m}/1_{n×m}, respectively. Dimensions for these matrices are omitted when implied by the context. We adopt the convention that (n choose i) = 0 for i > n.

II. CHANNEL MODEL

Consider the discrete-time MIMO channel with N_t/N_r transmit/receive antennas in which an input vector X_n = [X_{1,n} ... X_{N_t,n}]^T results in the output vector W_n = [W_{1,n} ... W_{N_r,n}]^T according to the relationship

  W_n = H X_n + Z_n,  1 ≤ n ≤ N,  (1)

where Z_n is an i.i.d. Gaussian vector of size N_r with zero mean and unit variance, and H is a full-rank matrix of size N_r × N_t (i.e., rank(H) = min(N_t, N_r)), fixed throughout the transmission block-length and known at the receiver and transmitter. (The full-rank assumption is justified for richly scattering environments.) The input is subject to the power constraint Σ_{n=1}^{N} E[||X_n||^2] ≤ NP, where ||X_n|| is the 2-norm.

We study a variation of the model in (1), shown in Fig. 1, in which the output vector W_n is processed by a receiver analog front-end and fed to N_tq threshold one-bit quantizers. This results in the vector Y_n = [Y_{1,n} ... Y_{N_tq,n}]^T ∈ {−1, +1}^{N_tq} given by

  Y_n = sign(V W_n − t),  1 ≤ n ≤ N,  (2)

where V is the analog combining matrix of size N_tq × N_r, t is a threshold vector of length N_tq, and sign(u) is the function producing the sign of each component of the vector u as plus or minus one. The matrix V and the vector t are selected among a set of available configurations F [1]:

  {V, t} ∈ F ⊆ {R^{N_tq × N_r}, R^{N_tq}}.  (3)

For a fixed receiver configuration {V, t}, the capacity of the channel in (2) is obtained as

  C(V, t) = max_{P_X(x): E[||X||^2] ≤ P} I(X; Y).  (4)

We are interested in determining the largest attainable performance over all possible configurations, leading to

  C(F) = max_{{V,t} ∈ F} C(V, t).  (5)
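As a concrete illustration of (1)-(2), the sketch below (our own, not from the paper; the randomly drawn H, V and t are placeholder assumptions) simulates a single channel use for the dimensions of Fig. 1, under the row-normalization diag{HH^T} = diag{VV^T} = 1 used later in the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
Nt, Nr, Ntq = 2, 3, 4  # dimensions of the example in Fig. 1

def unit_rows(M):
    """Scale each row of M to unit 2-norm."""
    return M / np.linalg.norm(M, axis=1, keepdims=True)

H = unit_rows(rng.standard_normal((Nr, Nt)))   # channel matrix, diag{HH^T} = 1
V = unit_rows(rng.standard_normal((Ntq, Nr)))  # analog combining matrix
t = rng.standard_normal(Ntq)                   # threshold vector

def channel_use(x):
    """Map a transmit vector x to the quantized observation Y in {-1,+1}^Ntq."""
    z = rng.standard_normal(Nr)             # unit-variance Gaussian noise Z_n
    w = H @ x + z                           # antenna outputs W_n, eq. (1)
    return np.where(V @ w - t >= 0, 1, -1)  # threshold quantizers, eq. (2)

y = channel_use(np.array([1.0, -0.5]))
```

Each of the N_tq entries of y reports only on which side of the hyperplane {w : v_i^T w = t_i} the noisy output fell, which is the geometric picture developed in Sec. III.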
In the following, we provide an approximate characterization of the solution of the maximization in (5) under the assumption that diag{HH^T} = diag{VV^T} = 1_{N_r}. Under this assumption, the derivation of the results is particularly succinct and thus fitting to the available space. The more general case of an arbitrary channel matrix H and an arbitrary combining matrix V is presented in [4].

III. COMBINATORIAL INTERLUDE

This section briefly introduces a few combinatorial concepts useful in the remainder of the paper. A hyperplane arrangement A is a finite set of n affine hyperplanes in R^m for some n, m ∈ N. A hyperplane arrangement A = {x ∈ R^m : a_i^T x = b_i}_{i=1}^{n} can be expressed as A = {x : Ax = b}, where A is obtained by letting each row correspond to a_i^T and defining b = [b_1 ... b_n]^T. A hyperplane arrangement is said to be in General Position (GP) if and only if every m × m sub-matrix of A has non-zero determinant [5].

Lemma III.1. A hyperplane arrangement of size n in R^m divides R^m into at most r(m, n) = Σ_{i=0}^{m} (n choose i) regions.

Lemma III.2. A hyperplane arrangement of size n in R^m in which all the hyperplanes pass through the origin divides R^m into at most r_0(n, m) = 2 Σ_{i=0}^{m−1} (n−1 choose i) regions.

Lemma III.3. Let A be a hyperplane arrangement of size l in R^m and consider a hyperplane arrangement B of size dl, with d ∈ N, containing d hyperplanes parallel to each of the hyperplanes in A. Then B divides R^m into at most r_p(m, l, d) = Σ_{i=0}^{m} (l choose i) d^i regions; for m ≥ l this bound equals (1 + d)^l.

A necessary condition to attain the equalities in Lem. III.1, Lem. III.2 and Lem. III.3 is for the hyperplane arrangement A to be in GP.

A unitary sphere packing in R^m is defined as

  P = ∪_{i=1}^{N} S_m(c_i, 1),  (6)

where S_m(c, r) is the m-dimensional hyper-sphere with center c and radius r. A hyperplane separates two spheres if each sphere belongs to one of the half-spaces induced by the hyperplane. A sphere packing P is said to be separable by the hyperplane arrangement A if any two spheres are separated by at least one hyperplane in A. A sphere packing in a sphere is a packing P for which P ⊆ S(c, r) for some c, r.
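The region-counting bounds above are straightforward to evaluate; the sketch below is our reconstruction of the three formulas (the exact form of r_p is our reading of the garbled source), checked against the counts used in the motivating example of Sec. IV-B for N_tq = 4 lines in R^2: 5 parallel-line regions, 8 central regions, 9 grid regions, 11 regions in general position.

```python
from math import comb

def r(m, n):
    """Lem. III.1: max regions cut by n affine hyperplanes in R^m."""
    return sum(comb(n, i) for i in range(m + 1))

def r0(n, m):
    """Lem. III.2: max regions cut by n central hyperplanes in R^m."""
    return 2 * sum(comb(n - 1, i) for i in range(m))

def rp(m, l, d):
    """Lem. III.3 (as reconstructed): max regions cut by l parallel classes
    of d hyperplanes each in R^m; equals (1 + d)**l whenever m >= l."""
    return sum(comb(l, i) * d**i for i in range(m + 1))

# Counts for the N_t = 2, N_tq = 4 example: one class of 4 parallel lines,
# 4 central lines, a 2x2 grid, and 4 lines in general position.
counts = (rp(2, 1, 4), r0(4, 2), rp(2, 2, 2), r(2, 4))
```

Here `counts` evaluates to (5, 8, 9, 11), matching the four configurations discussed in Sec. IV-B.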
Our aim is to show a connection between the capacity in (5) and the following sphere packing problem.

Definition III.4. Separable sphere packing in a sphere: Given a hyperplane arrangement A and a constant r ∈ R, define r_ssps(A, r) as the largest number of unit spheres in a packing P contained in the sphere S(0, r) and separable by A.

IV. PRIOR RESULTS AND A MOTIVATING EXAMPLE

The maximization in (5) yields the optimal performance for any set of possible receiver configurations. One is often interested in studying and comparing the performance of specific classes of receiver configurations: three such classes
are studied in detail in [1]: single antenna selection with multilevel quantization, sign quantization of the outputs, and linear combining with multilevel quantization.

A. Prior Results

The simplest receiver architecture of interest is perhaps the one in which a single antenna output is selected by the receiver and quantized through a high-resolution quantizer. This is obtained in the model of Sec. II by setting

  F = { V = [1_{N_tq × 1}  0_{N_tq × (N_r − 1)}], t ∈ R^{N_tq} },  (7)

where the column 1_{N_tq × 1} selects the antenna with the highest channel gain.

Proposition 1 ([1, Prop. 2]). The capacity of the MIMO channel with single antenna selection and multilevel quantization is upper bounded as

  C_select ≤ (1/2) log( min{ 1 + ||h_max||^2 P, (N_tq + 1)^2 } ),  (8)

where h_max^T is the row of H with the largest 2-norm. The upper bound in (8) can be attained to within 2 bits-per-channel-use (bpcu).

Our main result, discussed in detail in Sec. V, is inspired by an intriguing connection between Lem. III.2 and the infinite-SNR capacity of the MIMO channel with sign quantization of the outputs [3]. Note that the model in [3] is obtained by setting N_tq = N_r and letting F be the set of all matrices obtained by permuting the rows of [I, 0].

Proposition 2 ([3, Prop. 3]). The capacity of the MIMO channel with sign quantization of the outputs in which H is in GP is bounded at infinite SNR as

  log(r_0(N_r, N_t)) ≤ C_sign^{SNR=∞} ≤ log(r_0(N_r, N_t) + 1).

Recall that the most general architecture in Sec. II has

  F = { V ∈ R^{N_tq × N_r}, t ∈ R^{N_tq} },  (9)

and corresponds to a receiver analog front-end which is able to perform any linear combination of the antenna outputs before quantization.

Proposition 3 ([1, Prop. 6]). The capacity of a MIMO channel with linear combining and multilevel quantization is upper bounded as

  C_linear ≤ R*(λ(H), P, N_tq) + 2K.  (10)

The capacity is within a gap of 3K bpcu from the upper bound in (10) for

  R*(λ(H), P, N_tq) = (1/2) Σ_{i=1}^{K} log(1 + λ_i^2 P_i)   if Σ_{i=1}^{K} ( √(1 + λ_i^2 P_i) − 1 ) ≤ N_tq,
                    = K log(N_tq/K + 1)                       otherwise,  (11)

with K = min{N_t, N_r}, P_i = (µ − λ_i^{−2})^+, and µ ∈ R the smallest value for which Σ_i P_i = P. To establish the achievability of Prop.
3, the SVD can be used to transform the channel into K = min{N_t, N_r} parallel sub-channels with independent unit-variance additive noise and gains λ(H). After the SVD, the quantization strategy is chosen depending on whether the performance is bounded by the effect of the additive noise or by the quantization noise.

B. Motivating Example for the Combinatorial Approach

Let us consider the three architectures in Propositions 1-3 for the case of N_t = 2, N_r = 3 and N_tq = 4, also shown in Fig. 1, and provide some high-level intuition on the relationship between capacity and the sphere packing problem in Def. III.4.

Prop. 1: Since the threshold quantizers all sample the same antenna output, the number of possible outputs is at most N_tq + 1, so that the performance in Prop. 1 is bounded by log(N_tq + 1) = log 5 ≈ 2.3 bpcu at high SNR. This receiver configuration can be interpreted as follows: an antenna output represents a line in the two-dimensional transmit signal space; each threshold quantizer corresponds to a translation of this line, and these N_tq parallel lines partition the signal space into at most N_tq + 1 subregions.

Prop. 2: Sign quantization of the outputs corresponds to the hyperplane arrangement in which all hyperplanes pass through the origin: the number of regions induced by this arrangement is obtained through Lem. III.2. There are r_0 = 8 partitions, as also shown in Fig. 2b, yielding a maximum rate of 3 bpcu, attainable at high SNR.

Prop. 3: When the receiver can perform linear combining before quantization, the SVD can be used to transform the channel into two parallel sub-channels. This strategy corresponds to the hyperplane arrangement in Lem. III.3, and the number of partitions induced is 9, as also shown in Fig. 2c.

Lem. III.1: This lemma indicates that the largest number of regions is actually 11, so that the rate log(11) ≈ 3.46 bpcu can be obtained through the receiver configuration in Fig. 2d at high SNR.
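The partition counts above can also be checked empirically: each point x of the transmit plane is mapped to its sign pattern sign(Ax − b), and the number of distinct patterns over a fine grid counts the regions of the arrangement. The sketch below is our own illustration; the specific line placements are illustrative choices, not the exact configurations of Fig. 2.

```python
import numpy as np

def count_regions(A, b, lim=5.0, step=0.05):
    """Count regions of the line arrangement {x : a_i^T x = b_i} by counting
    distinct sign patterns on a grid (small offsets keep samples off the lines)."""
    xs = np.arange(-lim, lim, step) + 0.013
    ys = np.arange(-lim, lim, step) + 0.0071
    gx, gy = np.meshgrid(xs, ys)
    pts = np.stack([gx.ravel(), gy.ravel()], axis=1)
    signs = (pts @ np.asarray(A, dtype=float).T - np.asarray(b, dtype=float)) > 0
    return len(np.unique(signs, axis=0))

# Four parallel lines (Prop. 1 style): x = -1.5, -0.5, 0.5, 1.5
parallel = count_regions([(1, 0)] * 4, [-1.5, -0.5, 0.5, 1.5])
# Four lines through the origin (Prop. 2 style)
central = count_regions([(1, 0), (0, 1), (1, 1), (1, -1)], [0, 0, 0, 0])
# A 2x2 grid of lines (Prop. 3 / SVD style): x = +/-1, y = +/-1
grid = count_regions([(1, 0), (1, 0), (0, 1), (0, 1)], [-1, 1, -1, 1])
# Four lines in general position (Lem. III.1)
gp = count_regions([(1, 0), (0, 1), (1, 1), (1, -1)], [2, 1, 0, -1])
```

With these placements the four counts come out to 5, 8, 9 and 11 regions respectively, matching Lemmas III.1-III.3.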
Given the above interpretation of the capacity at high SNR, a feasible finite-SNR strategy is the one in which, for a given receiver configuration, the channel inputs are chosen as the centers of spheres with sufficiently large radius inside each partition, subject to the power constraint. The average achievable rate of the four strategies discussed above is plotted in Fig. 3. Each line in Fig. 3 corresponds to one of the sphere packing configurations in Fig. 2. For a given channel realization, V and t are chosen to result in the partitionings of the transmitter space corresponding to each of the subfigures in Fig. 2, appropriately scaled by the available transmit power. Note that the configurations are not optimized. The channel inputs are then chosen as uniformly distributed over the centers of the spheres packed in the partitionings. Note that this does not contradict the result of Prop. 3, since the inner bound is 2 bpcu from the outer bound.
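The two regimes visible in these curves, noise-limited at low SNR and quantization-limited at high SNR, also appear in the antenna-selection bound (8). A toy numerical evaluation, under our reading of the reconstructed formula (a sketch, not the authors' code):

```python
from math import log2

def c_select_ub(h_norm_sq, P, Ntq):
    """Prop. 1 upper bound (8): 0.5*log2(min{1 + ||h_max||^2 P, (Ntq + 1)^2}),
    i.e. the minimum of the unquantized SISO capacity and log2(Ntq + 1)."""
    return 0.5 * log2(min(1 + h_norm_sq * P, (Ntq + 1) ** 2))

# At low SNR the additive noise dominates; at high SNR the bound saturates
# at log2(Ntq + 1) bpcu, e.g. log2(5) for Ntq = 4.
low = c_select_ub(1.0, 1.0, 4)    # noise-limited regime
high = c_select_ub(1.0, 1e9, 4)   # quantization-limited regime
```

For N_tq = 4 the high-SNR value saturates at log2(5) ≈ 2.32 bpcu, the ceiling of the Fig. 2a configuration.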
Fig. 2: Different receiver output quantization strategies: (a) configuration corresponding to Prop. 1; (b) configuration corresponding to Prop. 2; (c) configuration corresponding to Prop. 3; (d) configuration inspired by Lem. III.1.

Fig. 3: Simulation results (rate in bpcu versus P in dB) for N_tq = 4, N_r = 3 and N_t = 2, discussed in Sec. IV-B, for the configurations of Figs. 2a-2d together with the outer bound. The average performance is calculated over real i.i.d. zero-mean, unit-variance Gaussian channel gains, further scaled to guarantee that each row has unitary 2-norm. The capacity of the channel without quantization constraints is also provided as a reference.

From Fig. 3 we see that, at high SNR, the best performance is attained by the configuration corresponding to Lem. III.1, since at high SNR the performance is determined by the number of transmitted points. As the SNR decreases, configurations with fewer transmitted points perform better.

V. MAIN RESULT

Sec. IV-B provides a geometric-combinatorial interpretation of the capacity of the model in (1)-(2) for the receiver architectures considered in [1]. The main result of the paper is to make this interpretation more rigorous and more general.

Theorem 1. The capacity expression in (5) when diag{HH^T} = diag{VV^T} = 1_{N_r} is upper bounded as

  C(F) ≤ max_A log r_ssps(A, √P) + (3/2) K + 3,  (12)

for

  A ∈ { {x : VHx = t} : (V, t) ∈ F },  (13)

and K = max{N_t, N_r}. The capacity is within 2.5 N_t bpcu from the outer bound in (12).

Proof: Only the converse proof is presented here, while the achievability proof is provided in [4]. Let us choose the input and output alphabets as X = Y = [0 : r(N_t, N_tq)] and let the channel transition probability be determined by the channel input support and the receiver analog configuration. Also, let us define sign*(x) as

  sign*(x) = x          if |x| < 1,
           = sign(x)    if |x| ≥ 1,

and the set Ñ_m as

  Ñ_m = { n + ñ_n : n + ñ_n ∈ S_m(n, 1), n ∈ Z^m },  (14)

that is, Ñ_m is composed of a set of points selected from the unit spheres around the integer points of Z^m.
Finally, let Q_{Ñ_m}(x) be the mapping which assigns each point in R^m to the closest point in Ñ_m, and

  W̃^N = H Q_{Ñ_{N_t}}(X^N),  Ỹ^N = sign(V W̃^N − t),  E^N = W^N − W̃^N.

Using Fano's inequality, we write

  N(R − ε_N) ≤ I(Ỹ^N, E^N; X^N) ≤ I(Ỹ^N; X^N) + H(E^N) − H(Z^N)  (15a)
            ≤ I(Ỹ^N; X^N) + (N N_r / 2) log(3/2),  (15b)

where, in (15a), we have used the fact that we can reconstruct Y^N from Ỹ^N and the value of E^N. In (15b), we used the fact that, since diag{HH^T} = 1_{N_r}, the components of H(X − Q_{Ñ_{N_t}}(X)) have support at most [−1/√2, 1/√2]. The largest variance of a random variable with finite support is attained when the probability mass is evenly split between the end points; together with the unit-variance additive noise, this yields Var[E_i] ≤ 3/2. Using the fact that the Gaussian distribution maximizes entropy, we obtain H(E_i) ≤ (1/2) log(2πe · 3/2).

From a high-level perspective, (15) shows that the capacity of the channel in (1)-(2) is close to the capacity of the channel with no additive noise in which the input is mapped to Ñ_m. Next, we show that restricting the input to a peak power constraint, instead of an average power constraint, has a bounded effect on the capacity. Let us represent X in hyper-spherical coordinates as X = φ|X| for φ ∈ S_{N_t}(0, 1), ||φ|| = 1, and define X̂^N through

  X̂_n = φ_n ( |X_n| mod √P ),  1 ≤ n ≤ N,  (16)
where mod indicates the modulus operation; in other words, X̂ has the same direction as X but its modulus is folded over √P. Accordingly, define

  Ŵ^N = H Q_{Ñ_{N_t}}(X̂^N),  Ŷ^N = sign(V Ŵ^N − t),

and use these definitions to further bound the term I(Ỹ^N; X^N) in (15b) as

  I(Ỹ^N; X^N) ≤ I(Ŷ^N, Ỹ^N; X^N, X̂^N)  (17)
             = I(Ŷ^N; X̂^N) + I(Ŷ^N; X^N | X̂^N) + I(Ỹ^N; X^N, X̂^N | Ŷ^N).  (18)

Note that I(Ŷ^N; X^N | X̂^N) = 0 because of the Markov chain Ŷ^N − X̂^N − X^N. For the term I(Ỹ^N; X^N, X̂^N | Ŷ^N) we write

  I(Ỹ^N; X^N, X̂^N | Ŷ^N) ≤ H(Ỹ^N | Ŷ^N)  (19a)
                          ≤ H(H(X^N − X̂^N))  (19b)
                          ≤ H(X^N − X̂^N),  (19c)

where (19a) follows from the fact that Ỹ^N is a discrete random variable, (19b) from the fact that X and X̂ are also discrete random variables, and (19c) from the fact that H is full rank by assumption. Next, to bound the term H(X^N − X̂^N), we write

  X − X̂ = φ √P ⌊ |X| / √P ⌋,  (20)

where ⌊·⌋ of the ratio indicates the quotient of the modulus operation. The entropy of this random variable can then be bounded as

  H(X^N − X̂^N) ≤ H(φ^N) + H(⌊ |X^N| / √P ⌋) ≤ N(N_r − 1) + N max H(⌊ |X| / √P ⌋).

It can be shown that H(⌊ |X| / √P ⌋) ≤ 3 bpcu: the proof follows from the fact that the power constraint can be violated only a finite number of times, which leads to the fact that ⌊ |X| / √P ⌋ is concentrated around small integer values. By combining the bounds in (15) and (17), we obtain

  N(R − ε_N) ≤ I(Ŷ^N; X̂^N) + (3/2) NK + 3N ≤ N max_{P_X} I(Ŷ; X̂) + (3/2) NK + 3N.  (21)

Let us now evaluate the mutual information term I(Ŷ^N; X̂^N): Ŷ^N is a deterministic function of X̂^N and can be interpreted as the membership function indicating which partition of the hyperplane arrangement the input belongs to. For this reason, I(Ŷ^N; X̂^N) is maximized by choosing the input support as a subset of Ñ_m in which a single point is contained in each partition induced by {VHx = t}, and letting the input be uniformly distributed over this set. As a final step of the proof, we note that the upper bound in (21) can be minimized over the choice of the set Ñ_m in (14).
In other words, by varying the choice of ñ_n in (14), the points of Ñ_m can be moved outside the corresponding partition, thus tightening the bound in (21). Accordingly, unless a partition contains a unit ball centered around a value in R^{N_r}, the value ñ_n can be chosen so that Ñ_m does not contain a point in that partition. It then follows that I(Ŷ^N; X̂^N) ≤ log r_ssps(A, √P), which is the desired result.

Remark V.1. Th. 1 extends the results in Sec. IV-A, as it holds for any set of possible receiver configurations F in (3); the results in Sec. IV-A only hold when F has a specific form, as in (7) or (9). On the other hand, Th. 1 does not provide a closed-form characterization of capacity, as it involves the solution of a packing problem. In particular, letting F in (13) have the form of (7) or (9) does not immediately recover the capacity characterizations in Sec. IV-A, as Th. 1 follows a different approach than [1] to bound capacity.

Remark V.2. When considering the model with arbitrary H and V, the result in Th. 1 generalizes as follows. The channel model in (2) is reduced to a model in which V and H are such that diag{HH^T} = diag{VV^T} = 1_{N_r} by letting the additive noise Z_n have a general covariance matrix. For a channel model under this normalization, consider the additive noise after combining, Z̃_n = V Z_n: the variance of the i-th entry of Z̃_n, Z̃_{i,n}, determines the uncertainty in the output of the i-th quantizer, Y_{i,n}. Accordingly, the capacity is approximately equal to the logarithm of the number of separable points which can be fitted in the sphere of radius √P such that each point is at distance at least (Var[Z̃_{i,n}])^{1/2} from the i-th hyperplane. The complete derivation can be found in [4].

VI. CONCLUSION

In this paper, the capacity of a MIMO channel with output quantization constraints is investigated for receivers equipped with analog combiners and one-bit threshold quantizers. The connection between the capacity of the system and a constrained sphere packing problem is shown by arguing that the threshold quantizers can be interpreted as hyperplanes partitioning the transmit signal space.
This connection reveals, for example, that the infinite-SNR capacity of a channel with a linear combiner is attained by a receiver configuration which partitions the transmit signal space into the largest number of regions.

REFERENCES

[1] S. Rini, L. Barletta, Y. C. Eldar, and E. Erkip, "A general framework for MIMO receivers with low-resolution quantization," in Proc. IEEE Inf. Theory Workshop, Nov. 2017.
[2] J. Singh, O. Dabeer, and U. Madhow, "On the limits of communication with low-precision analog-to-digital conversion at the receiver," IEEE Trans. Commun., vol. 57, no. 12, 2009.
[3] J. Mo and R. W. Heath, "Capacity analysis of one-bit quantized MIMO systems with transmitter channel state information," IEEE Trans. Signal Process., vol. 63, no. 20, 2015.
[4] A. Khalili, S. Rini, L. Barletta, Y. Eldar, and E. Erkip, "A general framework for low-resolution receivers for MIMO channels," in preparation.
[5] T. Cover, "Geometrical and statistical properties of systems of linear inequalities with applications in pattern recognition," IEEE Trans. Electron. Comput., vol. EC-14, no. 3, pp. 326-334, Jun. 1965.
THE ARIMOTO-BLAHUT ALGORITHM FOR COMPUTATION OF CHANNEL CAPACITY Wllam A. Pearlman 2002 References: S. Armoto - IEEE Trans. Inform. Thy., Jan. 1972 R. Blahut - IEEE Trans. Inform. Thy., July 1972 Recall
More informationn α j x j = 0 j=1 has a nontrivial solution. Here A is the n k matrix whose jth column is the vector for all t j=0
MODULE 2 Topcs: Lnear ndependence, bass and dmenson We have seen that f n a set of vectors one vector s a lnear combnaton of the remanng vectors n the set then the span of the set s unchanged f that vector
More informationCS : Algorithms and Uncertainty Lecture 17 Date: October 26, 2016
CS 29-128: Algorthms and Uncertanty Lecture 17 Date: October 26, 2016 Instructor: Nkhl Bansal Scrbe: Mchael Denns 1 Introducton In ths lecture we wll be lookng nto the secretary problem, and an nterestng
More informationLecture 12: Discrete Laplacian
Lecture 12: Dscrete Laplacan Scrbe: Tanye Lu Our goal s to come up wth a dscrete verson of Laplacan operator for trangulated surfaces, so that we can use t n practce to solve related problems We are mostly
More informationCommunication with AWGN Interference
Communcaton wth AWG Interference m {m } {p(m } Modulator s {s } r=s+n Recever ˆm AWG n m s a dscrete random varable(rv whch takes m wth probablty p(m. Modulator maps each m nto a waveform sgnal s m=m
More informationThe Gaussian classifier. Nuno Vasconcelos ECE Department, UCSD
he Gaussan classfer Nuno Vasconcelos ECE Department, UCSD Bayesan decson theory recall that we have state of the world X observatons g decson functon L[g,y] loss of predctng y wth g Bayes decson rule s
More informationChapter 8 SCALAR QUANTIZATION
Outlne Chapter 8 SCALAR QUANTIZATION Yeuan-Kuen Lee [ CU, CSIE ] 8.1 Overvew 8. Introducton 8.4 Unform Quantzer 8.5 Adaptve Quantzaton 8.6 Nonunform Quantzaton 8.7 Entropy-Coded Quantzaton Ch 8 Scalar
More informationAssortment Optimization under MNL
Assortment Optmzaton under MNL Haotan Song Aprl 30, 2017 1 Introducton The assortment optmzaton problem ams to fnd the revenue-maxmzng assortment of products to offer when the prces of products are fxed.
More informationFeature Selection: Part 1
CSE 546: Machne Learnng Lecture 5 Feature Selecton: Part 1 Instructor: Sham Kakade 1 Regresson n the hgh dmensonal settng How do we learn when the number of features d s greater than the sample sze n?
More information10-701/ Machine Learning, Fall 2005 Homework 3
10-701/15-781 Machne Learnng, Fall 2005 Homework 3 Out: 10/20/05 Due: begnnng of the class 11/01/05 Instructons Contact questons-10701@autonlaborg for queston Problem 1 Regresson and Cross-valdaton [40
More informationAPPROXIMATE PRICES OF BASKET AND ASIAN OPTIONS DUPONT OLIVIER. Premia 14
APPROXIMAE PRICES OF BASKE AND ASIAN OPIONS DUPON OLIVIER Prema 14 Contents Introducton 1 1. Framewor 1 1.1. Baset optons 1.. Asan optons. Computng the prce 3. Lower bound 3.1. Closed formula for the prce
More informationPsychology 282 Lecture #24 Outline Regression Diagnostics: Outliers
Psychology 282 Lecture #24 Outlne Regresson Dagnostcs: Outlers In an earler lecture we studed the statstcal assumptons underlyng the regresson model, ncludng the followng ponts: Formal statement of assumptons.
More informationLecture 20: Lift and Project, SDP Duality. Today we will study the Lift and Project method. Then we will prove the SDP duality theorem.
prnceton u. sp 02 cos 598B: algorthms and complexty Lecture 20: Lft and Project, SDP Dualty Lecturer: Sanjeev Arora Scrbe:Yury Makarychev Today we wll study the Lft and Project method. Then we wll prove
More informationProblem Set 9 Solutions
Desgn and Analyss of Algorthms May 4, 2015 Massachusetts Insttute of Technology 6.046J/18.410J Profs. Erk Demane, Srn Devadas, and Nancy Lynch Problem Set 9 Solutons Problem Set 9 Solutons Ths problem
More informationSTAT 309: MATHEMATICAL COMPUTATIONS I FALL 2018 LECTURE 16
STAT 39: MATHEMATICAL COMPUTATIONS I FALL 218 LECTURE 16 1 why teratve methods f we have a lnear system Ax = b where A s very, very large but s ether sparse or structured (eg, banded, Toepltz, banded plus
More informationP R. Lecture 4. Theory and Applications of Pattern Recognition. Dept. of Electrical and Computer Engineering /
Theory and Applcatons of Pattern Recognton 003, Rob Polkar, Rowan Unversty, Glassboro, NJ Lecture 4 Bayes Classfcaton Rule Dept. of Electrcal and Computer Engneerng 0909.40.0 / 0909.504.04 Theory & Applcatons
More informationPulse Coded Modulation
Pulse Coded Modulaton PCM (Pulse Coded Modulaton) s a voce codng technque defned by the ITU-T G.711 standard and t s used n dgtal telephony to encode the voce sgnal. The frst step n the analog to dgtal
More informationWinter 2008 CS567 Stochastic Linear/Integer Programming Guest Lecturer: Xu, Huan
Wnter 2008 CS567 Stochastc Lnear/Integer Programmng Guest Lecturer: Xu, Huan Class 2: More Modelng Examples 1 Capacty Expanson Capacty expanson models optmal choces of the tmng and levels of nvestments
More informationStanford University CS359G: Graph Partitioning and Expanders Handout 4 Luca Trevisan January 13, 2011
Stanford Unversty CS359G: Graph Parttonng and Expanders Handout 4 Luca Trevsan January 3, 0 Lecture 4 In whch we prove the dffcult drecton of Cheeger s nequalty. As n the past lectures, consder an undrected
More informationNUMERICAL DIFFERENTIATION
NUMERICAL DIFFERENTIATION 1 Introducton Dfferentaton s a method to compute the rate at whch a dependent output y changes wth respect to the change n the ndependent nput x. Ths rate of change s called the
More informationNotes on Frequency Estimation in Data Streams
Notes on Frequency Estmaton n Data Streams In (one of) the data streamng model(s), the data s a sequence of arrvals a 1, a 2,..., a m of the form a j = (, v) where s the dentty of the tem and belongs to
More informationSolutions HW #2. minimize. Ax = b. Give the dual problem, and make the implicit equality constraints explicit. Solution.
Solutons HW #2 Dual of general LP. Fnd the dual functon of the LP mnmze subject to c T x Gx h Ax = b. Gve the dual problem, and make the mplct equalty constrants explct. Soluton. 1. The Lagrangan s L(x,
More informationEntropy Coding. A complete entropy codec, which is an encoder/decoder. pair, consists of the process of encoding or
Sgnal Compresson Sgnal Compresson Entropy Codng Entropy codng s also known as zero-error codng, data compresson or lossless compresson. Entropy codng s wdely used n vrtually all popular nternatonal multmeda
More informationEigenvalues of Random Graphs
Spectral Graph Theory Lecture 2 Egenvalues of Random Graphs Danel A. Spelman November 4, 202 2. Introducton In ths lecture, we consder a random graph on n vertces n whch each edge s chosen to be n the
More informationSection 8.3 Polar Form of Complex Numbers
80 Chapter 8 Secton 8 Polar Form of Complex Numbers From prevous classes, you may have encountered magnary numbers the square roots of negatve numbers and, more generally, complex numbers whch are the
More informationEstimating the Fundamental Matrix by Transforming Image Points in Projective Space 1
Estmatng the Fundamental Matrx by Transformng Image Ponts n Projectve Space 1 Zhengyou Zhang and Charles Loop Mcrosoft Research, One Mcrosoft Way, Redmond, WA 98052, USA E-mal: fzhang,cloopg@mcrosoft.com
More informationLecture 4: Universal Hash Functions/Streaming Cont d
CSE 5: Desgn and Analyss of Algorthms I Sprng 06 Lecture 4: Unversal Hash Functons/Streamng Cont d Lecturer: Shayan Oves Gharan Aprl 6th Scrbe: Jacob Schreber Dsclamer: These notes have not been subjected
More informationLINEAR REGRESSION ANALYSIS. MODULE IX Lecture Multicollinearity
LINEAR REGRESSION ANALYSIS MODULE IX Lecture - 30 Multcollnearty Dr. Shalabh Department of Mathematcs and Statstcs Indan Insttute of Technology Kanpur 2 Remedes for multcollnearty Varous technques have
More informationCOS 521: Advanced Algorithms Game Theory and Linear Programming
COS 521: Advanced Algorthms Game Theory and Lnear Programmng Moses Charkar February 27, 2013 In these notes, we ntroduce some basc concepts n game theory and lnear programmng (LP). We show a connecton
More information8.4 COMPLEX VECTOR SPACES AND INNER PRODUCTS
SECTION 8.4 COMPLEX VECTOR SPACES AND INNER PRODUCTS 493 8.4 COMPLEX VECTOR SPACES AND INNER PRODUCTS All the vector spaces you have studed thus far n the text are real vector spaces because the scalars
More informationCSci 6974 and ECSE 6966 Math. Tech. for Vision, Graphics and Robotics Lecture 21, April 17, 2006 Estimating A Plane Homography
CSc 6974 and ECSE 6966 Math. Tech. for Vson, Graphcs and Robotcs Lecture 21, Aprl 17, 2006 Estmatng A Plane Homography Overvew We contnue wth a dscusson of the major ssues, usng estmaton of plane projectve
More informationAntenna Combining for the MIMO Downlink Channel
Antenna Combnng for the IO Downlnk Channel arxv:0704.308v [cs.it] 0 Apr 2007 Nhar Jndal Department of Electrcal and Computer Engneerng Unversty of nnesota nneapols, N 55455, USA Emal: nhar@umn.edu Abstract
More informationTransform Coding. Transform Coding Principle
Transform Codng Prncple of block-wse transform codng Propertes of orthonormal transforms Dscrete cosne transform (DCT) Bt allocaton for transform coeffcents Entropy codng of transform coeffcents Typcal
More information1 Derivation of Point-to-Plane Minimization
1 Dervaton of Pont-to-Plane Mnmzaton Consder the Chen-Medon (pont-to-plane) framework for ICP. Assume we have a collecton of ponts (p, q ) wth normals n. We want to determne the optmal rotaton and translaton
More informationFormulas for the Determinant
page 224 224 CHAPTER 3 Determnants e t te t e 2t 38 A = e t 2te t e 2t e t te t 2e 2t 39 If 123 A = 345, 456 compute the matrx product A adj(a) What can you conclude about det(a)? For Problems 40 43, use
More informationProf. Dr. I. Nasser Phys 630, T Aug-15 One_dimensional_Ising_Model
EXACT OE-DIMESIOAL ISIG MODEL The one-dmensonal Isng model conssts of a chan of spns, each spn nteractng only wth ts two nearest neghbors. The smple Isng problem n one dmenson can be solved drectly n several
More informationApproximately achieving Gaussian relay network capacity with lattice codes
Approxmately achevng Gaussan relay network capacty wth lattce codes Ayfer Özgür EFL, Lausanne, Swtzerland ayfer.ozgur@epfl.ch Suhas Dggav UCLA, USA and EFL, Swtzerland suhas@ee.ucla.edu Abstract Recently,
More informationTracking with Kalman Filter
Trackng wth Kalman Flter Scott T. Acton Vrgna Image and Vdeo Analyss (VIVA), Charles L. Brown Department of Electrcal and Computer Engneerng Department of Bomedcal Engneerng Unversty of Vrgna, Charlottesvlle,
More informationThe Multiple Classical Linear Regression Model (CLRM): Specification and Assumptions. 1. Introduction
ECONOMICS 5* -- NOTE (Summary) ECON 5* -- NOTE The Multple Classcal Lnear Regresson Model (CLRM): Specfcaton and Assumptons. Introducton CLRM stands for the Classcal Lnear Regresson Model. The CLRM s also
More informationChapter 8 Indicator Variables
Chapter 8 Indcator Varables In general, e explanatory varables n any regresson analyss are assumed to be quanttatve n nature. For example, e varables lke temperature, dstance, age etc. are quanttatve n
More informationFisher Linear Discriminant Analysis
Fsher Lnear Dscrmnant Analyss Max Wellng Department of Computer Scence Unversty of Toronto 10 Kng s College Road Toronto, M5S 3G5 Canada wellng@cs.toronto.edu Abstract Ths s a note to explan Fsher lnear
More informationSupplement: Proofs and Technical Details for The Solution Path of the Generalized Lasso
Supplement: Proofs and Techncal Detals for The Soluton Path of the Generalzed Lasso Ryan J. Tbshran Jonathan Taylor In ths document we gve supplementary detals to the paper The Soluton Path of the Generalzed
More informationStatistical pattern recognition
Statstcal pattern recognton Bayes theorem Problem: decdng f a patent has a partcular condton based on a partcular test However, the test s mperfect Someone wth the condton may go undetected (false negatve
More informationLecture 4: November 17, Part 1 Single Buffer Management
Lecturer: Ad Rosén Algorthms for the anagement of Networs Fall 2003-2004 Lecture 4: November 7, 2003 Scrbe: Guy Grebla Part Sngle Buffer anagement In the prevous lecture we taled about the Combned Input
More informationGames of Threats. Elon Kohlberg Abraham Neyman. Working Paper
Games of Threats Elon Kohlberg Abraham Neyman Workng Paper 18-023 Games of Threats Elon Kohlberg Harvard Busness School Abraham Neyman The Hebrew Unversty of Jerusalem Workng Paper 18-023 Copyrght 2017
More informationMaximizing the number of nonnegative subsets
Maxmzng the number of nonnegatve subsets Noga Alon Hao Huang December 1, 213 Abstract Gven a set of n real numbers, f the sum of elements of every subset of sze larger than k s negatve, what s the maxmum
More informationLecture 3: Probability Distributions
Lecture 3: Probablty Dstrbutons Random Varables Let us begn by defnng a sample space as a set of outcomes from an experment. We denote ths by S. A random varable s a functon whch maps outcomes nto the
More informationFoundations of Arithmetic
Foundatons of Arthmetc Notaton We shall denote the sum and product of numbers n the usual notaton as a 2 + a 2 + a 3 + + a = a, a 1 a 2 a 3 a = a The notaton a b means a dvdes b,.e. ac = b where c s an
More informationISSN: ISO 9001:2008 Certified International Journal of Engineering and Innovative Technology (IJEIT) Volume 3, Issue 1, July 2013
ISSN: 2277-375 Constructon of Trend Free Run Orders for Orthogonal rrays Usng Codes bstract: Sometmes when the expermental runs are carred out n a tme order sequence, the response can depend on the run
More informationOutline. Communication. Bellman Ford Algorithm. Bellman Ford Example. Bellman Ford Shortest Path [1]
DYNAMIC SHORTEST PATH SEARCH AND SYNCHRONIZED TASK SWITCHING Jay Wagenpfel, Adran Trachte 2 Outlne Shortest Communcaton Path Searchng Bellmann Ford algorthm Algorthm for dynamc case Modfcatons to our algorthm
More informationChapter 5. Solution of System of Linear Equations. Module No. 6. Solution of Inconsistent and Ill Conditioned Systems
Numercal Analyss by Dr. Anta Pal Assstant Professor Department of Mathematcs Natonal Insttute of Technology Durgapur Durgapur-713209 emal: anta.bue@gmal.com 1 . Chapter 5 Soluton of System of Lnear Equatons
More informationEdge Isoperimetric Inequalities
November 7, 2005 Ross M. Rchardson Edge Isopermetrc Inequaltes 1 Four Questons Recall that n the last lecture we looked at the problem of sopermetrc nequaltes n the hypercube, Q n. Our noton of boundary
More informationMultipath richness a measure of MIMO capacity in an environment
EUROEA COOERATIO I THE FIELD OF SCIETIFIC AD TECHICAL RESEARCH EURO-COST SOURCE: Aalborg Unversty, Denmark COST 73 TD 04) 57 Dusburg, Germany 004/Sep/0- ultpath rchness a measure of IO capacty n an envronment
More informationFor now, let us focus on a specific model of neurons. These are simplified from reality but can achieve remarkable results.
Neural Networks : Dervaton compled by Alvn Wan from Professor Jtendra Malk s lecture Ths type of computaton s called deep learnng and s the most popular method for many problems, such as computer vson
More informationMarkov Chain Monte Carlo Lecture 6
where (x 1,..., x N ) X N, N s called the populaton sze, f(x) f (x) for at least one {1, 2,..., N}, and those dfferent from f(x) are called the tral dstrbutons n terms of mportance samplng. Dfferent ways
More informationSignal space Review on vector space Linear independence Metric space and norm Inner product
Sgnal space.... Revew on vector space.... Lnear ndependence... 3.3 Metrc space and norm... 4.4 Inner product... 5.5 Orthonormal bass... 7.6 Waveform communcaton system... 9.7 Some examples... 6 Sgnal space
More informationFREQUENCY DISTRIBUTIONS Page 1 of The idea of a frequency distribution for sets of observations will be introduced,
FREQUENCY DISTRIBUTIONS Page 1 of 6 I. Introducton 1. The dea of a frequency dstrbuton for sets of observatons wll be ntroduced, together wth some of the mechancs for constructng dstrbutons of data. Then
More informationMEM 255 Introduction to Control Systems Review: Basics of Linear Algebra
MEM 255 Introducton to Control Systems Revew: Bascs of Lnear Algebra Harry G. Kwatny Department of Mechancal Engneerng & Mechancs Drexel Unversty Outlne Vectors Matrces MATLAB Advanced Topcs Vectors A
More information