The Fading Number of Memoryless Multiple-Input Multiple-Output Fading Channels


IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 53, NO. 7, JULY 2007

The Fading Number of Memoryless Multiple-Input Multiple-Output Fading Channels

Stefan M. Moser, Member, IEEE

Abstract—In this correspondence, we derive the fading number of multiple-input multiple-output (MIMO) flat-fading channels of general (not necessarily Gaussian) regular law without temporal memory. The channel is assumed to be noncoherent, i.e., neither receiver nor transmitter have knowledge of the channel state; they only know the probability law of the fading process. The fading number is the second term, after the double-logarithmic term, of the high signal-to-noise ratio (SNR) expansion of channel capacity. Hence, the asymptotic channel capacity of memoryless MIMO fading channels is derived exactly.
The result is then specialized to the known cases of single-input multiple-output (SIMO), multiple-input single-output (MISO), and single-input single-output (SISO) fading channels, as well as to the situation of Gaussian fading.

Index Terms—Channel capacity, fading number, Gaussian fading, general flat fading, high signal-to-noise ratio (SNR), multiple antenna, multiple-input multiple-output (MIMO), noncoherent.

Manuscript received June 2006; revised March 2007. This work was supported by the Industrial Technology Research Institute (ITRI), Zhudong, Taiwan, under Contract G. The author is with the Department of Communication Engineering, National Chiao Tung University (NCTU), Hsinchu, Taiwan (e-mail: stefan.moser@ieee.org). Communicated by K. Kobayashi, Associate Editor for Shannon Theory.

I. INTRODUCTION

It has recently been shown in [1], [2] that, whenever the matrix-valued fading process is of finite differential entropy rate (a so-called regular process), the capacity of noncoherent multiple-input multiple-output (MIMO) fading channels typically grows only double-logarithmically in the signal-to-noise ratio (SNR). This is in stark contrast both to the coherent fading channel, where the receiver has perfect knowledge of the channel state, and to the noncoherent fading channel with nonregular channel law, i.e., where the differential entropy rate of the fading process is not finite. In the former case the capacity grows logarithmically in the SNR with a

factor in front of the logarithm that is related to the number of receive and transmit antennas [3]. In the latter case, the asymptotic growth rate of the capacity depends highly on the specific details of the fading process. In the case of Gaussian fading, nonregularity means that the present fading realization can be predicted precisely from the past realizations. However, in every noncoherent system the past realizations are not known a priori, but need to be estimated either from known past channel inputs and outputs or by means of special training signals. Depending on the spectral distribution of the fading process, the dependence of such estimates on the available power can vary greatly, which gives rise to a huge variety of possible high-SNR capacity behaviors: it is shown in [4], [5], and [6] that, depending on the spectrum of the nonregular Gaussian fading process, the asymptotic behavior of the channel capacity can vary over a large range: it is possible to have very slow double-logarithmic growth, fast logarithmic growth, or even exotic situations where the capacity grows proportionally to a fractional power of log SNR. Similarly, Liang and Veeravalli show in [7] that the capacity of a Gaussian block-fading channel depends critically on the assumptions one makes about the time-correlation of the fading process: if the correlation matrix is rank deficient, the capacity grows logarithmically in the SNR, otherwise double-logarithmically.

In this correspondence we will only consider noncoherent channels with regular fading processes, i.e., the capacity at high SNR will grow double-logarithmically. To quantify the rates at which this poor power efficiency begins, [1], [2] introduce the fading number as the second term in the high-SNR asymptotic expansion of channel capacity. Hence, the capacity can be written as

C(SNR) = log(1 + log(1 + SNR)) + χ + o(1)    (1)

where o(1) tends to zero as the SNR tends to infinity, and where χ is a constant, denoted the fading number, that does not depend on the SNR.
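As a quick numerical illustration of how slowly the right-hand side of (1) grows (a sketch, not part of the correspondence; the constant χ is simply set to zero here), note that raising the SNR from 20 dB to 60 dB buys less than one additional nat:

```python
import math

def high_snr_capacity(snr, chi=0.0):
    """High-SNR approximation (1): C(SNR) ~ log(1 + log(1 + SNR)) + chi  [nats]."""
    return math.log(1.0 + math.log(1.0 + snr)) + chi

# A factor of 10,000 in power yields less than one extra nat of capacity.
c_20db = high_snr_capacity(1e2)
c_60db = high_snr_capacity(1e6)
print(c_20db, c_60db, c_60db - c_20db)
```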
Explicit expressions for the fading number are known for a number of fading models. For channels with memory, the fading number of single-input single-output (SISO) fading channels is derived in [1], [2], and the single-input multiple-output (SIMO) case is derived in [8] and [2]. For memoryless fading channels, the fading number is known in the situation of only one antenna at transmitter and receiver (SISO)¹

χ(H) = log π + E[log |H|²] − h(H)    (2)

in the situation of a SIMO fading channel

χ(H) = h_λ(Ĥ e^{iΘ}) + n_R E[log ‖H‖²] − log 2 − h(H)    (3)

(both are special cases of the corresponding situation with memory), and also in the case of a multiple-input single-output (MISO) fading channel [1], [2]

χ(Hᵀ) = sup_{x̂} { log π + E[log |Hᵀx̂|²] − h(Hᵀx̂) }.    (4)

The most general situation of multiple antennas at both transmitter and receiver, however, has been solved so far only in the special case of a particular rotational symmetry of the fading process: if every rotation of the input vector of the channel can be undone by a corresponding rotation of the output vector, and vice versa, then the fading number has been shown in [1], [2] to be

χ(H) = log ( 2π^{n_R} / Γ(n_R) ) − log 2 + n_R E[log ‖Hê‖²] − h(Hê)    (5)

¹For a precise definition of the notation used in this correspondence, we refer to Section II.

where ê ∈ C^{n_T} is an arbitrary constant vector of unit length, and where n_R denotes the number of receive antennas. Such fading channels are called rotation-commutative in the generalized sense (for a detailed definition see Section V).

In this correspondence, we will extend these results and derive the fading number of general memoryless MIMO fading channels.

The remainder of this correspondence is structured as follows. Before we proceed in Section III to introduce the channel model in detail, the following section will clarify our notation. We then present the main result, i.e., the fading number of the general memoryless MIMO fading channel, in Section IV. The corresponding proof is found in Section VII.
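For the Rayleigh case H ~ N_C(0,1), formula (2) evaluates in closed form to χ = −1 − γ, since h(H) = log(πe) and E[log|H|²] = −γ (Euler's constant). A small Monte Carlo sketch (our illustration, not part of the correspondence) checks this:

```python
import math, random

random.seed(0)
gamma_euler = 0.5772156649015329

# SISO Rayleigh fading: H ~ CN(0,1); its differential entropy is log(pi*e) nats.
h_H = math.log(math.pi * math.e)

# Monte Carlo estimate of E[log |H|^2]; the exact value is -gamma.
n = 200_000
acc = 0.0
for _ in range(n):
    re = random.gauss(0.0, math.sqrt(0.5))  # variance 1/2 per real dimension
    im = random.gauss(0.0, math.sqrt(0.5))
    acc += math.log(re * re + im * im)
e_log_h2 = acc / n

# Fading number (2): chi = log(pi) + E[log|H|^2] - h(H), which equals -1 - gamma.
chi_mc = math.log(math.pi) + e_log_h2 - h_H
print(chi_mc, -1.0 - gamma_euler)
```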
In Section V, the known fading numbers of SISO, SIMO, MISO, and rotation-commutative MIMO fading channels are derived once more as special cases of the new general result from Section IV. In Section VI, we investigate the situation of Gaussian fading processes. We conclude in Section VIII.

II. NOTATION

We try to use uppercase letters for random quantities and lowercase letters for their realizations. This rule, however, is broken when dealing with matrices and some constants. To better differentiate between scalars, vectors, and matrices we have resorted to using different fonts for the different quantities. Uppercase letters such as X are used to denote scalar random variables taking value in the reals or in the complex plane. Their realizations are typically written in lowercase, e.g., x. For random vectors we use boldface capitals, e.g., X, and bold lowercase for their realizations, e.g., x. Deterministic matrices are denoted by uppercase letters of a special font, and random matrices are denoted using another special uppercase font.

The capacity is denoted by C, the energy per symbol by E, and the signal-to-noise ratio is denoted by SNR. We use the shorthand H_a^b for (H_a, H_{a+1}, ..., H_b). For more complicated expressions, such as (H_a x_a, H_{a+1} x_{a+1}, ..., H_b x_b), we use the dummy variable ℓ to clarify notation: {H_ℓ x_ℓ}_{ℓ=a}^b.

Hermitian conjugation is denoted by (·)†, and (·)ᵀ stands for the transpose (without conjugation) of a matrix or vector. The trace of a matrix is denoted by tr(·).

We use ‖·‖ to denote the Euclidean norm of vectors or the Euclidean operator norm of matrices. That is,

‖x‖ ≜ sqrt( Σ_t |x^{(t)}|² )    (6)

‖A‖ ≜ max_{‖ŵ‖=1} ‖Aŵ‖.    (7)

Thus, ‖A‖ is the maximal singular value of the matrix A. The Frobenius norm of matrices is denoted by ‖·‖_F and is given by the square root of the sum of the squared magnitudes of the elements of the matrix, i.e.,

‖A‖_F ≜ sqrt( tr(A†A) ).    (8)

Note that for every matrix A

‖A‖ ≤ ‖A‖_F    (9)

as can be verified by upper-bounding the squared magnitude of each of the components of Aŵ using the Cauchy–Schwarz inequality.
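Inequality (9) is easy to check numerically. The sketch below (illustrative only; the 2×2 helper functions are ours, not the paper's) computes the operator norm of a real 2×2 matrix by power iteration on AᵀA and compares it with the Frobenius norm:

```python
import math

def op_norm_2x2(a, b, c, d, iters=200):
    """Largest singular value of [[a, b], [c, d]] via power iteration on A^T A."""
    p, q = a * a + c * c, a * b + c * d   # entries of A^T A
    r, s = q, b * b + d * d
    x, y = 1.0, 1.0
    for _ in range(iters):
        x, y = p * x + q * y, r * x + s * y
        n = math.hypot(x, y)
        x, y = x / n, y / n
    # Rayleigh quotient gives the top eigenvalue of A^T A
    lam = x * (p * x + q * y) + y * (r * x + s * y)
    return math.sqrt(lam)

def frob_norm_2x2(a, b, c, d):
    return math.sqrt(a * a + b * b + c * c + d * d)

op = op_norm_2x2(1.0, 2.0, 3.0, 4.0)
fro = frob_norm_2x2(1.0, 2.0, 3.0, 4.0)
print(op, fro)  # the operator norm never exceeds the Frobenius norm, cf. (9)
```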
We will often split a complex vector v up into its magnitude ‖v‖ and its direction

v̂ ≜ v / ‖v‖    (10)

where we reserve this notation exclusively for unit vectors, i.e., throughout the correspondence every vector carrying a hat, v̂ or V̂, denotes a (deterministic or random, respectively) vector of unit length

‖v̂‖ = ‖V̂‖ = 1.    (11)

To be able to work with such direction vectors we shall need a differential-entropy-like quantity for random vectors that take value on the unit sphere in C^m: let λ denote the area measure on the unit sphere in C^m. If a random vector V̂ takes value on the unit sphere and has the density p_V̂(v̂) with respect to λ, then we shall let

h_λ(V̂) ≜ −E[log p_V̂(V̂)]    (12)

if the expectation is defined.

We note that just as ordinary differential entropy is invariant under translation, so is h_λ(V̂) invariant under rotation. That is, if U is a deterministic unitary matrix, then

h_λ(UV̂) = h_λ(V̂).    (13)

Also note that h_λ(V̂) is maximized if V̂ is uniformly distributed on the unit sphere, in which case

h_λ(V̂) = log c_m    (14)

where c_m denotes the surface area of the unit sphere in C^m

c_m = 2π^m / Γ(m).    (15)

The definition (12) can easily be extended to conditional entropies: if W is some random vector, and if conditional on W = w the random vector V̂ has density p_{V̂|W}(v̂|w), then we can define

h_λ(V̂ | W = w) ≜ −E[ log p_{V̂|W}(V̂|W) | W = w ]    (16)

and we can define h_λ(V̂ | W) as the expectation (with respect to W) of h_λ(V̂ | W = w).

Based on these definitions, we have the following lemma.

Lemma 1: Let V be a complex random vector taking value in C^m and having differential entropy h(V). Let ‖V‖ denote its norm and V̂ denote its direction as in (10). Then

h(V) = h(‖V‖) + h_λ(V̂ | ‖V‖) + (2m − 1) E[log ‖V‖]    (17)
     = h_λ(V̂) + h(‖V‖ | V̂) + (2m − 1) E[log ‖V‖]    (18)

whenever all the quantities in (17) and (18), respectively, are defined. Here h(‖V‖) is the differential entropy of ‖V‖ when viewed as a real (scalar) random variable.

Proof: Omitted.

We shall write X ~ N_C(μ, K) if X − μ is a circularly symmetric, zero-mean, Gaussian random vector of covariance matrix E[(X − μ)(X − μ)†] = K. By X ~ U([a, b]) we denote a random variable that is uniformly distributed on the interval [a, b].
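The surface area (15) can be sanity-checked against the familiar low-dimensional cases (a sketch; the helper `sphere_area_complex` is our illustrative name): the unit "sphere" in C¹ is the unit circle with circumference 2π, and the unit sphere in C² is S³ ⊂ R⁴ with area 2π².

```python
import math

def sphere_area_complex(m):
    """Surface area c_m of the unit sphere in C^m (= S^{2m-1} in R^{2m}), eq. (15)."""
    return 2.0 * math.pi ** m / math.gamma(m)

# m = 1: unit circle, circumference 2*pi;  m = 2: S^3 in R^4, area 2*pi^2.
# log(c_m) is the maximal value of the entropy h_lambda in (14).
print(sphere_area_complex(1), sphere_area_complex(2))
```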
The probability distribution of a random variable X or random vector X is denoted by Q_X or Q_X, respectively.

Throughout the correspondence e^{iΘ} denotes a complex random variable that is uniformly distributed over the unit circle

e^{iΘ} ~ Uniform on {z ∈ C : |z| = 1}.    (19)

When it appears in formulas with other random variables, e^{iΘ} is always assumed to be independent of these other variables.

All rates specified in this correspondence are in nats per channel use, i.e., log(·) denotes the natural logarithmic function.

III. THE CHANNEL MODEL

We consider a channel with n_T transmit antennas and n_R receive antennas whose time-k output Y_k ∈ C^{n_R} is given by

Y_k = H_k x_k + Z_k.    (20)

Here x_k ∈ C^{n_T} denotes the time-k input vector; the random matrix H_k ∈ C^{n_R × n_T} denotes the time-k fading matrix; and the random vector Z_k ∈ C^{n_R} denotes the time-k additive noise vector. We assume that the random vectors {Z_k} are spatially and temporally white, zero-mean, circularly symmetric, complex Gaussian random vectors, i.e., {Z_k} IID ~ N_C(0, σ²I) for some σ² > 0. Here I denotes the identity matrix.

As for the matrix-valued fading process {H_k} we will not specify a particular distribution, but shall only assume that it is stationary, ergodic, of finite-energy fading gain, i.e.,

E[‖H‖²_F] < ∞    (21)

and regular, i.e., its differential entropy rate is finite

h({H_k}) ≜ lim_{n→∞} (1/n) h(H_1, ..., H_n) > −∞.    (22)

Furthermore, we will restrict ourselves to the memoryless case, i.e., we assume that {H_k} is independent and identically distributed (IID) with respect to time. Since there is no memory in the channel, an IID input process {X_k} is sufficient to achieve capacity and we will therefore drop the time index hereafter, i.e., (20) simplifies to

Y = Hx + Z.    (23)

Note that while we assume that there is no temporal memory in the channel, we do not restrict the spatial memory, i.e., the different fading components H^{(i,j)} of the fading matrix H may be dependent.

We assume that the fading H and the additive noise Z are independent and of a joint law that does not depend on the channel input x.

As for the input, we consider two different constraints: a peak-power constraint and an average-power constraint.
We use E to denote the maximal allowed instantaneous power in the former case, and to denote the allowed average power in the latter case. For both cases we set

SNR ≜ E / σ².    (24)

The capacity C(SNR) of the channel (23) is given by

C(SNR) = sup I(X; Y)    (25)

where the supremum is over the set of all probability distributions on X satisfying the constraints, i.e.,

‖X‖² ≤ E, almost surely    (26)

for a peak-power constraint, or

E[‖X‖²] ≤ E    (27)

for an average-power constraint.
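A single use of channel (23) is easy to simulate. The following sketch (illustrative only; it assumes IID Rayleigh fading, i.e., all entries of H IID N_C(0,1), which is just one admissible fading law) draws one input–output pair under the peak-power constraint (26):

```python
import math, random

random.seed(1)

def crandn():
    """One CN(0,1) sample (circularly symmetric complex Gaussian)."""
    return complex(random.gauss(0.0, math.sqrt(0.5)), random.gauss(0.0, math.sqrt(0.5)))

def channel_use(x, nr, sigma2):
    """One use of the memoryless channel (23): Y = Hx + Z, with Rayleigh H."""
    nt = len(x)
    H = [[crandn() for _ in range(nt)] for _ in range(nr)]
    return [sum(H[i][j] * x[j] for j in range(nt)) + math.sqrt(sigma2) * crandn()
            for i in range(nr)]

E, sigma2 = 4.0, 0.1
# peak-power constraint (26): ||x||^2 <= E
x = [complex(math.sqrt(E / 2), 0.0), complex(0.0, math.sqrt(E / 2))]
y = channel_use(x, nr=3, sigma2=sigma2)
print(len(y), E / sigma2)  # n_R outputs; SNR = E / sigma^2 as in (24)
```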

Specializing [1, Theorem 4.2], [2, Theorem 6.10] to memoryless MIMO fading, we have

lim_{SNR→∞} { C(SNR) − log log SNR } < ∞.    (28)

Note that [1, Theorem 4.2], [2, Theorem 6.10] is stated under the assumption of an average-power constraint only. However, since a peak-power constraint is more stringent than an average-power constraint, (28) also holds in the situation of a peak-power constraint.

The fading number χ is now defined as in [1, Definition 4.6], [2, Definition 6.13] by

χ(H) ≜ lim_{SNR→∞} { C(SNR) − log log SNR }.    (29)

Prima facie the fading number depends on whether a peak-power constraint (26) or an average-power constraint (27) is imposed on the input. However, it will turn out that the memoryless MIMO fading number is identical for both cases.

IV. MAIN RESULT

A. Preliminaries

Before we can state our main result, we need to introduce three concepts: the first concerns probability distributions that escape to infinity, the second a technique for upper-bounding mutual information, and the third concerns circular symmetry.

1) Escaping to Infinity: We start with a discussion of the concept of capacity-achieving input distributions that escape to infinity. A sequence of input distributions parameterized by the allowed cost (in our case of fading channels the cost is the available power or SNR) is said to escape to infinity if it assigns to every fixed compact set a probability that tends to zero as the allowed cost tends to infinity. In other words, this means that in the limit when the allowed cost tends to infinity such a distribution does not use finite-cost symbols. This notion is important because the asymptotic capacity of many channels of interest can only be achieved by input distributions that escape to infinity. As a matter of fact, one can show that every input distribution that achieves a mutual information of the same asymptotic growth rate as the capacity must escape to infinity.
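For an input whose squared magnitude is log-uniform, log R² ~ U([log log E, log E]) (this is the law that will be used in the achievability part, cf. (38)), the probability of any fixed ball can be written in closed form, and it visibly vanishes as E grows. A sketch (our illustration, not part of the proof):

```python
import math

def prob_below(E0, E):
    """Pr[R^2 <= E0] when log R^2 ~ Uniform([log log E, log E])."""
    lo, hi = math.log(math.log(E)), math.log(E)
    t = (math.log(E0) - lo) / (hi - lo)
    return min(1.0, max(0.0, t))  # clamp: log E0 may fall outside the support

# The probability of the fixed compact set {||X||^2 <= 10} tends to zero,
# i.e., this family of input distributions escapes to infinity.
for E in (1e2, 1e6, 1e12, 1e24):
    print(E, prob_below(E0=10.0, E=E))
```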
Loosely speaking, for many channels it is not favorable to use finite-cost input symbols whenever the cost constraint is loosened completely. In the following we only state this result specialized to the situation at hand. For a more general description and for all proofs we refer to [8, Sec. VII.C.3], [2, Sec. 2.6].

Definition 2: Let {Q_{X,E}}_{E>0} be a family of input distributions for the memoryless fading channel (23), where this family is parameterized by the available average power E such that

E_{Q_{X,E}}[‖X‖²] ≤ E,  E > 0.    (30)

We say that the input distributions {Q_{X,E}}_{E>0} escape to infinity if for every E_0 > 0

lim_{E→∞} Q_{X,E}(‖X‖² ≤ E_0) = 0.    (31)

We now have the following lemma.

Lemma 3: Let the memoryless MIMO fading channel be given as in (23) and let {Q_{X,E}}_{E>0} be a family of distributions on the channel input that satisfy the power constraint (30). Let I(Q_{X,E}) denote the mutual information between input and output of channel (23) when the input is distributed according to the law Q_{X,E}. Assume that the family of input distributions {Q_{X,E}}_{E>0} is such that the following condition is satisfied:

lim_{E→∞} I(Q_{X,E}) / log log E = 1.    (32)

Then {Q_{X,E}}_{E>0} must escape to infinity.

Proof: A proof can be found in [8, Theorem 8, Remark 9], [2, Corollary 2.8].

2) An Upper Bound on Channel Capacity: In [1] and [2] a new approach to deriving upper bounds on channel capacity has been introduced. Since capacity is by definition a maximization of mutual information, it is inherently difficult to find upper bounds on it. The proposed technique is based on a dual expression of mutual information that leads to an expression of capacity as a minimization instead of a maximization. This way it becomes much easier to find upper bounds. Again, here we only state the upper bound in a form needed in the derivation of Theorem 7. For a more general form, for more mathematical details, and for all proofs we refer to [1, Sec. IV], [2, Sec. 2.4].

Lemma 4: Consider a memoryless channel with input S ∈ C^{n_T} and output T ∈ C.
Then, for an arbitrary distribution on the input S, the mutual information between input and output of the channel is upper-bounded as follows:

I(S; T) ≤ −h(T | S) + log π + α log β + log Γ(α, δ/β) + (1 − α) E[log(|T|² + δ)] + (E[|T|²] + δ)/β    (33)

where α, β > 0 and δ ≥ 0 are parameters that can be chosen freely, but must not depend on the distribution of S.

Proof: A proof can be found in [1, Sec. IV], [2, Sec. 2.4].

3) Capacity-Achieving Input Distributions and Circular Symmetry: The final preliminary remark concerns circular symmetry. We say that a random vector W is circularly symmetric if

W  =(law)  W e^{iΘ}    (34)

where Θ ~ U([0, 2π]) is independent of W and where =(law) stands for "equal in law." Note that this is not to be confused with "isotropically distributed," which means that a vector has equal probability to point in every direction. Circular symmetry only concerns the phase of the components of a vector, not the vector's direction.

The following lemma says that for our channel model an optimal input can be assumed to be circularly symmetric.

Lemma 5: Assume a channel as given in (23). Then the capacity-achieving input distribution can be assumed to be circularly symmetric, i.e., the input vector X can be replaced by X e^{iΘ}, where Θ ~ U([0, 2π]) is independent of every other random quantity.

Proof: A proof is given in Appendix A.

Remark 6: Note that the proof of Lemma 5 relies only on the fact that the additive noise is assumed to be circularly symmetric.

B. Fading Number of General Memoryless MIMO Fading Channels

We are now ready for the main result, i.e., the fading number of a memoryless MIMO fading channel.

Theorem 7: Consider a memoryless MIMO fading channel (23) where the random fading matrix H takes value in C^{n_R × n_T} and satisfies

h(H) > −∞    (35)

and

E[‖H‖²_F] < ∞.    (36)

Then, irrespective of whether a peak-power constraint (26) or an average-power constraint (27) is imposed on the input, the fading number χ(H) is given by (37), shown at the bottom of the page. Here X̂ denotes a random vector of unit length and Q_X̂ denotes its probability law, i.e., the supremum is taken over all distributions of the random unit vector X̂. Note that the expectation in the second term is understood jointly over X̂ and H.

Moreover, this fading number is achievable by a random vector X = R X̂ where X̂ is distributed according to the distribution that achieves the fading number in (37) and where R is a nonnegative random variable independent of X̂ such that

log R² ~ U([log log E, log E]).    (38)

Proof: A proof is given in Section VII.

Note that even if it might not be obvious at first sight, it is not hard to show that the distribution Q_X̂ that achieves the supremum in (37) is circularly symmetric. This is in agreement with Lemma 5.

The evaluation of (37) can be pretty awkward, mainly due to the first term, i.e., the differential entropy with respect to the surface area measure. We therefore next derive an upper bound on the fading number that is easier to evaluate. To that goal, first note that for an arbitrary constant nonsingular n_R × n_R matrix A and an arbitrary constant nonsingular n_T × n_T matrix B

χ(AHB) = χ(H);    (39)

see [1, Lemma 4.7], [2, Lemma 6.4]. Second, note that for an arbitrary random unit vector Ŷ ∈ C^{n_R}

h_λ(Ŷ) ≤ log c_{n_R} = log ( 2π^{n_R} / Γ(n_R) )    (40)

where c_{n_R} denotes the surface area of the unit sphere in C^{n_R} as defined in (15) and where the upper bound is achieved with equality only if Ŷ is uniformly distributed on the sphere, i.e., Ŷ is isotropically distributed.

Using these two observations we get the following upper bound on the fading number.
Corollary 8: The fading number of a memoryless MIMO fading channel as given in Theorem 7 can be upper-bounded as follows:

χ(H) ≤ n_R log π − log Γ(n_R) + inf_{A,B} sup_{x̂} { n_R E[log ‖AHBx̂‖²] − h(AHBx̂) }    (41)

where the infimum is over all nonsingular n_R × n_R complex matrices A and all nonsingular n_T × n_T complex matrices B.

Proof: Using the two observations (39) and (40), we immediately get from Theorem 7

χ(H) ≤ n_R log π − log Γ(n_R) + sup_{x̂} { n_R E[log ‖AHBX̂‖² | X̂ = x̂] − h(AHBX̂ | X̂ = x̂) }.    (42)

The result now follows by noting that the supremum in (42) can always be achieved by choosing Q_X̂ to be the distribution which with probability 1 takes on the value x̂ that achieves the maximum.

This upper bound is possibly tighter than the upper bound given in [1, Lemma 4.4], [2, Lemma 6.6] because of the additional infimum over B.

V. SOME KNOWN SPECIAL CASES

In this section we briefly show how some already known results on various fading numbers can be derived as special cases of the new, more general result.

We start with the situation of a fading matrix that is rotation-commutative in the generalized sense, i.e., the fading matrix H is such that for every constant unitary n_T × n_T matrix V_T there exists an n_R × n_R constant unitary matrix V_R such that

V_R H  =(law)  H V_T    (43)

where =(law) stands for "has the same law"; and for every constant unitary n_R × n_R matrix V_R there exists a constant unitary n_T × n_T matrix V_T such that (43) holds [1, Definition 4.37], [2, Definition 6.37].

The property of rotation-commutativity for random matrices is a generalization of the isotropic distribution of random vectors, i.e., we have the following lemma.

Lemma 9: Let H be rotation-commutative in the generalized sense. Then the following two statements hold. If X̂ ∈ C^{n_T} is an isotropically distributed random unit vector that is independent of H, then HX̂ ∈ C^{n_R} is isotropically distributed. If ê, ê′ ∈ C^{n_T} are two constant unit vectors, then

Hê  =(law)  Hê′,    ∀ ê, ê′ : ‖ê‖ = ‖ê′‖ = 1    (44)

h(Hê) = h(Hê′),    ∀ ê, ê′ : ‖ê‖ = ‖ê′‖ = 1.    (45)

Proof: For a proof see, e.g., [1, Lemma 4.38], [2, Lemma 6.38].
From Lemma 9 it immediately follows that in the situation of rotation-commutative fading the only term in the expression of the fading number (37) that depends on Q_X̂ is the entropy h_λ(HX̂/‖HX̂‖). This entropy is maximized if HX̂/‖HX̂‖ is uniformly distributed on the surface of the n_R-dimensional complex unit sphere, which according to Lemma 9 can be achieved by the choice of an isotropic distribution for X̂. Then, according to (14) and (15),

h_λ( HX̂/‖HX̂‖ ) = log c_{n_R} = log ( 2π^{n_R} / Γ(n_R) ).    (46)

χ(H) = sup_{Q_X̂} { h_λ( HX̂/‖HX̂‖ ) + n_R E[log ‖HX̂‖²] − log 2 − h(HX̂ | X̂) }.    (37)

The expression of the fading number (37) then reduces to (5)

χ(H) = log ( 2π^{n_R} / Γ(n_R) ) − log 2 + n_R E[log ‖Hê‖²] − h(Hê)    (47)

where ê is an arbitrary constant unit vector in C^{n_T}.

In the case of a SIMO fading channel, the direction vector X̂ reduces to a phase term e^{iΦ}. From Lemma 5 we know that an optimal choice of e^{iΦ} is circularly symmetric, such that (37) becomes

χ(H) = h_λ(Ĥ e^{iΘ}) + n_R E[log ‖H‖²] − log 2 − h(H).    (48)

Before we continue with the MISO case, we remark that the only term in (37) that depends on the distribution of the phase of each component of X is h(HX̂ | X̂). Since we know from Lemma 5 that X̂ is circularly symmetric, we can therefore equivalently write

X̂  =(law)  X̂ e^{iΘ}.    (49)

Turning to the MISO case, now note that the distribution of (HᵀX̂ / |HᵀX̂|) e^{iΘ} is identical to the distribution of e^{iΘ}, independently of the distribution of Hᵀ and X̂. Hence,

h_λ( (HᵀX̂ / |HᵀX̂|) e^{iΘ} ) = h_λ(e^{iΘ}) = log 2π.    (50)

The fading number (37) then becomes

χ(Hᵀ) = sup_{Q_X̂} { log 2π + E[log |HᵀX̂|²] − log 2 − h(HᵀX̂ | X̂) }    (51)
      = sup_{x̂} { log π + E[log |HᵀX̂|² | X̂ = x̂] − h(HᵀX̂ | X̂ = x̂) }    (52)
      = sup_{x̂} { log π + E[log |Hᵀx̂|²] − h(Hᵀx̂) }    (53)

which can be achieved by a distribution of X̂ that with probability 1 takes on the value x̂ that achieves the fading number in (53).

Finally, the SISO case is a combination of the arguments of the SIMO and MISO cases, i.e., using

h_λ(e^{iΘ}) = log 2π    (54)

we get

χ(H) = log 2π + E[log |H|²] − log 2 − h(H)    (55)
     = log π + E[log |H|²] − h(H).    (56)

VI. GAUSSIAN FADING

The evaluation of the fading number is rather difficult even for the usually simpler situation of Gaussian fading processes. However, we are able to give the exact value for some important special cases, and we will give bounds on some others.

Throughout this section we assume that the fading matrix can be written as

H = D + H̃    (57)

where all components of H̃ are independent of each other and zero-mean, unit-variance Gaussian distributed, and where D denotes a constant line-of-sight matrix. Note that for some constant unitary n_R × n_R matrix U and some constant unitary n_T × n_T matrix V the law of UH̃V is identical to the law of H̃. Therefore, without loss of generality, we may restrict ourselves to matrices D that are diagonal, i.e., for n_R ≤ n_T

D = ( D̃  0_{n_R × (n_T − n_R)} )    (58)

or, for n_R > n_T,

D = ( D̃ ; 0_{(n_R − n_T) × n_T} )  (stacked vertically)    (59)

where D̃ is a min{n_R, n_T} × min{n_R, n_T} diagonal matrix with the singular values of D on the diagonal.

A. Scalar Line-of-Sight Matrix
We start with a scalar line-of-sight matrix, i.e., we assume D̃ = dI where I denotes the identity matrix. Under these assumptions the fading number has been known already for n_R = n_T = 1, in which case the fading matrix is rotation-commutative [1], [2]:

χ(H) = g_1(|d|²) − 1 − log Γ(1).    (60)

Here g_m(·) is a continuous, monotonically increasing, concave function defined as shown in (62) at the bottom of the page, for m ∈ N, where Ei(·) denotes the exponential integral function defined as

Ei(−x) ≜ −∫_x^∞ (e^{−t}/t) dt,  x > 0    (61)

and ψ(·) is Euler's psi function given by

ψ(m) = −γ + Σ_{j=1}^{m−1} 1/j    (63)

with γ ≈ 0.577 denoting Euler's constant. This function g_m(·) is the expected value of the logarithm of a noncentral chi-square random

g_m(ξ) ≜ { log ξ − Ei(−ξ) + Σ_{j=1}^{m−1} (−1)^j [ e^{−ξ} (j−1)! − (m−1)! / (j (m−1−j)!) ] ξ^{−j},   ξ > 0 ;   ψ(m),   ξ = 0. }    (62)

variable, i.e., for some Gaussian random variables {U_j}_{j=1}^m IID ~ N_C(0, 1) and for some complex constants {s_j}_{j=1}^m we have

E[ log ( Σ_{j=1}^m |s_j + U_j|² ) ] = g_m(s²)    (64)

where

s² ≜ Σ_{j=1}^m |s_j|²    (65)

(see [9], [1, Lemma 10.1], [2, Lemma A.6] for more details and a proof). We would like to emphasize that g_m(·) is continuous for all ξ ≥ 0, i.e., in particular

lim_{ξ↓0} { log ξ − Ei(−ξ) + Σ_{j=1}^{m−1} (−1)^j [ e^{−ξ} (j−1)! − (m−1)! / (j (m−1−j)!) ] ξ^{−j} } = ψ(m)    (66)

for all m. Moreover, for all m and ξ ≥ 0

log ξ − Ei(−ξ) ≤ g_m(ξ) ≤ log(m + ξ).    (67)

A derivation of (67) can be found in Appendix B.

We now consider the case where n_R ≤ n_T.

Corollary 10: Assume n_R ≤ n_T and a Gaussian fading matrix as given in (57). Let the line-of-sight matrix be given as

D = d ( I_{n_R}  0_{n_R × (n_T − n_R)} ).    (68)

Then

χ(H) = n_R g_{n_R}(|d|²) − n_R − log Γ(n_R)    (69)

where g_m(·) is defined in (62).

Proof: We write the unit vector x̂ as

x̂ = ( x̂⁽¹⁾ᵀ, x̂⁽²⁾ᵀ )ᵀ    (70)

where x̂⁽¹⁾ ∈ C^{n_R} and x̂⁽²⁾ ∈ C^{n_T − n_R}. Then

Hx̂ = Dx̂ + H̃x̂ = d x̂⁽¹⁾ + H̃x̂    (71)

where H̃x̂ ~ N_C(0, I_{n_R}). Hence

h(Hx̂ | X̂ = x̂) = h(H̃x̂) = n_R log πe    (72)

n_R E[log ‖Hx̂‖²] = n_R g_{n_R}( |d|² ‖x̂⁽¹⁾‖² ) ≤ n_R g_{n_R}(|d|²)    (73)

h_λ( HX̂/‖HX̂‖ ) ≤ log ( 2π^{n_R} / Γ(n_R) ).    (74)

Here, the equality in (73) follows from the fact that ‖d x̂⁽¹⁾ + H̃x̂‖² is noncentral chi-square distributed and from (64); the inequality in (73) follows from the monotonicity of g_{n_R}(·) and is tight if ‖x̂⁽¹⁾‖ = 1, i.e., x̂⁽²⁾ = 0; and the inequality in (74) follows from (14) and (15) and is tight if x̂⁽¹⁾ is uniformly distributed on the unit sphere in C^{n_R} so that HX̂ is isotropically distributed. The result now follows from Theorem 7.

The case n_R > n_T is more difficult since then (74) is in general not tight. We will only state an upper bound.

Proposition 11: Assume n_R > n_T and a Gaussian fading matrix as given in (57). Let the line-of-sight matrix be given as

D = d ( I_{n_T} ; 0_{(n_R − n_T) × n_T} )  (stacked vertically).    (75)

Then

χ(H) ≤ n_T log(1 + |d|²) + n_R log n_R − n_R − log Γ(n_R).    (76)

Proof: This result is a special case of Proposition 13 and has been published before in [1, Eq. (8)], [2, Eq. (6.4)].

B. General Line-of-Sight Matrix

Next we assume Gaussian fading as defined in (57) with a general line-of-sight matrix D having singular values d_1, ..., d_{min{n_R, n_T}}.
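The interpretation (64) and the bounds (67) can be checked by simulation. In the sketch below (our illustration; the helper names are ours) we estimate g₂(4) by Monte Carlo as the expected log of a noncentral chi-square variable with 4 degrees of freedom and noncentrality ξ = 4, and compare it against both bounds in (67), computing Ei(−x) from its power series:

```python
import math, random

random.seed(2)

def ei_neg(x, terms=60):
    """Ei(-x) for x > 0 via the series Ei(-x) = gamma + log x + sum_k (-x)^k/(k*k!)."""
    gamma_euler = 0.5772156649015329
    s, term = 0.0, 1.0
    for k in range(1, terms + 1):
        term *= -x / k          # term = (-x)^k / k!
        s += term / k
    return gamma_euler + math.log(x) + s

def g_mc(m, xi, n=200_000):
    """Monte Carlo estimate of g_m(xi), cf. (64): noncentrality xi, 2m real dof."""
    d = math.sqrt(xi)
    acc = 0.0
    for _ in range(n):
        # |d + U_1|^2 + |U_2|^2 + ... + |U_m|^2 with U_j IID CN(0,1)
        re = random.gauss(d, math.sqrt(0.5))
        im = random.gauss(0.0, math.sqrt(0.5))
        s = re * re + im * im
        for _ in range(m - 1):
            a = random.gauss(0.0, math.sqrt(0.5))
            b = random.gauss(0.0, math.sqrt(0.5))
            s += a * a + b * b
        acc += math.log(s)
    return acc / n

m, xi = 2, 4.0
lower = math.log(xi) - ei_neg(xi)   # g_1(xi), the lower bound in (67)
upper = math.log(m + xi)            # Jensen upper bound in (67)
est = g_mc(m, xi)
print(lower, est, upper)            # the estimate lands between the two bounds
```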
Hence, D̃, defined in (58) and (59), is given as

D̃ = diag( d_1, ..., d_{min{n_R, n_T}} )    (77)

where |d_1| ≥ |d_2| ≥ ... ≥ |d_{min{n_R, n_T}}| > 0.

We again start with the case n_R ≤ n_T.

Corollary 12: Assume n_R ≤ n_T and a Gaussian fading matrix as given in (57). Let the line-of-sight matrix D have singular values d_1, ..., d_{n_R}, where |d_1| ≥ |d_2| ≥ ... ≥ |d_{n_R}| > 0. Then

χ(H) ≤ n_R g_{n_R}(ξ) − n_R − log Γ(n_R)    (78)

where g_m(·) is given in (62) and where ξ ≜ |d_1|².

Proof: A proof is given in Appendix C.

The situation n_R > n_T is again more complicated. We include this case in a new upper bound based on (41) which holds independently of the particular relation between n_R and n_T.

Proposition 13: Assume a Gaussian fading matrix as given in (57) and let the line-of-sight matrix D be general with singular values d_1, ..., d_{min{n_R, n_T}}, where |d_1| ≥ |d_2| ≥ ... ≥ |d_{min{n_R, n_T}}| > 0. Then the fading number is upper-bounded as follows:

χ(H) ≤ min{n_R, n_T} log ξ̄ + n_R log n_R − n_R − log Γ(n_R)    (79)

where

ξ̄ ≜ ( min{n_R, n_T} + |d_1|² + ... + |d_{min{n_R, n_T}}|² ) / min{n_R, n_T}.    (80)

Proof: A proof is given in Appendix D.

VII. PROOF OF THE MAIN RESULT

The proof of Theorem 7 consists of two parts. First, we derive an upper bound on the fading number assuming an average-power constraint (27) on the input. The key ingredients here are the preliminary results from Section IV-A. In a second part we then show that this upper bound can actually be achieved by an input that satisfies the peak-power constraint (26). Since a peak-power constraint is more restrictive than the corresponding average-power constraint, the theorem follows.

Because the proof is rather technical, we will give a short overview to clarify the main ideas.

The upper bound relies strongly on Lemma 3, which says that the input can be assumed to take on large values only, i.e., at high SNR the additive noise becomes negligible, so that we can bound

I(X; Y) ⪅ I(X; HX).    (81)

This term is then split into a term that only considers the magnitude of HX and a term that takes the direction into account:

I(X; HX) = I(X; ‖HX‖) + I( X; HX/‖HX‖ | ‖HX‖ ).    (82)

For the first term, which is related to MISO fading, we then use the bounding technique of Lemma 4. Because Lemma 3 only holds in the limit when E tends to infinity, we introduce an event {‖X‖² ≥ E_0} for some fixed E_0 ≥ 0 and condition everything on this event.

To derive a lower bound on capacity we choose a specific input distribution of the form

X = R X̂    (83)

where the distribution of R is such that it achieves the fading number of an SIMO fading channel and where the distribution of X̂ is independent of R and will be specified only at the very end of the derivation (it will be chosen to maximize the fading number). We then split the mutual information into two terms:

I(X; Y) = I(R; Y | X̂) + I(X̂; Y).    (84)

The first term (almost) corresponds to an SIMO fading channel with side information for which the fading number is known. The second term is treated separately.

A. Derivation of an Upper Bound

In the following we will use the notation R ≜ ‖X‖ to denote the magnitude of the input vector X, i.e., we have X = R X̂. Note that in this section we are not allowed to assume that R is independent of X̂.

From Lemma 3, we know that the capacity-achieving input distribution must escape to infinity. Hence, we fix an arbitrary finite E_0 ≥ 0 and define an indicator random variable as follows: Let

E ≜ { 1, if ‖X‖² ≥ E_0 ;  0, otherwise }    (85)
Then

p ≜ Pr[E = 1] = Pr[ ‖X‖² ≥ E₀ ]  (86)

where we know from Lemma 3 that

lim_{E→∞} p = 1.  (87)

We now bound as follows:

I(X; Y) ≤ I(X, E; Y)  (88)
= I(E; Y) + I(X; Y | E)  (89)
= H(E) − H(E | Y) + I(X; Y | E)  (90)
≤ H(E) + I(X; Y | E)  (91)
= H_b(p) + p I(X; Y | E = 1) + (1 − p) I(X; Y | E = 0)  (92)
≤ H_b(p) + I(X; Y | E = 1) + (1 − p) C(E₀)  (93)

where

H_b(q) ≜ −q log q − (1 − q) log(1 − q)  (94)

is the binary entropy function. Here, (88) follows from adding an additional random variable to the mutual information; the subsequent two equalities follow from the chain rule and from the definition of mutual information (notice that we use entropy and not differential entropy because E is a binary random variable); in the subsequent inequality we rely on the nonnegativity of entropy; and the last inequality follows from bounding p ≤ 1 and from upper-bounding the mutual information term by the capacity C for the available power, which, conditional on E = 0, is E₀. We remark that even though C(E₀) is unknown, we know that it is finite and independent of E, so that from (87) we have

lim_{E→∞} { H_b(p) + (1 − p) C(E₀) } = 0.  (95)

We continue with the second term of (93) as follows:

I(X; Y | E = 1) = I(X; HX + Z | E = 1)  (96)
≤ I(X; HX + Z, Z | E = 1)  (97)
= I(X; HX, Z | E = 1)  (98)
= I(X; HX | E = 1) + I(X; Z | HX, E = 1)  (99)
= I(X; HX | E = 1)  (100)
= I(X; ‖HX‖, HX/‖HX‖ | E = 1)  (101)
= I(X; ‖HX‖ | E = 1) + I(X; HX/‖HX‖ | ‖HX‖, E = 1)  (102)
= I(X; ‖HX‖, e^{iΘ} | E = 1) + I(X; HX/‖HX‖ | ‖HX‖, E = 1)  (103)
= I(X; ‖HX‖ e^{iΘ}, e^{iΘ} | E = 1) + I(X; HX/‖HX‖ | ‖HX‖, E = 1)  (104)
= I(X; ‖HX‖ e^{iΘ} | E = 1) + I(X; HX/‖HX‖ | ‖HX‖, E = 1).  (105)

Here, (97) follows from adding an additional random vector Z to the argument of the mutual information; the subsequent equality from subtracting the known vector Z from Y; the subsequent two equalities follow from the chain rule and the independence between the noise and all other random quantities; then we split HX into magnitude and direction vector and use the chain rule again; (103) follows from adding a random variable to the mutual information: we introduce e^{iΘ}, which is independent of all the other random quantities and uniformly distributed on the complex unit circle; and the last equality holds because from ‖HX‖ e^{iΘ} we can easily get back ‖HX‖ and e^{iΘ}.
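The binary entropy function defined in (94) is easy to evaluate numerically. The helper below is our illustration (working in nats, the unit used throughout the paper); it confirms the behavior exploited in (95), namely that H_b(p) vanishes as p → 1.

```python
import math

def H_b(q):
    """Binary entropy function (94), in nats:
    H_b(q) = -q*log(q) - (1-q)*log(1-q), with H_b(0) = H_b(1) = 0."""
    if q <= 0.0 or q >= 1.0:
        return 0.0
    return -q * math.log(q) - (1.0 - q) * math.log(1.0 - q)
```

As p → 1, both H_b(p) and the weight (1 − p) multiplying C(E₀) in (93) vanish, which is the content of (95).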
We next apply Lemma 4 to the first term in (105), i.e., we choose S = X and T = ‖HX‖ e^{iΘ}. Note that we need to condition everything on the event E = 1. We get

I(X; ‖HX‖ e^{iΘ} | E = 1) ≤ −h(‖HX‖ e^{iΘ} | X, E = 1) + log π + α log β + log Γ(α, ν/β) + (1 − α) E[ log(‖HX‖² + ν) | E = 1 ] + ( E[‖HX‖² | E = 1] + ν ) / β  (106)

where α, β > 0 and ν ≥ 0 can be chosen freely, but must not depend on X. Notice that from a conditional version of Lemma 2 it follows that

h(‖HX‖ e^{iΘ} | X = x, E = 1) = h(e^{iΘ} | X = x, E = 1) + h(‖HX‖ | e^{iΘ}, X = x, E = 1) + E[ log ‖HX‖ | X = x, E = 1 ]  (107)
= log 2π + h(‖HX‖ | X = x, E = 1) + E[ log ‖HX‖ | X = x, E = 1 ]  (108)

where we have used that e^{iΘ} is independent of all other random quantities and uniformly distributed on the unit circle. Taking the expectation over X conditional on E = 1, we obtain

h(‖HX‖ e^{iΘ} | X, E = 1) = log 2π + h(‖HX‖ | X, E = 1) + E[ log ‖HX‖ | E = 1 ]  (109)
= log 2π + h(R ‖HX̂‖ | X̂, R, E = 1) + E[ log( R ‖HX̂‖ ) | E = 1 ]  (110)
= log 2π + h(‖HX̂‖ | X̂, R, E = 1) + E[ log R | E = 1 ] + E[ log ‖HX̂‖ | E = 1 ] + E[ log R | E = 1 ]  (111)
= log 2π + h(‖HX̂‖ | X̂, E = 1) + E[ log R² | E = 1 ] + E[ log ‖HX̂‖ | E = 1 ]  (112)

where the second equality follows from the definition R ≜ ‖X‖; where the third equality follows from the scaling property of differential entropy with a real argument; and where the last equality follows because, given X̂, ‖HX̂‖ is independent of R.

Next we assume 0 < α < 1 such that 1 − α > 0. Then we define

δ ≜ sup_{x² ≥ E₀} { log(x² + ν) − log x² }  (113)

such that

(1 − α) E[ log(‖HX‖² + ν) | E = 1 ]
= (1 − α) E[ log(‖HX‖² + ν) − log ‖HX‖² | E = 1 ] + (1 − α) E[ log ‖HX‖² | E = 1 ]  (114)
≤ (1 − α) sup_{x² ≥ E₀} { log(x² + ν) − log x² } + (1 − α) E[ log ‖HX‖² | E = 1 ]  (115)
= (1 − α) δ + (1 − α) E[ log ‖HX‖² | E = 1 ]  (116)
≤ δ + (1 − α) E[ log ‖HX‖² | E = 1 ].  (117)

Note that in the first inequality we have made use of the fact that E = 1, i.e., that ‖X‖² ≥ E₀. Finally, we bound

E[ ‖HX‖² | E = 1 ] / β = E[ R² ‖HX̂‖² | E = 1 ] / β  (118)
= E[ R² E[‖HX̂‖² | X̂] | E = 1 ] / β  (119)
≤ sup_x̂ E[ ‖Hx̂‖² ] · E[ R² | E = 1 ] / β  (120)
≤ sup_x̂ E[ ‖Hx̂‖² ] · E / (p β)  (121)

where we have used the fact that R needs to satisfy the average-power constraint (7) to get the following bound:

E ≥ E[ R² ]  (122)
= p E[ R² | E = 1 ] + (1 − p) E[ R² | E = 0 ]  (123)
≥ p E[ R² | E = 1 ].  (124)

Plugging (112), (117), and (121) into (106), we obtain

I(X; ‖HX‖ e^{iΘ} | E = 1) ≤ −log 2 − h(‖HX̂‖ | X̂, E = 1) − E[ log R² | E = 1 ] − E[ log ‖HX̂‖ | E = 1 ] + α log β + log Γ(α, ν/β) + δ + (1 − α) E[ log ‖HX‖² | E = 1 ] + sup_x̂ E[ ‖Hx̂‖² ] · E / (p β) + ν/β.  (125)

Next, we continue with the second term in (105):

I(X; HX/‖HX‖ | ‖HX‖, E = 1)
= h(HX/‖HX‖ | ‖HX‖, E = 1) − h(HX/‖HX‖ | X̂, R, ‖HX‖, E = 1)  (126)
= h(HX/‖HX‖ | ‖HX‖, E = 1) − h(HX̂/‖HX̂‖ | X̂, R, ‖HX̂‖, E = 1)  (127)
≤ h(HX̂/‖HX̂‖ | E = 1) − h(HX̂/‖HX̂‖ | X̂, ‖HX̂‖, E = 1).  (128)

Here, the last inequality follows because conditioning cannot increase entropy and because, given X̂ and ‖HX̂‖, the direction HX/‖HX‖ = HX̂/‖HX̂‖ does not depend on R.

Hence, using (128), (125), and (105) in (93), we get

I(X; Y) ≤ H_b(p) + (1 − p) C(E₀) − log 2 − h(‖HX̂‖ | X̂, E = 1) − E[ log R² | E = 1 ] − E[ log ‖HX̂‖ | E = 1 ] + α log β + log Γ(α, ν/β) + δ + (1 − α) E[ log ‖HX‖² | E = 1 ] + sup_x̂ E[ ‖Hx̂‖² ] · E / (p β) + ν/β + h(HX̂/‖HX̂‖ | E = 1) − h(HX̂/‖HX̂‖ | X̂, ‖HX̂‖, E = 1)  (129)

= H_b(p) + (1 − p) C(E₀) − log 2 + h(HX̂/‖HX̂‖ | E = 1) − h(HX̂ | X̂, E = 1) + (n_R − 1) E[ log ‖HX̂‖² | E = 1 ] − E[ log R² | E = 1 ] + α log β + log Γ(α, ν/β) + δ + (1 − α) E[ log ‖HX‖² | E = 1 ] + sup_x̂ E[ ‖Hx̂‖² ] · E / (p β) + ν/β  (130)

= H_b(p) + (1 − p) C(E₀) − log 2 + h(HX̂/‖HX̂‖ | E = 1) − h(HX̂ | X̂, E = 1) + n_R E[ log ‖HX̂‖² | E = 1 ] − α E[ log ‖HX‖² | E = 1 ] + α log β + log Γ(α, ν/β) + δ + sup_x̂ E[ ‖Hx̂‖² ] · E / (p β) + ν/β  (131)

≤ H_b(p) + (1 − p) C(E₀) − log 2 + h(HX̂/‖HX̂‖ | E = 1) − h(HX̂ | X̂, E = 1) + n_R E[ log ‖HX̂‖² | E = 1 ] + log Γ(α, ν/β) + δ + sup_x̂ E[ ‖Hx̂‖² ] · E / (p β) + ν/β + α ( log β − log E₀ − κ ).  (132)

Here, (130) follows again from a conditional version of Lemma 2 similar to (107)–(112), which allows us to combine the fourth and the last term in (129) into −h(HX̂ | X̂, E = 1) + (2n_R − 1) E[ log ‖HX̂‖ | E = 1 ]; in the subsequent equality we arithmetically rearrange the terms using log ‖HX‖² = log R² + log ‖HX̂‖²; and the final inequality follows from the following bound:

E[ log ‖HX‖² | E = 1 ] ≥ inf_{x² ≥ E₀} log x² + inf_x̂ E[ log ‖Hx̂‖² ]  (133)
= log E₀ + inf_x̂ E[ log ‖Hx̂‖² ]  (134)
≜ log E₀ + κ  (135)

where the last line should be taken as a definition for κ. Notice that

−∞ < κ < ∞  (136)

as can be argued as follows: the lower bound on κ follows from [ , Lemma 6.7f)], [ , Lemma A.5f)] because h(H) > −∞ and E[‖H‖²_F] < ∞. The upper bound on κ can be verified using the concavity of the logarithm function and Jensen's inequality.

Note that (132) does not depend on the distribution of R anymore, but only on the distribution of X̂! Hence, we can get an upper bound on capacity by taking the supremum over all possible distributions.

This then gives us the following upper bound on the fading number:

χ(H) = lim_{E→∞} { C(E) − log(1 + log(1 + E)) }  (37)
= lim_{E→∞} { sup I(X; Y) − log(1 + log(1 + E)) }  (38)
≤ lim_{E→∞} [ sup_{X̂} { h(HX̂/‖HX̂‖ | E = 1) − h(HX̂ | X̂, E = 1) + n_R E[ log ‖HX̂‖² | E = 1 ] } − log 2 + log Γ(α, ν/β) + δ + α(log β − log E₀ − κ) + sup_x̂ E[‖Hx̂‖²] · E/(pβ) + ν/β + H_b(p) + (1 − p) C(E₀) − log(1 + log(1 + E)) ]  (39)
≤ sup_{X̂} { h(HX̂/‖HX̂‖) − h(HX̂ | X̂) + n_R E[ log ‖HX̂‖² ] } − log 2 + lim_{E→∞} [ log Γ(α, ν/β) + δ + α(log β − log E₀ − κ) + sup_x̂ E[‖Hx̂‖²] · E/(pβ) + ν/β + H_b(p) + (1 − p) C(E₀) − log(1 + log(1 + E)) ]  (40)
= sup_{X̂} { h(HX̂/‖HX̂‖) − h(HX̂ | X̂) + n_R E[ log ‖HX̂‖² ] } − log 2 + lim_{E→∞} [ log Γ(α, ν/β) − log(1 + log(1 + E)) ] + lim_{E→∞} [ α(log β − log E₀ − κ) + sup_x̂ E[‖Hx̂‖²] · E/(pβ) + ν/β + H_b(p) + (1 − p) C(E₀) ] + δ  (41)
= sup_{X̂} { h(HX̂/‖HX̂‖) − h(HX̂ | X̂) + n_R E[ log ‖HX̂‖² ] } − log 2 + log(1 − e^{−1/c}) + δ.  (42)

Here, the first two equalities follow from the definition of the fading number (9); the subsequent inequality from (132); (40) follows because the parameters α, β, and ν must not depend on the input distribution (however, note that we are allowed to let them depend on E); the subsequent equality follows since the first terms do not depend on E; and in the last equality we have used (95) and made the following choices on the free parameters β and ν:

β(E) ≜ E · log E · sup_x̂ E[ ‖Hx̂‖² ]  (43)

ν(E) ≜ β(E) e^{−1/(cα)}  (44)

for some constant c > 0 (with α = α(E) tending to zero suitably as E → ∞). For this choice, note that

lim_{E→∞} { log Γ(α, ν/β) − log(1/α) } = log(1 − e^{−1/c})  (45)
lim_{E→∞} α ( log β − log E₀ − κ ) = 0  (46)
lim_{E→∞} { sup_x̂ E[ ‖Hx̂‖² ] · E / (p β) + ν/β } = 0  (47)
lim_{E→∞} { log(1/α) − log(1 + log(1 + E)) } = 0.  (48)

(Compare with [ , App. VII], [ , Sec. B.5.9].) To finish the derivation of the upper bound, we let c go to zero. Note that δ → 0 as c ↓ 0, as can be seen from (113), because then ν ↓ 0. Note further that

lim_{c↓0} log(1 − e^{−1/c}) = 0.  (49)

Therefore, we get

χ(H) ≤ sup_{X̂} { h(HX̂/‖HX̂‖) − h(HX̂ | X̂) + n_R E[ log ‖HX̂‖² ] − log 2 }.  (50)

B. Derivation of a Lower Bound

To derive a lower bound on capacity (or the fading number, respectively) we choose a specific input distribution. Let X be of the form

X = R X̂.  (51)

Here X̂ ∈ ℂ^{n_T} is assumed to be a random unit vector that is circularly symmetric, but whose exact distribution will be specified later. The random variable R ≥ 0 is chosen to be independent of X̂ and such that

log R² ~ U( [log x_min², log E] )  (52)

where we choose x_min as

x_min² ≜ log E.  (53)

Note that this choice of R satisfies the peak-power constraint (6) and therefore also the average-power constraint (7). Using such an input to our MIMO fading channel, we get the following lower bound on channel capacity:

C(E) ≥ I(X; Y)  (54)
= I(R, X̂; Y)  (55)
= I(X̂; Y) + I(R; Y | X̂)  (56)
= I(X̂; Y) + I(R; Y e^{iΘ} | X̂) − I(R; Y e^{iΘ} | X̂) + I(R; Y | X̂)  (57)
= I(X̂; Y) + I(R, e^{iΘ}; Y e^{iΘ} | X̂) − I(e^{iΘ}; Y e^{iΘ} | X̂, R) − I(R; Y e^{iΘ} | X̂) + I(R; Y | X̂).  (58)

Here we have introduced a new random variable Θ ~ U([0, 2π]), which is assumed to be independent of every other random quantity. The last two terms can be rearranged as follows:

−I(R; Y e^{iΘ} | X̂) + I(R; Y | X̂)
= −h(Y e^{iΘ} | X̂) + h(Y e^{iΘ} | X̂, R) + h(Y | X̂) − h(Y | X̂, R)  (59)
= −h(Y e^{iΘ} | X̂) + h(Y e^{iΘ} | X̂, R) + h(Y e^{iΘ} | X̂, e^{iΘ}) − h(Y e^{iΘ} | X̂, R, e^{iΘ})  (60)
= −I(e^{iΘ}; Y e^{iΘ} | X̂) + I(e^{iΘ}; Y e^{iΘ} | X̂, R).  (61)

Here the second equality follows because e^{iΘ} is independent of everything else, so that we can add it to the conditioning part of the entropy without changing its value, and because differential entropy remains unchanged if its argument is multiplied by a constant complex number of magnitude 1.
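The magnitude law (52) is straightforward to sample. The sketch below is our illustration (the function name is ours, and we read (52) as saying that log R² is uniform on [log x_min², log E]); by construction every sample satisfies the peak-power constraint R² ≤ E.

```python
import numpy as np

def sample_magnitude(x_min_sq, e_peak, n, seed=0):
    """Sample R with log R^2 ~ Uniform[log x_min^2, log E], the input-magnitude
    distribution (52) used in the lower-bound construction."""
    rng = np.random.default_rng(seed)
    log_r_sq = rng.uniform(np.log(x_min_sq), np.log(e_peak), size=n)
    return np.exp(0.5 * log_r_sq)
```

Since R² never exceeds E, the peak-power constraint (6) holds with probability 1, and hence so does the average-power constraint (7).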
Combining this with (58), we obtain

C(E) ≥ I(X̂; Y) + I(R, e^{iΘ}; Y e^{iΘ} | X̂) − I(e^{iΘ}; Y e^{iΘ} | X̂)  (62)
= I(X̂; Y) + I(R e^{iΘ}; Y e^{iΘ} | X̂) − I(e^{iΘ}; Y e^{iΘ} | X̂)  (63)

where the last equality follows because from R e^{iΘ} the random variables R and e^{iΘ} can be gained back. We continue with bounding the first term in (63):

I(X̂; Y) = I(X̂; Y, Z) − I(X̂; Z | Y)  (64)
≥ I(X̂; Y, Z) − ε(x_min)  (65)
= I(X̂; HX) − ε(x_min)  (66)
= I(X̂; HX̂/‖HX̂‖, R ‖HX̂‖) − ε(x_min)  (67)
= I(X̂; HX̂/‖HX̂‖) + I(X̂; R ‖HX̂‖ | HX̂/‖HX̂‖) − ε(x_min).  (68)

Here the first equality follows from the chain rule; in the subsequent inequality we lower-bound the second term by −ε(x_min), which is defined in Appendix E and is shown there to be independent of the input distribution of X and to tend to zero as x_min → ∞; in the subsequent equality we use Z in order to extract HX from Y and then drop (Y, Z), since given HX it is independent of the other random variables; and the last equality follows again from the chain rule.

Similarly, we bound the third term in (63):

I(e^{iΘ}; Y e^{iΘ} | X̂) ≤ I(e^{iΘ}; Y e^{iΘ}, Z e^{iΘ} | X̂)  (69)
= I(e^{iΘ}; HX e^{iΘ}, Z e^{iΘ} | X̂)  (70)
= I(e^{iΘ}; HX e^{iΘ} | X̂) + I(e^{iΘ}; Z e^{iΘ} | HX e^{iΘ}, X̂)  (71)
= I(e^{iΘ}; HX e^{iΘ} | X̂)  (72)
= I(e^{iΘ}; R ‖HX̂‖, (HX̂/‖HX̂‖) e^{iΘ} | X̂)  (73)

= I(e^{iΘ}; (HX̂/‖HX̂‖) e^{iΘ} | X̂) + I(e^{iΘ}; R ‖HX̂‖ | (HX̂/‖HX̂‖) e^{iΘ}, X̂).  (74)

Hence, plugging these results into (63), we get

C(E) ≥ I(R e^{iΘ}; Y e^{iΘ} | X̂) + I(X̂; HX̂/‖HX̂‖) + I(X̂; R ‖HX̂‖ | HX̂/‖HX̂‖) − I(e^{iΘ}; (HX̂/‖HX̂‖) e^{iΘ} | X̂) − I(e^{iΘ}; R ‖HX̂‖ | (HX̂/‖HX̂‖) e^{iΘ}, X̂) − ε(x_min).  (75)

We next bound the third and fifth mutual information terms in (75):

I(X̂; R ‖HX̂‖ | HX̂/‖HX̂‖) − I(e^{iΘ}; R ‖HX̂‖ | (HX̂/‖HX̂‖) e^{iΘ}, X̂)
= h(R ‖HX̂‖ | HX̂/‖HX̂‖) − h(R ‖HX̂‖ | HX̂/‖HX̂‖, X̂) − h(R ‖HX̂‖ | (HX̂/‖HX̂‖) e^{iΘ}, X̂) + h(R ‖HX̂‖ | (HX̂/‖HX̂‖) e^{iΘ}, X̂, e^{iΘ})  (76)
= h(R ‖HX̂‖ | HX̂/‖HX̂‖) − h(R ‖HX̂‖ | HX̂/‖HX̂‖, X̂) − h(R ‖HX̂‖ | (HX̂/‖HX̂‖) e^{iΘ}, X̂) + h(R ‖HX̂‖ | HX̂/‖HX̂‖, X̂)  (77)
= h(R ‖HX̂‖ | HX̂/‖HX̂‖) − h(R ‖HX̂‖ | (HX̂/‖HX̂‖) e^{iΘ}, X̂)  (78)
= h(R ‖HX̂‖ | (HX̂/‖HX̂‖) e^{iΘ}) − h(R ‖HX̂‖ | (HX̂/‖HX̂‖) e^{iΘ}, X̂)  (79)
≥ h(R ‖HX̂‖ | (HX̂/‖HX̂‖) e^{iΘ}, X̂) − h(R ‖HX̂‖ | (HX̂/‖HX̂‖) e^{iΘ}, X̂)  (80)
= 0.  (81)

Here, the inequality follows from conditioning that reduces entropy; and the second-last equality holds because we have assumed X̂ to be circularly symmetric, i.e., e^{iΘ} destroys the random phase shift. Therefore, we are left with the following bound:

C(E) ≥ I(R e^{iΘ}; Y e^{iΘ} | X̂) + I(X̂; HX̂/‖HX̂‖) − I(e^{iΘ}; (HX̂/‖HX̂‖) e^{iΘ} | X̂) − ε(x_min).  (82)

Now, we rewrite the second and third terms as follows:

I(X̂; HX̂/‖HX̂‖) − I(e^{iΘ}; (HX̂/‖HX̂‖) e^{iΘ} | X̂)
= h(HX̂/‖HX̂‖) − h(HX̂/‖HX̂‖ | X̂) − h((HX̂/‖HX̂‖) e^{iΘ} | X̂) + h((HX̂/‖HX̂‖) e^{iΘ} | X̂, e^{iΘ})  (83)
= h(HX̂/‖HX̂‖) − h(HX̂/‖HX̂‖ | X̂) − h((HX̂/‖HX̂‖) e^{iΘ} | X̂) + h(HX̂/‖HX̂‖ | X̂)  (84)
= h(HX̂/‖HX̂‖) − h((HX̂/‖HX̂‖) e^{iΘ} | X̂)  (85)

where the second equality follows because, conditional on e^{iΘ}, multiplying the argument by the constant unit-magnitude factor e^{−iΘ} does not change differential entropy, and from the fact that e^{iΘ} is independent of all other random quantities. This leaves us with

C(E) ≥ I(R e^{iΘ}; Y e^{iΘ} | X̂) + h(HX̂/‖HX̂‖) − h((HX̂/‖HX̂‖) e^{iΘ} | X̂) − ε(x_min).  (86)

Next, we let the power grow to infinity, E → ∞, and use the definition of the fading number (9). Since R e^{iΘ} is circularly symmetric with a magnitude distributed according to (52), we know from [ , Eq. (108) and Theorem 4.8], [ , Eq. (6.94) and Theorem 6.5] that R e^{iΘ} achieves the fading number of a memoryless SIMO fading channel with partial side-information. In our situation we have

I(R e^{iΘ}; Y e^{iΘ} | X̂) = I(R e^{iΘ}; HX̂ R e^{iΘ} + Z | X̂)  (87)
= I(R e^{iΘ}; HX̂ R e^{iΘ} + Z, X̂)  (88)

where X̂ serves as partial receiver side-information (that is independent of the SIMO input R e^{iΘ}). Note that a random vector A is said to contain only partial side-information about B if h(B | A) > −∞, i.e., in our case we need

h(HX̂ | X̂) > −∞  (89)

which is satisfied since we assume that h(H) > −∞ and E[‖H‖²_F] < ∞ (see [ , Lemma 6.6], [ , Lemma A.4]). Hence

χ(H) ≥ lim_{E→∞} { I(R e^{iΘ}; HX̂ R e^{iΘ} + Z | X̂) + h(HX̂/‖HX̂‖) − h((HX̂/‖HX̂‖) e^{iΘ} | X̂) − ε(x_min) − log(1 + log(1 + E)) }  (90)

= lim_{E→∞} { I(R e^{iΘ}; HX̂ R e^{iΘ} + Z | X̂) − log(1 + log(1 + E)) } + h(HX̂/‖HX̂‖) − h((HX̂/‖HX̂‖) e^{iΘ} | X̂)  (91)
= χ(HX̂ | X̂) + h(HX̂/‖HX̂‖) − h((HX̂/‖HX̂‖) e^{iΘ} | X̂)  (92)
= h((HX̂/‖HX̂‖) e^{iΘ} | X̂) + n_R E[ log ‖HX̂‖² ] − log 2 − h(HX̂ | X̂) + h(HX̂/‖HX̂‖) − h((HX̂/‖HX̂‖) e^{iΘ} | X̂)  (93)
= h(HX̂/‖HX̂‖) + n_R E[ log ‖HX̂‖² ] − log 2 − h(HX̂ | X̂).  (94)

Here, in (91) we have used the fact that our choice (53) guarantees that ε(x_min) tends to zero as E → ∞ (see Appendix E), and in (92) that we achieve the SIMO fading number for a channel with input R e^{iΘ} and output HX̂ R e^{iΘ} + Z; the subsequent equality follows from the fading number of a memoryless SIMO fading channel where the receiver has access to some partial side-information [ , Eq. (108)], [ , Eq. (6.94)]:

χ(H | S) = h( (H/‖H‖) e^{iΘ} | S ) + n_R E[ log ‖H‖² ] − log 2 − h(H | S).  (95)

The result now follows by choosing the distribution of X̂ such as to maximize the lower bound (94) on the fading number.

VIII. CONCLUSION

We have derived the fading number of a MIMO fading channel of general fading law, including spatial, but without temporal, memory. Since the fading number is the second term, after the double-logarithmic term, of the high-SNR expansion of channel capacity, this means that we have precisely specified the asymptotic behavior of the channel capacity when the power grows to infinity. The result shows that the asymptotic capacity can be achieved by an input that consists of the product of two independent random quantities: a circularly symmetric random unit vector (the direction) and a nonnegative (i.e., real) random variable (the magnitude). The distribution of the random direction is chosen such as to maximize the fading number and therefore depends on the particular law of the fading process. The nonnegative random variable is such that (52) is satisfied. This is the well-known choice that also achieves the fading number in the SISO and SIMO case, and it is also used in the MISO case, where it is multiplied by a constant beam-direction. All these special cases follow nicely from this new result. We have then derived some new results for the important special situation of Gaussian fading.
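The size of the double-logarithmic term just mentioned can be checked numerically. The sketch below is our illustration (the function name is ours): it evaluates log(1 + log(1 + SNR)) in nats for an SNR given in dB.

```python
import math

def double_log_term_nats(snr_db):
    """Evaluate log(1 + log(1 + SNR)) in nats, the double-logarithmic term
    of the high-SNR capacity expansion, for SNR given in dB."""
    snr = 10.0 ** (snr_db / 10.0)
    return math.log(1.0 + math.log(1.0 + snr))
```

For SNR between 30 dB and 80 dB this term lies between roughly 2 and 3 nats, so any capacity appreciably beyond that level is governed almost entirely by the fading number.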
For the case of a scalar line-of-sight matrix (68), assuming at least as many transmit as receive antennas, n_R ≤ n_T, we have been able to state the fading number precisely:

χ(H) = n_R g_{n_R}(|d|²) − n_R − log Γ(n_R)  (96)

where g_m(·) denotes the expected value of the logarithm of a noncentral chi-square random variable (see (64)). We see that the asymptotic capacity only depends on the number of receive antennas and grows proportionally to n_R log |d|². For a general line-of-sight matrix, we have shown an upper bound that grows like min{n_R, n_T} log λ², where λ² is a certain kind of average of all singular values of the line-of-sight matrix (see (79) and (80)).

We would like to emphasize that, even though all results on the fading number are asymptotic results for the theoretical situation of infinite power, they are still of relevance for finite SNR values: it has been shown that the approximation

C(SNR) ≈ log(1 + log(1 + SNR)) + χ  (97)

holds already for moderate values of the SNR. Actually, pulling ourselves up by our bootstraps, let us consider for the moment that (97) starts to be valid for an SNR somewhere in the range of 30 to 80 dB. In this case log(1 + log(1 + SNR)) will have a value between 2 and 3 nats. Hence, once the capacity is appreciably above χ + 2 nats, the approximation (97) is likely to be valid [ ], [ ]. Therefore, the fading number can be seen as an indicator of the maximal rate at which power-efficient communication is possible on the channel. For a further discussion of the practical relevance of the fading number we refer to [ ] and [ ].

APPENDIX A
PROOF OF LEMMA 5

Assume that Θ ~ U([0, 2π]), independent of every other random quantity.
Then

I(X; Y) = I(X; Y | e^{iΘ})  (98)
= I(X e^{iΘ}; Y e^{iΘ} | e^{iΘ})  (99)
= I(X e^{iΘ}; HX e^{iΘ} + Z | e^{iΘ})  (100)
= I(X̃; HX̃ + Z | e^{iΘ})  (101)
= h(HX̃ + Z | e^{iΘ}) − h(HX̃ + Z | X̃, e^{iΘ})  (102)
= h(HX̃ + Z | e^{iΘ}) − h(HX̃ + Z | X̃)  (103)
≤ h(HX̃ + Z) − h(HX̃ + Z | X̃)  (104)
= I(X̃; HX̃ + Z).  (105)

Here the first equality follows because Θ is independent of every other random quantity; the third equality follows because Z is circularly symmetric; in the subsequent equality we substitute X̃ ≜ X e^{iΘ}; and the inequality follows since conditioning reduces entropy. Hence, a circularly symmetric input achieves a mutual information that is at least as large as the original mutual information.

APPENDIX B
DERIVATION OF BOUNDS (67)

In this appendix we will derive the bounds (67) on g_m(·). We start with the upper bound, which follows directly from (64) and (65) and from Jensen's inequality (where the U_j are i.i.d. circularly symmetric Gaussians of unit variance and Σ_j |μ_j|² = s²):

g_m(s²) = E[ log( Σ_{j=1}^{m} |U_j + μ_j|² ) ]  (106)
≤ log E[ Σ_{j=1}^{m} |U_j + μ_j|² ]  (107)
= log( Σ_{j=1}^{m} (1 + |μ_j|²) )  (108)
= log(m + s²).  (109)

For the lower bound we also start with (64) and choose μ₁ = s and μ₂ = ⋯ = μ_m = 0. Then we get

g_m(s²) = E[ log( Σ_{j=1}^{m} |U_j + μ_j|² ) ]  (110)

≥ E[ log |U₁ + s|² ]  (111)
= g₁(s²)  (112)
= log s² − Ei(−s²).  (113)

Here, (111) follows from dropping some nonnegative terms in the sum; and in the subsequent two equalities we use the definition of g₁(·).

APPENDIX C
PROOF OF COROLLARY

We choose a constant n_T × n_T matrix

M ≜ diag( 1/d₁, …, 1/d_{n_R}, 1, …, 1 )  (114)

and then we note that, for a unit vector x̂ = (x̂⁽¹⁾, …, x̂⁽ⁿᵀ⁾)ᵀ,

H M x̂ = D M x̂ + H̃ M x̂ = (x̂⁽¹⁾, …, x̂⁽ⁿᴿ⁾)ᵀ + H̃ M x̂  (115)

where H̃ M x̂ ~ N_C(0, σ²(x̂) I_{n_R}) with

σ²(x̂) ≜ |x̂⁽¹⁾|²/|d₁|² + ⋯ + |x̂⁽ⁿᴿ⁾|²/|d_{n_R}|² + |x̂⁽ⁿᴿ⁺¹⁾|² + ⋯ + |x̂⁽ⁿᵀ⁾|².  (116)

Therefore

h(H M x̂ | X̂ = x̂) = n_R log( πe σ²(x̂) )  (117)

and

E[ log ‖H M x̂‖² ] = log σ²(x̂) + g_{n_R}( ( |x̂⁽¹⁾|² + ⋯ + |x̂⁽ⁿᴿ⁾|² ) / σ²(x̂) )  (118)

(where the last equality follows from (64)), and hence

n_R E[ log ‖H M x̂‖² ] − h(H M x̂ | X̂ = x̂) = n_R g_{n_R}( ( |x̂⁽¹⁾|² + ⋯ + |x̂⁽ⁿᴿ⁾|² ) / σ²(x̂) ) − n_R log(πe).  (119)

The upper bound on the fading number now follows from (39); from Theorem 7 by upper-bounding the h(HX̂/‖HX̂‖)-term by log c_{n_R}; and from the additional observations that g_m(·) is a monotonically increasing function, that

|x̂⁽¹⁾|² + ⋯ + |x̂⁽ⁿᴿ⁾|² ≤ 1  (120)

and that

( |x̂⁽¹⁾|² + ⋯ + |x̂⁽ⁿᴿ⁾|² ) / σ²(x̂) ≤ ( |x̂⁽¹⁾|² + ⋯ + |x̂⁽ⁿᴿ⁾|² ) / ( |x̂⁽¹⁾|²/|d₁|² + ⋯ + |x̂⁽ⁿᴿ⁾|²/|d₁|² )  (121)–(123)
= |d₁|² = λ²  (124)

where the inequality follows since |d₁| ≥ |d₂| ≥ ⋯ ≥ |d_{n_T}|.
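The bounds of Appendix B can be verified numerically. The sketch below is our illustration (helper names are ours): it estimates g₁(s²) by Monte Carlo and compares it against the closed form log s² − Ei(−s²) of (113), computing the exponential integral E₁(x) = −Ei(−x) by direct numerical integration.

```python
import math
import numpy as np

def e1(x, n=400_001, t_max=60.0):
    """Numerically integrate E1(x) = int_x^inf exp(-t)/t dt (trapezoidal rule),
    so that Ei(-x) = -E1(x)."""
    t = np.linspace(x, t_max, n)
    y = np.exp(-t) / t
    dt = t[1] - t[0]
    return float(((y[:-1] + y[1:]) * dt / 2.0).sum())

def g1_mc(s_sq, n_samples=400_000, seed=0):
    """Monte Carlo estimate of g_1(s^2) = E[log |U + s|^2], U ~ CN(0,1)."""
    rng = np.random.default_rng(seed)
    u = (rng.standard_normal(n_samples)
         + 1j * rng.standard_normal(n_samples)) / math.sqrt(2.0)
    return float(np.mean(np.log(np.abs(u + math.sqrt(s_sq)) ** 2)))
```

For example, for s² = 4 the Monte Carlo estimate should agree with log(s²) + E₁(s²), and both must lie between the lower bound g₁(s²) and the Jensen upper bound log(1 + s²) of (109).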
APPENDIX D
PROOF OF PROPOSITION 3

This upper bound is based on the upper bound given in Corollary 8 for a suitable choice of the matrix M. If n_R > n_T we choose the n_R × n_R matrix

M ≜ a · diag( 1/d₁, …, 1/d_{n_T}, b, …, b )  (125)

with

b² ≜ λ² / (d₁ ⋯ d_{n_T})^{2/n_T}  (126)

for λ² as given in (80), and with a such that det M = 1, i.e.,

a ≜ ( (d₁ ⋯ d_{n_T}) b^{−(n_R − n_T)} )^{1/n_R}.  (127)

For such a choice we note that

M H x̂ = a (x̂⁽¹⁾, …, x̂⁽ⁿᵀ⁾, 0, …, 0)ᵀ + ( N_C(0, |a|²/|d₁|²), …, N_C(0, |a|²/|d_{n_T}|²), N_C(0, |a|² b²), …, N_C(0, |a|² b²) )ᵀ  (128)

so that

E[ ‖M H x̂‖² | X̂ = x̂ ] ≤ |a|² ( n_T b² + (n_R − n_T) b² )  (129)
= n_R λ^{2 n_T / n_R}.  (130)

Hence, using Jensen's inequality and the fact that det M = 1, we get

n_R E[ log ‖M H x̂‖² ] − h(M H x̂ | X̂ = x̂) ≤ n_R log E[ ‖M H x̂‖² | X̂ = x̂ ] − n_R log(πe)  (131)
= n_R log( n_R λ^{2 n_T / n_R} ) − n_R log(πe).  (132)

Plugging this into the upper bound of Corollary 8, we obtain

χ(H) ≤ n_R log( n_R λ^{2 n_T / n_R} ) − n_R log(πe) + n_R log π − log Γ(n_R)  (133)
= n_T log λ² + n_R log n_R − log Γ(n_R) − n_R.

If n_R ≤ n_T we choose

M ≜ a · diag( 1/d₁, …, 1/d_{n_T} )  (134)–(135)

with a such that det M = 1, i.e.,

a ≜ (d₁ ⋯ d_{n_T})^{1/n_T}.  (136)

For such a choice we note that

H M x̂ = a (x̂⁽¹⁾, …, x̂⁽ⁿᴿ⁾)ᵀ + H̃ M x̂  (137)

where H̃ M x̂ has i.i.d. components of variances |a|²/|d₁|², …, |a|²/|d_{n_T}|².


A note on the multiplication of sparse matrices Cent. Eur. J. Cop. Sci. 41) 2014 1-11 DOI: 10.2478/s13537-014-0201-x Central European Journal of Coputer Science A note on the ultiplication of sparse atrices Research Article Keivan Borna 12, Sohrab Aboozarkhani

More information

Passivity based control of magnetic levitation systems: theory and experiments Λ

Passivity based control of magnetic levitation systems: theory and experiments Λ Passivity based control of agnetic levitation systes: teory and experients Λ Hugo Rodriguez a, Roeo Ortega ay and Houria Siguerdidjane b alaboratoire des Signaux et Systées bservice d Autoatique Supelec

More information

Computational and Statistical Learning Theory

Computational and Statistical Learning Theory Coputational and Statistical Learning Theory Proble sets 5 and 6 Due: Noveber th Please send your solutions to learning-subissions@ttic.edu Notations/Definitions Recall the definition of saple based Radeacher

More information

Bloom Features. Kwabena Boahen Bioengineering Department Stanford University Stanford CA, USA

Bloom Features. Kwabena Boahen Bioengineering Department Stanford University Stanford CA, USA 2015 International Conference on Coputational Science and Coputational Intelligence Bloo Features Asok Cutkosky Coputer Science Departent Stanford University Stanford CA, USA asokc@stanford.edu Kwabena

More information

A1. Find all ordered pairs (a, b) of positive integers for which 1 a + 1 b = 3

A1. Find all ordered pairs (a, b) of positive integers for which 1 a + 1 b = 3 A. Find all ordered pairs a, b) of positive integers for which a + b = 3 08. Answer. The six ordered pairs are 009, 08), 08, 009), 009 337, 674) = 35043, 674), 009 346, 673) = 3584, 673), 674, 009 337)

More information

This article appeared in a journal published by Elsevier. The attached copy is furnished to the author for internal non-commercial research and

This article appeared in a journal published by Elsevier. The attached copy is furnished to the author for internal non-commercial research and This article appeared in a ournal published by Elsevier. The attached copy is furnished to the author for internal non-coercial research and education use, including for instruction at the authors institution

More information

Numerical Differentiation

Numerical Differentiation Numerical Differentiation Finite Difference Formulas for te first derivative (Using Taylor Expansion tecnique) (section 8.3.) Suppose tat f() = g() is a function of te variable, and tat as 0 te function

More information

Parameter Fitted Scheme for Singularly Perturbed Delay Differential Equations

Parameter Fitted Scheme for Singularly Perturbed Delay Differential Equations International Journal of Applied Science and Engineering 2013. 11, 4: 361-373 Parameter Fitted Sceme for Singularly Perturbed Delay Differential Equations Awoke Andargiea* and Y. N. Reddyb a b Department

More information

13.2 Fully Polynomial Randomized Approximation Scheme for Permanent of Random 0-1 Matrices

13.2 Fully Polynomial Randomized Approximation Scheme for Permanent of Random 0-1 Matrices CS71 Randoness & Coputation Spring 018 Instructor: Alistair Sinclair Lecture 13: February 7 Disclaier: These notes have not been subjected to the usual scrutiny accorded to foral publications. They ay

More information

E0 370 Statistical Learning Theory Lecture 6 (Aug 30, 2011) Margin Analysis

E0 370 Statistical Learning Theory Lecture 6 (Aug 30, 2011) Margin Analysis E0 370 tatistical Learning Theory Lecture 6 (Aug 30, 20) Margin Analysis Lecturer: hivani Agarwal cribe: Narasihan R Introduction In the last few lectures we have seen how to obtain high confidence bounds

More information

The Fading Number of a Multiple-Access Rician Fading Channel

The Fading Number of a Multiple-Access Rician Fading Channel XTNDD VRSION of a Paper in I TRANSACTIONS ON INFORMATION THORY, VOL. 57, NO. 8, AUGUST 011 1 The Fading Nuber of a Multiple-Access Rician Fading Channel Gu-Rong Lin and Stefan M. Moser, Senior Meber, I

More information

Estimation for the Parameters of the Exponentiated Exponential Distribution Using a Median Ranked Set Sampling

Estimation for the Parameters of the Exponentiated Exponential Distribution Using a Median Ranked Set Sampling Journal of Modern Applied Statistical Metods Volue 14 Issue 1 Article 19 5-1-015 Estiation for te Paraeters of te Exponentiated Exponential Distribution Using a Median Ranked Set Sapling Monjed H. Sau

More information

lecture 26: Richardson extrapolation

lecture 26: Richardson extrapolation 43 lecture 26: Ricardson extrapolation 35 Ricardson extrapolation, Romberg integration Trougout numerical analysis, one encounters procedures tat apply some simple approximation (eg, linear interpolation)

More information

CS Lecture 13. More Maximum Likelihood

CS Lecture 13. More Maximum Likelihood CS 6347 Lecture 13 More Maxiu Likelihood Recap Last tie: Introduction to axiu likelihood estiation MLE for Bayesian networks Optial CPTs correspond to epirical counts Today: MLE for CRFs 2 Maxiu Likelihood

More information

Consider a function f we ll specify which assumptions we need to make about it in a minute. Let us reformulate the integral. 1 f(x) dx.

Consider a function f we ll specify which assumptions we need to make about it in a minute. Let us reformulate the integral. 1 f(x) dx. Capter 2 Integrals as sums and derivatives as differences We now switc to te simplest metods for integrating or differentiating a function from its function samples. A careful study of Taylor expansions

More information

MASSACHUSETTS INSTITUTE OF TECHNOLOGY 6.436J/15.085J Fall 2008 Lecture 11 10/15/2008 ABSTRACT INTEGRATION I

MASSACHUSETTS INSTITUTE OF TECHNOLOGY 6.436J/15.085J Fall 2008 Lecture 11 10/15/2008 ABSTRACT INTEGRATION I MASSACHUSETTS INSTITUTE OF TECHNOLOGY 6.436J/15.085J Fall 2008 Lecture 11 10/15/2008 ABSTRACT INTEGRATION I Contents 1. Preliinaries 2. The ain result 3. The Rieann integral 4. The integral of a nonnegative

More information

Asynchronous Gossip Algorithms for Stochastic Optimization

Asynchronous Gossip Algorithms for Stochastic Optimization Asynchronous Gossip Algoriths for Stochastic Optiization S. Sundhar Ra ECE Dept. University of Illinois Urbana, IL 680 ssrini@illinois.edu A. Nedić IESE Dept. University of Illinois Urbana, IL 680 angelia@illinois.edu

More information

1 1. Rationalize the denominator and fully simplify the radical expression 3 3. Solution: = 1 = 3 3 = 2

1 1. Rationalize the denominator and fully simplify the radical expression 3 3. Solution: = 1 = 3 3 = 2 MTH - Spring 04 Exam Review (Solutions) Exam : February 5t 6:00-7:0 Tis exam review contains questions similar to tose you sould expect to see on Exam. Te questions included in tis review, owever, are

More information

SPECTRUM sensing is a core concept of cognitive radio

SPECTRUM sensing is a core concept of cognitive radio World Acadey of Science, Engineering and Technology International Journal of Electronics and Counication Engineering Vol:6, o:2, 202 Efficient Detection Using Sequential Probability Ratio Test in Mobile

More information

Submanifold density estimation

Submanifold density estimation Subanifold density estiation Arkadas Ozakin Georgia Tec Researc Institute Georgia Insitute of Tecnology arkadas.ozakin@gtri.gatec.edu Alexander Gray College of Coputing Georgia Institute of Tecnology agray@cc.gatec.edu

More information

arxiv: v1 [cs.ds] 3 Feb 2014

arxiv: v1 [cs.ds] 3 Feb 2014 arxiv:40.043v [cs.ds] 3 Feb 04 A Bound on the Expected Optiality of Rando Feasible Solutions to Cobinatorial Optiization Probles Evan A. Sultani The Johns Hopins University APL evan@sultani.co http://www.sultani.co/

More information

Exam 1 Review Solutions

Exam 1 Review Solutions Exam Review Solutions Please also review te old quizzes, and be sure tat you understand te omework problems. General notes: () Always give an algebraic reason for your answer (graps are not sufficient),

More information

Analytic Functions. Differentiable Functions of a Complex Variable

Analytic Functions. Differentiable Functions of a Complex Variable Analytic Functions Differentiable Functions of a Complex Variable In tis capter, we sall generalize te ideas for polynomials power series of a complex variable we developed in te previous capter to general

More information

Fractional Derivatives as Binomial Limits

Fractional Derivatives as Binomial Limits Fractional Derivatives as Binomial Limits Researc Question: Can te limit form of te iger-order derivative be extended to fractional orders? (atematics) Word Count: 669 words Contents - IRODUCIO... Error!

More information

3.3 Variational Characterization of Singular Values

3.3 Variational Characterization of Singular Values 3.3. Variational Characterization of Singular Values 61 3.3 Variational Characterization of Singular Values Since the singular values are square roots of the eigenvalues of the Heritian atrices A A and

More information

A new type of lower bound for the largest eigenvalue of a symmetric matrix

A new type of lower bound for the largest eigenvalue of a symmetric matrix Linear Algebra and its Applications 47 7 9 9 www.elsevier.co/locate/laa A new type of lower bound for the largest eigenvalue of a syetric atrix Piet Van Mieghe Delft University of Technology, P.O. Box

More information

Lecture 16: Scattering States and the Step Potential. 1 The Step Potential 1. 4 Wavepackets in the step potential 6

Lecture 16: Scattering States and the Step Potential. 1 The Step Potential 1. 4 Wavepackets in the step potential 6 Lecture 16: Scattering States and the Step Potential B. Zwiebach April 19, 2016 Contents 1 The Step Potential 1 2 Step Potential with E>V 0 2 3 Step Potential with E

More information

Generalized eigenfunctions and a Borel Theorem on the Sierpinski Gasket.

Generalized eigenfunctions and a Borel Theorem on the Sierpinski Gasket. Generalized eigenfunctions and a Borel Theore on the Sierpinski Gasket. Kasso A. Okoudjou, Luke G. Rogers, and Robert S. Strichartz May 26, 2006 1 Introduction There is a well developed theory (see [5,

More information

arxiv: v2 [math.co] 3 Dec 2008

arxiv: v2 [math.co] 3 Dec 2008 arxiv:0805.2814v2 [ath.co] 3 Dec 2008 Connectivity of the Unifor Rando Intersection Graph Sion R. Blacburn and Stefanie Gere Departent of Matheatics Royal Holloway, University of London Egha, Surrey TW20

More information

Fourier Series Summary (From Salivahanan et al, 2002)

Fourier Series Summary (From Salivahanan et al, 2002) Fourier Series Suary (Fro Salivahanan et al, ) A periodic continuous signal f(t), - < t

More information

Packing polynomials on multidimensional integer sectors

Packing polynomials on multidimensional integer sectors Pacing polynomials on multidimensional integer sectors Luis B Morales IIMAS, Universidad Nacional Autónoma de México, Ciudad de México, 04510, México lbm@unammx Submitted: Jun 3, 015; Accepted: Sep 8,

More information

Support recovery in compressed sensing: An estimation theoretic approach

Support recovery in compressed sensing: An estimation theoretic approach Support recovery in copressed sensing: An estiation theoretic approach Ain Karbasi, Ali Horati, Soheil Mohajer, Martin Vetterli School of Coputer and Counication Sciences École Polytechnique Fédérale de

More information

Detection and Estimation Theory

Detection and Estimation Theory ESE 54 Detection and Estiation Theory Joseph A. O Sullivan Sauel C. Sachs Professor Electronic Systes and Signals Research Laboratory Electrical and Systes Engineering Washington University 11 Urbauer

More information

Uniform Approximation and Bernstein Polynomials with Coefficients in the Unit Interval

Uniform Approximation and Bernstein Polynomials with Coefficients in the Unit Interval Unifor Approxiation and Bernstein Polynoials with Coefficients in the Unit Interval Weiang Qian and Marc D. Riedel Electrical and Coputer Engineering, University of Minnesota 200 Union St. S.E. Minneapolis,

More information

Reflection Symmetries of q-bernoulli Polynomials

Reflection Symmetries of q-bernoulli Polynomials Journal of Nonlinear Matematical Pysics Volume 1, Supplement 1 005, 41 4 Birtday Issue Reflection Symmetries of q-bernoulli Polynomials Boris A KUPERSHMIDT Te University of Tennessee Space Institute Tullaoma,

More information

Rateless Codes for MIMO Channels

Rateless Codes for MIMO Channels Rateless Codes for MIMO Channels Marya Modir Shanechi Dept EECS, MIT Cabridge, MA Eail: shanechi@itedu Uri Erez Tel Aviv University Raat Aviv, Israel Eail: uri@engtauacil Gregory W Wornell Dept EECS, MIT

More information

FAST DYNAMO ON THE REAL LINE

FAST DYNAMO ON THE REAL LINE FAST DYAMO O THE REAL LIE O. KOZLOVSKI & P. VYTOVA Abstract. In this paper we show that a piecewise expanding ap on the interval, extended to the real line by a non-expanding ap satisfying soe ild hypthesis

More information

1 Calculus. 1.1 Gradients and the Derivative. Q f(x+h) f(x)

1 Calculus. 1.1 Gradients and the Derivative. Q f(x+h) f(x) Calculus. Gradients and te Derivative Q f(x+) δy P T δx R f(x) 0 x x+ Let P (x, f(x)) and Q(x+, f(x+)) denote two points on te curve of te function y = f(x) and let R denote te point of intersection of

More information

Error Exponents in Asynchronous Communication

Error Exponents in Asynchronous Communication IEEE International Syposiu on Inforation Theory Proceedings Error Exponents in Asynchronous Counication Da Wang EECS Dept., MIT Cabridge, MA, USA Eail: dawang@it.edu Venkat Chandar Lincoln Laboratory,

More information

Support Vector Machine Classification of Uncertain and Imbalanced data using Robust Optimization

Support Vector Machine Classification of Uncertain and Imbalanced data using Robust Optimization Recent Researches in Coputer Science Support Vector Machine Classification of Uncertain and Ibalanced data using Robust Optiization RAGHAV PAT, THEODORE B. TRAFALIS, KASH BARKER School of Industrial Engineering

More information

Lecture 21. Numerical differentiation. f ( x+h) f ( x) h h

Lecture 21. Numerical differentiation. f ( x+h) f ( x) h h Lecture Numerical differentiation Introduction We can analytically calculate te derivative of any elementary function, so tere migt seem to be no motivation for calculating derivatives numerically. However

More information

Polynomial Interpolation

Polynomial Interpolation Capter 4 Polynomial Interpolation In tis capter, we consider te important problem of approximatinga function fx, wose values at a set of distinct points x, x, x,, x n are known, by a polynomial P x suc

More information

Notes on wavefunctions II: momentum wavefunctions

Notes on wavefunctions II: momentum wavefunctions Notes on wavefunctions II: momentum wavefunctions and uncertainty Te state of a particle at any time is described by a wavefunction ψ(x). Tese wavefunction must cange wit time, since we know tat particles

More information

A note on the realignment criterion

A note on the realignment criterion A note on the realignent criterion Chi-Kwong Li 1, Yiu-Tung Poon and Nung-Sing Sze 3 1 Departent of Matheatics, College of Willia & Mary, Williasburg, VA 3185, USA Departent of Matheatics, Iowa State University,

More information

Math 1241 Calculus Test 1

Math 1241 Calculus Test 1 February 4, 2004 Name Te first nine problems count 6 points eac and te final seven count as marked. Tere are 120 points available on tis test. Multiple coice section. Circle te correct coice(s). You do

More information

Fast Montgomery-like Square Root Computation over GF(2 m ) for All Trinomials

Fast Montgomery-like Square Root Computation over GF(2 m ) for All Trinomials Fast Montgoery-like Square Root Coputation over GF( ) for All Trinoials Yin Li a, Yu Zhang a, a Departent of Coputer Science and Technology, Xinyang Noral University, Henan, P.R.China Abstract This letter

More information

A = h w (1) Error Analysis Physics 141

A = h w (1) Error Analysis Physics 141 Introduction In all brances of pysical science and engineering one deals constantly wit numbers wic results more or less directly from experimental observations. Experimental observations always ave inaccuracies.

More information

Fairness via priority scheduling

Fairness via priority scheduling Fairness via priority scheduling Veeraruna Kavitha, N Heachandra and Debayan Das IEOR, IIT Bobay, Mubai, 400076, India vavitha,nh,debayan}@iitbacin Abstract In the context of ulti-agent resource allocation

More information

. If lim. x 2 x 1. f(x+h) f(x)

. If lim. x 2 x 1. f(x+h) f(x) Review of Differential Calculus Wen te value of one variable y is uniquely determined by te value of anoter variable x, ten te relationsip between x and y is described by a function f tat assigns a value

More information

ON THE TWO-LEVEL PRECONDITIONING IN LEAST SQUARES METHOD

ON THE TWO-LEVEL PRECONDITIONING IN LEAST SQUARES METHOD PROCEEDINGS OF THE YEREVAN STATE UNIVERSITY Physical and Matheatical Sciences 04,, p. 7 5 ON THE TWO-LEVEL PRECONDITIONING IN LEAST SQUARES METHOD M a t h e a t i c s Yu. A. HAKOPIAN, R. Z. HOVHANNISYAN

More information

Symmetry Labeling of Molecular Energies

Symmetry Labeling of Molecular Energies Capter 7. Symmetry Labeling of Molecular Energies Notes: Most of te material presented in tis capter is taken from Bunker and Jensen 1998, Cap. 6, and Bunker and Jensen 2005, Cap. 7. 7.1 Hamiltonian Symmetry

More information

arxiv: v2 [stat.me] 28 Aug 2016

arxiv: v2 [stat.me] 28 Aug 2016 arxiv:509.04704v [stat.me] 8 Aug 06 Central liit teores for network driven sapling Xiao Li Scool of Mateatical Sciences Peking University Karl Roe Departent of Statistics University of Wisconsin-Madison

More information

Derivation Of The Schwarzschild Radius Without General Relativity

Derivation Of The Schwarzschild Radius Without General Relativity Derivation Of Te Scwarzscild Radius Witout General Relativity In tis paper I present an alternative metod of deriving te Scwarzscild radius of a black ole. Te metod uses tree of te Planck units formulas:

More information