Converse Bounds for Finite-Length Joint Source-Channel Coding


Adrià Tauste Campo¹, Gonzalo Vazquez-Vilar¹, Albert Guillén i Fàbregas¹²³, Alfonso Martinez¹
¹Universitat Pompeu Fabra, ²ICREA, ³University of Cambridge
Email: {atauste, gvazquez, guillen, alfonso.martinez}@ieee.org

(This work has been supported by the European Research Council under an ERC grant agreement. A. Martinez received funding from the Ministry of Economy and Competitiveness (Spain) and from the European Union's 7th Framework Programme (PEOPLE-2011-CIG).)

Abstract: Based on the hypothesis-testing method, we derive lower bounds on the average error probability of finite-length joint source-channel coding. The extension of the meta-converse bound of channel coding to joint source-channel coding depends on the codebook and the decoding rule and is thus a priori computationally challenging. Weaker versions of this general bound recover known converses in the literature and provide computationally feasible expressions.

I. INTRODUCTION

Reliable communication of messages in the finite block-length regime can be characterized by upper and lower bounds on the average error probability of the best possible code. In order to prove the existence of a good code, random-coding techniques are often employed to derive upper bounds on the average error probability. In contrast, the computation of lower bounds satisfied by every code is in general challenging, since one must optimize the bound over each possible codebook and decoding rule. For equiprobable messages, a number of lower bounds on the average error probability [1]-[5] lead to a proof of the converse part of Shannon's theorem [6] when the block length grows to infinity. More recently, some of these bounds have been generalized to non-equiprobable messages using information-spectrum measures [7], [8] or the hypothesis-testing method [9].

In this paper, we elaborate the hypothesis-testing method in the context of joint source-channel coding to provide lower bounds on the average error probability. Following the footsteps of [4], [5], we propose an extension of the meta-converse by Polyanskiy et al. [5, Th. 26], which states that every channel code with $M$ codewords, block length $n$ and average error probability $\epsilon$ satisfies

$$\inf_{P_X} \sup_{Q} \beta_{1-\epsilon}\bigl(P_X P_{Y|X},\, P_X \times Q\bigr) \le \frac{1}{M}, \qquad (1)$$

where $\beta_\alpha(P_X P_{Y|X},\, P_X \times Q)$ is the minimum type-II error given by the Neyman-Pearson lemma [10] for a maximum type-I error of $1-\alpha$ when testing between $P_X P_{Y|X}$ and $P_X \times Q$; here $P_X$ is the input distribution induced by the codebook, $P_{Y|X}$ is the channel law and $Q$ is an arbitrary output distribution.

The central idea of our method is to consider an independent binary hypothesis test for every source message and to obtain a lower bound on the average error probability by applying the Neyman-Pearson lemma [10] to each test. This approach initially provides a converse bound involving a costly optimization over all possible codebooks and decoding rules. We show that this bound recovers several known results, including the information-spectrum bounds [7], [8]; more importantly, it is proven to attain Csiszár's sphere-packing exponent for joint source-channel coding [11, Th. 3]. Finally, we weaken the converse result to obtain lower bounds on the average error probability that can be numerically computed for some source-channel pairs of interest.

A. Notation and System Model

We consider the transmission of a length-$k$ discrete memoryless source over a discrete memoryless channel using length-$n$ block codes that are known both at the transmitter and receiver. The source is distributed according to $P_V^k(v) = \prod_{i=1}^k P_V(v_i)$, $v = (v_1, \ldots, v_k) \in \mathcal{V}^k$, where $\mathcal{V}$ is a discrete alphabet with cardinality $|\mathcal{V}|$.
The channel law is given by $P_{Y|X}^n(y|x) = \prod_{i=1}^n P_{Y|X}(y_i|x_i)$, $x = (x_1, \ldots, x_n) \in \mathcal{X}^n$, $y = (y_1, \ldots, y_n) \in \mathcal{Y}^n$, where $\mathcal{X}$ and $\mathcal{Y}$ are discrete alphabets with cardinalities $|\mathcal{X}|$ and $|\mathcal{Y}|$, respectively. Without loss of generality we assume that the source messages are indexed as $v_1, \ldots, v_{|\mathcal{V}|^k}$. An encoder maps the length-$k$ source message $v_l$ to a length-$n$ codeword $x_l$, which is then transmitted over the channel. We refer to the ratio $t \triangleq k/n$ as the transmission rate. Based on the length-$n$ channel output $y$ the decoder guesses which source message was transmitted. The decoding rule is specified by the (possibly random) transformation $P_{Z|Y}: \mathcal{Y}^n \to \mathcal{V}^k$. The decoded message will be denoted in the following by $z$. The average error probability $\epsilon$ is given by

$$\epsilon = \sum_{l=1}^{|\mathcal{V}|^k} P_V^k(v_l)\, \epsilon(v_l), \qquad (2)$$

where

$$\epsilon(v_l) \triangleq \Pr\{Z \neq v_l\} \qquad (3)$$
$$= 1 - \sum_{y} P_{Y|X}^n(y|x_l)\, P_{Z|Y}(v_l|y) \qquad (4)$$

is the error probability when message $v_l$ is transmitted.
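To make these definitions concrete, the following minimal sketch (our own illustration, not part of the paper) evaluates the average error probability (2) by exhaustive enumeration for a toy binary source, a toy memoryless channel and a MAP decoder. The setup, the codebook and all names such as `P_V` and `P_ch` are assumptions of this example.

```python
import itertools
import numpy as np

# Toy setup: binary source of length k, BSC of block length n.
k, n = 2, 3
p_src, delta = 0.05, 0.1                       # P_V(1) and BSC crossover
V = list(itertools.product([0, 1], repeat=k))  # source messages v_1, ..., v_{|V|^k}
Y = list(itertools.product([0, 1], repeat=n))  # channel outputs

def P_V(v):                                    # i.i.d. source P_V^k(v)
    return np.prod([p_src if b else 1 - p_src for b in v])

def P_ch(y, x):                                # memoryless channel P_{Y|X}^n(y|x)
    return np.prod([delta if yi != xi else 1 - delta for yi, xi in zip(y, x)])

# A hypothetical codebook: one length-n codeword per source message.
codebook = {v: tuple(v) + (0,) * (n - k) for v in V}

def map_decode(y):                             # MAP rule: maximize P_V^k(v) P_{Y|X}^n(y|x(v))
    return max(V, key=lambda v: P_V(v) * P_ch(y, codebook[v]))

# Average error probability (2): eps = sum_l P_V^k(v_l) * eps(v_l).
eps = sum(P_V(v) * sum(P_ch(y, codebook[v]) for y in Y if map_decode(y) != v)
          for v in V)
print(f"average error probability = {eps:.6f}")
```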

In this paper, we obtain tight lower bounds on the average error probability of the best code following a hypothesis-testing approach [4], [5].

II. HYPOTHESIS-TESTING APPROACH

For every pair $(v_l, x_l)$ we define a binary hypothesis-testing problem between the channel conditional distribution $P_{Y|X=x_l}$ and an arbitrary output distribution $Q^l$ as

$$H_0: \; Y \sim P_{Y|X=x_l}, \qquad (5)$$
$$H_1: \; Y \sim Q^l. \qquad (6)$$

We can construct a sub-optimal test for the above problem from the system described in Section I-A: for a given source message $v_l$, upon observation of the channel output $y$, we choose $H_0$ if $z = v_l$, and $H_1$ otherwise. The performance of this test can be evaluated according to its type-I and type-II errors. Specifically, the probability of choosing $Q^l$ when the true distribution is $P_{Y|X=x_l}$ (type-I error) is equal to

$$\epsilon(v_l) = 1 - \sum_{y} P_{Y|X}^n(y|x_l)\, P_{Z|Y}(v_l|y). \qquad (7)$$

Similarly, the probability of choosing $P_{Y|X=x_l}$ when the true distribution is given by $Q^l$ (type-II error) is given by

$$Q^l(v_l) \triangleq \sum_{y} Q^l(y)\, P_{Z|Y}(v_l|y). \qquad (8)$$

The two types of error can be related via the Neyman-Pearson lemma [10]. This result states that the optimal type-II error among all (possibly randomized) tests $P_W: \mathcal{Y}^n \to \{H_0, H_1\}$ with a type-I error of at most $1-\alpha$ is given by

$$\beta_\alpha\bigl(P_{Y|X=x_l}, Q^l\bigr) \triangleq \min_{P_W:\; \sum_y P_{Y|X}^n(y|x_l) P_W(H_0|y) \ge \alpha} \; \sum_{y} Q^l(y)\, P_W(H_0|y). \qquad (9)$$

In the rest of the paper, for ease of notation, we shall write $\beta_\alpha(x, Q) \triangleq \beta_\alpha(P_{Y|X=x}, Q)$. Consequently, the type-II error of any test for (5)-(6) is lower-bounded by $\beta_\alpha(x_l, Q^l)$ as long as the type-I error is no greater than $1-\alpha$. In particular, by setting $1-\alpha = \epsilon(v_l)$ in (9) and combining it with (8) we obtain

$$\beta_{1-\epsilon(v_l)}\bigl(x_l, Q^l\bigr) \le Q^l(v_l), \qquad l = 1, \ldots, |\mathcal{V}|^k, \qquad (10)$$

which upon recalling (2) gives an implicit lower bound on the average error probability of our proposed coding scheme.

In order to obtain a valid converse bound from (10) one needs to perform a challenging optimization over all possible codebooks $\{x_1, \ldots, x_{|\mathcal{V}|^k}\}$ and decoding transformations $P_{Z|Y}$. In contrast, any choice of $Q^l$, $l = 1, \ldots, |\mathcal{V}|^k$, gives by (9) a converse bound, as it is independent of the codebook and the decoder. Alternatively, a converse bound can be derived by defining $P_{Z|Y}$ according to the MAP decoding rule and optimizing (10) over all possible codebooks. In both cases, the derivation of a converse bound becomes computationally unfeasible as the block length increases. Hence, in the rest of the paper we analyze the performance of lower bounds derived from (10) which are computable in several cases of interest. In particular, in the next section we weaken (10) to re-derive a generalized version of the Verdú-Han lemma for source-channel coding, and we show that the lower bound induced by (2) and (10) attains Csiszár's sphere-packing exponent [11, Th. 3]. Then, in Section IV we further weaken (10) to obtain computable finite-length lower bounds on the average error probability of source-channel coding.
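For discrete distributions, the Neyman-Pearson quantity $\beta_\alpha$ in (9) can be computed directly: accept $H_0$ on outputs in decreasing order of the likelihood ratio $P/Q$, randomizing on the boundary symbol so that the type-I constraint holds with equality. The sketch below is our own illustration under these assumptions (function names are ours), not code from the paper.

```python
import itertools
import numpy as np

def beta(alpha, P, Q):
    """Minimum type-II error beta_alpha(P, Q) of (9) over a finite
    alphabet: optimal randomized test between H0: P and H1: Q with
    P(decide H0) >= alpha, built by taking symbols in decreasing
    likelihood-ratio order and randomizing on the boundary symbol."""
    P, Q = np.asarray(P, float), np.asarray(Q, float)
    ratio = np.where(Q > 0, P / np.where(Q > 0, Q, 1.0), np.inf)
    p_acc, b = 0.0, 0.0
    for i in np.argsort(-ratio):
        if p_acc >= alpha:
            break
        frac = 1.0 if P[i] == 0 else min(1.0, (alpha - p_acc) / P[i])
        p_acc += frac * P[i]        # type-I "accept H0" mass collected
        b += frac * Q[i]            # type-II error paid
    return b

# Example: the left-hand side of (10) for a BSC(0.1) with n = 3,
# all-zero codeword and uniform output distribution Q.
n, delta = 3, 0.1
Y = list(itertools.product([0, 1], repeat=n))
P = np.array([np.prod([delta if yi == 1 else 1 - delta for yi in y]) for y in Y])
Q = np.full(len(Y), 2.0 ** -n)
print(beta(1 - 0.1, P, Q))          # beta_{1-eps}(x, Q) for eps = 0.1
```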
III. CONNECTION WITH PREVIOUS WORK

A. Information-Spectrum Bounds

In [8, Th. 6], we derived a lower bound on the average error probability as a generalization of the Verdú-Han lemma for channel coding [3],

$$\epsilon \ge \Pr\Bigl\{ P_V^k(V)\, P_{Y|X}^n\bigl(Y|X(V)\bigr) \le \gamma(Y) \Bigr\} - \sum_{y} \gamma(y), \qquad (11)$$

where $X(V)$ denotes the mapping induced by a specific codebook and $\gamma: \mathcal{Y}^n \to \mathbb{R}^+$ is an arbitrary non-negative function. By choosing $\gamma(y) = \gamma Q(y)$, with $Q$ an arbitrary output distribution and $\gamma > 0$, and optimizing the bound over $\gamma$, $Q$ and the $P_{X|V}$ induced by each codebook, one obtains the converse bound

$$\epsilon \ge \inf_{P_{X|V}} \sup_{\gamma > 0,\, Q} \left\{ \Pr\left[ \frac{P_V^k(V)\, P_{Y|X}^n(Y|X)}{Q(Y)} < \gamma \right] - \gamma \right\}, \qquad (12)$$

which has been independently given in [9, Eq. (34)]. We next show that (12) can be seen as a consequence of (10). First, fix a given codebook $\{x_1, \ldots, x_{|\mathcal{V}|^k}\}$. Then, by combining the inequality [2]

$$\beta_{1-\epsilon}(x, Q) \ge \sup_{\gamma > 0} \frac{1}{\gamma} \left( \Pr\left[ \frac{P_{Y|X}^n(Y|x)}{Q(Y)} < \gamma \right] - \epsilon \right), \qquad (13)$$

where $\Pr\{\cdot\}$ is computed according to $P_{Y|X=x}$, with (10) for a message-independent distribution $Q$, one obtains the set of inequalities

$$Q(v_l) \ge \frac{1}{\gamma_l} \left( \Pr\left[ \frac{P_{Y|X}^n(Y|x_l)}{Q(Y)} < \gamma_l \right] - \epsilon(v_l) \right), \qquad (14)$$

for $\gamma_l > 0$, $l = 1, \ldots, |\mathcal{V}|^k$. By choosing $\gamma_l = \gamma / P_V^k(v_l)$ with $\gamma > 0$ for every $l$ such that $P_V^k(v_l) \neq 0$, (14) is equivalent to

$$\gamma\, Q(v_l) \ge P_V^k(v_l) \left( \Pr\left[ \frac{P_V^k(v_l)\, P_{Y|X}^n(Y|x_l)}{Q(Y)} < \gamma \right] - \epsilon(v_l) \right), \qquad (15)$$

for $l = 1, \ldots, |\mathcal{V}|^k$.
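Summing (15) over the messages yields the codebook-dependent bound (16) below, which is easy to evaluate by brute force for a fixed small code. The following sketch (our own illustration; the toy parameters, codebook and names are assumptions) maximizes $\Pr[P_V^k(V)P_{Y|X}^n(Y|X)/Q(Y) < \gamma] - \gamma$ over a grid of $\gamma$ with a uniform output distribution $Q$.

```python
import itertools
import numpy as np

k, n, p_src, delta = 2, 3, 0.05, 0.1
V = list(itertools.product([0, 1], repeat=k))
Y = list(itertools.product([0, 1], repeat=n))
codebook = {v: tuple(v) + (0,) * (n - k) for v in V}   # hypothetical code

def P_V(v):
    return np.prod([p_src if b else 1 - p_src for b in v])

def P_ch(y, x):
    return np.prod([delta if yi != xi else 1 - delta for yi, xi in zip(y, x)])

def vh_bound(gammas, Qy=2.0 ** -n):
    # eq. (16): eps >= Pr[ P_V(V) P(Y|X) / Q(Y) < gamma ] - gamma
    best = 0.0
    for g in gammas:
        p = sum(P_V(v) * P_ch(y, codebook[v])
                for v in V for y in Y
                if P_V(v) * P_ch(y, codebook[v]) / Qy < g)
        best = max(best, p - g)
    return best

print(vh_bound(np.logspace(-6, 0, 200)))   # lower bound on eps
```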

Consider now the set of conditional distributions $P_{X|V}$ induced by the codebook, i.e., $P_{X|V}(x_i|v_j) = 1$ if $i = j$ and $P_{X|V}(x_i|v_j) = 0$ otherwise. By summing both sides of (15) over $l = 1, \ldots, |\mathcal{V}|^k$, and using that $\sum_l Q(v_l) \le 1$ by (8), we finally have

$$\epsilon \ge \Pr\left[ \frac{P_V^k(V)\, P_{Y|X}^n(Y|X)}{Q(Y)} < \gamma \right] - \gamma, \qquad (16)$$

which upon optimization over $\gamma$, $Q$ and $P_{X|V}$ yields (12). In particular, Han's generalization of the Verdú-Han lemma derived in [7, Lemma 3.2] can be recovered from (12) by setting $Q = P_Y$, the output distribution induced by a particular codebook, and by rewriting (12) with the definitions of entropy density $h(V) \triangleq -\log P_V^k(V)$ and information density $i(X;Y) \triangleq \log \frac{P_{Y|X}^n(Y|X)}{P_Y(Y)}$ [12], as

$$\epsilon \ge \inf_{P_{X|V}} \sup_{\gamma > 0} \Bigl\{ \Pr\bigl[ i(X;Y) - h(V) < \log \gamma \bigr] - \gamma \Bigr\}. \qquad (17)$$

B. Csiszár's Sphere-Packing Exponent

Csiszár showed in [11, Th. 3] that the error exponent of every source-channel code is upper-bounded by

$$E_J \le \min_{R \in [t H(V),\, t \log |\mathcal{V}|]} \Bigl\{ t\, e\bigl(R/t, P_V\bigr) + E_{\rm sp}\bigl(R, P_{Y|X}\bigr) \Bigr\}, \qquad (18)$$

where

$$e(R, P_V) \triangleq \min_{Q:\, H(Q) \ge R} D(Q \| P_V) \qquad (19)$$

is the source reliability function [13] and

$$E_{\rm sp}(R, P_{Y|X}) \triangleq \max_{P_X} \min_{P_{\tilde Y|X}:\, I(P_X, P_{\tilde Y|X}) \le R} D\bigl(P_{\tilde Y|X} \,\|\, P_{Y|X} \,|\, P_X\bigr) \qquad (20)$$

is the channel-coding sphere-packing exponent [14]. When the minimizing $R$ in (18) lies above the critical rate of the channel [11], [15], the bound (18) is tight and gives the actual error exponent. We next show that using (10) with an appropriate choice of $Q^l$ recovers Csiszár's result.

We first decompose the average error probability using the set of source-type classes $T_i$, $i = 1, \ldots, N$. Rewriting (2) we have that

$$\epsilon = \sum_{i=1}^N \Pr\{T_i\}\, \epsilon_i, \qquad (21)$$

where

$$\epsilon_i \triangleq \frac{1}{|T_i|} \sum_{v \in T_i} \epsilon(v). \qquad (22)$$

Our re-derivation relies on the next result.

Lemma 1 ([4, Th. 20]): For every $v_l \in T_i$ consider the binary hypothesis test in (5) between $P_{Y|X=x_l}$ and the distribution $Q^l = Q^{T_i}$. Let a decision rule have type-I error equal to $\epsilon(v_l)$ and type-II error equal to $b$. Then, there exists a distribution $Q^{T_i}$ such that, if $R > 0$ satisfies

$$b \le \gamma\, e^{-n(R+\eta)}, \qquad \eta > 0, \; \gamma \in (0,1), \qquad (23)$$

then

$$\epsilon(v_l) \ge \frac{A(R)}{(n\eta)^2}\, \gamma\, e^{-n\left( E_{\rm sp}\left(R - \frac{\log 2}{n},\, P_{Y|X}\right) + \eta \right)} \qquad (24)$$

for all $v_l \in T_i$, where $A(R) > 0$ is a function of $R$ independent of $n$.

For every source type-class $T_i$, $i = 1, \ldots, N$, we define the probability distribution

$$\bar Q^{T_i}(v) \triangleq \begin{cases} \dfrac{Q^{T_i}(v)}{\sum_{v' \in T_i} Q^{T_i}(v')}, & v \in T_i, \\ 0, & \text{otherwise}, \end{cases} \qquad (25)$$

where $Q^{T_i}(v)$ is the type-II error (8) obtained with $Q^l = Q^{T_i}$ for all $v_l \in T_i$. In view of (25), there must exist $v \in T_i$ such that

$$\bar Q^{T_i}(v) \le \frac{1}{|T_i|}; \qquad (26)$$

otherwise, $\sum_{v \in T_i} \bar Q^{T_i}(v) > 1$ and $\bar Q^{T_i}$ would not be a probability distribution. Without loss of generality, and for ease of exposition, we next assume that the indexing of the message set is such that $v_i$ is a source message fulfilling (26) for $T_i$, $i = 1, \ldots, N$. Then, we rewrite (26) as

$$\bar Q^{T_i}(v_i) \le \gamma\, e^{-n(R(i,n) + \eta(n))}, \qquad (27)$$

for $\gamma \in (0,1)$, $\eta(n) = K/n$, $K > 0$, and where we defined

$$R(i,n) \triangleq \frac{1}{n} \log |T_i| + \frac{1}{n}\bigl(\log \gamma - K\bigr) \qquad (28)$$

such that $\gamma\, e^{-n(R(i,n) + K/n)} = |T_i|^{-1}$. We now apply Lemma 1 with $P_{Y|X}^n(y|x) = P_{Y|X}^n\bigl(y|x(v_i)\bigr)$, $b = \bar Q^{T_i}(v_i)$, and $R(i,n)$, which satisfies (27) for $\gamma \in (0,1)$. Then, it follows from (24) that

$$\epsilon(v) \ge \frac{A(R(i,n))}{K^2}\, \gamma\, e^{-n\left( E_{\rm sp}\left(R(i,n) - \frac{\log 2}{n},\, P_{Y|X}\right) + \frac{K}{n} \right)} \qquad (29)$$

for all $v \in T_i$. By plugging (29) into (22) we have that

$$\epsilon_i = \frac{1}{|T_i|} \sum_{v \in T_i} \epsilon(v) \qquad (30)$$
$$\ge \frac{A(R(i,n))}{K^2}\, \gamma\, e^{-n\left( E_{\rm sp}\left(R'(i,n),\, P_{Y|X}\right) + \frac{K}{n} \right)}, \qquad (31)$$

where $R'(i,n) \triangleq R(i,n) - \frac{\log 2}{n}$. We now focus on the terms $\Pr\{T_i\}$, $i = 1, \ldots, N$, in (21). Using [16, Lemma 2.6], $\Pr\{T_i\}$ can be lower-bounded as

$$\Pr\{T_i\} \ge (k+1)^{-|\mathcal{V}|}\, e^{-k D(P_i \| P_V)}, \qquad (32)$$

for every $i = 1, \ldots, N$, where $P_i$ is the type associated to the class $T_i$.
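For a binary source, the type classes and the bound (32) are easy to check numerically. The sketch below (our own illustration; names are assumptions) compares the exact probability of each type class $T_i$, which collects the length-$k$ sequences with $i$ ones, against the lower bound (32) with $|\mathcal{V}| = 2$.

```python
import math

k, p = 20, 0.05                      # source length and P_V(1)

def D(a, b):
    """Binary KL divergence D(Bern(a) || Bern(b)) in nats."""
    def term(u, w):
        return 0.0 if u == 0.0 else u * math.log(u / w)
    return term(a, b) + term(1.0 - a, 1.0 - b)

for i in range(k + 1):
    pr_Ti = math.comb(k, i) * p**i * (1 - p) ** (k - i)    # exact Pr{T_i}
    lb = (k + 1) ** (-2) * math.exp(-k * D(i / k, p))      # bound (32)
    assert lb <= pr_Ti + 1e-12
    if i <= 3:
        print(f"i={i}: Pr(T_i)={pr_Ti:.3e}, lower bound (32)={lb:.3e}")
```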

Hence, combining (21), (31) and (32), we obtain

$$\epsilon \ge \sum_{i=1}^N e^{-n\left( t D(P_i \| P_V) + E_{\rm sp}\left(R'(i,n),\, P_{Y|X}\right) + o(1,n) \right)}, \qquad (33)$$

where

$$o(1,n) \triangleq \frac{K}{n} + \frac{|\mathcal{V}| \log(k+1)}{n} - \frac{1}{n} \log \frac{A(R(i,n))\, \gamma}{K^2}, \qquad (34)$$

on account of (28). Finally, by choosing $K > 0$ appropriately, using that $A(\cdot)$ is a continuous function and that $E_{\rm sp}(R, P_{Y|X})$ is a non-increasing continuous function of $R$, the proof follows along the same lines as in [11] to conclude that

$$\lim_{n \to \infty} -\frac{1}{n} \log \epsilon \le t\, e\bigl(R^*/t, P_V\bigr) + E_{\rm sp}\bigl(R^*, P_{Y|X}\bigr), \qquad (35)$$

for some $R^* \in [t H(V),\, t \log |\mathcal{V}|]$ such that $R^* \triangleq \lim_{n \to \infty} R(i,n)$, which after minimization over all $R \in [t H(V), t \log |\mathcal{V}|]$ yields (18).

IV. COMPUTABLE BOUNDS

The aim of this section is to show that (10) can be conveniently weakened to obtain practical converse results in several cases of interest. First, observe from (10) that if $\beta_\alpha(x_l, Q^l)$ is invertible with respect to $\alpha$ in an appropriate range, one may formulate (10) as an explicit lower bound on $\epsilon(v_l)$ for every $l = 1, \ldots, |\mathcal{V}|^k$, which in turn gives a lower bound on $\epsilon$ after averaging over all source messages. To this end, we make use of the analytical properties of $\beta_\alpha(x, Q)$ as a function of $\alpha$. It is known that $\beta_\alpha(x_l, Q)$ is a piecewise-linear, convex, and non-decreasing function in $\alpha \in [0,1]$ [17] that takes values in $[0, \beta_{\max}]$, where $\beta_{\max} \le 1$. Then, the fact that $\beta_0(x_l, Q) = 0$ and the convexity in $\alpha \in [0,1]$ imply that there must exist $\alpha_{\min} \in [0,1]$ such that the function takes the value $0$ in $[0, \alpha_{\min})$ and is strictly increasing in $[\alpha_{\min}, 1]$. As a consequence, the function $\beta_\alpha$ is invertible with respect to $\alpha$ in the range $(0, \beta_{\max}]$. The aforementioned arguments can be used to define the function

$$\alpha_b(x, Q) \triangleq \begin{cases} \alpha_{\min}, & b = 0, \\ a \text{ such that } \beta_a = b, & b \in (0, \beta_{\max}], \\ 1, & b \in (\beta_{\max}, 1], \end{cases} \qquad (36)$$

in the domain $[0,1]$. From the above definition one can check that, for given $a, b \in [0,1]$,

$$\beta_a(x, Q) \le b \;\Longrightarrow\; \alpha_b(x, Q) \ge a. \qquad (37)$$

Consequently, by applying (37) to (10) it follows that

$$\epsilon(v_l) \ge 1 - \alpha_{Q^l(v_l)}\bigl(x_l, Q^l\bigr) \qquad (38)$$

for $l = 1, \ldots, |\mathcal{V}|^k$. Averaging (38) over the source messages and upon appropriate optimization we obtain the next result.

Lemma 2: The average error probability $\epsilon$ incurred by any codebook is lower-bounded by

$$\epsilon \ge 1 - \sup_{x_1, \ldots, x_{|\mathcal{V}|^k}} \sum_{l=1}^{|\mathcal{V}|^k} P_V^k(v_l)\, \inf_{Q^l}\, \alpha_{Q^l(v_l)}\bigl(x_l, Q^l\bigr). \qquad (39)$$

In order to provide computationally feasible bounds, we restrict our attention to channels for which, when $Q$ is appropriately chosen, the function $\beta_\alpha(Q) \triangleq \beta_\alpha(x, Q)$, and thus $\alpha_b(Q) \triangleq \alpha_b(x, Q)$, is independent of $x$. Channels of interest fulfilling this property are symmetric channels in the sense of [18, p. 94] with $Q^l(y) = Q^{*n}(y) = \prod_{j=1}^n Q^*(y_j)$, where $Q^*$ is the capacity-achieving output distribution [5]. For this class of channels we can rewrite (39), using the decomposition of the message set into $N$ source-type classes, as

$$\epsilon \ge 1 - \sup_{\bar Q} \sum_{i=1}^N \Pr\{T_i\}\, \frac{1}{|T_i|} \sum_{v \in T_i} \alpha_{\bar Q(v)}\bigl(Q^{*n}\bigr). \qquad (40)$$

Although $\alpha_b(Q^{*n})$ is independent of $x$, the outer sum in (40) still depends on the codebook and the decoder through $\bar Q$. Hence, the optimization in (40) can be performed over all possible distributions $\bar Q(v)$, $v \in \mathcal{V}^k$. Given that $\alpha_b(\cdot)$ is concave with respect to $b \in [0,1]$ (see Appendix A), this is a convex optimization problem. However, since the optimization must be carried out over an exponentially large number of elements, it soon becomes computationally infeasible as the message length increases. A possible approach to circumvent this drawback is to weaken (40) using Jensen's inequality.

Theorem 1: The average error probability of every source-channel code over a symmetric channel is lower-bounded as

$$\epsilon \ge 1 - \sup_{Q_T} \sum_{i=1}^N \Pr\{T_i\}\, \alpha_{\frac{Q_T(i)}{|T_i|}}\bigl(Q^{*n}\bigr), \qquad (41)$$

where $Q_T(i) \triangleq \sum_{v \in T_i} \bar Q(v)$, $i = 1, \ldots, N$, and $Q^*$ is the capacity-achieving output distribution.

Proof: Using Jensen's inequality on the concave function $\alpha_b(\cdot)$, we obtain

$$\epsilon \ge 1 - \sup_{\bar Q} \sum_{i=1}^N \Pr\{T_i\}\, \frac{1}{|T_i|} \sum_{v \in T_i} \alpha_{\bar Q(v)}\bigl(Q^{*n}\bigr) \qquad (42)$$
$$\ge 1 - \sup_{\bar Q} \sum_{i=1}^N \Pr\{T_i\}\, \alpha_{\frac{1}{|T_i|} \sum_{v \in T_i} \bar Q(v)}\bigl(Q^{*n}\bigr) \qquad (43)$$
$$= 1 - \sup_{Q_T} \sum_{i=1}^N \Pr\{T_i\}\, \alpha_{\frac{Q_T(i)}{|T_i|}}\bigl(Q^{*n}\bigr). \qquad (44)$$
Theorem 1 depends on the codebook and the decoder only through the distribution $Q_T$, and is therefore optimized over all distributions defined over source-type classes. Since the dimension of the domain of $Q_T$ grows polynomially with the block length $n$, this involves an exponentially less complex computation than that of (40).
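Numerically, $\alpha_b$ in (36) is just the generalized inverse of the non-decreasing map $a \mapsto \beta_a$, so it can be obtained by bisection. The sketch below is our own illustration and reuses the `beta()` routine from the earlier sketch; with it, (38) reads `eps_l >= 1 - alpha(Q_l_of_v_l, P, Q)`.

```python
def alpha(b, P, Q, tol=1e-10):
    """Generalized inverse alpha_b(x, Q) of (36): the largest a in [0, 1]
    with beta_a(P, Q) <= b, found by bisection on the non-decreasing
    map a -> beta_a(P, Q).  Reuses beta() from the earlier sketch."""
    if b >= beta(1.0, P, Q):        # b above beta_max: alpha_b = 1
        return 1.0
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if beta(mid, P, Q) <= b:
            lo = mid                # beta still below b: push a up
        else:
            hi = mid
    return lo
```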

[Figure 1: Upper and lower bounds (RCU bound, Theorem 1 (41), Verdú-Han (12), and Corollary 1 (45)) versus block length $n$ for a BMS-BSC pair. Parameters: $P_V(1) = 0.05$, $P_{Y|X}(1|0) = P_{Y|X}(0|1) = 0.1$, $t = 1$.]

[Figure 2: Upper and lower bounds (same bounds as Figure 1) versus block length $n$ for a BMS-BEC pair. Parameters: $P_V(1) = 0.001$, $P_{Y|X}(e|0) = P_{Y|X}(e|1) = 0.95$, $t = 1$.]

Remark 1: It can be checked (for instance, by the method of Lagrange multipliers) that the optimizing $\bar Q$ of (40) is uniform over the values of $v$ belonging to the same source-type class. Hence, the optimizing $Q_T$ in (41) induces the optimizing $\bar Q$ of (40) and, as a consequence, both bounds coincide. This is tantamount to stating that Theorem 1 also recovers the generalization of the Verdú-Han lemma and attains Csiszár's sphere-packing exponent.

Theorem 1 can be weakened to obtain a converse bound that does not require an optimization over the distribution $Q_T$. Using the fact that $\alpha_b(Q)$ is a non-decreasing function of $b \in [0,1]$ and upper-bounding $Q_T(i) \le 1$, $i = 1, \ldots, N$, in (41), we obtain the following result.

Corollary 1: The average error probability of every source-channel code over a symmetric channel is lower-bounded as

$$\epsilon \ge 1 - \sum_{i=1}^N \Pr\{T_i\}\, \alpha_{\frac{1}{|T_i|}}\bigl(Q^{*n}\bigr), \qquad (45)$$

where $Q^*$ is the capacity-achieving output distribution.

Equation (45) depends on neither the decoder nor the codebook, and thus directly gives a computable converse result. While Corollary 1 cannot be used to recover (12) via (13), it can still be shown to attain Csiszár's exponent by applying the arguments in [5, Sec. III-F] to each source-type class.

We next compare the finite-length bounds given in Theorem 1 and Corollary 1 for two source-channel pairs: a binary memoryless source (BMS) transmitted over a binary symmetric channel (BSC) and over a binary erasure channel (BEC), respectively.

A. BMS-BSC

In this example we consider a source-channel pair given by a BMS with $P_V(1) = 0.05$ and a BSC with crossover probability $P_{Y|X}(1|0) = P_{Y|X}(0|1) = 0.1$, $t = 1$. We compare the finite-length bounds given in (41) from Theorem 1 (computed using [19]) and in (45) from Corollary 1 against the Verdú-Han lemma (12) and the RCU upper bound [8] corresponding to a random-coding ensemble generated with the product uniform distribution. For the BSC the capacity-achieving output distribution is the uniform distribution. For this choice of $Q$, and for $Y$ distributed according to $P_{Y|X=x}$, the random variable $\Psi(x) \triangleq \frac{P_{Y|X}^n(Y|x)}{Q(Y)}$ is independent of $x$, and so is $\Pr\{P_V^k(V)\, \Psi(x) < \gamma\}$. Hence, in this case it is possible to compute a valid lower bound from the Verdú-Han lemma without resorting to an optimization over all possible codebooks. Fig. 1 shows that in this scenario the bound (45) is looser than (12), while the bound (41) is tighter in the range of block lengths shown. This agrees with the fact that while (12) can be derived by weakening Theorem 1, this is not possible from Corollary 1.

B. BMS-BEC

We now consider the scenario of a BMS with $P_V(1) = 0.001$ which is to be transmitted over a BEC with erasure probability $P_{Y|X}(e|0) = P_{Y|X}(e|1) = 0.95$, $t = 1$. Fig. 2 shows the same plot as Fig. 1 for this source-channel pair. From the figure we observe that in this case the lower bound (45) is tighter than (12); hence neither of these two bounds dominates the other in general. Similarly to the previous case, the bound (41) improves on the other two.
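To illustrate how Corollary 1 is evaluated in the BMS-BSC setting, the sketch below (our own illustration with assumed names; it reuses `beta()` and `alpha()` from the earlier sketches and enumerates all $2^n$ outputs, so it only runs for small $n$) computes the right-hand side of (45) with the uniform capacity-achieving output distribution.

```python
import itertools
import math
import numpy as np

# Corollary 1, eq. (45), for the BMS-BSC pair of Section A:
# eps >= 1 - sum_i Pr{T_i} * alpha_{1/|T_i|}(Q*), Q* uniform.
# t = 1, so k = n; kept small because of the 2^n enumeration.
k = n = 8
p_src, delta = 0.05, 0.1

Y = list(itertools.product([0, 1], repeat=n))
x0 = (0,) * n                 # by symmetry, any input gives the same beta
P = np.array([np.prod([delta if yi != xi else 1 - delta
                       for yi, xi in zip(y, x0)]) for y in Y])
Qstar = np.full(len(Y), 2.0 ** -n)

bound = 1.0
for i in range(k + 1):                               # source-type classes
    pr_Ti = math.comb(k, i) * p_src**i * (1 - p_src) ** (k - i)
    size_Ti = math.comb(k, i)
    bound -= pr_Ti * alpha(1.0 / size_Ti, P, Qstar)  # term of eq. (45)
print(f"Corollary 1 lower bound on eps (n = {n}): {bound:.4f}")
```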

APPENDIX A
CONCAVITY OF THE FUNCTION $\alpha_b$

Lemma 3: The function $\alpha_b \triangleq \alpha_b(\cdot, \cdot)$ is concave with respect to $b$ in $[0,1]$.

Proof: Consider $\alpha_b$, $\alpha_{b'}$, where $b, b' \in [0,1]$, and denote $\beta_a \triangleq \beta_a(\cdot, \cdot)$. Since $\beta_\alpha$ is convex, for every $\lambda \in [0,1]$ we have

$$\beta_{\lambda \alpha_b + (1-\lambda) \alpha_{b'}} \le \lambda \beta_{\alpha_b} + (1-\lambda) \beta_{\alpha_{b'}} \qquad (46)$$
$$\le \lambda b + (1-\lambda) b', \qquad (47)$$

where $\beta_{\alpha_b} \le b$ in (47) follows from (36). If $\lambda b + (1-\lambda) b' \le \beta_{\max}$, the monotonicity of $\beta_\alpha$ implies that

$$\lambda \alpha_b + (1-\lambda) \alpha_{b'} \le \alpha_{\lambda b + (1-\lambda) b'} \qquad (48)$$

on account of (36). Otherwise, if $\lambda b + (1-\lambda) b' > \beta_{\max}$, we have that

$$\lambda \alpha_b + (1-\lambda) \alpha_{b'} \le 1 \qquad (49)$$
$$= \alpha_{\lambda b + (1-\lambda) b'}, \qquad (50)$$

where (49) follows from $\alpha_b, \alpha_{b'} \in [0,1]$ and (50) from $\lambda b + (1-\lambda) b' > \beta_{\max}$ and (36).

REFERENCES

[1] S. Arimoto, "On the converse to the coding theorem for discrete memoryless channels (Corresp.)," IEEE Trans. Inf. Theory, vol. 19, no. 3, pp. 357-359, 1973.
[2] J. Wolfowitz, "The coding of messages subject to chance errors," Illinois J. Math., vol. 1, pp. 591-606, 1957.
[3] S. Verdú and T. S. Han, "A general formula for channel capacity," IEEE Trans. Inf. Theory, vol. 40, no. 4, pp. 1147-1157, July 1994.
[4] R. E. Blahut, "Hypothesis testing and information theory," IEEE Trans. Inf. Theory, vol. IT-20, no. 4, pp. 405-417, 1974.
[5] Y. Polyanskiy, H. V. Poor, and S. Verdú, "Channel coding rate in the finite blocklength regime," IEEE Trans. Inf. Theory, vol. 56, no. 5, pp. 2307-2359, 2010.
[6] C. E. Shannon, "A mathematical theory of communication," Bell Syst. Tech. J., vol. 27, pp. 379-423 and 623-656, July and Oct. 1948.
[7] T. S. Han, "Joint source-channel coding revisited: Information-spectrum approach," arXiv preprint.
[8] A. Tauste Campo, G. Vazquez-Vilar, A. Guillén i Fàbregas, and A. Martinez, "Random-coding joint source-channel coding bounds," in Proc. IEEE Int. Symp. on Inf. Theory, Saint Petersburg, Russia, July-Aug. 2011.
[9] V. Kostina and S. Verdú, "Lossy joint source-channel coding in the finite blocklength regime," arXiv preprint.
[10] J. Neyman and E. S. Pearson, "On the problem of the most efficient tests of statistical hypotheses," Phil. Trans. R. Soc. Lond. A, vol. 231, pp. 289-337, 1933.
[11] I. Csiszár, "Joint source-channel error exponent," Probl. Contr. Inf. Theory, vol. 9, pp. 315-328, 1980.
[12] T. S. Han, Information-Spectrum Methods in Information Theory. Berlin, Germany: Springer-Verlag, 2003.
[13] F. Jelinek, Probabilistic Information Theory. New York: McGraw-Hill, 1968.
[14] C. E. Shannon, R. G. Gallager, and E. R. Berlekamp, "Lower bounds to error probability for coding on discrete memoryless channels. I," Inf. Contr., vol. 10, no. 1, pp. 65-103, 1967.
[15] Y. Zhong, F. Alajaji, and L. L. Campbell, "On the joint source-channel coding error exponent for discrete memoryless systems," IEEE Trans. Inf. Theory, vol. 52, no. 4, pp. 1450-1468, April 2006.
[16] I. Csiszár and J. Körner, Information Theory: Coding Theorems for Discrete Memoryless Systems, 2nd ed. Cambridge University Press, 2011.
[17] H. V. Poor, An Introduction to Signal Detection and Estimation. New York: Springer-Verlag, 1994.
[18] R. G. Gallager, Information Theory and Reliable Communication. New York: John Wiley & Sons, Inc., 1968.
[19] J. Löfberg, "YALMIP: A toolbox for modeling and optimization in MATLAB," in Proc. of the CACSD Conference, Taipei, Taiwan, 2004.
