Error Probability for M Signals

Chapter 3

In this chapter we discuss the error probability in deciding which of $M$ signals was transmitted over an arbitrary channel. We assume the signals are represented by a set of $N$ orthonormal functions. Writing down an expression for the error probability in terms of an $N$-dimensional integral is straightforward. However, evaluating the integrals involved is very difficult or impossible in all but a few special cases if $N$ is fairly large (e.g. $N \ge 4$). For the special case of orthogonal signals we derived the error probability as a single integral in the last chapter. Because of the difficulty of evaluating the error probability in general, bounds are needed to determine the performance, and different bounds have different complexity of evaluation. The first bound we derive is known as the Gallager bound. We apply this bound to the case of orthogonal signals (for which the true answer is already known). The Gallager bound has the property that as the number of signals becomes large the bound becomes tight. However, the bound is fairly difficult to evaluate for many signal sets. A special case of the Gallager bound is the Union-Bhattacharyya bound. This is simpler to evaluate than the Gallager bound but is also looser. The last bound considered is the union bound, which is tighter than both the Union-Bhattacharyya bound and the Gallager bound at sufficiently high signal-to-noise ratios. Finally we consider a simple random coding bound on the ensemble of all signal sets using the Union-Bhattacharyya bound.

The general signal set we consider has the form
$$ s_i(t) = \sum_{j=1}^{N} s_{ij}\,\varphi_j(t), \qquad i = 1,\dots,M. $$
The optimum receiver correlates the received waveform with the orthonormal functions to form the decision variables
$$ r_j = \int_0^T r(t)\,\varphi_j(t)\,dt, \qquad j = 1,\dots,N. $$
For equally likely signals the decision regions are
$$ R_i = \{\mathbf{r} : p_i(\mathbf{r}) \ge p_j(\mathbf{r}) \text{ for all } j \ne i\}, $$
and the error probability is
$$ P_e = 1 - \frac{1}{M}\sum_{j=1}^{M} P(\mathbf{r} \in R_j \mid H_j). $$
For all but a few small-dimensional signal sets or signal sets with special structure (such as orthogonal signal sets) the exact error probability is very difficult to calculate.

[Figure 3.1: Optimum receiver in additive white Gaussian noise: a bank of correlators $r_j = \int_0^T r(t)\varphi_j(t)\,dt$, $j = 1,\dots,N$, followed by a block that finds the signal $s_i$ with smallest distance to $\mathbf{r}$.]

1. Error Probability for Orthogonal Signals

Represent the $M$ signals in terms of $M$ orthogonal functions $\varphi_i(t)$ as follows:
$$ s_1(t) = \sqrt{E}\,\varphi_1(t), \quad s_2(t) = \sqrt{E}\,\varphi_2(t), \quad \dots, \quad s_M(t) = \sqrt{E}\,\varphi_M(t). $$
As shown in the previous chapter we need to find the largest value of $\int r(t)s_j(t)\,dt$ for $j = 1,\dots,M$. Instead we will normalize this and determine the largest value of $r_j = \frac{1}{\sqrt{E}}\int r(t)s_j(t)\,dt$. To determine the error probability we need to determine the statistics of $r_j$. Assume signal $s_1(t)$ is transmitted. Then, since the noise $n(t)$ has zero mean,
$$ E[r_j \mid H_1] = E\Big[\int_0^T r(t)\varphi_j(t)\,dt \,\Big|\, H_1\Big] = \int_0^T E[r(t) \mid H_1]\,\varphi_j(t)\,dt = \int_0^T s_1(t)\varphi_j(t)\,dt = \sqrt{E}\int_0^T \varphi_1(t)\varphi_j(t)\,dt = \sqrt{E}\,\delta_{1j}. $$

The variance of $r_j$ is determined as follows. Given $H_1$,
$$ \mathrm{Var}(r_j \mid H_1) = E\Big[\Big(\int_0^T n(t)\varphi_j(t)\,dt\Big)^2\Big] = \int_0^T\!\!\int_0^T E[n(t)n(s)]\,\varphi_j(t)\varphi_j(s)\,dt\,ds = \int_0^T\!\!\int_0^T \frac{N_0}{2}\,\delta(t-s)\,\varphi_j(t)\varphi_j(s)\,dt\,ds = \frac{N_0}{2}\int_0^T \varphi_j^2(t)\,dt = \frac{N_0}{2}. $$
Furthermore, each of these random variables is Gaussian (and they are independent). Let $\Phi(x)$ denote the standard Gaussian distribution function. Then $P(\text{error}) = 1 - P(\text{correct})$, and
$$ P_c = P(r_1 > r_j,\ j = 2,\dots,M \mid H_1) = \int_{-\infty}^{\infty} \frac{1}{\sqrt{\pi N_0}}\exp\Big(-\frac{(r - \sqrt{E})^2}{N_0}\Big)\Big[\Phi\big(r\sqrt{2/N_0}\big)\Big]^{M-1}\,dr. $$
Now let $u = r\sqrt{2/N_0}$. Then
$$ P_c = \int_{-\infty}^{\infty} \frac{1}{\sqrt{2\pi}}\exp\Big(-\frac{(u - \sqrt{2E/N_0})^2}{2}\Big)\,[\Phi(u)]^{M-1}\,du, $$
where an equivalent form follows from an integration-by-parts approach. Later on we will find an upper bound on the above that is more insightful. It is also possible to determine (using L'Hospital's rule) the limiting behavior of the error probability as $M \to \infty$.

In general, if we have $M$ decision variables for an $M$-ary hypothesis testing problem that are conditionally independent given the true hypothesis, with density (distribution) $f_1(x)$ ($F_1(x)$) for the decision variable of the true hypothesis and density (distribution) $f(x)$ ($F(x)$) for the other decision variables, then the probability of being correct is
$$ P_c = \int_{-\infty}^{\infty} f_1(x)\,[F(x)]^{M-1}\,dx. $$
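The single-integral expression for $P_c$ is easy to evaluate numerically. A minimal sketch (the function name is ours), using scipy:

```python
# Numerical evaluation of the single-integral formula for M orthogonal
# signals in AWGN (a sketch; the helper name is ours):
#   P_c = \int phi(u - sqrt(2E/N0)) Phi(u)^(M-1) du,  P_e = 1 - P_c.
import math
from scipy.integrate import quad
from scipy.stats import norm

def pe_orthogonal(M, EsN0):
    """Symbol error probability for M equally likely orthogonal signals;
    EsN0 = E/N0 on a linear (not dB) scale."""
    shift = math.sqrt(2.0 * EsN0)
    integrand = lambda u: norm.pdf(u - shift) * norm.cdf(u) ** (M - 1)
    # Integrate over the effective support of the shifted Gaussian.
    pc, _ = quad(integrand, shift - 10.0, shift + 10.0)
    return 1.0 - pc

# Sanity check: for M = 2 orthogonal signals the exact answer is
# P_e = Q(sqrt(E/N0)); with E/N0 = 4 this is Q(2).
print(pe_orthogonal(2, 4.0), norm.sf(2.0))
```

For $M = 2$ the two printed numbers agree, which checks the normalization of the integrand.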

The probability of error is then
$$ P_e = 1 - \int_{-\infty}^{\infty} f_1(x)\,[F(x)]^{M-1}\,dx = (M-1)\int_{-\infty}^{\infty} F_1(x)\,[F(x)]^{M-2}\,f(x)\,dx, $$
where the second expression follows from integration by parts. The last formula is many times easier to compute numerically than the first, because the first is the difference between two numbers that are very close (for small error probabilities).

2. Gallager Bound

In this section we derive an upper bound on the error probability for $M$ signals received in some form of noise. Let
$$ R_i = \{\mathbf{r} : p_i(\mathbf{r}) \ge p_j(\mathbf{r}) \text{ for all } j \ne i\}, $$
so that
$$ R_i^c = \{\mathbf{r} : p_j(\mathbf{r}) \ge p_i(\mathbf{r}) \text{ for some } j \ne i\} \quad \text{and} \quad P_{e,i} = P(\text{error} \mid H_i) = P(\mathbf{r} \in R_i^c \mid H_i). $$
Now for $\lambda > 0$ let
$$ R_i' = \Big\{\mathbf{r} : \sum_{j \ne i} \Big[\frac{p_j(\mathbf{r})}{p_i(\mathbf{r})}\Big]^{\lambda} \ge 1\Big\}. $$
Claim: $R_i^c \subseteq R_i'$. Proof: If $\mathbf{r} \in R_i^c$ then $p_j(\mathbf{r}) \ge p_i(\mathbf{r})$ for some $j \ne i$, which implies $[p_j(\mathbf{r})/p_i(\mathbf{r})]^{\lambda} \ge 1$ for that $j$, and thus the sum over $j \ne i$ is at least one, i.e. $\mathbf{r} \in R_i'$.

Now we use this to upper bound the error probability. Let $I_{R_i'}(\mathbf{r})$ denote the indicator function of $R_i'$. Then
$$ P_{e,i} = P(R_i^c \mid H_i) \le P(R_i' \mid H_i) = \int I_{R_i'}(\mathbf{r})\,p_i(\mathbf{r})\,d\mathbf{r}. $$

For $\mathbf{r} \in R_i'$ and $\rho \ge 0$ we have
$$ I_{R_i'}(\mathbf{r}) \le \Big[\sum_{j \ne i} \Big(\frac{p_j(\mathbf{r})}{p_i(\mathbf{r})}\Big)^{\lambda}\Big]^{\rho}, $$
and the inequality holds trivially for $\mathbf{r} \notin R_i'$ since the right-hand side is nonnegative. Applying this bound to the expression for the error probability we obtain
$$ P_{e,i} \le \int p_i(\mathbf{r})\Big[\sum_{j \ne i} \Big(\frac{p_j(\mathbf{r})}{p_i(\mathbf{r})}\Big)^{\lambda}\Big]^{\rho}\,d\mathbf{r} $$
for $\rho \ge 0$ and $\lambda > 0$. If we let $\lambda = 1/(1+\rho)$ (this is the value that minimizes the bound; see Gallager, Problem 5.6) the resulting bound is known as the Gallager bound:
$$ P_{e,i} \le \int [p_i(\mathbf{r})]^{1/(1+\rho)}\Big[\sum_{j \ne i} [p_j(\mathbf{r})]^{1/(1+\rho)}\Big]^{\rho}\,d\mathbf{r}, \qquad 0 \le \rho \le 1. $$
If we let $\rho = 1$ we obtain what is known as the Bhattacharyya bound. The average error probability is then written as
$$ P_e = \frac{1}{M}\sum_{i=1}^{M} P_{e,i} \le \frac{1}{M}\sum_{i=1}^{M} \int [p_i(\mathbf{r})]^{1/(1+\rho)}\Big[\sum_{j \ne i} [p_j(\mathbf{r})]^{1/(1+\rho)}\Big]^{\rho}\,d\mathbf{r}. $$

Example of the Gallager bound for M-ary orthogonal signals in AWGN. Substituting the Gaussian densities $p_i(\mathbf{r}) = \prod_k (\pi N_0)^{-1/2}\exp[-(r_k - s_{ik})^2/N_0]$ into the Gallager bound and completing the square inside each exponent reduces the bound to a one-dimensional Gaussian integral.

Carrying out the Gaussian integrals (by completing the square), and then using Jensen's inequality with the concave function $f(x) = x^{\rho}$, $0 \le \rho \le 1$ (so that $E[X^{\rho}] \le (E[X])^{\rho}$), to move the $\rho$th power outside the remaining expectation, the bound for $M$ orthogonal signals reduces to
$$ P_e \le M^{\rho}\exp\Big(-\frac{\rho}{1+\rho}\,\frac{E}{N_0}\Big) = \exp\Big(\rho\ln M - \frac{\rho}{1+\rho}\,\frac{E}{N_0}\Big). $$
Now we would like to minimize the bound over the parameter $\rho$, keeping in mind that the bound is only valid for $0 \le \rho \le 1$. Let $a = E/N_0$ and $b = \ln M$, and set the derivative of $f(\rho) = \rho b - \frac{\rho}{1+\rho}a$ to zero: $b - a/(1+\rho)^2 = 0$. The minimum occurs at an interior point of the interval if $\rho = \sqrt{E/(N_0\ln M)} - 1 \in (0,1)$, in which case the bound becomes
$$ P_e \le \exp\Big(-\big(\sqrt{E/N_0} - \sqrt{\ln M}\big)^2\Big), \qquad \frac{E}{4N_0} \le \ln M \le \frac{E}{N_0}. $$
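The optimized bound can be sketched as a small piece of code (the function name is ours); the two regimes match at the boundary $\ln M = E/(4N_0)$, where both expressions equal $\exp(-E/(4N_0))$:

```python
# Sketch of the optimized Gallager bound for M orthogonal signals in AWGN:
#   P_e <= exp(ln M - E/(2 N0))               if ln M <= E/(4 N0)  (rho = 1)
#   P_e <= exp(-(sqrt(E/N0) - sqrt(ln M))^2)  if E/(4 N0) <= ln M <= E/N0
import math

def gallager_bound_orthogonal(M, EsN0):
    """Optimized Gallager bound; EsN0 = E/N0 (linear scale)."""
    lnM = math.log(M)
    if lnM <= EsN0 / 4.0:          # interior optimum would exceed rho = 1
        return math.exp(lnM - EsN0 / 2.0)
    return math.exp(-(math.sqrt(EsN0) - math.sqrt(lnM)) ** 2)

# Continuity check at the regime boundary ln M = E/(4 N0):
EsN0 = 8.0
M = math.exp(EsN0 / 4.0)
print(math.exp(math.log(M) - EsN0 / 2.0))                            # exp(-2)
print(math.exp(-(math.sqrt(EsN0) - math.sqrt(math.log(M))) ** 2))    # exp(-2)
```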

[Figure 3.2: Comparison of the Gallager bound and the exact error probability for orthogonal signals, $P_e$ versus $E_b/N_0$ (dB), for several values of $M$. In each group the upper curve is the bound and the lower curve is the exact error probability.]

If $\ln M \le E/(4N_0)$ then $\rho_{\min} = 1$, in which case the upper bound becomes $P_e \le \exp(\ln M - E/(2N_0))$. If $E/(4N_0) \le \ln M \le E/N_0$ then $\rho_{\min} = \sqrt{E/(N_0\ln M)} - 1$, in which case the upper bound becomes $\exp(-(\sqrt{E/N_0} - \sqrt{\ln M})^2)$. In summary, the Gallager bound for $M$ orthogonal signals in white Gaussian noise is
$$ P_e \le \begin{cases} \exp\Big(-\Big(\dfrac{E}{2N_0} - \ln M\Big)\Big), & \ln M \le \dfrac{E}{4N_0} \\[2mm] \exp\Big(-\big(\sqrt{E/N_0} - \sqrt{\ln M}\big)^2\Big), & \dfrac{E}{4N_0} \le \ln M \le \dfrac{E}{N_0}. \end{cases} $$
Normally a communication engineer is more concerned with the energy transmitted per bit rather than the energy transmitted per signal, $E$. If we let $E_b$ be the energy transmitted per bit, then these are related as $E = E_b\log_2 M$, and the bound on the error probability can be expressed in terms of the energy transmitted per bit. Note that as $M \to \infty$, $P_e \to 0$ if $E_b/N_0 > \ln 2$, i.e. if $E_b/N_0$ exceeds $\ln 2 = -1.59$ dB. In Figure 3.2 we plot the exact error probability and the Gallager bound for $M$ orthogonal signals.

3. Bit error probability

So far we have examined the symbol error probability for orthogonal signals. Usually the number of such signals is a power of 2, e.g. 2, 4, 8, 16, 32, .... If so, then each transmission of a signal is carrying $\log_2 M$ bits of information.

In this case a communication engineer is usually interested in the bit error probability as opposed to the symbol error probability. Let $d(s_i, s_j)$ be the (Euclidean) distance between $s_i$ and $s_j$, i.e.
$$ d^2(s_i, s_j) = \int \big(s_i(t) - s_j(t)\big)^2\,dt = \sum_{l}(s_{il} - s_{jl})^2. $$
Now consider any signal set for which the distance between every pair of signals is the same. Orthogonal signal sets with equal energy satisfy this condition. Let $k = \log_2 M$. If $s_i$ is transmitted there are $M - 1$ other signals to which an error can be made. The number of signals which cause an error in $i$ of the $k$ bits is $\binom{k}{i}$. Since all signals are the same distance from $s_i$, the conditional probability of a symbol error causing $i$ bits to be in error is $\binom{k}{i}/(M-1)$. So the average number of bit errors given a symbol error is
$$ \sum_{i=1}^{k} i\binom{k}{i}\frac{1}{M-1} = \frac{k\,2^{k-1}}{2^k - 1}, $$
and the probability of bit error given a symbol error is this divided by $k$. So
$$ P_b = \frac{2^{k-1}}{2^k - 1}\,P_e, $$
and this is true for any equidistant, equienergy signal set.

4. Union Bound

Assume equally likely signals and let
$$ R_i = \{\mathbf{r} : p_i(\mathbf{r}) \ge p_j(\mathbf{r}) \text{ for all } j \ne i\}, \qquad R_i^c = \{\mathbf{r} : p_j(\mathbf{r}) \ge p_i(\mathbf{r}) \text{ for some } j \ne i\} = \bigcup_{j \ne i} B_{ij}, $$
where $B_{ij} = \{\mathbf{r} : p_j(\mathbf{r}) \ge p_i(\mathbf{r})\}$. Then
$$ P_{e,i} = P(\mathbf{r} \in R_i^c \mid H_i) = P\Big(\bigcup_{j \ne i} B_{ij}\,\Big|\,H_i\Big) \le \sum_{j \ne i} P(B_{ij} \mid H_i), $$
where $P(B_{ij} \mid H_i) = P(p_j(\mathbf{r}) \ge p_i(\mathbf{r}) \mid H_i)$.

This is the union bound. We now consider the bound for an arbitrary signal set in additive white Gaussian noise. For additive white Gaussian noise, with $s_i(t) = \sum_{l=1}^{N} s_{il}\varphi_l(t)$,
$$ p_i(\mathbf{r}) = \prod_{l=1}^{N}\frac{1}{\sqrt{\pi N_0}}\exp\Big(-\frac{(r_l - s_{il})^2}{N_0}\Big), $$
so that
$$ p_j(\mathbf{r}) \ge p_i(\mathbf{r}) \iff \mathbf{r}\cdot(\mathbf{s}_j - \mathbf{s}_i) \ge \frac{E_j - E_i}{2}, $$
where $\mathbf{r}\cdot(\mathbf{s}_j - \mathbf{s}_i) = \sum_l r_l(s_{jl} - s_{il})$ and $E_k = \sum_l s_{kl}^2$ for $k = 1,\dots,M$. Thus
$$ P(B_{ij} \mid H_i) = P\Big(\mathbf{r}\cdot(\mathbf{s}_j - \mathbf{s}_i) \ge \frac{E_j - E_i}{2}\,\Big|\,H_i\Big). $$
To do this calculation we need the statistics of the random variable $\mathbf{r}\cdot(\mathbf{s}_j - \mathbf{s}_i)$, which is Gaussian. The mean and variance are
$$ E[\mathbf{r}\cdot(\mathbf{s}_j - \mathbf{s}_i) \mid H_i] = \mathbf{s}_i\cdot(\mathbf{s}_j - \mathbf{s}_i), \qquad \mathrm{Var}[\mathbf{r}\cdot(\mathbf{s}_j - \mathbf{s}_i) \mid H_i] = \frac{N_0}{2}\,\|\mathbf{s}_j - \mathbf{s}_i\|^2. $$
Thus
$$ P(B_{ij} \mid H_i) = Q\Big(\frac{\|\mathbf{s}_i - \mathbf{s}_j\|}{\sqrt{2N_0}}\Big), $$
and the union bound on the error probability is given as
$$ P_{e,i} \le \sum_{j \ne i} Q\Big(\frac{\|\mathbf{s}_i - \mathbf{s}_j\|}{\sqrt{2N_0}}\Big) = \sum_{j \ne i} Q\Big(\frac{d_{ij}}{\sqrt{2N_0}}\Big), $$
where $d_{ij}^2 = \|\mathbf{s}_i - \mathbf{s}_j\|^2$, i.e. the square of the Euclidean distance. We now use the following fact to derive the Union-Bhattacharyya bound. This is an alternate way of obtaining this bound; we could have started with the Union-Bhattacharyya bound derived from the Gallager bound, but we would get the same answer.
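The union bound just derived is straightforward to evaluate from the pairwise distances. A sketch (helper names are ours) that also tabulates the exponential relaxation $\frac{1}{2}\exp(-d^2/4N_0)$ of each term, which is derived next:

```python
# A sketch (our own helper names) computing the union bound and the
# Union-Bhattacharyya bound from a list of signal vectors in AWGN.
import itertools
import math

def q_func(x):
    """Gaussian tail probability Q(x)."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def union_bounds(signals, N0):
    """Return (union bound, Union-Bhattacharyya bound) on the average
    symbol error probability for equally likely signals."""
    M = len(signals)
    ub = ubb = 0.0
    for i, j in itertools.permutations(range(M), 2):  # ordered pairs j != i
        d2 = sum((a - b) ** 2 for a, b in zip(signals[i], signals[j]))
        ub += q_func(math.sqrt(d2 / (2.0 * N0)))
        ubb += 0.5 * math.exp(-d2 / (4.0 * N0))
    return ub / M, ubb / M

# Example: two antipodal signals, where the union bound is exact (Q(2)).
ub, ubb = union_bounds([(1.0,), (-1.0,)], N0=0.5)
print(ub, ubb)
```

Since $Q(x) \le \frac{1}{2}e^{-x^2/2}$, the second returned value always dominates the first.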

Fact: $Q(x) \le \frac{1}{2}e^{-x^2/2}$ for $x \ge 0$. (To prove this, let $X_1$ and $X_2$ be independent Gaussian random variables with mean 0 and variance 1. Then show $Q^2(x) = P(X_1 \ge x, X_2 \ge x) \le \frac{1}{4}P(X_1^2 + X_2^2 \ge 2x^2)$, using the fact that $\sqrt{X_1^2 + X_2^2}$ has a Rayleigh density; see Proakis.) Using this fact leads to the bound
$$ P_{e,i} \le \frac{1}{2}\sum_{j \ne i}\exp\Big(-\frac{\|\mathbf{s}_i - \mathbf{s}_j\|^2}{4N_0}\Big). $$
This is the Union-Bhattacharyya bound for an additive white Gaussian noise channel.

5. Random Coding

Now consider the communication systems corresponding to all possible signal sets with coefficients $s_{jl} \in \{+\sqrt{E_s}, -\sqrt{E_s}\}$, and consider the average error probability, averaged over all possible selections of signal sets. For example, let $N = 3$ and $M = 2$. There are $2^{NM} = 64$ possible sets of signals, with each signal a linear combination of three orthogonal functions and the coefficients required to be one of the two values; set number 1 through set number 64 run over all sign patterns of the six coefficients.

Let $P_{e,1}(k)$ be the error probability of signal set $k$ given $H_1$, and let
$$ \overline{P}_{e,1} = \frac{1}{2^{NM}}\sum_{k} P_{e,1}(k). $$
If $\overline{P}_{e,1} \le \alpha$, then at least one of the $2^{NM}$ signal sets must have $P_{e,1}(k) \le \alpha$ (otherwise $P_{e,1}(k) > \alpha$ for all $k$ would force $\overline{P}_{e,1} > \alpha$; contradiction). In other words there exists a signal set with $P_{e,1} \le \alpha$. This is known as the random coding argument.

Let $s_{jl}$, $j = 1,\dots,M$, $l = 1,\dots,N$, be independent identically distributed random variables with $P(s_{jl} = +\sqrt{E_s}) = P(s_{jl} = -\sqrt{E_s}) = \frac{1}{2}$, and evaluate
$$ \overline{P}_{e,1} \le E\Big[\frac{1}{2}\sum_{j \ne 1}\exp\Big(-\frac{\|\mathbf{s}_1 - \mathbf{s}_j\|^2}{4N_0}\Big)\Big], $$
where the expectation is with respect to the random variables $s_{jl}$. Since $P(s_{1l} = s_{jl}) = P(s_{1l} \ne s_{jl}) = \frac{1}{2}$,
$$ P\big(\|\mathbf{s}_1 - \mathbf{s}_j\|^2 = 4mE_s\big) = \binom{N}{m}2^{-N} \qquad (\mathbf{s}_1 \text{ and } \mathbf{s}_j \text{ differ in } m \text{ places out of } N). $$

So
$$ \overline{P}_{e,1} \le \frac{M-1}{2}\,E\Big[\exp\Big(-\frac{\|\mathbf{s}_1 - \mathbf{s}_2\|^2}{4N_0}\Big)\Big] = \frac{M-1}{2}\sum_{m=0}^{N}\binom{N}{m}2^{-N}e^{-mE_s/N_0} = \frac{M-1}{2}\Big[\frac{1 + e^{-E_s/N_0}}{2}\Big]^{N}. $$
Let $R = \frac{1}{N}\log_2 M$ be the number of bits transmitted per dimension, $E_s$ the signal energy per dimension, and define the cutoff rate
$$ R_0 = 1 - \log_2\big(1 + e^{-E_s/N_0}\big). $$
Then
$$ \overline{P}_{e,1} \le M\,2^{-NR_0} = 2^{-N(R_0 - R)}. $$
We have shown that there exists a signal set for which the error probability for the first signal is small. Thus as $N \to \infty$ the error probability given $s_1$ was transmitted goes to zero if the rate is less than the cutoff rate $R_0$. This, however, does not imply that there exists a code $s_1,\dots,s_M$ such that $P_{e,1},\dots,P_{e,M}$ are simultaneously small: it is possible that $P_{e,i}$ is small for some code for which $P_{e,j}$ is large. We now show that we can make each of the error probabilities small simultaneously. First choose a code with $2M = 2^{NR+1}$ codewords for which the average error probability is less than, say, $\varepsilon$ for large $N$. If more than half of these codewords had $P_{e,i} \ge 2\varepsilon$, then the average error probability would be greater than $\varepsilon$, a contradiction. Thus at least half of the codewords must have $P_{e,i} \le 2\varepsilon$. So delete the codewords that have $P_{e,i} > 2\varepsilon$ (fewer than half). We obtain a code with (at least) $2^{NR}$ codewords with $P_{e,i} \le 2\varepsilon$ for all $i$, for any $R < R_0$. Thus we have proved the following.

Theorem: There exists a signal set with $M = 2^{NR}$ signals in $N$ dimensions with
$$ P_{e,i} \le 2\cdot 2^{-N(R_0 - R)} \to 0 \text{ as } N \to \infty, $$
provided $R < R_0$.

Note: $E_s$ is the energy per dimension. Each signal then has energy $NE_s$ and is transmitting $\log_2 M = NR$ bits of information, so that $E_b = NE_s/\log_2 M = E_s/R$ is the energy per bit of information. From the theorem, reliable communication ($P_e \to 0$) is possible provided $R < R_0$, i.e.
$$ R < 1 - \log_2\big(1 + e^{-RE_b/N_0}\big). $$
For $R \to 0$, writing $\log_2(1 + e^{-RE_b/N_0}) = 1 + \log_2\big(1 - \frac{RE_b}{2N_0} + \cdots\big)$ shows this condition becomes $E_b/N_0 > 2\ln 2$; thus $P_e \to 0$ if $E_b/N_0 > 2\ln 2$. Note: $M$ orthogonal signals have $P_e \to 0$ if $E_b/N_0 > \ln 2$. The rate of orthogonal signals is $R = \log_2 M/M \to 0$ as $M \to \infty$, while the theorem guarantees existence of signal sets with $\log_2 M = NR$ at a fixed rate and $P_e \to 0$ as $M \to \infty$.
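The cutoff-rate expressions above can be sketched as follows (function names are ours); $R_0$ runs from 0 at zero signal energy toward 1 bit/dimension at high $E_s/N_0$:

```python
# Sketch of the cutoff rate R0 = 1 - log2(1 + exp(-Es/N0)) and the
# random-coding bound P_e <= 2^{-N (R0 - R)} (helper names are ours).
import math

def cutoff_rate(EsN0):
    """Cutoff rate in bits/dimension for binary antipodal input in AWGN;
    EsN0 = Es/N0 on a linear scale."""
    return 1.0 - math.log2(1.0 + math.exp(-EsN0))

def random_coding_bound(N, R, EsN0):
    """Bound 2^{-N (R0 - R)} on block error probability (meaningful for R < R0)."""
    return 2.0 ** (-N * (cutoff_rate(EsN0) - R))

print(cutoff_rate(0.0))    # 1 - log2(2) = 0: no signal energy
print(cutoff_rate(10.0))   # close to 1 bit/dimension
```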

[Figure 3.3: Cutoff rate for the binary input, continuous output channel: achievable region of $E_b/N_0$ (dB) versus code rate.]

[Figure 3.4: Error probabilities based on the cutoff rate for the binary input, continuous output channel for rate 1/2 codes: $P_e$ versus $E_b/N_0$ (dB) for several block lengths $n$.]

[Figure 3.5: Error probabilities based on the cutoff rate for the binary input, continuous output channel for rate 1/8 codes: $P_e$ versus $E_b/N_0$ (dB) for several block lengths $n$.]

[Figure 3.6: Error probabilities based on the cutoff rate for the binary input, continuous output channel for rate 3/4 codes: $P_e$ versus $E_b/N_0$ (dB) for several block lengths $n$.]

Example of the Gallager bound for M-ary signals in AWGN

In this section we evaluate the Gallager bound for an arbitrary signal set on the additive white Gaussian noise channel. As usual, assume the transmitted signal set has the form
$$ s_i(t) = \sum_{j=1}^{N}\mu_{ij}\,\varphi_j(t), \qquad i = 1,\dots,M. $$
The optimal receiver correlates with each of the orthonormal functions to produce the decision statistics $r_1,\dots,r_N$. The conditional density function of $\mathbf{r}$ given signal $s_i(t)$ transmitted is
$$ p_i(\mathbf{r}) = \prod_{k=1}^{N}\frac{1}{\sqrt{\pi N_0}}\exp\Big(-\frac{(r_k - \mu_{ik})^2}{N_0}\Big). $$
If we substitute this into the general form of the Gallager bound and carry out the Gaussian integrals coordinate by coordinate (completing the square in each exponent), the resulting bound depends on the signals only through the pairwise squared Euclidean distances $d_{ij}^2 = \|\boldsymbol{\mu}_i - \boldsymbol{\mu}_j\|^2$. When the signals are all orthogonal with equal energy $E$, so that $d_{ij}^2 = 2E$ for all $j \ne i$, the bound becomes
$$ P_{e,i} \le (M-1)^{\rho}\exp\Big(-\frac{\rho}{1+\rho}\,\frac{E}{N_0}\Big), $$
which is identical to the previous expression.
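Evaluating these bounds for a specific code requires its distance profile; for a binary linear code mapped to BPSK ($0 \to +\sqrt{E}$, $1 \to -\sqrt{E}$) the squared distances are $4wE$ for codeword weights $w$, so the profile follows from the weight distribution. A sketch that enumerates it for the $(7,4)$ Hamming code used in the next example (the systematic generator matrix below is a standard choice, ours; any equivalent generator gives the same distribution):

```python
# Sketch: weight distribution of the (7,4) Hamming code, obtained by
# enumerating all 16 codewords from a systematic generator matrix [I | P].
from itertools import product

G = [
    (1, 0, 0, 0, 1, 1, 0),
    (0, 1, 0, 0, 1, 0, 1),
    (0, 0, 1, 0, 0, 1, 1),
    (0, 0, 0, 1, 1, 1, 1),
]

def weight_distribution():
    counts = [0] * 8  # counts[w] = number of codewords of Hamming weight w
    for msg in product((0, 1), repeat=4):
        cw = [sum(m * g for m, g in zip(msg, col)) % 2 for col in zip(*G)]
        counts[sum(cw)] += 1
    return counts

print(weight_distribution())  # [1, 0, 0, 7, 7, 0, 0, 1]
```

The distribution $1 + 7z^3 + 7z^4 + z^7$ gives exactly the geometrically uniform distance profile quoted below.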

[Figure 3.7: Comparison of the Gallager bound, union bound, and Union-Bhattacharyya bound for the Hamming code with BPSK modulation: $P_e$ versus $E_b/N_0$ (dB).]

Now we consider a couple of different signal sets. The first signal set has 16 signals in seven dimensions. The energy in each dimension is $E$, so the total energy transmitted is $7E$, and the energy transmitted per information bit is $E_b = 7E/4$. The geometry of the signal set is such that for any signal there are seven other signals at squared Euclidean distance $12E$, seven other signals at squared Euclidean distance $16E$, and one other signal at squared distance $28E$. All signals have energy $7E$. (This is the $(7,4)$ Hamming code.) The fact that the signal set is geometrically uniform is due to the linearity of the code. We plot the Gallager bound optimized over $0 \le \rho \le 1$; the Union-Bhattacharyya bound is the Gallager bound with $\rho = 1$. The second signal set has 256 signals in 16 dimensions with, for each signal, 112 signals at squared distance $24E$, 30 signals at squared distance $32E$, 112 signals at squared distance $40E$, and one signal at squared distance $64E$. In this case $E_b = 2E$. As can be seen from the figures, the union bound is the tightest bound except at very low signal-to-noise ratios, where the Gallager bound stays below 1. At reasonable signal-to-noise ratios the optimum $\rho$ in the Gallager bound is 1, and thus it reduces to the Union-Bhattacharyya bound.

6. Problems

1. Using L'Hospital's rule on the log of $\Phi^{M-1}$, show that if $E = E_b\log_2 M$ then
$$ \lim_{M\to\infty}\Big[\Phi\Big(x + \sqrt{2E/N_0}\Big)\Big]^{M-1} = \begin{cases} 0, & E_b/N_0 < \ln 2 \\ 1, & E_b/N_0 > \ln 2 \end{cases} \tag{3.1} $$

[Figure 3.8: Comparison of the Gallager bound, union bound, and Union-Bhattacharyya bound for the Nordstrom-Robinson code with BPSK modulation: $P_e$ versus $E_b/N_0$ (dB).]

and consequently $\lim_{M\to\infty} P_e = 1$ if $E_b/N_0 < \ln 2$ and $\lim_{M\to\infty} P_e = 0$ if $E_b/N_0 > \ln 2$, where $P_e$ is the error probability for $M$ orthogonal signals.

2. (a) Show that for any equienergy, equidistant (real) signal set, $\mathbf{s}_i\cdot\mathbf{s}_j$ is a constant for $i \ne j$. (Note: equienergy implies $\|\mathbf{s}_i\|^2 = E$ is a constant, and equidistant implies $\|\mathbf{s}_i - \mathbf{s}_j\|^2$ is a constant.)
(b) For any equienergy signal set show that the average squared distance satisfies
$$ d^2_{\text{ave}} = \frac{1}{M(M-1)}\sum_{i=1}^{M}\sum_{j \ne i}\|\mathbf{s}_i - \mathbf{s}_j\|^2 \le \frac{2ME}{M-1}. $$

3. ($R_0$ coding theorem for discrete memoryless channels) Consider the discrete memoryless channel (DMC) which has input alphabet $A = \{a_1,\dots,a_{|A|}\}$ (with $|A|$, the number of letters in $A$, finite) and output alphabet $B = \{b_1,\dots,b_{|B|}\}$ (with $|B|$ finite). As in the Gaussian noise channel, the channel is characterized by a set of transition probabilities $p(b\mid a)$, $a \in A$, $b \in B$, such that if we transmit a signal $s_i(t) = \sum_{l=1}^{N}s_{il}\varphi_l(t)$ with $s_{il} \in A$, then the received signal has the form $r(t) = \sum_{l=1}^{N}r_l\varphi_l(t)$ with
$$ P(r_l = b \mid s_{il} = a) = p(b\mid a), \qquad a \in A,\ b \in B, $$
and the channel is memoryless:
$$ p(\mathbf{r}\mid\mathbf{s}_i) = \prod_{l=1}^{N} p(r_l\mid s_{il}). \tag{$*$} $$
Now we come to the $R_0$ coding theorem for a discrete (finite alphabet) memoryless (equation ($*$) is satisfied) channel. Prove there exist $M$ signals in $N$ dimensions such that
$$ P_{e,i} \le 2^{-N(R_0 - R)}, \qquad R = \frac{\log_2 M}{N}, \qquad R_0 = -\log_2 J, $$
where
$$ J = \min_{p(x)} E[J(X_1, X_2)], \qquad J(a, a') = \sum_{b \in B}\sqrt{p(b\mid a)\,p(b\mid a')}, $$
and $X_1$ and $X_2$ are independent, identically distributed random variables with common distribution $p(x)$.

Step 1: Let $M = 2$ and let $\mathbf{s}_1 = (s_{11},\dots,s_{1N})$ and $\mathbf{s}_2 = (s_{21},\dots,s_{2N})$.

The decoder will not decide $\mathbf{s}_1$ if $p(\mathbf{r}\mid\mathbf{s}_2) \ge p(\mathbf{r}\mid\mathbf{s}_1)$. Let
$$ R_1^c = \{\mathbf{r} : p(\mathbf{r}\mid\mathbf{s}_2) \ge p(\mathbf{r}\mid\mathbf{s}_1)\}, \qquad R_2^c = \{\mathbf{r} : p(\mathbf{r}\mid\mathbf{s}_1) \ge p(\mathbf{r}\mid\mathbf{s}_2)\}. $$
(Note that $R_1^c$ and $R_2^c$ may not be disjoint.) Show that
$$ P_{e,1} \le \sum_{\mathbf{r}\in R_1^c} p(\mathbf{r}\mid\mathbf{s}_1) \le \sum_{\text{all }\mathbf{r}}\sqrt{p(\mathbf{r}\mid\mathbf{s}_1)\,p(\mathbf{r}\mid\mathbf{s}_2)} = \prod_{l=1}^{N}\sum_{r_l\in B}\sqrt{p(r_l\mid s_{1l})\,p(r_l\mid s_{2l})} = \prod_{l=1}^{N} J(s_{1l}, s_{2l}). $$

Step 2: Apply the union bound to obtain, for $M$ signals (codewords),
$$ P_{e,i} \le \sum_{j \ne i}\prod_{l=1}^{N} J(s_{il}, s_{jl}). $$

Step 3: Average over all possible signal sets where the signals are chosen independently according to the distribution that achieves $J$ (i.e. that distribution $p(x)$ on $A$ such that $J = E[J(X_1, X_2)]$; treat $s_{jl}$, $j = 1,\dots,M$, $l = 1,\dots,N$, as i.i.d. random variables with distribution $p(x)$) to show that
$$ \overline{P}_{e,i} \le (M-1)\,J^N \le 2^{-N(R_0 - R)}. $$

Step 4: Complete the proof.

4. Show for a binary symmetric channel, defined as $A = B = \{0,1\}$ with $p(b\mid a) = 1 - p$ for $b = a$ and $p(b\mid a) = p$ for $b \ne a$, that
$$ R_0 = \log_2\frac{2}{1 + 2\sqrt{p(1-p)}}. $$

5. A set of 16 signals is constructed in 7 dimensions using only two possible coefficients, i.e. $s_{jl} \in \{+\sqrt{E}, -\sqrt{E}\}$. Let
$$ A_k = \big|\{(i,j) : \|\mathbf{s}_i - \mathbf{s}_j\|^2 = 4kE\}\big|, $$
i.e. $A_k$ is the number of signal pairs with squared distance $4kE$. The signals are chosen so that
$$ A_k = \begin{cases} 16, & k = 0 \\ 112, & k = 3 \\ 112, & k = 4 \\ 16, & k = 7 \\ 0, & \text{otherwise.} \end{cases} $$
Find the union bound and the Union-Bhattacharyya bound on the error probability of the optimum receiver in additive white Gaussian noise with two-sided power spectral density $N_0/2$.
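The closed form claimed in problem 4 can be sanity-checked numerically at the two extreme channels; a sketch (the function name is ours):

```python
# Sketch: cutoff rate of a binary symmetric channel with crossover p,
# using the expression claimed in problem 4:
#   R0 = log2( 2 / (1 + 2 sqrt(p (1 - p))) ).
import math

def bsc_cutoff_rate(p):
    """Cutoff rate (bits/channel use) of a BSC with crossover probability p."""
    return math.log2(2.0 / (1.0 + 2.0 * math.sqrt(p * (1.0 - p))))

print(bsc_cutoff_rate(0.0))  # noiseless channel: R0 = 1 bit
print(bsc_cutoff_rate(0.5))  # useless channel:   R0 = 0
```

Here $2\sqrt{p(1-p)}$ is the Bhattacharyya parameter $J(0,1)$ of the BSC, consistent with $R_0 = -\log_2 J$ evaluated at the uniform input distribution.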

6. A modulator uses two orthonormal signals ($\varphi_1(t)$ and $\varphi_2(t)$) to transmit 3 bits of information (8 possible equally likely signals) over an additive white Gaussian noise channel with power spectral density $N_0/2$. The signals are given as
$$ s_i(t) = c_{i1}\,\varphi_1(t) + c_{i2}\,\varphi_2(t), \qquad i = 1,\dots,8, $$
where the coefficients $c_{i2}$ are multiples of a parameter $y$.
(a) Determine the optimum value of the parameter $y$ to minimize the average signal power transmitted. Draw the optimum decision regions for the signal set (in two dimensions).
(b) Determine the union bound on the average error probability in terms of the ratio of the energy per information bit $E_b$ to the noise density $N_0$.
(c) Can you tighten this bound?

[Constellation diagram in the $(\varphi_1, \varphi_2)$ plane.]

7. Consider an additive (nonwhite) Gaussian noise channel with one of two signals transmitted. Assume the noise has covariance function $K(t,s)$. Using the Bhattacharyya bound, show that the error probability when transmitting one of two signals ($s_1(t)$ or $s_2(t)$) can be bounded by
$$ P_e \le \frac{1}{2}\exp\Big(-\frac{1}{8}\,\big\langle s_1 - s_2,\; K^{-1}(s_1 - s_2)\big\rangle\Big), $$
where $K^{-1}$ is the inverse of the noise covariance operator.

If the noise is white, what does the bound become?

8. Consider a Poisson channel with one of 4 signals transmitted. Let the signals be as shown below. Assume that when the signal is present the intensity of the photon process is $\lambda_1$, and when the signal is not present the intensity is $\lambda_0$. That is, the received signal is Poisson with parameter $\lambda_1$ while the laser is on and $\lambda_0$ while the laser is off. Find the optimal receiver for minimizing the probability of error for a signal (as opposed to a bit). Find an upper bound on the error probability.
$$ s_1(t):\ \text{laser on for } t \in [0, 4T], \qquad s_2(t):\ \text{on for } t \in [T, 5T], \qquad s_3(t):\ \text{on for } t \in [2T, 6T], \qquad s_4(t):\ \text{on for } t \in [3T, 7T]. $$

9. A signal set consists of 256 signals in 16 dimensions with the coefficients being either $+\sqrt{E}$ or $-\sqrt{E}$. The distance structure is given as
$$ A_k = \big|\{(i,j) : \|\mathbf{s}_i - \mathbf{s}_j\|^2 = 4kE\}\big| = \begin{cases} 256, & k = 0 \\ 28672, & k = 6 \\ 7680, & k = 8 \\ 28672, & k = 10 \\ 256, & k = 16 \\ 0, & \text{otherwise.} \end{cases} $$
These signals are transmitted with equal probability over an additive white Gaussian noise channel. Determine the union bound on the error probability. Determine the Union-Bhattacharyya bound on the error probability. Express your answer in terms of the energy transmitted per bit. What is the rate of the code in bits/dimension?

10. A communication system uses $N$ dimensions and a code rate of $R$ bits/dimension. The goal is not low error probability but high throughput (the expected number of successfully received information bits per coded bit in a block of length $N$). If we use a low code rate then we have a high success probability for a packet but few information bits. If we use a high code rate then we have a low success probability but a larger number of bits transmitted. Assume the channel is an additive white Gaussian noise channel and the input is restricted to binary modulation (each coefficient in the orthonormal expansion is either $+\sqrt{E}$ or $-\sqrt{E}$). Assume as well that the error probability is related to the block length, energy per dimension, and code rate via the cutoff rate theorem (soft decisions). Find (and plot) the throughput for code rates varying from 0.1 to 0.9 in steps of 0.1 as a function of the energy per information bit. (Use Matlab to plot the throughput.) Assume $N = 5$. Be sure to normalize the energy per coded bit to the energy per information bit. Compare the throughput of hard decision decoding (BPSK and AWGN) and soft decision decoding.
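A sketch of the throughput computation problem 10 asks for, in Python rather than Matlab; the model $\eta = R(1 - P_e)$ with $P_e = \min(1,\,2^{-N(R_0 - R)})$ and $E_s = R\,E_b$, the helper names, and the example block length are all our assumptions:

```python
# Sketch of a cutoff-rate throughput calculation (model and names ours):
#   eta = R * (1 - Pe),  Pe = min(1, 2^{-N (R0 - R)}),  Es = R * Eb.
import math

def cutoff_rate_soft(EsN0):
    """Cutoff rate for binary antipodal input, soft decisions."""
    return 1.0 - math.log2(1.0 + math.exp(-EsN0))

def throughput(R, EbN0, N):
    """Expected information bits per coded bit at rate R."""
    EsN0 = R * EbN0  # normalize energy per coded bit to energy per info bit
    pe = min(1.0, 2.0 ** (-N * (cutoff_rate_soft(EsN0) - R)))
    return R * (1.0 - pe)

# Tabulate throughput versus rate at one Eb/N0 (N = 500 is an example value).
for R in [r / 10.0 for r in range(1, 10)]:
    print(R, throughput(R, EbN0=4.0, N=500))
```

Sweeping this over $E_b/N_0$ and over the nine rates reproduces the qualitative trade-off the problem describes: low rates succeed but carry few bits, while rates above $R_0$ drive the throughput toward zero.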


Maximum Likelihood Estimation of Binary Dependent Variables Models: Probit and Logit. 1. General Formulation of Binary Dependent Variables Models ECO 452 -- OE 4: Probt and Logt Models ECO 452 -- OE 4 Maxmum Lkelhood Estmaton of Bnary Dependent Varables Models: Probt and Logt hs note demonstrates how to formulate bnary dependent varables models

More information

Classification as a Regression Problem

Classification as a Regression Problem Target varable y C C, C,, ; Classfcaton as a Regresson Problem { }, 3 L C K To treat classfcaton as a regresson problem we should transform the target y nto numercal values; The choce of numercal class

More information

Research Article Green s Theorem for Sign Data

Research Article Green s Theorem for Sign Data Internatonal Scholarly Research Network ISRN Appled Mathematcs Volume 2012, Artcle ID 539359, 10 pages do:10.5402/2012/539359 Research Artcle Green s Theorem for Sgn Data Lous M. Houston The Unversty of

More information

Lecture 3. Ax x i a i. i i

Lecture 3. Ax x i a i. i i 18.409 The Behavor of Algorthms n Practce 2/14/2 Lecturer: Dan Spelman Lecture 3 Scrbe: Arvnd Sankar 1 Largest sngular value In order to bound the condton number, we need an upper bound on the largest

More information

Estimation: Part 2. Chapter GREG estimation

Estimation: Part 2. Chapter GREG estimation Chapter 9 Estmaton: Part 2 9. GREG estmaton In Chapter 8, we have seen that the regresson estmator s an effcent estmator when there s a lnear relatonshp between y and x. In ths chapter, we generalzed the

More information

First Year Examination Department of Statistics, University of Florida

First Year Examination Department of Statistics, University of Florida Frst Year Examnaton Department of Statstcs, Unversty of Florda May 7, 010, 8:00 am - 1:00 noon Instructons: 1. You have four hours to answer questons n ths examnaton.. You must show your work to receve

More information

NUMERICAL DIFFERENTIATION

NUMERICAL DIFFERENTIATION NUMERICAL DIFFERENTIATION 1 Introducton Dfferentaton s a method to compute the rate at whch a dependent output y changes wth respect to the change n the ndependent nput x. Ths rate of change s called the

More information

Pulse Coded Modulation

Pulse Coded Modulation Pulse Coded Modulaton PCM (Pulse Coded Modulaton) s a voce codng technque defned by the ITU-T G.711 standard and t s used n dgtal telephony to encode the voce sgnal. The frst step n the analog to dgtal

More information

Assuming that the transmission delay is negligible, we have

Assuming that the transmission delay is negligible, we have Baseband Transmsson of Bnary Sgnals Let g(t), =,, be a sgnal transmtted over an AWG channel. Consder the followng recever g (t) + + Σ x(t) LTI flter h(t) y(t) t = nt y(nt) threshold comparator Decson ˆ

More information

P R. Lecture 4. Theory and Applications of Pattern Recognition. Dept. of Electrical and Computer Engineering /

P R. Lecture 4. Theory and Applications of Pattern Recognition. Dept. of Electrical and Computer Engineering / Theory and Applcatons of Pattern Recognton 003, Rob Polkar, Rowan Unversty, Glassboro, NJ Lecture 4 Bayes Classfcaton Rule Dept. of Electrcal and Computer Engneerng 0909.40.0 / 0909.504.04 Theory & Applcatons

More information

Probability Theory. The nth coefficient of the Taylor series of f(k), expanded around k = 0, gives the nth moment of x as ( ik) n n!

Probability Theory. The nth coefficient of the Taylor series of f(k), expanded around k = 0, gives the nth moment of x as ( ik) n n! 8333: Statstcal Mechancs I Problem Set # 3 Solutons Fall 3 Characterstc Functons: Probablty Theory The characterstc functon s defned by fk ep k = ep kpd The nth coeffcent of the Taylor seres of fk epanded

More information

Credit Card Pricing and Impact of Adverse Selection

Credit Card Pricing and Impact of Adverse Selection Credt Card Prcng and Impact of Adverse Selecton Bo Huang and Lyn C. Thomas Unversty of Southampton Contents Background Aucton model of credt card solctaton - Errors n probablty of beng Good - Errors n

More information

Probability and Random Variable Primer

Probability and Random Variable Primer B. Maddah ENMG 622 Smulaton 2/22/ Probablty and Random Varable Prmer Sample space and Events Suppose that an eperment wth an uncertan outcome s performed (e.g., rollng a de). Whle the outcome of the eperment

More information

More metrics on cartesian products

More metrics on cartesian products More metrcs on cartesan products If (X, d ) are metrc spaces for 1 n, then n Secton II4 of the lecture notes we defned three metrcs on X whose underlyng topologes are the product topology The purpose of

More information

Lecture 20: Hypothesis testing

Lecture 20: Hypothesis testing Lecture : Hpothess testng Much of statstcs nvolves hpothess testng compare a new nterestng hpothess, H (the Alternatve hpothess to the borng, old, well-known case, H (the Null Hpothess or, decde whether

More information

Lecture Notes on Linear Regression

Lecture Notes on Linear Regression Lecture Notes on Lnear Regresson Feng L fl@sdueducn Shandong Unversty, Chna Lnear Regresson Problem In regresson problem, we am at predct a contnuous target value gven an nput feature vector We assume

More information

APPROXIMATE PRICES OF BASKET AND ASIAN OPTIONS DUPONT OLIVIER. Premia 14

APPROXIMATE PRICES OF BASKET AND ASIAN OPTIONS DUPONT OLIVIER. Premia 14 APPROXIMAE PRICES OF BASKE AND ASIAN OPIONS DUPON OLIVIER Prema 14 Contents Introducton 1 1. Framewor 1 1.1. Baset optons 1.. Asan optons. Computng the prce 3. Lower bound 3.1. Closed formula for the prce

More information

Statistics and Probability Theory in Civil, Surveying and Environmental Engineering

Statistics and Probability Theory in Civil, Surveying and Environmental Engineering Statstcs and Probablty Theory n Cvl, Surveyng and Envronmental Engneerng Pro. Dr. Mchael Havbro Faber ETH Zurch, Swtzerland Contents o Todays Lecture Overvew o Uncertanty Modelng Random Varables - propertes

More information

Maximum Likelihood Estimation of Binary Dependent Variables Models: Probit and Logit. 1. General Formulation of Binary Dependent Variables Models

Maximum Likelihood Estimation of Binary Dependent Variables Models: Probit and Logit. 1. General Formulation of Binary Dependent Variables Models ECO 452 -- OE 4: Probt and Logt Models ECO 452 -- OE 4 Mamum Lkelhood Estmaton of Bnary Dependent Varables Models: Probt and Logt hs note demonstrates how to formulate bnary dependent varables models for

More information

Using T.O.M to Estimate Parameter of distributions that have not Single Exponential Family

Using T.O.M to Estimate Parameter of distributions that have not Single Exponential Family IOSR Journal of Mathematcs IOSR-JM) ISSN: 2278-5728. Volume 3, Issue 3 Sep-Oct. 202), PP 44-48 www.osrjournals.org Usng T.O.M to Estmate Parameter of dstrbutons that have not Sngle Exponental Famly Jubran

More information

Numerical Heat and Mass Transfer

Numerical Heat and Mass Transfer Master degree n Mechancal Engneerng Numercal Heat and Mass Transfer 06-Fnte-Dfference Method (One-dmensonal, steady state heat conducton) Fausto Arpno f.arpno@uncas.t Introducton Why we use models and

More information

Finding Dense Subgraphs in G(n, 1/2)

Finding Dense Subgraphs in G(n, 1/2) Fndng Dense Subgraphs n Gn, 1/ Atsh Das Sarma 1, Amt Deshpande, and Rav Kannan 1 Georga Insttute of Technology,atsh@cc.gatech.edu Mcrosoft Research-Bangalore,amtdesh,annan@mcrosoft.com Abstract. Fndng

More information

See Book Chapter 11 2 nd Edition (Chapter 10 1 st Edition)

See Book Chapter 11 2 nd Edition (Chapter 10 1 st Edition) Count Data Models See Book Chapter 11 2 nd Edton (Chapter 10 1 st Edton) Count data consst of non-negatve nteger values Examples: number of drver route changes per week, the number of trp departure changes

More information

Low Complexity Soft-Input Soft-Output Hamming Decoder

Low Complexity Soft-Input Soft-Output Hamming Decoder Low Complexty Soft-Input Soft-Output Hammng Der Benjamn Müller, Martn Holters, Udo Zölzer Helmut Schmdt Unversty Unversty of the Federal Armed Forces Department of Sgnal Processng and Communcatons Holstenhofweg

More information

CSCE 790S Background Results

CSCE 790S Background Results CSCE 790S Background Results Stephen A. Fenner September 8, 011 Abstract These results are background to the course CSCE 790S/CSCE 790B, Quantum Computaton and Informaton (Sprng 007 and Fall 011). Each

More information

Lecture 4: November 17, Part 1 Single Buffer Management

Lecture 4: November 17, Part 1 Single Buffer Management Lecturer: Ad Rosén Algorthms for the anagement of Networs Fall 2003-2004 Lecture 4: November 7, 2003 Scrbe: Guy Grebla Part Sngle Buffer anagement In the prevous lecture we taled about the Combned Input

More information

Chapter 13: Multiple Regression

Chapter 13: Multiple Regression Chapter 13: Multple Regresson 13.1 Developng the multple-regresson Model The general model can be descrbed as: It smplfes for two ndependent varables: The sample ft parameter b 0, b 1, and b are used to

More information

j) = 1 (note sigma notation) ii. Continuous random variable (e.g. Normal distribution) 1. density function: f ( x) 0 and f ( x) dx = 1

j) = 1 (note sigma notation) ii. Continuous random variable (e.g. Normal distribution) 1. density function: f ( x) 0 and f ( x) dx = 1 Random varables Measure of central tendences and varablty (means and varances) Jont densty functons and ndependence Measures of assocaton (covarance and correlaton) Interestng result Condtonal dstrbutons

More information

b ), which stands for uniform distribution on the interval a x< b. = 0 elsewhere

b ), which stands for uniform distribution on the interval a x< b. = 0 elsewhere Fall Analyss of Epermental Measurements B. Esensten/rev. S. Errede Some mportant probablty dstrbutons: Unform Bnomal Posson Gaussan/ormal The Unform dstrbuton s often called U( a, b ), hch stands for unform

More information

Homework Assignment 3 Due in class, Thursday October 15

Homework Assignment 3 Due in class, Thursday October 15 Homework Assgnment 3 Due n class, Thursday October 15 SDS 383C Statstcal Modelng I 1 Rdge regresson and Lasso 1. Get the Prostrate cancer data from http://statweb.stanford.edu/~tbs/elemstatlearn/ datasets/prostate.data.

More information

Lecture 2: Prelude to the big shrink

Lecture 2: Prelude to the big shrink Lecture 2: Prelude to the bg shrnk Last tme A slght detour wth vsualzaton tools (hey, t was the frst day... why not start out wth somethng pretty to look at?) Then, we consdered a smple 120a-style regresson

More information

Markov Chain Monte Carlo (MCMC), Gibbs Sampling, Metropolis Algorithms, and Simulated Annealing Bioinformatics Course Supplement

Markov Chain Monte Carlo (MCMC), Gibbs Sampling, Metropolis Algorithms, and Simulated Annealing Bioinformatics Course Supplement Markov Chan Monte Carlo MCMC, Gbbs Samplng, Metropols Algorthms, and Smulated Annealng 2001 Bonformatcs Course Supplement SNU Bontellgence Lab http://bsnuackr/ Outlne! Markov Chan Monte Carlo MCMC! Metropols-Hastngs

More information

EEE 241: Linear Systems

EEE 241: Linear Systems EEE : Lnear Systems Summary #: Backpropagaton BACKPROPAGATION The perceptron rule as well as the Wdrow Hoff learnng were desgned to tran sngle layer networks. They suffer from the same dsadvantage: they

More information

1. Inference on Regression Parameters a. Finding Mean, s.d and covariance amongst estimates. 2. Confidence Intervals and Working Hotelling Bands

1. Inference on Regression Parameters a. Finding Mean, s.d and covariance amongst estimates. 2. Confidence Intervals and Working Hotelling Bands Content. Inference on Regresson Parameters a. Fndng Mean, s.d and covarance amongst estmates.. Confdence Intervals and Workng Hotellng Bands 3. Cochran s Theorem 4. General Lnear Testng 5. Measures of

More information

Economics 130. Lecture 4 Simple Linear Regression Continued

Economics 130. Lecture 4 Simple Linear Regression Continued Economcs 130 Lecture 4 Contnued Readngs for Week 4 Text, Chapter and 3. We contnue wth addressng our second ssue + add n how we evaluate these relatonshps: Where do we get data to do ths analyss? How do

More information

n α j x j = 0 j=1 has a nontrivial solution. Here A is the n k matrix whose jth column is the vector for all t j=0

n α j x j = 0 j=1 has a nontrivial solution. Here A is the n k matrix whose jth column is the vector for all t j=0 MODULE 2 Topcs: Lnear ndependence, bass and dmenson We have seen that f n a set of vectors one vector s a lnear combnaton of the remanng vectors n the set then the span of the set s unchanged f that vector

More information

2016 Wiley. Study Session 2: Ethical and Professional Standards Application

2016 Wiley. Study Session 2: Ethical and Professional Standards Application 6 Wley Study Sesson : Ethcal and Professonal Standards Applcaton LESSON : CORRECTION ANALYSIS Readng 9: Correlaton and Regresson LOS 9a: Calculate and nterpret a sample covarance and a sample correlaton

More information

Statistical analysis using matlab. HY 439 Presented by: George Fortetsanakis

Statistical analysis using matlab. HY 439 Presented by: George Fortetsanakis Statstcal analyss usng matlab HY 439 Presented by: George Fortetsanaks Roadmap Probablty dstrbutons Statstcal estmaton Fttng data to probablty dstrbutons Contnuous dstrbutons Contnuous random varable X

More information

Department of Computer Science Artificial Intelligence Research Laboratory. Iowa State University MACHINE LEARNING

Department of Computer Science Artificial Intelligence Research Laboratory. Iowa State University MACHINE LEARNING MACHINE LEANING Vasant Honavar Bonformatcs and Computatonal Bology rogram Center for Computatonal Intellgence, Learnng, & Dscovery Iowa State Unversty honavar@cs.astate.edu www.cs.astate.edu/~honavar/

More information

Problem Set 9 Solutions

Problem Set 9 Solutions Desgn and Analyss of Algorthms May 4, 2015 Massachusetts Insttute of Technology 6.046J/18.410J Profs. Erk Demane, Srn Devadas, and Nancy Lynch Problem Set 9 Solutons Problem Set 9 Solutons Ths problem

More information

Inner Product. Euclidean Space. Orthonormal Basis. Orthogonal

Inner Product. Euclidean Space. Orthonormal Basis. Orthogonal Inner Product Defnton 1 () A Eucldean space s a fnte-dmensonal vector space over the reals R, wth an nner product,. Defnton 2 (Inner Product) An nner product, on a real vector space X s a symmetrc, blnear,

More information

Bayesian decision theory. Nuno Vasconcelos ECE Department, UCSD

Bayesian decision theory. Nuno Vasconcelos ECE Department, UCSD Bayesan decson theory Nuno Vasconcelos ECE Department, UCSD Bayesan decson theory recall that we have state of the world observatons decson functon L[,y] loss of predctn y wth the epected value of the

More information

Introduction to Information Theory, Data Compression,

Introduction to Information Theory, Data Compression, Introducton to Informaton Theory, Data Compresson, Codng Mehd Ibm Brahm, Laura Mnkova Aprl 5, 208 Ths s the augmented transcrpt of a lecture gven by Luc Devroye on the 3th of March 208 for a Data Structures

More information

Robert Eisberg Second edition CH 09 Multielectron atoms ground states and x-ray excitations

Robert Eisberg Second edition CH 09 Multielectron atoms ground states and x-ray excitations Quantum Physcs 量 理 Robert Esberg Second edton CH 09 Multelectron atoms ground states and x-ray exctatons 9-01 By gong through the procedure ndcated n the text, develop the tme-ndependent Schroednger equaton

More information

OPTIMAL COMBINATION OF FOURTH ORDER STATISTICS FOR NON-CIRCULAR SOURCE SEPARATION. Christophe De Luigi and Eric Moreau

OPTIMAL COMBINATION OF FOURTH ORDER STATISTICS FOR NON-CIRCULAR SOURCE SEPARATION. Christophe De Luigi and Eric Moreau OPTIMAL COMBINATION OF FOURTH ORDER STATISTICS FOR NON-CIRCULAR SOURCE SEPARATION Chrstophe De Lug and Erc Moreau Unversty of Toulon LSEET UMR CNRS 607 av. G. Pompdou BP56 F-8362 La Valette du Var Cedex

More information

Ph 219a/CS 219a. Exercises Due: Wednesday 23 October 2013

Ph 219a/CS 219a. Exercises Due: Wednesday 23 October 2013 1 Ph 219a/CS 219a Exercses Due: Wednesday 23 October 2013 1.1 How far apart are two quantum states? Consder two quantum states descrbed by densty operators ρ and ρ n an N-dmensonal Hlbert space, and consder

More information

CHAPTER 5 NUMERICAL EVALUATION OF DYNAMIC RESPONSE

CHAPTER 5 NUMERICAL EVALUATION OF DYNAMIC RESPONSE CHAPTER 5 NUMERICAL EVALUATION OF DYNAMIC RESPONSE Analytcal soluton s usually not possble when exctaton vares arbtrarly wth tme or f the system s nonlnear. Such problems can be solved by numercal tmesteppng

More information

RELIABILITY ASSESSMENT

RELIABILITY ASSESSMENT CHAPTER Rsk Analyss n Engneerng and Economcs RELIABILITY ASSESSMENT A. J. Clark School of Engneerng Department of Cvl and Envronmental Engneerng 4a CHAPMAN HALL/CRC Rsk Analyss for Engneerng Department

More information

ECE559VV Project Report

ECE559VV Project Report ECE559VV Project Report (Supplementary Notes Loc Xuan Bu I. MAX SUM-RATE SCHEDULING: THE UPLINK CASE We have seen (n the presentaton that, for downlnk (broadcast channels, the strategy maxmzng the sum-rate

More information

MIMA Group. Chapter 2 Bayesian Decision Theory. School of Computer Science and Technology, Shandong University. Xin-Shun SDU

MIMA Group. Chapter 2 Bayesian Decision Theory. School of Computer Science and Technology, Shandong University. Xin-Shun SDU Group M D L M Chapter Bayesan Decson heory Xn-Shun Xu @ SDU School of Computer Scence and echnology, Shandong Unversty Bayesan Decson heory Bayesan decson theory s a statstcal approach to data mnng/pattern

More information

Global Sensitivity. Tuesday 20 th February, 2018

Global Sensitivity. Tuesday 20 th February, 2018 Global Senstvty Tuesday 2 th February, 28 ) Local Senstvty Most senstvty analyses [] are based on local estmates of senstvty, typcally by expandng the response n a Taylor seres about some specfc values

More information

Probability Theory (revisited)

Probability Theory (revisited) Probablty Theory (revsted) Summary Probablty v.s. plausblty Random varables Smulaton of Random Experments Challenge The alarm of a shop rang. Soon afterwards, a man was seen runnng n the street, persecuted

More information

One-sided finite-difference approximations suitable for use with Richardson extrapolation

One-sided finite-difference approximations suitable for use with Richardson extrapolation Journal of Computatonal Physcs 219 (2006) 13 20 Short note One-sded fnte-dfference approxmatons sutable for use wth Rchardson extrapolaton Kumar Rahul, S.N. Bhattacharyya * Department of Mechancal Engneerng,

More information

Entropy Coding. A complete entropy codec, which is an encoder/decoder. pair, consists of the process of encoding or

Entropy Coding. A complete entropy codec, which is an encoder/decoder. pair, consists of the process of encoding or Sgnal Compresson Sgnal Compresson Entropy Codng Entropy codng s also known as zero-error codng, data compresson or lossless compresson. Entropy codng s wdely used n vrtually all popular nternatonal multmeda

More information

LINEAR REGRESSION ANALYSIS. MODULE IX Lecture Multicollinearity

LINEAR REGRESSION ANALYSIS. MODULE IX Lecture Multicollinearity LINEAR REGRESSION ANALYSIS MODULE IX Lecture - 31 Multcollnearty Dr. Shalabh Department of Mathematcs and Statstcs Indan Insttute of Technology Kanpur 6. Rdge regresson The OLSE s the best lnear unbased

More information

Supplement to Clustering with Statistical Error Control

Supplement to Clustering with Statistical Error Control Supplement to Clusterng wth Statstcal Error Control Mchael Vogt Unversty of Bonn Matthas Schmd Unversty of Bonn In ths supplement, we provde the proofs that are omtted n the paper. In partcular, we derve

More information

Lecture 3: Probability Distributions

Lecture 3: Probability Distributions Lecture 3: Probablty Dstrbutons Random Varables Let us begn by defnng a sample space as a set of outcomes from an experment. We denote ths by S. A random varable s a functon whch maps outcomes nto the

More information

Chapter 8 SCALAR QUANTIZATION

Chapter 8 SCALAR QUANTIZATION Outlne Chapter 8 SCALAR QUANTIZATION Yeuan-Kuen Lee [ CU, CSIE ] 8.1 Overvew 8. Introducton 8.4 Unform Quantzer 8.5 Adaptve Quantzaton 8.6 Nonunform Quantzaton 8.7 Entropy-Coded Quantzaton Ch 8 Scalar

More information

Refined Coding Bounds for Network Error Correction

Refined Coding Bounds for Network Error Correction Refned Codng Bounds for Network Error Correcton Shenghao Yang Department of Informaton Engneerng The Chnese Unversty of Hong Kong Shatn, N.T., Hong Kong shyang5@e.cuhk.edu.hk Raymond W. Yeung Department

More information

Markov chains. Definition of a CTMC: [2, page 381] is a continuous time, discrete value random process such that for an infinitesimal

Markov chains. Definition of a CTMC: [2, page 381] is a continuous time, discrete value random process such that for an infinitesimal Markov chans M. Veeraraghavan; March 17, 2004 [Tp: Study the MC, QT, and Lttle s law lectures together: CTMC (MC lecture), M/M/1 queue (QT lecture), Lttle s law lecture (when dervng the mean response tme

More information

Complete subgraphs in multipartite graphs

Complete subgraphs in multipartite graphs Complete subgraphs n multpartte graphs FLORIAN PFENDER Unverstät Rostock, Insttut für Mathematk D-18057 Rostock, Germany Floran.Pfender@un-rostock.de Abstract Turán s Theorem states that every graph G

More information

MATH 829: Introduction to Data Mining and Analysis The EM algorithm (part 2)

MATH 829: Introduction to Data Mining and Analysis The EM algorithm (part 2) 1/16 MATH 829: Introducton to Data Mnng and Analyss The EM algorthm (part 2) Domnque Gullot Departments of Mathematcal Scences Unversty of Delaware Aprl 20, 2016 Recall 2/16 We are gven ndependent observatons

More information