Channel optimization for binary hypothesis testing


Giancarlo Baldan    Munther Dahleh
MIT, Laboratory for Information and Decision Systems, Cambridge, MA 02139, USA

Abstract: In this paper we consider the classical binary hypothesis testing problem where the i.i.d. samples are obtained through a channel. Our goal is to study the relationship between the channel capacity and the goodness of the estimation, measured by the Chernoff information, in order to get an upper bound on the estimation performance as well as some insight into the structure of the optimal channel.

Keywords: Optimal Estimation; Hypotheses; Capacity.

1. INTRODUCTION

The binary hypothesis testing problem is probably the simplest estimation problem one can consider. In the classical setup a sequence of samples x_i is drawn from an unknown n-dimensional probability distribution which can be either p_1 (Hypothesis H_1), with prior probability π_1, or p_2 (Hypothesis H_2), with prior probability π_2. The problem is to infer from the samples which of the two hypotheses is correct or, to be precise, the most likely. This problem is very well known and is optimally solved using the likelihood ratio test (LRT), as shown, for example, in [1]. Furthermore, applying a large deviation principle, an asymptotic analysis can be performed to show that the probability of error in the estimation decays exponentially in the number of samples, with a rate given by the so-called Chernoff information.

In this paper we will consider an extension to this problem, motivated by the fact that each collected sample is always obtained through a measuring system that can affect the estimation process. To model the effects due to the measuring system we will assume that the observations at the source are available only through a finite-capacity, discrete, memoryless stochastic channel.
Our goal is to address the question of designing such a channel to maximize the goodness of the estimation, as measured by the decay rate of the probability of error, as well as obtaining a relationship between the capacity of the channel and the quality of the estimation.

2. BASIC DEFINITIONS

In this section we briefly review some basic quantities defined in information theory. These quantities will be used throughout the whole paper, and some approximations will be introduced to make their definitions more tractable.¹

The Kullback-Leibler distance is one of the most important quantities in information theory and measures the distance between two probability distributions p and q defined over the same alphabet X. The Kullback-Leibler distance is defined as:

    D(p‖q) = Σ_{x∈X} p(x) log [p(x)/q(x)],    (1)

where the logarithm will always be considered in base e. Since from definition (1) it is clear that D(p‖q) does not depend on the alphabet but just on the two distributions themselves, we will usually adopt the notation:

    D(p‖q) = Σ_{i=1}^n p_i log (p_i/q_i),    (2)

where n is the cardinality of X and we omit the dependence on the alphabet.

Another measure of the distance between two distributions p and q, closely related to the binary hypothesis testing problem, is the so-called Chernoff information, whose definition is:

    C(p,q) = D(p_{λ*}‖p) = D(p_{λ*}‖q),    (3)

where p_λ is the probability distribution with components

    p_{λ,i} = p_i^λ q_i^{1−λ} / Σ_{j=1}^n p_j^λ q_j^{1−λ},

and λ* is such that D(p_{λ*}‖p) = D(p_{λ*}‖q).

We will often consider discrete, memoryless, stochastic channels mapping the alphabet X into a finite alphabet Y whose cardinality is m. This kind of channel is completely described by a conditional probability distribution:

    W(y|x) = P(Y = y | X = x),    (4)

which can be regarded as an m×n stochastic matrix and will often be denoted just by W. To measure the capacity of such a channel we will use the standard information-theoretic definition:

    C = max_{p_x} I(X;Y),    (5)

¹ This work was supported under an NSF/EFRI grant and under AFOSR/MURI grant R6756-G.

Copyright by the International Federation of Automatic Control (IFAC)
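The two quantities just defined can be computed numerically. The sketch below (plain Python; the distributions p and q are chosen arbitrarily for illustration) evaluates D(p‖q) from (2) and finds the Chernoff information (3) by bisecting for the λ* at which D(p_λ‖p) = D(p_λ‖q), exploiting the fact that the first distance decreases and the second increases in λ.

```python
import math

def kl(p, q):
    """Kullback-Leibler distance D(p||q) in nats, eq. (2)."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def tilted(p, q, lam):
    """Geometric mixture p_lambda: components p_i^lam q_i^(1-lam), normalized."""
    w = [pi**lam * qi**(1 - lam) for pi, qi in zip(p, q)]
    s = sum(w)
    return [wi / s for wi in w]

def chernoff(p, q, tol=1e-12):
    """Chernoff information C(p,q), eq. (3): bisect for lam* such that
    D(p_lam||p) = D(p_lam||q)."""
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        lam = 0.5 * (lo + hi)
        pl = tilted(p, q, lam)
        # D(p_lam||p) decreases in lam while D(p_lam||q) increases
        if kl(pl, p) > kl(pl, q):
            lo = lam
        else:
            hi = lam
    pl = tilted(p, q, 0.5 * (lo + hi))
    return kl(pl, p)

p = [0.5, 0.3, 0.2]
q = [0.2, 0.3, 0.5]
print(chernoff(p, q))  # strictly positive and below both D(p||q) and D(q||p)
```

As a sanity check, the Chernoff information is symmetric in its arguments even though the Kullback-Leibler distance is not.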

where X is a random variable such that X ∼ p_x and Y ∼ W p_x is the corresponding random variable obtained through the channel. I(X;Y) is called the mutual information between X and Y and is defined as:

    I(X;Y) = D(p_{xy} ‖ p_x p_y) = E_x[ D(W(·|x) ‖ p_y) ].    (6)

3. PROBLEM FORMULATION

The problem we are trying to face is, essentially, an optimization one and, therefore, to provide a correct formulation we have to identify three major components: optimization variables, cost function and constraints. In this section we will define these components, trying to motivate the choices made.

3.1 Optimization variables

We model our sample source as a discrete random variable X over a finite alphabet X such that |X| = n. The mass distribution of X depends on the unknown hypothesis:

    X ∼ p_1 under H_1,    X ∼ p_2 under H_2,    (7)

where p_1, p_2 ∈ R^n and P[H_1] = π_1, P[H_2] = π_2. The channel through which we obtain the measurements is assumed to be a discrete, memoryless, stochastic channel W mapping the alphabet X into a finite alphabet Y whose cardinality is m. Both the dimension of the output alphabet m and the channel W itself will be regarded as optimization variables, thus allowing complete flexibility in the choice of the most suitable channel.

3.2 Constraints

Without any further assumption on the class of feasible channels, any optimization problem would be solved by the choice m = n and W = I, which makes the random variable X perfectly measurable, as if there were no channel at all. To make the scenario more realistic we introduce a constraint on the capacity of the channel, as measured by the usual mutual information between X and Y:

    max_{p_x} I(X;Y) ≤ C.    (8)

We made this choice because the capacity of a channel is a reasonable abstraction of its quality and is often the most critical specification for a communication system.

3.3 Cost function

Since the random process observable after the channel, y^n, is still i.i.d., with a distribution that can be either q_1 = W p_1 or q_2 = W p_2 depending on the true hypothesis, it is reasonable to measure the quality of the estimation using a standard technique for binary hypothesis testing applied to the process y^n.
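As a concrete check of definitions (5)-(6), the sketch below computes I(X;Y) as E_x[D(W(·|x)‖p_y)] and approximates the capacity with the standard Blahut-Arimoto iteration. The binary symmetric channel used at the end is just a familiar test case whose capacity, log 2 − H(ε) nats, is known in closed form; note that W is stored here row-per-input, i.e. as the transpose of the m×n matrix used in the text.

```python
import math

def mutual_information(W, px):
    """I(X;Y) = E_x[ D(W(.|x) || p_y) ], eq. (6); W[x][y] = P(Y=y|X=x)."""
    n, m = len(W), len(W[0])
    py = [sum(px[x] * W[x][y] for x in range(n)) for y in range(m)]
    return sum(px[x] * W[x][y] * math.log(W[x][y] / py[y])
               for x in range(n) for y in range(m)
               if px[x] > 0 and W[x][y] > 0)

def capacity(W, iters=2000):
    """Blahut-Arimoto iteration for C = max_{p_x} I(X;Y), eq. (5)."""
    n, m = len(W), len(W[0])
    px = [1.0 / n] * n
    for _ in range(iters):
        py = [sum(px[x] * W[x][y] for x in range(n)) for y in range(m)]
        # multiplicative update: px[x] is scaled by exp(D(W(.|x) || p_y))
        d = [math.exp(sum(W[x][y] * math.log(W[x][y] / py[y])
                          for y in range(m) if W[x][y] > 0))
             for x in range(n)]
        s = sum(px[x] * d[x] for x in range(n))
        px = [px[x] * d[x] / s for x in range(n)]
    return mutual_information(W, px)

# binary symmetric channel, crossover 0.1: capacity = log 2 - H(0.1) nats
bsc = [[0.9, 0.1], [0.1, 0.9]]
h = -(0.9 * math.log(0.9) + 0.1 * math.log(0.1))
print(capacity(bsc), math.log(2) - h)
```

For the symmetric channel the uniform input is already optimal, so the iteration is exact immediately; for a general channel it converges to the capacity-achieving input distribution.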
We chose to optimize the asymptotic performance of the system in terms of the probability of error. Specifically, it is well known that for a binary hypothesis testing problem there exists a sequence of optimal estimators Ĥ_n : Y^n → {1,2}, designed using a log-likelihood ratio, that minimize the probability of error given n samples:

    P_e(n) = P(Ĥ_n(y_1,...,y_n) = 2 | H = H_1) π_1 + P(Ĥ_n(y_1,...,y_n) = 1 | H = H_2) π_2.

Moreover, it has been shown that P_e(n) decays exponentially with n at a rate given by the Chernoff information C(q_1,q_2), that is:

    lim_{n→∞} −(1/n) log P_e(n) = C(q_1,q_2).

Our goal is then to maximize the Chernoff information C(q_1,q_2) = C(Wp_1, Wp_2) in order for the probability of error to decay as fast as possible. The complete formulation of the optimization problem can be written as:

    max_{W,m}  C(Wp_1, Wp_2)
    s.t.  max_{p_X} I(X;Y) ≤ C,
          1^T W = 1^T,
          W_{i,j} ≥ 0.    (9)

In Section 5 we will assume that the capacity C is small enough that the cost function and the constraints in (9) can be approximated by more tractable expressions, thus leading to an approximating optimization problem valid for small capacities. By explicitly solving this problem we will gain some insight regarding the structure of the solutions of (9) in the small-C regime. In the next section we will introduce some basic tools useful to perform the required approximations.

4. EUCLIDEAN APPROXIMATIONS

In this section we present some approximations to the quantities defined in Section 2. To obtain these approximations we follow the idea known as Euclidean information theory, presented in detail in [2]. We start by considering the simple Taylor expansion

    log(1+x) = x − x²/2 + φ(x),

where φ(x) = o(x²) as x tends to 0. Applying this expansion to the definition of the Kullback-Leibler distance (2) we get

    D(p‖q) = −Σ_{i=1}^n p_i log(1 + (q_i−p_i)/p_i)
           = −Σ_{i=1}^n (q_i−p_i) + ½ Σ_{i=1}^n (q_i−p_i)²/p_i − Σ_{i=1}^n p_i φ((q_i−p_i)/p_i)
           = ½ ‖p−q‖²_[p] − Σ_{i=1}^n p_i φ((q_i−p_i)/p_i),    (10)

since Σ_i (q_i−p_i) = 0, where [p] is a diagonal matrix whose diagonal elements are given by p_i, i = 1,...,n, and ‖v‖²_[p] denotes the weighted squared norm v^T [p]^{−1} v = Σ_i v_i²/p_i. We can simplify the expression in (10) by noticing that the last summation is an infinitesimal of a superior order
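The exponential decay of P_e(n) at rate C(q_1,q_2) can be observed directly when the output alphabet is binary, since the Bayes error of the likelihood-ratio test then reduces to a sum over the number of ones in the sample. In the sketch below (output distributions and the uniform priors are chosen arbitrarily for illustration) P_e(n) is computed exactly and −(1/n) log P_e(n) is compared with the Chernoff information, here evaluated through the equivalent expression C = −min_λ log Σ_y q_1(y)^λ q_2(y)^{1−λ}.

```python
import math

q1, q2 = 0.8, 0.3        # P(y = 1) under H1 and under H2 (binary output)
pi1 = pi2 = 0.5          # uniform priors

def chernoff_binary(a, b, steps=10000):
    """C = -min over lambda of log( a^l b^(1-l) + (1-a)^l (1-b)^(1-l) )."""
    return -min(math.log(a**l * b**(1 - l) + (1 - a)**l * (1 - b)**(1 - l))
                for l in (i / steps for i in range(1, steps)))

def perror(n):
    """Exact Bayes error of the optimal (likelihood ratio) test on n samples:
    group the 2^n sequences by their number of ones k."""
    return sum(math.comb(n, k) * min(pi1 * q1**k * (1 - q1)**(n - k),
                                     pi2 * q2**k * (1 - q2)**(n - k))
               for k in range(n + 1))

rates = [-math.log(perror(n)) / n for n in (20, 80, 320)]
print(rates)   # -(1/n) log P_e(n) tends to the Chernoff information
```

The finite-n rates sit above the Chernoff information (the bound P_e(n) ≤ ½ e^{−nC} holds for every n) and approach it as n grows, with the usual polynomial prefactor slowing the convergence.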

with respect to ‖p−q‖²_[p], as proved by the following inequality: letting j be the index for which p_i |φ((q_i−p_i)/p_i)| is maximum (j can be regarded as a function of p),

    |Σ_{i=1}^n p_i φ((q_i−p_i)/p_i)| / ‖p−q‖²_[p]
        ≤ n p_j |φ((q_j−p_j)/p_j)| / ((q_j−p_j)²/p_j)
        = n p_j² |φ((q_j−p_j)/p_j)| / (q_j−p_j)² → 0 as p → q,

since φ(x) = o(x²). Furthermore, the quantities ½‖p−q‖²_[p] and ½‖p−q‖²_[q] are infinitesimals of the same order, since from the inequalities²

    min_i (q_i/p_i) ≤ ‖p−q‖²_[p] / ‖p−q‖²_[q] ≤ max_i (q_i/p_i)

it follows that

    lim_{p→q} ‖p−q‖²_[p] / ‖p−q‖²_[q] = 1,    (11)

simply by applying the squeeze theorem. By virtue of the last two observations, the expression in (10) can finally be written as

    D(p‖q) = ½ ‖p−q‖²_[q] + o(‖p−q‖²_[q]),    (12)

and we can now use this expression to approximate both the definition of capacity (5) and the Chernoff information.

Regarding the Chernoff information, we can approximate it with an easier Kullback-Leibler distance, as stated in the following proposition.

Proposition 1. If two probability distributions p and q, defined on the same alphabet X, are close enough, then the following approximation holds:

    C(p,q) ≈ ¼ D(p‖q).

More formally, we have:

    lim_{p→q} C(p,q) / D(p‖q) = 1/4.

Proof: See Appendix A.

Regarding the definition of channel capacity (5), it is well known (see [3]) that if p* is the optimal input distribution achieving the capacity and p_0 is the corresponding output distribution, we have D(W_i‖p_0) = C for all i with p*_i > 0 and D(W_i‖p_0) < C for all i with p*_i = 0, where W_i denotes the conditional output distribution W(·|i). By virtue of this consideration, under the assumption of a small C all the conditional distributions W_i will be close to p_0, and the distances are well approximated by the expression (12), thus obtaining:

    ½ ‖W_i − p_0‖²_[p_0] ≤ C,  i = 1,...,n.    (13)

It is easy to see that the converse is true as well: if we fix a point p_0 in the simplex and choose n probability vectors W_i satisfying the constraints (13), the resulting channel will have a capacity less than C. Therefore conditions (13) are an alternative formulation of the channel capacity constraint (8), and their only disadvantage is that they require a new arbitrary probability vector p_0.

² For results on bounding a ratio of two quadratic forms we refer to [4].

5. NOISY CHANNEL SOLUTION

With the term noisy channel we mean a channel whose capacity C is small.
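A quick numerical sanity check of (12) and of Proposition 1, for a pair of nearby distributions chosen arbitrarily:

```python
import math

def kl(p, q):
    """Kullback-Leibler distance D(p||q), eq. (2)."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def chernoff(p, q):
    """Chernoff information via bisection for lam* with D(p_lam||p)=D(p_lam||q)."""
    lo, hi = 0.0, 1.0
    for _ in range(200):
        lam = 0.5 * (lo + hi)
        w = [pi**lam * qi**(1 - lam) for pi, qi in zip(p, q)]
        pl = [wi / sum(w) for wi in w]
        lo, hi = (lam, hi) if kl(pl, p) > kl(pl, q) else (lo, lam)
    return kl(pl, p)

q = [0.4, 0.35, 0.25]
eps = 1e-3
p = [q[0] + eps, q[1] - eps, q[2]]          # p close to q

half_sq = 0.5 * sum((pi - qi)**2 / qi for pi, qi in zip(p, q))
print(kl(p, q) / half_sq)         # close to 1   (eq. (12))
print(chernoff(p, q) / kl(p, q))  # close to 1/4 (Proposition 1)
```

Shrinking eps drives both ratios toward their limits, consistent with the o(·) terms above.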
In this section we aim to approximate, under the assumption C ≪ 1, the general problem (9) with a more tractable optimization problem, whose solution can be computed explicitly and will allow us to understand the behavior of (9) in the noisy channel regime.

If C ≪ 1 we can take advantage of the constraints (13), since they imply that Wp_1 and Wp_2 are close no matter what p_1 and p_2 are. If Wp_1 and Wp_2 are close, by virtue of Proposition 1 we can use the approximation:

    C(Wp_1, Wp_2) ≈ ¼ D(Wp_1 ‖ Wp_2),

and therefore maximizing the Chernoff information turns out to be equivalent to maximizing the Kullback-Leibler distance D(Wp_1‖Wp_2). Finally, using again equation (12), we can approximate the Chernoff information via a Euclidean distance:

    C(Wp_1, Wp_2) ≈ ¼ D(Wp_1 ‖ Wp_2) ≈ (1/8) ‖W(p_1−p_2)‖²_[p_0].    (14)

Using the approximation (13) for the capacity constraint and the result in (14), the original problem (9) can be approximated by:

    max_{W,m,p_0}  (1/8) ‖W(p_1−p_2)‖²_[p_0]
    s.t.  ½ ‖W_i − p_0‖²_[p_0] ≤ C,  i = 1,...,n,
          1^T W = 1^T,
          W_{i,j} ≥ 0,    (15)

and the advantage of this formulation is that it leads to an analytical solution, as stated in the following proposition.

Proposition 2. Choose arbitrarily m and p_0 in the m-dimensional simplex, and then consider an arbitrary probability vector w_A such that ½‖w_A − p_0‖²_[p_0] = C, as well as the only other vector w_B at the same distance from p_0 and opposite to w_A with respect to p_0, that is, w_B = 2p_0 − w_A. Next consider the following channel:

    W_i = w_A if p_{1,i} ≥ p_{2,i},    W_i = w_B if p_{1,i} < p_{2,i},    i = 1,...,n;    (16)

then channel (16) is an optimal solution of (15) and the associated optimal cost is

    ¼ C ‖p_1 − p_2‖₁².    (17)

Proof: Let us consider m and p_0 fixed. We will prove the statement by showing first that the expression (17) is an upper bound on the optimal value, and then that W achieves that bound. To bound the cost we will use the fact that p_1 − p_2 adds up to zero and therefore, if A is a matrix with all columns equal to each other, then A(p_1−p_2) = 0. Formally we obtain:

    ‖W(p_1−p_2)‖_[p_0] = ‖(W − p_0 1^T)(p_1−p_2)‖_[p_0]
        ≤ Σ_{i=1}^n |p_{1,i} − p_{2,i}| ‖W_i − p_0‖_[p_0]
        ≤ √(2C) ‖p_1 − p_2‖₁,

which is equivalent to

    (1/8) ‖W(p_1−p_2)‖²_[p_0] ≤ ¼ C ‖p_1 − p_2‖₁².

To prove that W achieves this bound, let us start by defining the quantity

    α = Σ_{i: p_{1,i} ≥ p_{2,i}} (p_{1,i} − p_{2,i})

and noticing that, since p_1 − p_2 adds up to zero, we also have:

    α = − Σ_{i: p_{1,i} < p_{2,i}} (p_{1,i} − p_{2,i}),    α = ½ ‖p_1 − p_2‖₁.

Now, with some algebra, we get:

    (1/8) ‖W(p_1−p_2)‖²_[p_0]
        = (1/8) ‖ Σ_{i: p_{1,i} ≥ p_{2,i}} w_A (p_{1,i}−p_{2,i}) + Σ_{i: p_{1,i} < p_{2,i}} w_B (p_{1,i}−p_{2,i}) ‖²_[p_0]
        = (1/8) ‖α w_A − α w_B‖²_[p_0]
        = (1/8) ‖2α (w_A − p_0)‖²_[p_0]
        = α² C = ¼ C ‖p_1 − p_2‖₁².  ∎

Remarkably, the optimal value obtained considering m and p_0 fixed turned out to be completely independent of m and p_0; therefore, problem (15) is solved by a triple (W, m, p_0) where m and p_0 can be chosen arbitrarily, provided that the definition of W in (16) yields a well-defined stochastic matrix.³ A graphical depiction of w_A and w_B, used to construct the optimal channel, is reported in Figure 1; we point out that, since C is considered small, it is always possible to determine such a pair of vectors inside the simplex.

The result just proven shows that for small capacity the behavior of the Chernoff bound is linear in C and proportional to the squared l₁ distance between the two hypotheses. In the next section we will present some observations, based just on simulations, regarding the behavior for larger C.

³ Namely, p_0 must be chosen far from the simplex borders so that w_A and w_B fall inside the simplex.

Fig. 1. Position of w_A and w_B in the simplex with respect to p_0.

6. LARGE CAPACITY BEHAVIOR

As the capacity increases, problem (15) no longer approximates the original optimization problem (9). In the general case, finding an analytical solution to (9) is unrealistic, but we can still make some remarks. In this section we will point out some of these interesting features and present a numerical result.

For each m, the solution of (9) as a function of C is monotone increasing and the optimal channel always has capacity exactly C. This is true because the cost function can be shown to be convex and W belongs to a convex set, by virtue of the convexity of I(X;Y) with respect to the channel.
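Proposition 2 is easy to verify numerically. The sketch below (n, m, p_1, p_2, p_0, the capacity budget and the direction used to place w_A are all chosen arbitrarily for illustration) builds the channel (16) and checks that the cost of (15) equals ¼ C ‖p_1 − p_2‖₁², as in (17).

```python
import math

n, m = 4, 3
p1 = [0.4, 0.3, 0.2, 0.1]
p2 = [0.1, 0.2, 0.3, 0.4]
p0 = [1/3, 1/3, 1/3]          # interior point of the output simplex
C = 1e-4                      # small capacity budget

# place w_A on the ellipsoid (1/2)||w - p0||^2_[p0] = C along a zero-sum
# direction d, so that w_A still sums to one
d = [1.0, -0.5, -0.5]
scale = math.sqrt(2 * C / sum(di**2 / p0i for di, p0i in zip(d, p0)))
wA = [p0i + scale * di for p0i, di in zip(p0, d)]
wB = [2 * p0i - wi for p0i, wi in zip(p0, wA)]   # reflection of w_A through p0

# channel (16): conditional output distribution is w_A where p1_i >= p2_i,
# w_B otherwise (stored row-per-input: W[x][y] = P(Y=y|X=x))
W = [[wA[y] if p1[x] >= p2[x] else wB[y] for y in range(m)] for x in range(n)]

diff = [sum(W[x][y] * (p1[x] - p2[x]) for x in range(n)) for y in range(m)]
cost = sum(dy**2 / p0y for dy, p0y in zip(diff, p0)) / 8   # (1/8)||W(p1-p2)||^2_[p0]
l1 = sum(abs(a - b) for a, b in zip(p1, p2))
print(cost, C * l1**2 / 4)    # the two values coincide, eq. (17)
```

Changing m, p_0 or the direction d (while keeping w_A and w_B inside the simplex) leaves the optimal cost unchanged, which is exactly the independence from m and p_0 noted above.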
For each m, the performance does not improve for C ≥ log m, because the maximum capacity achievable with an m-dimensional output alphabet is always at most log m. Moreover, if m = n, then for C ≥ log n we obtain exactly the Chernoff information, since among the feasible channels there is the identity channel I, which allows us to measure the samples directly from the source. Finally, the curves obtained for m > n seem to be identical to the one obtained for m = n.

Interestingly, for some choices of the hypotheses p_1 and p_2, the Chernoff information is reached (with m = n) before the limit C = log n.

In Figure 2 we show the solution of problem (9), where we kept m as a parameter. Only two different values of m have been taken into account, but it is still possible to observe some of the behaviors just pointed out.

7. CONCLUSIONS AND FUTURE WORK

In this paper we considered a modified version of the binary hypothesis testing problem where the samples are measured through a channel. We looked for the best possible channel among those with a limited capacity, and we showed that, if the channel has a small capacity, this optimization problem can be approximated by a quadratic one. The optimal solution for the approximating problem achieves an error exponent given by ¼ C ‖p_1 − p_2‖₁², where C is the capacity of the channel while p_1 and p_2 are the two hypotheses. In the small-C regime we were also able to provide an explicit formula for the optimal channel. It is not yet formally proved, although clearly supported by simulations, that the optimal solution of the

Fig. 2. Solution of problem (9) with n = 3, p_1 = [...] and p_2 = [...]: error exponent vs. channel capacity (nats) for m = 2 and m = 3, together with the small-capacity result and the Chernoff information C(p_1,p_2).

approximating problem converges to the solution of the original problem as C tends to 0. We are currently working on some generalizations to the m-ary case as well as to some non-i.i.d. models like hidden Markov models.

8. ACKNOWLEDGMENT

The authors wish to thank Mesrob I. Ohannessian for his suggestions and help in proving Proposition 1.

REFERENCES

[1] T. M. Cover, J. A. Thomas. Elements of Information Theory. Wiley-Interscience, 1991.
[2] S. Borade, L. Zheng. Euclidean Information Theory. 2008 IEEE International Zurich Seminar on Communications, 2008.
[3] R. Gallager. Information Theory and Reliable Communication. Wiley, 1968.
[4] F. Caliskan, C. Hajiyev. Sensor fault detection in flight control systems based on the Kalman filter innovation sequence. Proceedings of the Institution of Mechanical Engineers, Part I: Journal of Systems and Control Engineering, volume 213, issue 3, pages 143-148, 1999.
[5] A. Bhattacharyya. On a measure of divergence between two statistical populations defined by their probability distributions. Bulletin of the Calcutta Mathematical Society, volume 35, pages 99-109, 1943.

Appendix A. PROOF OF PROPOSITION 1

We have to prove that:

    lim_{p→q} C(p,q) / D(p‖q) = 1/4.

First of all we want to point out that, if some of the components of q are equal to zero, then D(p‖q) is not defined unless the same components of p are zero as well, and the limit p → q is taken over the subspace in which p_i = 0 whenever q_i = 0. For this reason, without loss of generality, we can restrict our analysis to the case q_i > 0.

Since the definition of the Chernoff information in (3) is not in closed form, in this section we will provide an explicit expression to approximate it when p and q are close enough. In order to easily deal with equation (3), let us introduce some notation conventions:

    D_q(λ) = D(p_λ‖q),    D_p(λ) = D(p_λ‖p),
    D̂_q(λ) = ½ ‖p_λ − q‖²_[q],    D̂_p(λ) = ½ ‖p_λ − p‖²_[p].
In [1] it is shown that the function D_p(λ) is monotone decreasing in λ ∈ [0,1] while D_q(λ) is increasing in the same interval; moreover, there exists a unique λ* ∈ [0,1] such that D_p(λ*) = D_q(λ*). It is also easy to show that D̂_p(λ) is monotone decreasing and D̂_q(λ) is monotone increasing in [0,1], and that the unique value of λ satisfying the equation D̂_p(λ) = D̂_q(λ) is λ = 1/2. In fact, if we denote by φ = Σ_{i=1}^n √(p_i q_i) the Bhattacharyya coefficient, so that p_{1/2,i} = √(p_i q_i)/φ, we have:

    D̂_q(1/2) = ½ Σ_{i=1}^n ( √(p_i q_i)/φ − q_i )² / q_i
              = ½ Σ_{i=1}^n ( p_i/φ² − 2√(p_i q_i)/φ + q_i )
              = ½ ( 1/φ² − 1 ),    (A.1)

which can be shown, with the same argument, to be equal to D̂_p(1/2).

We will now show that the expression just found in (A.1) can be regarded as an approximation of the Chernoff information whose distance from the latter is an infinitesimal of a superior order with respect to ‖p−q‖²_[q]. Let us start by examining the difference D_q − D̂_q which, using the uniform bound ‖p_λ − q‖_[q] ≤ ‖p−q‖_[q], valid for every λ ∈ [0,1], and by virtue of equation (12), turns out to be small as p → q:

    D_q(λ) − D̂_q(λ) = δ_q(λ) = o(‖p_λ − q‖²_[q]) = o(‖p−q‖²_[q]),  uniformly in λ.    (A.2)

Using the same argument and the result in (11), we can derive a similar result for the difference D_p − D̂_p:
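The identity (A.1) is a finite computation and can be checked directly; in the sketch below the distributions p and q are chosen arbitrarily.

```python
import math

p = [0.5, 0.3, 0.2]
q = [0.25, 0.25, 0.5]

# Bhattacharyya coefficient phi = sum_i sqrt(p_i q_i)
phi = sum(math.sqrt(pi * qi) for pi, qi in zip(p, q))

# midpoint distribution p_{1/2}, components sqrt(p_i q_i)/phi
p_half = [math.sqrt(pi * qi) / phi for pi, qi in zip(p, q)]

# D^_q(1/2), D^_p(1/2) and the closed form (1/2)(1/phi^2 - 1) from (A.1)
Dq_hat = 0.5 * sum((a - qi)**2 / qi for a, qi in zip(p_half, q))
Dp_hat = 0.5 * sum((a - pi)**2 / pi for a, pi in zip(p_half, p))
closed = 0.5 * (1 / phi**2 - 1)
print(Dq_hat, Dp_hat, closed)   # all three coincide
```

Since the identity is algebraic, the three values agree up to floating-point rounding for any pair of strictly positive distributions.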

    D_p(λ) − D̂_p(λ) = δ_p(λ) = o(‖p_λ − p‖²_[p]) = o(‖q−p‖²_[p]) = o(‖p−q‖²_[q]),  uniformly in λ.    (A.3)

If we now introduce the two functions

    f(λ) = D_p(λ) − D_q(λ),    f̂(λ) = D̂_p(λ) − D̂_q(λ),

keeping in mind that f̂(1/2) = 0, we can obtain the following bound on f(1/2):

    |f(1/2)| = |f(1/2) − f̂(1/2)|
             = |D_p(1/2) − D̂_p(1/2) + D̂_q(1/2) − D_q(1/2)|
             ≤ |δ_p(1/2)| + |δ_q(1/2)|.    (A.4)

Using the fact that |dD_q/dλ| ≤ |df/dλ| on [0,1] (D_p is decreasing and D_q increasing, so |f′| = |D_p′ − D_q′| ≥ |D_q′|), together with f(λ*) = 0 and the results in (A.2), (A.3) and (A.4), we can now show that the distance between D̂_q(1/2) and the Chernoff information C(p,q) = D_q(λ*) is small as p → q:

    |D̂_q(1/2) − D_q(λ*)| ≤ |D_q(1/2) − D_q(λ*)| + |δ_q(1/2)|
        ≤ |f(1/2) − f(λ*)| + |δ_q(1/2)|
        = |f(1/2)| + |δ_q(1/2)|
        ≤ |δ_p(1/2)| + 2|δ_q(1/2)| = o(‖p−q‖²_[q]).    (A.5)

The result found in (A.5) allows us to write the Chernoff information in an explicit form suitable to our purposes; more precisely:

    C(p,q) = ½ (1/φ² − 1) + o(‖p−q‖²_[q]) = Ĉ(p,q) + o(‖p−q‖²_[q]).    (A.6)

In order to compute the limit of C(p,q)/D(p‖q) on the n-dimensional simplex, let us first reduce the dimension to an (n−1)-dimensional space where we get rid of the constraint Σ_i p_i = 1. In this lower-dimensional space the approximate expression for the Kullback-Leibler distance becomes:

    ½ ‖p−q‖²_[q] = ½ [ Σ_{i=1}^{n−1} (p_i−q_i)²/q_i + ( Σ_{i=1}^{n−1} (p_i−q_i) )² / q_n ]
        = ½ (p̄−q̄)^T ( [q̄]^{−1} + (1/q_n) 1 1^T ) (p̄−q̄)
        = ½ (p̄−q̄)^T M_q (p̄−q̄),    (A.7)

where p̄ and q̄ are the (n−1)-dimensional vectors equal to the first n−1 elements of the vectors p and q, while 1 is the (n−1)-dimensional vector of all ones. The approximate expression for the Chernoff information becomes:

    Ĉ(p,q) = ½ (1/φ² − 1),  with  φ = Σ_{i=1}^{n−1} √(p_i q_i) + √(p_n q_n),  p_n = 1 − Σ_{i=1}^{n−1} p_i,
            =: F(p̄, q̄).    (A.8)

Before considering the limit, let us compute a Taylor expansion of the function F around p̄ = q̄. After some straightforward computations we obtain:

    F |_{p̄=q̄} = 0,
    ∂F/∂p_i |_{p̄=q̄} = 0,    i = 1,...,n−1,
    ∂²F/∂p_i² |_{p̄=q̄} = ¼ (1/q_i + 1/q_n),    i = 1,...,n−1,
    ∂²F/∂p_i ∂p_j |_{p̄=q̄} = ¼ (1/q_n),    i ≠ j,

therefore we have:

    Ĉ(p,q) = (1/2!) · ¼ (p̄−q̄)^T ( [q̄]^{−1} + (1/q_n) 1 1^T ) (p̄−q̄) + o(‖p̄−q̄‖²)
           = (1/8) (p̄−q̄)^T M_q (p̄−q̄) + o(‖p̄−q̄‖²).

Collecting the results obtained so far, the limit we want to compute is trivial:

    lim_{p→q} C(p,q) / D(p‖q) = lim_{p̄→q̄} Ĉ(p,q) / (½ ‖p−q‖²_[q])
        = lim_{p̄→q̄} [ (1/8) (p̄−q̄)^T M_q (p̄−q̄) + o(‖p̄−q̄‖²) ] / [ ½ (p̄−q̄)^T M_q (p̄−q̄) ]
        = 1/4.  ∎


Dr. Shalabh Department of Mathematics and Statistics Indian Institute of Technology Kanpur Analyss of Varance and Desgn of Exerments-I MODULE III LECTURE - 2 EXPERIMENTAL DESIGN MODELS Dr. Shalabh Deartment of Mathematcs and Statstcs Indan Insttute of Technology Kanur 2 We consder the models

More information

Maximizing the number of nonnegative subsets

Maximizing the number of nonnegative subsets Maxmzng the number of nonnegatve subsets Noga Alon Hao Huang December 1, 213 Abstract Gven a set of n real numbers, f the sum of elements of every subset of sze larger than k s negatve, what s the maxmum

More information

Lectures - Week 4 Matrix norms, Conditioning, Vector Spaces, Linear Independence, Spanning sets and Basis, Null space and Range of a Matrix

Lectures - Week 4 Matrix norms, Conditioning, Vector Spaces, Linear Independence, Spanning sets and Basis, Null space and Range of a Matrix Lectures - Week 4 Matrx norms, Condtonng, Vector Spaces, Lnear Independence, Spannng sets and Bass, Null space and Range of a Matrx Matrx Norms Now we turn to assocatng a number to each matrx. We could

More information

Hidden Markov Models & The Multivariate Gaussian (10/26/04)

Hidden Markov Models & The Multivariate Gaussian (10/26/04) CS281A/Stat241A: Statstcal Learnng Theory Hdden Markov Models & The Multvarate Gaussan (10/26/04) Lecturer: Mchael I. Jordan Scrbes: Jonathan W. Hu 1 Hdden Markov Models As a bref revew, hdden Markov models

More information

princeton univ. F 17 cos 521: Advanced Algorithm Design Lecture 7: LP Duality Lecturer: Matt Weinberg

princeton univ. F 17 cos 521: Advanced Algorithm Design Lecture 7: LP Duality Lecturer: Matt Weinberg prnceton unv. F 17 cos 521: Advanced Algorthm Desgn Lecture 7: LP Dualty Lecturer: Matt Wenberg Scrbe: LP Dualty s an extremely useful tool for analyzng structural propertes of lnear programs. Whle there

More information

Vapnik-Chervonenkis theory

Vapnik-Chervonenkis theory Vapnk-Chervonenks theory Rs Kondor June 13, 2008 For the purposes of ths lecture, we restrct ourselves to the bnary supervsed batch learnng settng. We assume that we have an nput space X, and an unknown

More information

MASSACHUSETTS INSTITUTE OF TECHNOLOGY 6.265/15.070J Fall 2013 Lecture 12 10/21/2013. Martingale Concentration Inequalities and Applications

MASSACHUSETTS INSTITUTE OF TECHNOLOGY 6.265/15.070J Fall 2013 Lecture 12 10/21/2013. Martingale Concentration Inequalities and Applications MASSACHUSETTS INSTITUTE OF TECHNOLOGY 6.65/15.070J Fall 013 Lecture 1 10/1/013 Martngale Concentraton Inequaltes and Applcatons Content. 1. Exponental concentraton for martngales wth bounded ncrements.

More information

NUMERICAL DIFFERENTIATION

NUMERICAL DIFFERENTIATION NUMERICAL DIFFERENTIATION 1 Introducton Dfferentaton s a method to compute the rate at whch a dependent output y changes wth respect to the change n the ndependent nput x. Ths rate of change s called the

More information

Learning Theory: Lecture Notes

Learning Theory: Lecture Notes Learnng Theory: Lecture Notes Lecturer: Kamalka Chaudhur Scrbe: Qush Wang October 27, 2012 1 The Agnostc PAC Model Recall that one of the constrants of the PAC model s that the data dstrbuton has to be

More information

ECE559VV Project Report

ECE559VV Project Report ECE559VV Project Report (Supplementary Notes Loc Xuan Bu I. MAX SUM-RATE SCHEDULING: THE UPLINK CASE We have seen (n the presentaton that, for downlnk (broadcast channels, the strategy maxmzng the sum-rate

More information

Perfect Competition and the Nash Bargaining Solution

Perfect Competition and the Nash Bargaining Solution Perfect Competton and the Nash Barganng Soluton Renhard John Department of Economcs Unversty of Bonn Adenauerallee 24-42 53113 Bonn, Germany emal: rohn@un-bonn.de May 2005 Abstract For a lnear exchange

More information

THE ARIMOTO-BLAHUT ALGORITHM FOR COMPUTATION OF CHANNEL CAPACITY. William A. Pearlman. References: S. Arimoto - IEEE Trans. Inform. Thy., Jan.

THE ARIMOTO-BLAHUT ALGORITHM FOR COMPUTATION OF CHANNEL CAPACITY. William A. Pearlman. References: S. Arimoto - IEEE Trans. Inform. Thy., Jan. THE ARIMOTO-BLAHUT ALGORITHM FOR COMPUTATION OF CHANNEL CAPACITY Wllam A. Pearlman 2002 References: S. Armoto - IEEE Trans. Inform. Thy., Jan. 1972 R. Blahut - IEEE Trans. Inform. Thy., July 1972 Recall

More information

Generalized Linear Methods

Generalized Linear Methods Generalzed Lnear Methods 1 Introducton In the Ensemble Methods the general dea s that usng a combnaton of several weak learner one could make a better learner. More formally, assume that we have a set

More information

Lecture 2: Prelude to the big shrink

Lecture 2: Prelude to the big shrink Lecture 2: Prelude to the bg shrnk Last tme A slght detour wth vsualzaton tools (hey, t was the frst day... why not start out wth somethng pretty to look at?) Then, we consdered a smple 120a-style regresson

More information

4 Analysis of Variance (ANOVA) 5 ANOVA. 5.1 Introduction. 5.2 Fixed Effects ANOVA

4 Analysis of Variance (ANOVA) 5 ANOVA. 5.1 Introduction. 5.2 Fixed Effects ANOVA 4 Analyss of Varance (ANOVA) 5 ANOVA 51 Introducton ANOVA ANOVA s a way to estmate and test the means of multple populatons We wll start wth one-way ANOVA If the populatons ncluded n the study are selected

More information

Lecture Notes on Linear Regression

Lecture Notes on Linear Regression Lecture Notes on Lnear Regresson Feng L fl@sdueducn Shandong Unversty, Chna Lnear Regresson Problem In regresson problem, we am at predct a contnuous target value gven an nput feature vector We assume

More information

Stat260: Bayesian Modeling and Inference Lecture Date: February 22, Reference Priors

Stat260: Bayesian Modeling and Inference Lecture Date: February 22, Reference Priors Stat60: Bayesan Modelng and Inference Lecture Date: February, 00 Reference Prors Lecturer: Mchael I. Jordan Scrbe: Steven Troxler and Wayne Lee In ths lecture, we assume that θ R; n hgher-dmensons, reference

More information

Basically, if you have a dummy dependent variable you will be estimating a probability.

Basically, if you have a dummy dependent variable you will be estimating a probability. ECON 497: Lecture Notes 13 Page 1 of 1 Metropoltan State Unversty ECON 497: Research and Forecastng Lecture Notes 13 Dummy Dependent Varable Technques Studenmund Chapter 13 Bascally, f you have a dummy

More information

ECONOMICS 351*-A Mid-Term Exam -- Fall Term 2000 Page 1 of 13 pages. QUEEN'S UNIVERSITY AT KINGSTON Department of Economics

ECONOMICS 351*-A Mid-Term Exam -- Fall Term 2000 Page 1 of 13 pages. QUEEN'S UNIVERSITY AT KINGSTON Department of Economics ECOOMICS 35*-A Md-Term Exam -- Fall Term 000 Page of 3 pages QUEE'S UIVERSITY AT KIGSTO Department of Economcs ECOOMICS 35* - Secton A Introductory Econometrcs Fall Term 000 MID-TERM EAM ASWERS MG Abbott

More information

Time-Varying Systems and Computations Lecture 6

Time-Varying Systems and Computations Lecture 6 Tme-Varyng Systems and Computatons Lecture 6 Klaus Depold 14. Januar 2014 The Kalman Flter The Kalman estmaton flter attempts to estmate the actual state of an unknown dscrete dynamcal system, gven nosy

More information

Notes on Frequency Estimation in Data Streams

Notes on Frequency Estimation in Data Streams Notes on Frequency Estmaton n Data Streams In (one of) the data streamng model(s), the data s a sequence of arrvals a 1, a 2,..., a m of the form a j = (, v) where s the dentty of the tem and belongs to

More information

Lecture 4 Hypothesis Testing

Lecture 4 Hypothesis Testing Lecture 4 Hypothess Testng We may wsh to test pror hypotheses about the coeffcents we estmate. We can use the estmates to test whether the data rejects our hypothess. An example mght be that we wsh to

More information

Affine transformations and convexity

Affine transformations and convexity Affne transformatons and convexty The purpose of ths document s to prove some basc propertes of affne transformatons nvolvng convex sets. Here are a few onlne references for background nformaton: http://math.ucr.edu/

More information

Solutions to exam in SF1811 Optimization, Jan 14, 2015

Solutions to exam in SF1811 Optimization, Jan 14, 2015 Solutons to exam n SF8 Optmzaton, Jan 4, 25 3 3 O------O -4 \ / \ / The network: \/ where all lnks go from left to rght. /\ / \ / \ 6 O------O -5 2 4.(a) Let x = ( x 3, x 4, x 23, x 24 ) T, where the varable

More information

Salmon: Lectures on partial differential equations. Consider the general linear, second-order PDE in the form. ,x 2

Salmon: Lectures on partial differential equations. Consider the general linear, second-order PDE in the form. ,x 2 Salmon: Lectures on partal dfferental equatons 5. Classfcaton of second-order equatons There are general methods for classfyng hgher-order partal dfferental equatons. One s very general (applyng even to

More information

Chapter 11: Simple Linear Regression and Correlation

Chapter 11: Simple Linear Regression and Correlation Chapter 11: Smple Lnear Regresson and Correlaton 11-1 Emprcal Models 11-2 Smple Lnear Regresson 11-3 Propertes of the Least Squares Estmators 11-4 Hypothess Test n Smple Lnear Regresson 11-4.1 Use of t-tests

More information

3.1 ML and Empirical Distribution

3.1 ML and Empirical Distribution 67577 Intro. to Machne Learnng Fall semester, 2008/9 Lecture 3: Maxmum Lkelhood/ Maxmum Entropy Dualty Lecturer: Amnon Shashua Scrbe: Amnon Shashua 1 In the prevous lecture we defned the prncple of Maxmum

More information

The Geometry of Logit and Probit

The Geometry of Logit and Probit The Geometry of Logt and Probt Ths short note s meant as a supplement to Chapters and 3 of Spatal Models of Parlamentary Votng and the notaton and reference to fgures n the text below s to those two chapters.

More information

EEE 241: Linear Systems

EEE 241: Linear Systems EEE : Lnear Systems Summary #: Backpropagaton BACKPROPAGATION The perceptron rule as well as the Wdrow Hoff learnng were desgned to tran sngle layer networks. They suffer from the same dsadvantage: they

More information

Global Sensitivity. Tuesday 20 th February, 2018

Global Sensitivity. Tuesday 20 th February, 2018 Global Senstvty Tuesday 2 th February, 28 ) Local Senstvty Most senstvty analyses [] are based on local estmates of senstvty, typcally by expandng the response n a Taylor seres about some specfc values

More information

Edge Isoperimetric Inequalities

Edge Isoperimetric Inequalities November 7, 2005 Ross M. Rchardson Edge Isopermetrc Inequaltes 1 Four Questons Recall that n the last lecture we looked at the problem of sopermetrc nequaltes n the hypercube, Q n. Our noton of boundary

More information

Comparison of the Population Variance Estimators. of 2-Parameter Exponential Distribution Based on. Multiple Criteria Decision Making Method

Comparison of the Population Variance Estimators. of 2-Parameter Exponential Distribution Based on. Multiple Criteria Decision Making Method Appled Mathematcal Scences, Vol. 7, 0, no. 47, 07-0 HIARI Ltd, www.m-hkar.com Comparson of the Populaton Varance Estmators of -Parameter Exponental Dstrbuton Based on Multple Crtera Decson Makng Method

More information

COMPARISON OF SOME RELIABILITY CHARACTERISTICS BETWEEN REDUNDANT SYSTEMS REQUIRING SUPPORTING UNITS FOR THEIR OPERATIONS

COMPARISON OF SOME RELIABILITY CHARACTERISTICS BETWEEN REDUNDANT SYSTEMS REQUIRING SUPPORTING UNITS FOR THEIR OPERATIONS Avalable onlne at http://sck.org J. Math. Comput. Sc. 3 (3), No., 6-3 ISSN: 97-537 COMPARISON OF SOME RELIABILITY CHARACTERISTICS BETWEEN REDUNDANT SYSTEMS REQUIRING SUPPORTING UNITS FOR THEIR OPERATIONS

More information

Difference Equations

Difference Equations Dfference Equatons c Jan Vrbk 1 Bascs Suppose a sequence of numbers, say a 0,a 1,a,a 3,... s defned by a certan general relatonshp between, say, three consecutve values of the sequence, e.g. a + +3a +1

More information

College of Computer & Information Science Fall 2009 Northeastern University 20 October 2009

College of Computer & Information Science Fall 2009 Northeastern University 20 October 2009 College of Computer & Informaton Scence Fall 2009 Northeastern Unversty 20 October 2009 CS7880: Algorthmc Power Tools Scrbe: Jan Wen and Laura Poplawsk Lecture Outlne: Prmal-dual schema Network Desgn:

More information

Appendix for Causal Interaction in Factorial Experiments: Application to Conjoint Analysis

Appendix for Causal Interaction in Factorial Experiments: Application to Conjoint Analysis A Appendx for Causal Interacton n Factoral Experments: Applcaton to Conjont Analyss Mathematcal Appendx: Proofs of Theorems A. Lemmas Below, we descrbe all the lemmas, whch are used to prove the man theorems

More information

Supplementary Notes for Chapter 9 Mixture Thermodynamics

Supplementary Notes for Chapter 9 Mixture Thermodynamics Supplementary Notes for Chapter 9 Mxture Thermodynamcs Key ponts Nne major topcs of Chapter 9 are revewed below: 1. Notaton and operatonal equatons for mxtures 2. PVTN EOSs for mxtures 3. General effects

More information

Chapter 7 Channel Capacity and Coding

Chapter 7 Channel Capacity and Coding Wreless Informaton Transmsson System Lab. Chapter 7 Channel Capacty and Codng Insttute of Communcatons Engneerng atonal Sun Yat-sen Unversty Contents 7. Channel models and channel capacty 7.. Channel models

More information

Chapter 7 Channel Capacity and Coding

Chapter 7 Channel Capacity and Coding Chapter 7 Channel Capacty and Codng Contents 7. Channel models and channel capacty 7.. Channel models Bnary symmetrc channel Dscrete memoryless channels Dscrete-nput, contnuous-output channel Waveform

More information

Why Bayesian? 3. Bayes and Normal Models. State of nature: class. Decision rule. Rev. Thomas Bayes ( ) Bayes Theorem (yes, the famous one)

Why Bayesian? 3. Bayes and Normal Models. State of nature: class. Decision rule. Rev. Thomas Bayes ( ) Bayes Theorem (yes, the famous one) Why Bayesan? 3. Bayes and Normal Models Alex M. Martnez alex@ece.osu.edu Handouts Handoutsfor forece ECE874 874Sp Sp007 If all our research (n PR was to dsappear and you could only save one theory, whch

More information

CSci 6974 and ECSE 6966 Math. Tech. for Vision, Graphics and Robotics Lecture 21, April 17, 2006 Estimating A Plane Homography

CSci 6974 and ECSE 6966 Math. Tech. for Vision, Graphics and Robotics Lecture 21, April 17, 2006 Estimating A Plane Homography CSc 6974 and ECSE 6966 Math. Tech. for Vson, Graphcs and Robotcs Lecture 21, Aprl 17, 2006 Estmatng A Plane Homography Overvew We contnue wth a dscusson of the major ssues, usng estmaton of plane projectve

More information

A Local Variational Problem of Second Order for a Class of Optimal Control Problems with Nonsmooth Objective Function

A Local Variational Problem of Second Order for a Class of Optimal Control Problems with Nonsmooth Objective Function A Local Varatonal Problem of Second Order for a Class of Optmal Control Problems wth Nonsmooth Objectve Functon Alexander P. Afanasev Insttute for Informaton Transmsson Problems, Russan Academy of Scences,

More information

Ph 219a/CS 219a. Exercises Due: Wednesday 23 October 2013

Ph 219a/CS 219a. Exercises Due: Wednesday 23 October 2013 1 Ph 219a/CS 219a Exercses Due: Wednesday 23 October 2013 1.1 How far apart are two quantum states? Consder two quantum states descrbed by densty operators ρ and ρ n an N-dmensonal Hlbert space, and consder

More information

Linear Regression Analysis: Terminology and Notation

Linear Regression Analysis: Terminology and Notation ECON 35* -- Secton : Basc Concepts of Regresson Analyss (Page ) Lnear Regresson Analyss: Termnology and Notaton Consder the generc verson of the smple (two-varable) lnear regresson model. It s represented

More information

An (almost) unbiased estimator for the S-Gini index

An (almost) unbiased estimator for the S-Gini index An (almost unbased estmator for the S-Gn ndex Thomas Demuynck February 25, 2009 Abstract Ths note provdes an unbased estmator for the absolute S-Gn and an almost unbased estmator for the relatve S-Gn for

More information

Statistics II Final Exam 26/6/18

Statistics II Final Exam 26/6/18 Statstcs II Fnal Exam 26/6/18 Academc Year 2017/18 Solutons Exam duraton: 2 h 30 mn 1. (3 ponts) A town hall s conductng a study to determne the amount of leftover food produced by the restaurants n the

More information

NON-CENTRAL 7-POINT FORMULA IN THE METHOD OF LINES FOR PARABOLIC AND BURGERS' EQUATIONS

NON-CENTRAL 7-POINT FORMULA IN THE METHOD OF LINES FOR PARABOLIC AND BURGERS' EQUATIONS IJRRAS 8 (3 September 011 www.arpapress.com/volumes/vol8issue3/ijrras_8_3_08.pdf NON-CENTRAL 7-POINT FORMULA IN THE METHOD OF LINES FOR PARABOLIC AND BURGERS' EQUATIONS H.O. Bakodah Dept. of Mathematc

More information

The Second Anti-Mathima on Game Theory

The Second Anti-Mathima on Game Theory The Second Ant-Mathma on Game Theory Ath. Kehagas December 1 2006 1 Introducton In ths note we wll examne the noton of game equlbrum for three types of games 1. 2-player 2-acton zero-sum games 2. 2-player

More information

Simulated Power of the Discrete Cramér-von Mises Goodness-of-Fit Tests

Simulated Power of the Discrete Cramér-von Mises Goodness-of-Fit Tests Smulated of the Cramér-von Mses Goodness-of-Ft Tests Steele, M., Chaselng, J. and 3 Hurst, C. School of Mathematcal and Physcal Scences, James Cook Unversty, Australan School of Envronmental Studes, Grffth

More information

COMPOSITE BEAM WITH WEAK SHEAR CONNECTION SUBJECTED TO THERMAL LOAD

COMPOSITE BEAM WITH WEAK SHEAR CONNECTION SUBJECTED TO THERMAL LOAD COMPOSITE BEAM WITH WEAK SHEAR CONNECTION SUBJECTED TO THERMAL LOAD Ákos Jósef Lengyel, István Ecsed Assstant Lecturer, Professor of Mechancs, Insttute of Appled Mechancs, Unversty of Mskolc, Mskolc-Egyetemváros,

More information

Predictive Analytics : QM901.1x Prof U Dinesh Kumar, IIMB. All Rights Reserved, Indian Institute of Management Bangalore

Predictive Analytics : QM901.1x Prof U Dinesh Kumar, IIMB. All Rights Reserved, Indian Institute of Management Bangalore Sesson Outlne Introducton to classfcaton problems and dscrete choce models. Introducton to Logstcs Regresson. Logstc functon and Logt functon. Maxmum Lkelhood Estmator (MLE) for estmaton of LR parameters.

More information

Channel Encoder. Channel. Figure 7.1: Communication system

Channel Encoder. Channel. Figure 7.1: Communication system Chapter 7 Processes The model of a communcaton system that we have been developng s shown n Fgure 7.. Ths model s also useful for some computaton systems. The source s assumed to emt a stream of symbols.

More information

C/CS/Phy191 Problem Set 3 Solutions Out: Oct 1, 2008., where ( 00. ), so the overall state of the system is ) ( ( ( ( 00 ± 11 ), Φ ± = 1

C/CS/Phy191 Problem Set 3 Solutions Out: Oct 1, 2008., where ( 00. ), so the overall state of the system is ) ( ( ( ( 00 ± 11 ), Φ ± = 1 C/CS/Phy9 Problem Set 3 Solutons Out: Oct, 8 Suppose you have two qubts n some arbtrary entangled state ψ You apply the teleportaton protocol to each of the qubts separately What s the resultng state obtaned

More information

Entropy of Markov Information Sources and Capacity of Discrete Input Constrained Channels (from Immink, Coding Techniques for Digital Recorders)

Entropy of Markov Information Sources and Capacity of Discrete Input Constrained Channels (from Immink, Coding Techniques for Digital Recorders) Entropy of Marov Informaton Sources and Capacty of Dscrete Input Constraned Channels (from Immn, Codng Technques for Dgtal Recorders). Entropy of Marov Chans We have already ntroduced the noton of entropy

More information

Lecture 20: Lift and Project, SDP Duality. Today we will study the Lift and Project method. Then we will prove the SDP duality theorem.

Lecture 20: Lift and Project, SDP Duality. Today we will study the Lift and Project method. Then we will prove the SDP duality theorem. prnceton u. sp 02 cos 598B: algorthms and complexty Lecture 20: Lft and Project, SDP Dualty Lecturer: Sanjeev Arora Scrbe:Yury Makarychev Today we wll study the Lft and Project method. Then we wll prove

More information

APPROXIMATE PRICES OF BASKET AND ASIAN OPTIONS DUPONT OLIVIER. Premia 14

APPROXIMATE PRICES OF BASKET AND ASIAN OPTIONS DUPONT OLIVIER. Premia 14 APPROXIMAE PRICES OF BASKE AND ASIAN OPIONS DUPON OLIVIER Prema 14 Contents Introducton 1 1. Framewor 1 1.1. Baset optons 1.. Asan optons. Computng the prce 3. Lower bound 3.1. Closed formula for the prce

More information

BOUNDEDNESS OF THE RIESZ TRANSFORM WITH MATRIX A 2 WEIGHTS

BOUNDEDNESS OF THE RIESZ TRANSFORM WITH MATRIX A 2 WEIGHTS BOUNDEDNESS OF THE IESZ TANSFOM WITH MATIX A WEIGHTS Introducton Let L = L ( n, be the functon space wth norm (ˆ f L = f(x C dx d < For a d d matrx valued functon W : wth W (x postve sem-defnte for all

More information

FUZZY GOAL PROGRAMMING VS ORDINARY FUZZY PROGRAMMING APPROACH FOR MULTI OBJECTIVE PROGRAMMING PROBLEM

FUZZY GOAL PROGRAMMING VS ORDINARY FUZZY PROGRAMMING APPROACH FOR MULTI OBJECTIVE PROGRAMMING PROBLEM Internatonal Conference on Ceramcs, Bkaner, Inda Internatonal Journal of Modern Physcs: Conference Seres Vol. 22 (2013) 757 761 World Scentfc Publshng Company DOI: 10.1142/S2010194513010982 FUZZY GOAL

More information