
Channel Aware Decentralized Detection in Wireless Sensor Networks

Milad Kharratzadeh
milad.kharratzadeh@mail.mcgill.ca
Department of Electrical & Computer Engineering
McGill University, Montreal, Canada

Abstract — We study the problem of decentralized detection in wireless sensor networks. First, we review the classical framework for decentralized detection. The classical framework takes into account neither the power constraints of wireless sensor networks nor the characteristics of wireless channels such as fading and noise. Next, we review algorithms that consider these constraints. These algorithms are assumed to know the channel fading and noise statistics, hence the term channel-aware. The two main approaches are approximating the fusion rule based on the value of the SNR, and sensor censoring, in which not all the peripheral nodes send their decisions to the central node. Finally, we propose a fully decentralized, gossip-based, channel-aware algorithm for decentralized detection. We show the validity of the proposed algorithm by proving its convergence and through simulations.

I. INTRODUCTION

Wireless sensor networks consist of a set of spatially distributed sensors, capable of sensing, computation, and communication, that cooperatively monitor physical or environmental conditions. The ability to detect events of interest is essential to the success of emerging sensor network technology, and detection often serves as the initial goal of a sensing system. For example, in applications where we want to estimate attributes such as position and velocity, the first step is to ascertain the presence of an object. Moreover, in some applications, such as surveillance, the detection of an intruder is the only purpose of the sensor system. In situations where the peripheral nodes perform some preprocessing on their observations before sending data to a central node (called the fusion center), the corresponding decision-making problem is termed decentralized detection.
Assume that there are M hypotheses on the state of the environment and that each sensor observes some relevant information about it. In a centralized scheme, each sensor transmits its entire observation (without any additional processing) to the fusion center, which solves a classical hypothesis testing problem and decides on one of the M hypotheses. In a classical decentralized scheme, by contrast, each sensor performs some preprocessing and sends a summary of its observation, chosen from a finite alphabet, to the fusion center. The fusion center then decides on one of the M hypotheses based on the messages it has received. The centralized and decentralized schemes differ in several respects. First, the performance of the decentralized scheme is suboptimal compared with the centralized scheme, due to the loss of information in the nodes' local preprocessing. On the other hand, the communication requirements of the decentralized scheme are much smaller than those of the centralized one: instead of sending a raw, voluminous observation, each node sends a summary of its observation taken from a finite alphabet. In brief, decentralized detection offers a great reduction in communication requirements at the expense of some performance loss; it turns out, however, that the performance loss is often negligible [1]. In the decentralized scheme, while the fusion center faces a classical hypothesis testing problem (if we view the messages received from the other nodes as its observations), the problem is more complex for the peripheral sensors. One might expect each sensor to decide independently, based only on its own observation and its own likelihood ratio test. This is not true in general: when the detectors decide so as to achieve a system-wide optimum, they often use different strategies than in cases where the joint cost of their decisions separates into a cost for each sensor.
Even under a conditional independence assumption (meaning that the observations of different sensors are independent of each other under either hypothesis), finding optimal decision-making algorithms at the sensor nodes remains, in most cases, a difficult task. This optimization problem is known to be tractable only under restrictive assumptions regarding the observation space and the topology of the underlying network [1]. In Section II we explain the problem formulation and briefly review the works that study the classical framework [1], [2], [3]. Next, in Section III we review [4] and [5], where channel-aware algorithms for decentralized detection are introduced; specifically, the optimum likelihood ratio is approximated there in the low- and high-SNR regimes. Section IV is devoted to sensor censoring, one of the main approaches in the literature for dealing with channel constraints [6], [7]. Sensor censoring is a scheme in which not all the nodes send their decisions to the fusion center; only the sensors with informative observations communicate with the central node. Finally, as the original part of this work, we propose a gossip-based channel-aware algorithm for decentralized detection in Section V. We prove the convergence of the algorithm in the absence of noise and fading, and also show the validity of the proposed algorithm with channel constraints through simulations.

II. PROBLEM DEFINITION AND CLASSICAL FRAMEWORK

Here, we focus on the binary hypothesis testing problem, stated as follows. There are N peripheral sensors, each of which sends its message (a summary of its observation, chosen from a binary alphabet) directly to the fusion center (i.e., a parallel or star topology). The two hypotheses are H_0 and H_1, with prior probabilities Pr(H_0) and Pr(H_1), respectively. Each hypothesis induces a different joint probability distribution for the observations of the peripheral sensors:

Under H_1: Pr(r | H_1)   (1)
Under H_0: Pr(r | H_0)   (2)

where r_i, i = 1, ..., N, are the observations of the peripheral nodes. Each sensor i receives an observation r_i, a realization of a random variable whose distribution is known from (1) and (2), evaluates a message u_i = γ_i(r_i), and transmits it to the fusion center. We call the function γ_i the decision rule of sensor i. The fusion center receives all these messages and, using its own decision rule γ_0 (also called the fusion rule), decides on one of the two possible hypotheses. The set of all these decision rules is called a strategy. The classical framework, which has been extensively explored (e.g., see [1], [2], [3]), does not adequately take into account important features of wireless sensor networks such as the nodes' power constraints and limits on channel capacity. It also assumes that the messages sent by the peripheral nodes are received correctly by the fusion center, thus ignoring noise, interference, and fading in the communication channels. In this section we briefly review the results of the classical framework, following [1], [3]; in the following sections, we study the effect of the aforementioned constraints.
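As an aside not in the original text, the centralized/decentralized trade-off described above can be illustrated with a small Monte Carlo sketch. The Gaussian shift-in-mean observation model, the number of sensors, and the majority-vote fusion rule below are all illustrative assumptions, not the paper's setup:

```python
import random

random.seed(0)

N = 5            # number of peripheral sensors (assumed)
MU = 1.0         # mean under H1; the mean under H0 is 0 (assumed)
TRIALS = 2000

def llr(r):
    # Log-likelihood ratio for unit-variance Gaussians N(MU,1) vs N(0,1)
    return r * MU - MU * MU / 2

cent_err = dec_err = 0
for _ in range(TRIALS):
    h1 = random.random() < 0.5                                   # equal priors
    r = [(MU if h1 else 0.0) + random.gauss(0, 1) for _ in range(N)]
    # Centralized: the fusion center sees raw observations and sums local LLRs.
    cent = sum(llr(x) for x in r) > 0
    # Decentralized: each sensor sends one bit u_i = 1{LLR_i > 0};
    # the fusion center applies a simple majority-vote fusion rule.
    dec = sum(llr(x) > 0 for x in r) > N / 2
    cent_err += (cent != h1)
    dec_err += (dec != h1)

p_cent = cent_err / TRIALS
p_dec = dec_err / TRIALS
print(f"centralized error rate:   {p_cent:.3f}")
print(f"decentralized error rate: {p_dec:.3f}")
```

With these assumed parameters, the one-bit scheme loses only a modest amount of accuracy relative to the centralized test, which is the qualitative point made above.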
In the Bayesian formulation, we are given a cost function

C(U_0, U_1, ..., U_N, H)   (3)

where C(u_0, u_1, ..., u_N, H_j) is the cost associated with the event that hypothesis H_j, j = 0, 1, is true, the messages from the peripheral sensors are u_1, ..., u_N, and the fusion center's decision is u_0. Our objective in the Bayesian framework is to find a strategy γ that minimizes the expected cost J(γ) = E[C(U_0, U_1, ..., U_N, H)]. Note that the cost function defined above depends on the fusion decision as well as on the peripheral nodes' messages; for different purposes we can define cost functions that depend on only some of the decisions. For example, if C depends only on U_0, the performance of the system is judged solely on the basis of the fusion center's decision; the minimum-probability-of-error criterion for the final decision is of this type. Alternatively, we may wish to interpret each sensor's message as a local decision about the true hypothesis; with a suitable choice of C, we can then penalize incorrect decisions by the fusion center as well as by the peripheral sensors. As an extreme case, the cost function might be independent of the fusion center's decision and depend only on the decisions of the peripheral nodes, in which case we need to optimize only with respect to γ_i, i = 1, ..., N. This can happen when the fusion rule is fixed a priori. The main result of [1] is the following theorem, which we state without proof.

Theorem 1.
(a) Fix some i ≠ 0 and suppose that γ_j has been fixed for all j ≠ i. Then γ_i minimizes J(γ) if and only if

γ_i(r_i) = argmin_{d=0,1} Σ_{j=0}^{1} Pr(H_j | r_i) a_i(H_j, d)   w.p. 1   (4)

where

a_i(H_j, d) = E[ C(γ_0(U_1, ..., U_{i-1}, d, U_{i+1}, ..., U_N), U_1, ..., U_{i-1}, d, U_{i+1}, ..., U_N, H_j) | H_j ]   (5)

(b) Suppose that the decision rules of the peripheral nodes have been fixed. Then γ_0 minimizes J(γ) if and only if

γ_0(U_1, ..., U_N) = argmin_{d=0,1} Σ_{j=0}^{1} Pr(H_j | U_1, ..., U_N) C(d, U_1, ..., U_N, H_j)   w.p. 1   (6)

This means that with person-by-person optimization (fixing all the sensors' decision rules except one and minimizing J with respect to the remaining rule) we obtain a set of likelihood ratio tests whose thresholds depend on one another. Thus, in order to find the person-by-person optimal strategy, we need to solve a system of 4N nonlinear equations in as many unknowns. The resulting decision rule is equivalent to dividing the observation space into two regions (because we are considering binary hypothesis testing) and sending u_i = d, d = 0 or 1, to the fusion center if the likelihood vector belongs to the corresponding region. Each region is specified by a set of linear inequalities and is therefore polyhedral. Note that this structure is the same as that of the optimal decision rule for classical binary hypothesis testing. The Neyman-Pearson formulation of the problem is studied in [3], where similar results are derived; we do not repeat them here. In the next section we review the extension of the classical parallel fusion structure that incorporates the fading channel layer present in wireless sensor networks.

III. DECISION FUSION UNDER FADING CHANNEL ASSUMPTION

In this section, we review [4] and [5], which explore decision fusion algorithms that take channel fading effects into account. Here we assume flat fading channels whose fading coefficients (envelopes) are known to the fusion center, which is hence channel-aware. Although it is always possible to reduce the effects of noise and fading by increasing the SNR, in wireless sensor networks it is desirable to reduce the communication power (because WSNs are energy constrained). The problem formulation is almost the same as in the previous section, except that the messages sent from the peripheral sensors to the fusion center pass through channels with flat fading and are corrupted by noise (see Fig. 1). Here, we do not alter the peripheral sensors' decision rules and only modify the fusion center's decision to cope with noise and fading.

Fig. 1. Parallel fusion model in the presence of fading and noisy channels.

Assume that the i-th local sensor makes a binary decision u_i ∈ {+1, -1}, with P_{f_i} = Pr(u_i = 1 | H_0) and P_{d_i} = Pr(u_i = 1 | H_1) called the false alarm and detection probabilities, respectively. If u_i is transmitted from sensor i through a fading channel, then the output of the channel (the input to the fusion center) is

y_i = h_i u_i + n_i   (7)

where h_i is the real-valued (positive) fading coefficient (envelope) of the channel between sensor i and the fusion center, and n_i is zero-mean Gaussian noise with variance σ². In the following, we derive a fusion rule based on y_i, i = 1, ..., N, that determines which hypothesis is true with the best achievable performance. Assuming full knowledge of the fading coefficients h_i and of the local sensors' performance indices P_{f_i} and P_{d_i}, the likelihood ratio at the fusion center is

Λ(y) = f(y | H_1) / f(y | H_0)
     = Π_{i=1}^{N} [ P_{d_i} e^{-(y_i - h_i)²/2σ²} + (1 - P_{d_i}) e^{-(y_i + h_i)²/2σ²} ] / [ P_{f_i} e^{-(y_i - h_i)²/2σ²} + (1 - P_{f_i}) e^{-(y_i + h_i)²/2σ²} ]   (8)

where y is the vector of messages received from all the sensors. We now simplify this expression in two special cases: high SNR (σ² → 0) and low SNR (σ² → ∞).

1) High SNR (σ² → 0): In this case the authors of [4] proposed a two-stage approximation. The likelihood ratio in (8) accounts for the channel effects and the sensors' specifications simultaneously. An alternative is to separate this into a two-stage procedure.
First, infer u_i from y_i, and then apply the optimum fusion rule to the inferred values. Given the model in (7), the maximum likelihood estimate of u_i is ũ_i = sgn(y_i). Rewriting (8), we have

Λ = Π_{i: sgn(y_i)=1} [ P_{d_i} + (1 - P_{d_i}) e^{-2 y_i h_i / σ²} ] / [ P_{f_i} + (1 - P_{f_i}) e^{-2 y_i h_i / σ²} ]
  × Π_{i: sgn(y_i)=-1} [ P_{d_i} e^{2 y_i h_i / σ²} + (1 - P_{d_i}) ] / [ P_{f_i} e^{2 y_i h_i / σ²} + (1 - P_{f_i}) ]

Under the high-SNR assumption (σ² → 0), each exponential above has a negative exponent (y_i h_i > 0 in the first product, y_i h_i < 0 in the second) and therefore vanishes, so

lim_{σ²→0} Λ = Π_{i: sgn(y_i)=1} P_{d_i} / P_{f_i} × Π_{i: sgn(y_i)=-1} (1 - P_{d_i}) / (1 - P_{f_i})

Taking logarithms of both sides yields the simplified log-likelihood ratio

Λ_1 = lim_{σ²→0} log Λ = Σ_{i: sgn(y_i)=1} log( P_{d_i} / P_{f_i} ) + Σ_{i: sgn(y_i)=-1} log( (1 - P_{d_i}) / (1 - P_{f_i}) )

Note that Λ_1 does not depend on the channel statistics at all and is a function only of P_{d_i} and P_{f_i}, i = 1, ..., N.

2) Low SNR (σ² → ∞): The high-SNR assumption is not realistic given the power constraints of wireless sensor networks, so we now consider the more realistic low-SNR regime. Rewriting (8), we have

Λ = Π_{i=1}^{N} [ P_{d_i} + (1 - P_{d_i}) e^{-2 y_i h_i / σ²} ] / [ P_{f_i} + (1 - P_{f_i}) e^{-2 y_i h_i / σ²} ]

For σ² → ∞ we have e^{-2 y_i h_i / σ²} ≈ 1, so we can use the first-order Taylor expansion e^{-2 y_i h_i / σ²} ≈ 1 - 2 y_i h_i / σ². Therefore

Λ ≈ Π_{i=1}^{N} [ P_{d_i} + (1 - P_{d_i})(1 - 2 y_i h_i / σ²) ] / [ P_{f_i} + (1 - P_{f_i})(1 - 2 y_i h_i / σ²) ]
  = Π_{i=1}^{N} [ 1 - (1 - P_{d_i}) 2 y_i h_i / σ² ] / [ 1 - (1 - P_{f_i}) 2 y_i h_i / σ² ]

Taking logarithms,

log Λ ≈ Σ_{i=1}^{N} log[ 1 - (1 - P_{d_i}) 2 y_i h_i / σ² ] - Σ_{i=1}^{N} log[ 1 - (1 - P_{f_i}) 2 y_i h_i / σ² ]

Using the approximation log(1 + x) ≈ x for x ≈ 0 and simplifying, we have

log Λ ≈ Σ_{i=1}^{N} (P_{d_i} - P_{f_i}) 2 y_i h_i / σ²

If we assume that P_{d_i} - P_{f_i} is the same constant for all sensors, drop it together with the factor 2/σ², and normalize by a constant 1/K, we obtain a fusion statistic analogous to a maximum ratio combiner (MRC):

Λ_2 = (1/K) Σ_{i=1}^{N} h_i y_i

In fact, Λ_2 is the first-order approximation of the log-likelihood-ratio-based fusion rule and is asymptotically accurate at low SNR (σ² → ∞). Note that Λ_2 does not depend on the sensors' specifications and is a function only of the channel coefficients. The validity of the above approximations and the performance evaluation are studied in [4], which we skip here. The same authors study the same problem for multi-hop wireless sensor networks in [5] and find similar approximations for the fusion rule; the derivations are similar to the ones reviewed above, so we do not repeat them. In the setup reviewed in this section, we assumed that all the peripheral nodes send their decisions to the fusion center. In the next section we review another approach used in the literature for dealing with channel constraints, called sensor censoring.

IV. SENSOR CENSORING

Here we briefly review the sensor censoring approach, covering the general concepts without going into details. The idea of censoring sensors was introduced by Rago et al. in [6]. In this scheme, sensors censor their observations so that each sensor sends only informative observations to the fusion center. How "informative" should be defined so as to minimize the probability of error was studied in [6], where it was shown that, with conditionally independent observations, a sensor should transmit its decision to the fusion center if and only if its local likelihood ratio does not fall into a certain single interval. The exact expressions for these intervals under different frameworks, such as Neyman-Pearson and Bayesian, are derived in [6].
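To make the censoring idea concrete, here is a minimal sketch (not from [6]): a sensor transmits its one-bit decision only when its likelihood ratio falls outside a single no-send interval. The interval endpoints and the Gaussian shift-in-mean model below are placeholder assumptions; [6] derives the optimal interval for a given rate constraint.

```python
import math
import random

random.seed(1)

T_LOW, T_HIGH = 0.5, 2.0   # assumed no-send interval endpoints (placeholders)

def local_lr(r, mu=1.0):
    # Likelihood ratio f(r|H1)/f(r|H0) for N(mu,1) vs N(0,1)
    return math.exp(r * mu - mu * mu / 2)

def censored_report(r):
    """Return the sensor's 1-bit decision, or None when censored (nothing sent)."""
    lr = local_lr(r)
    if T_LOW <= lr <= T_HIGH:
        return None              # uninformative observation: stay silent, save energy
    return 1 if lr > T_HIGH else 0

# Fraction of sensors that actually transmit when H0 holds
obs = [random.gauss(0, 1) for _ in range(10000)]
sent_frac = sum(censored_report(r) is not None for r in obs) / len(obs)
print(f"fraction transmitting under H0: {sent_frac:.2f}")
```

Widening the no-send interval lowers the transmitting fraction, which is exactly the rate/energy knob discussed next.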
The important fact is that these intervals depend on the available communication rate: when the available rate is low, the intervals defining the no-send regions are larger, and only a few sensors transmit to the fusion center. It was also shown, through experiments, that the performance of the censoring scheme is very close to optimal even under quite severe communication rate constraints. The direct consequence of censoring is the saving in communication; it also helps the sensors save energy by not sending uninformative messages. The fusion of censored decisions transmitted over fading channels in wireless sensor networks is studied in [7], [8]. In these works it is assumed that the local sensors employ a censoring scheme with known thresholds, and the main focus is on the development of fusion algorithms when the channels exhibit flat fading. Both cases, one assuming knowledge of the channel fading envelope and the other only of the fading statistics, are studied, and the optimal likelihood ratio test is derived under each scenario. In summary, to save power and reduce the communication rate, peripheral sensors can employ censoring schemes so that they do not transmit to the fusion center if their likelihood ratio lies in a certain interval [6]. Then, knowing these intervals as well as the channel information, the fusion center can use a modified likelihood ratio test [7], [8] to decide on the true hypothesis. The performance of this scheme is shown to be close to optimal.

V. GOSSIP-BASED ALGORITHM

In this section, we propose a gossip-based, fully decentralized detection algorithm. To the best of our knowledge, the proposed method is novel. The algorithms studied so far are mostly semi-decentralized, in the sense that all the peripheral sensors transmit a quantized function of their observations to a central node (the fusion center). Hence, they retain the problems of centralized systems, such as a single point of failure, data management (flooding more data to the fusion center than it can process), and security.
Moreover, in practical wireless sensor networks, power constraints make nodes capable of only short-range communication, so each node can communicate with only a few other nodes close to it. Our proposed gossip-based algorithm tries to reach a consensus among all the nodes using only local communications. First, we propose our method for ideal channels between the nodes (i.e., no fading or noise) and show that it converges and that its solution is exactly that of the centralized scheme, and hence globally optimal. Then, we modify it for the case where there is noise and the channels between nodes exhibit flat fading, and show by simulation that the algorithm still converges and finds the optimal solution. Assume that we have a wireless sensor network with N sensors and no central node, and a binary hypothesis testing problem. Sensor S_i can communicate only with the few other nodes in its communication range; we call these nodes the neighbors of sensor S_i and denote them by the set V_i. The notation introduced in Section II still holds. Our proposed algorithm for fully decentralized detection is briefly as follows:

1- Each sensor receives an observation and computes its likelihood ratio from it. We denote this likelihood ratio by

Λ_i^(0) = f(r_i | H_1) / f(r_i | H_0),   i = 1, ..., N

2- The sensors make initial decisions based on their observations, under either the Bayesian or the Neyman-Pearson criterion. Note that, since we have a binary hypothesis testing problem, under either framework all the nodes implement a likelihood ratio test.

3- Gossiping: After calculating the likelihood ratios and making initial decisions, the nodes perform several rounds of gossiping. In each round, the nodes communicate pairwise and update their likelihood ratios. If nodes i and j communicate with each other (or "gossip") at iteration k, the update procedure is

Λ_i^(k) = sqrt( Λ_i^(k-1) Λ_j^(k-1) )   (9)
Λ_j^(k) = sqrt( Λ_i^(k-1) Λ_j^(k-1) )   (10)

where Λ_i^(k) denotes the value of the likelihood ratio of sensor S_i at iteration k of the algorithm (k ≥ 0).

4- Based on their new likelihood ratios, the sensors update their decisions. As mentioned in step 2, all the nodes implement a likelihood ratio test with fixed thresholds. In our algorithm, the likelihood ratios are updated while the thresholds of the tests remain fixed and are the same for all the sensors (λ_i = λ^{1/N}).

5- Steps 3 and 4 are repeated until all the sensors decide on the same hypothesis (which we call convergence).

In the following, we first show that the algorithm converges to the same decision as the centralized scheme when there is no noise or fading; we then modify the algorithm for the case where noise is present and the channels exhibit flat fading.

A. Analysis of the Proposed Algorithm

Given N conditionally independent observations (r_1, ..., r_N), and assuming a Bayesian cost or a Neyman-Pearson performance criterion, the optimal (centralized) test for binary hypothesis testing is well known to be

Π_{i=1}^{N} Λ(r_i)  ≷_{H_0}^{H_1}  λ   (11)

where, in the NP framework, λ is found from the probability-of-error criterion, and in the Bayesian framework from the prior probabilities of the two hypotheses and the costs. By the optimal decision, we mean the decision that minimizes the probability of error. In a fully decentralized sensor network, each node is aware only of its own likelihood ratio and can perform a test of the form

Λ_i(r_i)  ≷_{H_0}^{H_1}  λ_i

In our method, all nodes have equal thresholds, chosen so that their product equals the appropriate λ in (11); therefore λ_i = λ^{1/N}, i = 1, ..., N. These thresholds remain fixed throughout the algorithm. However, each node updates its likelihood ratio at each iteration of the algorithm.
The procedure is as follows: in each iteration, nodes gossip pairwise and update their likelihood ratios as described in (9) and (10). Note that although the individual likelihood ratios change, the product of the likelihood ratios of all nodes remains fixed throughout the algorithm. In other words,

Π_{i=1}^{N} Λ_i^(k) = Π_{i=1}^{N} Λ_i^(0),   k ≥ 1   (12)

Note that only Λ_i^(0) has the classical interpretation of the likelihood ratio of sensor S_i, namely f(r_i | H_1)/f(r_i | H_0); we nevertheless keep calling the updated versions Λ_i^(k), k ≥ 1, likelihood ratios for ease of notation. Based on the updated likelihood ratios, the nodes revise their decisions. The algorithm converges when all the nodes make the same decision. Assume that at stage k all the nodes make the same decision; without loss of generality, assume that they all decide on H_1. Then

Λ_i^(k) > λ_i = λ^{1/N},   i = 1, ..., N

Multiplying all the inequalities, we have

Π_{i=1}^{N} Λ_i^(0) = Π_{i=1}^{N} Λ_i^(k) > λ   (using (12))

which shows that H_1 is also the solution of the centralized test, and hence optimal (it minimizes the probability of error). Thus, if all the nodes agree with each other and reach a consensus decision, their decision matches the centralized decision exactly. A big question remains, however: does this algorithm converge? Note that, in order to prove convergence, it is enough to show that, as k → ∞,

Λ_i^(k) → ( Π_{j=1}^{N} Λ_j^(0) )^{1/N},   i = 1, ..., N   (13)

In that case, all the likelihood ratios become equal; since the thresholds are equal by definition, all the nodes then perform the same likelihood ratio test and reach the same result (convergence). The type of convergence we are concerned with in (13) is convergence in expectation. In the following theorem, we prove that (13) holds, and hence that our proposed algorithm converges.

Theorem 2. Assume that we have N nodes with initial values Λ_i^(0), i = 1, ..., N. Consider the following gossip algorithm: at each iteration, a node chooses one of its neighbors at random and they gossip with each other. When two nodes i and j gossip, they update their values according to

Λ_i^(k) = sqrt( Λ_i^(k-1) Λ_j^(k-1) )   (14)
Λ_j^(k) = sqrt( Λ_i^(k-1) Λ_j^(k-1) )   (15)

where Λ_i^(k) denotes the value of the likelihood ratio of sensor S_i at iteration k of the algorithm (k ≥ 0). This gossip algorithm converges in expectation, and we have

lim_{k→∞} E[Λ_i^(k)] = ( Π_{j=1}^{N} Λ_j^(0) )^{1/N},   i = 1, ..., N   (16)
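The noiseless update (14)-(15) is ordinary randomized gossip averaging in the log domain, so every node should approach the geometric mean of the initial likelihood ratios. The following sketch checks this numerically; the ring topology, node count, and iteration count are illustrative assumptions:

```python
import math
import random

random.seed(2)

N = 10
lam = [random.uniform(0.2, 5.0) for _ in range(N)]        # initial likelihood ratios
target = math.exp(sum(math.log(x) for x in lam) / N)      # geometric mean, as in (16)
neighbors = {i: [(i - 1) % N, (i + 1) % N] for i in range(N)}  # assumed ring graph

for _ in range(20000):
    i = random.randrange(N)
    j = random.choice(neighbors[i])
    g = math.sqrt(lam[i] * lam[j])   # updates (14)-(15): pairwise geometric mean
    lam[i] = lam[j] = g

dev = max(abs(x - target) for x in lam)
print(f"max deviation from the geometric mean: {dev:.2e}")
```

Each pairwise update leaves the product of all likelihood ratios unchanged, which is exactly the invariant (12) used in the optimality argument.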

The proof can be found in Appendix A. Another important question is: how fast does the algorithm converge, i.e., how many messages must be exchanged among the sensors before convergence? This strongly affects the energy consumption of the sensors (the more transmitted messages, the more battery consumption) as well as the delay of the system. There are many fast gossip algorithms that we can employ; for example, the multi-scale algorithm proposed in [9] achieves ε-accuracy with high probability after O(n log log n log(1/ε)) messages.

B. Considering Channel Constraints

So far we have not considered the effects of noise and fading in our algorithm. Since the nodes communicate only pairwise, we need only consider the following case: assume that nodes i and j communicate at iteration k over a channel with flat fading and additive white Gaussian noise (see Fig. 2), and that node i sends its likelihood ratio, Λ_i^(k-1), to node j. If we denote the message received at sensor j from sensor i by y_ij^(k-1), then we can write

y_ij^(k-1) = h_ij Λ_i^(k-1) + n_ij

where h_ij is the fading coefficient (envelope) of the channel between sensors i and j, and n_ij is Gaussian noise with zero mean and variance σ².

Fig. 2. Gossiping with channel constraints (AWGN and flat fading channel).

Upon receiving y_ij^(k-1), sensor j tries to estimate Λ_i^(k-1). Since h_ij is known to sensor j (channel aware), the maximum likelihood estimate of Λ_i^(k-1) is simply

Λ̂_i^(k-1) = y_ij^(k-1) / h_ij   (17)

So the algorithm is exactly as before, except for the update procedure, which now becomes

Λ_i^(k) = sqrt( Λ_i^(k-1) · ( y_ji^(k-1) / h_ji ) )   (18)
Λ_j^(k) = sqrt( Λ_j^(k-1) · ( y_ij^(k-1) / h_ij ) )   (19)

Note that the only difference from the ideal case is that Λ_i^(k-1) and Λ_j^(k-1) are replaced, at the other sensor, by their ML estimates from (17). Proving theoretically that this algorithm converges to the same decision as the centralized scheme is more complicated in this case; in the next subsection, however, we show it through simulations.

C. Performance Evaluation

To evaluate the proposed algorithm, we ran it on a random geometric graph with 25 nodes. In our experiment the two hypotheses have equal prior probabilities. At the beginning, each node takes a noise-corrupted observation and computes its log-likelihood ratio; then all nodes try to reach a consensus by gossiping their likelihood ratios over several rounds. We considered two measures of performance: the probability of detection, Pr(H_1 | H_1), and the probability of false alarm, Pr(H_1 | H_0). The former is the probability of detecting the H_1 hypothesis, which we want to be as high as possible; the latter is the probability of deciding H_1 while H_0 is true, which we clearly want to be as low as possible. We plotted these two measures for different SNRs; specifically, we computed the average of the measures over all the nodes and over 5000 repetitions of the algorithm. Figures 3 and 4 show that, as the SNR increases, the probability of detection increases and the probability of false alarm decreases; for large enough SNR, the algorithm works perfectly (Pr(detection) = 1 and Pr(false alarm) = 0).

Fig. 3. Probability of detection versus SNR. As SNR increases, the probability of detection approaches 1.

Fig. 4. Probability of false alarm versus SNR. As SNR increases, the probability of false alarm approaches 0.
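A sketch of the channel-aware update (18)-(19) for a single pair of nodes. The fading envelope, the noise level, and the clipping of negative estimates are illustrative assumptions (a likelihood ratio must stay positive, and a noisy estimate y/h can dip below zero at low SNR):

```python
import math
import random

random.seed(3)

SIGMA = 0.05   # assumed channel noise standard deviation (fairly high SNR)
H_ENV = 1.3    # assumed fading envelope, known at the receiver (channel aware)

def noisy_gossip_pair(lam_i, lam_j, h, sigma):
    """One exchange over fading AWGN channels, following (17)-(19)."""
    # Each node receives y = h*lam + n and forms the ML estimate lam_hat = y/h, eq. (17).
    lam_j_hat = max((h * lam_j + random.gauss(0, sigma)) / h, 1e-9)
    lam_i_hat = max((h * lam_i + random.gauss(0, sigma)) / h, 1e-9)
    new_i = math.sqrt(lam_i * lam_j_hat)   # update (18)
    new_j = math.sqrt(lam_j * lam_i_hat)   # update (19)
    return new_i, new_j

lam = [0.5, 4.0]
for _ in range(50):
    lam[0], lam[1] = noisy_gossip_pair(lam[0], lam[1], H_ENV, SIGMA)

print(f"after gossiping: {lam[0]:.3f}, {lam[1]:.3f} "
      f"(noiseless geometric mean: {math.sqrt(0.5 * 4.0):.3f})")
```

With a moderate noise level, the pair hovers near the noiseless geometric mean, consistent with the simulation results reported above for large enough SNR.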

The result is that for large enough SNR (about 10 here), the algorithm works well and the event of interest is detected with probability one; likewise, the probability of false alarm goes to zero as the SNR increases. Thus, even with channel constraints, our proposed algorithm converges to the same decision as the centralized scheme.

D. Comparison With Other Algorithms

The main difference between our proposed algorithm and the previous ones is that it is fully decentralized: there is no central node (fusion center) in our algorithm, whereas in the others all the nodes send their messages to the fusion center and the final detection is performed centrally. Also, after the gossip-based algorithm runs, all the nodes are aware of the final decision, while in the other algorithms only the central node knows the detected hypothesis. On the other hand, in our algorithm all the nodes must know the channel fading coefficients, whereas in the others only the fusion center needs such channel information. In terms of performance, our algorithm performs slightly better than the previous ones; for example, the algorithms proposed in [4] achieve the same probabilities of detection as our algorithm only at higher SNRs.

VI. CONCLUSION

In this project, we reviewed the problem of channel-aware decentralized detection in wireless sensor networks. First, we reviewed the classical framework, which considers neither power nor channel constraints, and showed that person-by-person optimization yields a set of likelihood ratio tests whose thresholds depend on one another. Then, we reviewed channel-aware algorithms that take into account the real-world constraints of wireless sensor networks, such as power limits, noise, and fading. We first derived approximations for the fusion center's likelihood ratio in the presence of noise and flat fading channels, in the high- and low-SNR regimes.
Then, we briefly reviewed the sensor censoring approach, in which nodes transmit to the fusion center only if their observations are informative (i.e., their likelihood ratio does not fall in a certain interval). Finally, we proposed a fully decentralized, channel-aware, gossip-based algorithm for decentralized detection. We proved the optimality and convergence of the algorithm in the case of no noise or fading. The performance of the proposed algorithm with AWGN and flat fading channels was evaluated through simulations, which showed that for large enough SNR the algorithm performs perfectly (Pr(detection) = 1 and Pr(false alarm) = 0).

REFERENCES

[1] J. N. Tsitsiklis, "Decentralized detection," Advances in Signal Processing, vol. 2, 1993.
[2] R. Tenney and N. Sandell, "Detection with distributed sensors," IEEE Transactions on Aerospace and Electronic Systems, vol. AES-17, no. 4, 1981.
[3] Q. Yan and R. Blum, "Distributed signal detection under the Neyman-Pearson criterion," IEEE Transactions on Information Theory, vol. 47, May 2001.
[4] B. Chen, R. Jiang, T. Kasetkasem, and P. Varshney, "Channel aware decision fusion in wireless sensor networks," IEEE Transactions on Signal Processing, vol. 52, no. 12, 2004.
[5] Y. Lin, B. Chen, and P. Varshney, "Decision fusion rules in multi-hop wireless sensor networks," IEEE Transactions on Aerospace and Electronic Systems, vol. 41, no. 2, 2005.
[6] C. Rago, P. Willett, and Y. Bar-Shalom, "Censoring sensors: a low-communication-rate scheme for distributed detection," IEEE Transactions on Aerospace and Electronic Systems, vol. 32, no. 2, 1996.
[7] R. Jiang and B. Chen, "Fusion of censored decisions in wireless sensor networks," IEEE Transactions on Wireless Communications, vol. 4, no. 6, 2005.
[8] R. Jiang and B. Chen, "Decision fusion with censored sensors," in Proc. IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP '04), vol. 2, May 2004.
[9] K. Tsianos and M. Rabbat, "Fast decentralized averaging via multi-scale gossip," in Proc. IEEE Distributed Computing in Sensor Systems, June 2010.
[10] S. Boyd, A. Ghosh, B. Prabhakar, and D. Shah, "Randomized gossip algorithms," IEEE/ACM Transactions on Networking, vol. 14, June 2006.

APPENDIX A
PROOF OF THEOREM 2

We use the result of [10] to prove Theorem 2. It was shown in [10] that:

Lemma 1. Assume that we have N nodes with initial values x_i^(0), i = 1, ..., N. Consider the following gossip algorithm: at each iteration, a node chooses one of its neighbors at random and they gossip with each other. When two nodes i and j gossip, they update their values according to

x_i^(k) = x_j^(k) = ( x_i^(k-1) + x_j^(k-1) ) / 2   (20)

where x_i^(k) denotes the value of sensor i at iteration k of the algorithm (k ≥ 0). This gossip algorithm converges in expectation, and we have

lim_{k→∞} E[x_i^(k)] = ( Σ_{j=1}^{N} x_j^(0) ) / N,   i = 1, ..., N   (21)

The proof of the lemma can be found in [10]. Now, if we make the replacement

x_i^(k) = log( Λ_i^(k) )   for i = 1, ..., N and k ≥ 0   (22)

then Lemma 1 transforms into Theorem 2, because the update procedure of Theorem 2,

Λ_i^(k) = Λ_j^(k) = sqrt( Λ_i^(k-1) Λ_j^(k-1) )   (23)

is equivalent to

log( Λ_i^(k) ) = log( Λ_j^(k) ) = ( log( Λ_i^(k-1) ) + log( Λ_j^(k-1) ) ) / 2   (24)

which is the update rule of Lemma 1. Also, we have

log( ( Π_{j=1}^{N} Λ_j^(0) )^{1/N} ) = ( Σ_{j=1}^{N} log( Λ_j^(0) ) ) / N   (25)

which shows that Theorem 2 is equivalent to Lemma 1 in the log domain, completing the proof.
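The change of variables (22) behind the proof can be sanity-checked numerically: the geometric-mean update (23) and Lemma 1's arithmetic average applied to logarithms give the same result. A quick sketch with arbitrary positive values:

```python
import math

lam_i, lam_j = 0.3, 7.0   # arbitrary positive likelihood ratios

# Theorem 2's update, directly in the likelihood-ratio domain (eq. (23)):
lr_domain = math.sqrt(lam_i * lam_j)

# Lemma 1's averaging update applied to x = log(lambda), mapped back (eqs. (22), (24)):
log_domain = math.exp((math.log(lam_i) + math.log(lam_j)) / 2)

print(abs(lr_domain - log_domain) < 1e-12)  # → True
```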

APPENDIX B
MATLAB CODES

% Wireless Project
% Gossip-Based Decentralized Detection
% Milad Kharratzadeh
% April

% Initializations
n = 25;                           % number of sensors
SNR = logspace(-1,2,200);         % SNR sweep
K = 5000;                         % Monte Carlo trials per SNR point
A = RGG(n,0.4);                   % adjacency matrix of a random geometric graph
H = abs(raylrnd(ones(n,n))) .* A; % Rayleigh fading coefficients on the edges
LLR = zeros(n,1);                 % local log-likelihood ratios
U = zeros(n,1);                   % local sensors' decisions
d = sum(A,2)';                    % node degrees
sz = size(A);
edges = find(A == 1);
nel = numel(edges);
l = 1;
P_Detection = zeros(200,1);
P_FalseAlarm = zeros(200,1);

% Performing the algorithm
for snr = SNR
    detected = 0;
    falsealarm = 0;
    hyp1 = 0;
    hyp0 = 0;
    sigma = 1/sqrt(snr);
    for i = 1:K
        hyp = rand > 0.5;
        r = hyp*ones(n,1) + sigma*randn(n,1); % observations at the sensors
        LLR = (2*r-1)/(2*sigma^2);            % calculating local log-LRs
        for j = 1:500 % gossip rounds (the original iteration count is illegible)
            [rand_i, rand_j] = ind2sub(sz, edges(randi(nel))); % random edge
            temp = LLR(rand_i) + LLR(rand_j);
            % Updating log-likelihood ratios (noisy exchange over fading links)
            LLR(rand_i) = (temp + (sigma*randn)/H(rand_i,rand_j))/2;
            LLR(rand_j) = (temp + (sigma*randn)/H(rand_j,rand_i))/2;
        end
        U = (LLR > 0); % making final decisions
        if hyp == 1
            detected = detected + sum(U==1);
            hyp1 = hyp1 + 1;
        else
            falsealarm = falsealarm + sum(U==1);
            hyp0 = hyp0 + 1;
        end
    end
    P_Detection(l) = detected / (hyp1*n);
    P_FalseAlarm(l) = falsealarm / (hyp0*n);
    l = l + 1;
end

semilogx(SNR, P_Detection);
hold on;
semilogx(SNR, P_FalseAlarm);
hold off;
figure;
plot(sort(P_FalseAlarm), sort(P_Detection)); % ROC-style curve

% Producing a random geometric graph and returning its adjacency matrix
function A = RGG(n,r)

A = zeros(n,n);
X = rand(n,1);
Y = rand(n,1);

% Plotting the graph:
% scatter(X,Y,'red','filled');
% hold on;

for i = 1:n
    for j = i:n
        if (((X(i)-X(j))^2 + (Y(i)-Y(j))^2) <= r^2) && (i ~= j)
            A(i,j) = 1;
            % line([X(i) X(j)], [Y(i) Y(j)],'Color','k','LineWidth',1.5);
            % hold on;
        end
    end
end

% hold off;
A = A + A';
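The high-SNR behavior reported in the conclusion can be cross-checked with a compact Python sketch of the same experiment. This version deliberately simplifies the MATLAB listing: it drops the channel noise and fading in the gossip exchange and assumes a fully connected topology, so it illustrates only the idealized case; all names and parameter values here are illustrative, not taken from the paper.

```python
import math
import random

def simulate(snr, n=20, trials=200, rounds=2000, seed=1):
    """Simplified gossip detection trial: Gaussian observations with mean
    hyp (0 or 1), local LLRs for H1: N(1, sigma^2) vs H0: N(0, sigma^2),
    noiseless pairwise gossip averaging on a fully connected graph, then
    per-node thresholding at 0. Returns (Pr(detection), Pr(false alarm))."""
    rng = random.Random(seed)
    sigma = 1.0 / math.sqrt(snr)
    det = fa = n1 = n0 = 0
    for _ in range(trials):
        hyp = rng.random() > 0.5
        # Local log-likelihood ratios, LLR_i = (2 r_i - 1) / (2 sigma^2)
        llr = [(2.0 * (int(hyp) + sigma * rng.gauss(0.0, 1.0)) - 1.0)
               / (2.0 * sigma ** 2) for _ in range(n)]
        for _ in range(rounds):
            i, j = rng.sample(range(n), 2)            # random gossiping pair
            llr[i] = llr[j] = (llr[i] + llr[j]) / 2.0  # pairwise averaging
        ones = sum(v > 0 for v in llr)                 # per-node decisions for H1
        if hyp:
            det += ones
            n1 += 1
        else:
            fa += ones
            n0 += 1
    return det / (n1 * n), fa / (n0 * n)

pd, pfa = simulate(snr=100.0)  # a high-SNR operating point
```

At snr = 100 the consensus LLR is far from the zero threshold under either hypothesis, so the detection probability is essentially 1 and the false-alarm probability essentially 0, matching the trend claimed for the full algorithm.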


More information

On an Extension of Stochastic Approximation EM Algorithm for Incomplete Data Problems. Vahid Tadayon 1

On an Extension of Stochastic Approximation EM Algorithm for Incomplete Data Problems. Vahid Tadayon 1 On an Extenson of Stochastc Approxmaton EM Algorthm for Incomplete Data Problems Vahd Tadayon Abstract: The Stochastc Approxmaton EM (SAEM algorthm, a varant stochastc approxmaton of EM, s a versatle tool

More information

Week 5: Neural Networks

Week 5: Neural Networks Week 5: Neural Networks Instructor: Sergey Levne Neural Networks Summary In the prevous lecture, we saw how we can construct neural networks by extendng logstc regresson. Neural networks consst of multple

More information

Psychology 282 Lecture #24 Outline Regression Diagnostics: Outliers

Psychology 282 Lecture #24 Outline Regression Diagnostics: Outliers Psychology 282 Lecture #24 Outlne Regresson Dagnostcs: Outlers In an earler lecture we studed the statstcal assumptons underlyng the regresson model, ncludng the followng ponts: Formal statement of assumptons.

More information

Goodness of fit and Wilks theorem

Goodness of fit and Wilks theorem DRAFT 0.0 Glen Cowan 3 June, 2013 Goodness of ft and Wlks theorem Suppose we model data y wth a lkelhood L(µ) that depends on a set of N parameters µ = (µ 1,...,µ N ). Defne the statstc t µ ln L(µ) L(ˆµ),

More information

Welfare Properties of General Equilibrium. What can be said about optimality properties of resource allocation implied by general equilibrium?

Welfare Properties of General Equilibrium. What can be said about optimality properties of resource allocation implied by general equilibrium? APPLIED WELFARE ECONOMICS AND POLICY ANALYSIS Welfare Propertes of General Equlbrum What can be sad about optmalty propertes of resource allocaton mpled by general equlbrum? Any crteron used to compare

More information

Department of Statistics University of Toronto STA305H1S / 1004 HS Design and Analysis of Experiments Term Test - Winter Solution

Department of Statistics University of Toronto STA305H1S / 1004 HS Design and Analysis of Experiments Term Test - Winter Solution Department of Statstcs Unversty of Toronto STA35HS / HS Desgn and Analyss of Experments Term Test - Wnter - Soluton February, Last Name: Frst Name: Student Number: Instructons: Tme: hours. Ads: a non-programmable

More information

STAT 3008 Applied Regression Analysis

STAT 3008 Applied Regression Analysis STAT 3008 Appled Regresson Analyss Tutoral : Smple Lnear Regresson LAI Chun He Department of Statstcs, The Chnese Unversty of Hong Kong 1 Model Assumpton To quantfy the relatonshp between two factors,

More information

CONTRAST ENHANCEMENT FOR MIMIMUM MEAN BRIGHTNESS ERROR FROM HISTOGRAM PARTITIONING INTRODUCTION

CONTRAST ENHANCEMENT FOR MIMIMUM MEAN BRIGHTNESS ERROR FROM HISTOGRAM PARTITIONING INTRODUCTION CONTRAST ENHANCEMENT FOR MIMIMUM MEAN BRIGHTNESS ERROR FROM HISTOGRAM PARTITIONING N. Phanthuna 1,2, F. Cheevasuvt 2 and S. Chtwong 2 1 Department of Electrcal Engneerng, Faculty of Engneerng Rajamangala

More information