An Uncertainty-Based Distributed Fault Detection Mechanism for Wireless Sensor Networks


Sensors 2014, 14 | Article | OPEN ACCESS

An Uncertainty-Based Distributed Fault Detection Mechanism for Wireless Sensor Networks

Yang Yang *, Zhipeng Gao, Hang Zhou and Xuesong Qu

State Key Laboratory of Networking and Switching Technology, Beijing University of Posts and Telecommunications, No. 10 Xitucheng Road, Haidian District, Beijing, China; E-Mails: (Z.G.); (H.Z.); (X.Q.)

* Author to whom correspondence should be addressed; E-Mail: yyang@bupt.edu.cn

Received: 14 February 2014; in revised form: 14 April 2014 / Accepted: 17 April 2014 / Published: 25 April 2014

Abstract: Exchanging too many messages for fault detection causes not only a degradation of network quality of service, but also a heavy burden on the limited energy of sensors. We therefore propose an uncertainty-based distributed fault detection mechanism for wireless sensor networks that relies on the aided judgment of neighbors. The algorithm considers the serious influence of lost sensing measurements and uses Markov decision processes to fill in missing data. Most importantly, fault misjudgments caused by uncertainty conditions are the main drawback of traditional distributed fault detection mechanisms; we draw on evidence fusion rules based on information entropy theory and the degree of disagreement function to increase the accuracy of fault detection. Simulation results demonstrate that our algorithm effectively reduces the communication energy overhead caused by message exchanges and provides a higher detection accuracy ratio.

Keywords: fault detection; uncertainty; evidence fusion; data missing; information entropy

1. Introduction

Sensors can be rapidly deployed over large areas and perform monitoring tasks through autonomous wireless communication. In disaster prevention applications, for example, nodes detect and estimate environmental information, and then forecast when and where a natural calamity may occur.

Although users always hope that the network will provide excellent monitoring and gathering functions, it seems inevitable that sensors will experience faults caused by extrinsic and intrinsic factors. Generally, a fault is an unexpected change in the network that leads to measurement errors, system breakdown or communication failure. Faults are generally classified as crash, timing, omission, incorrect computation, fail breakdown, authenticated Byzantine, etc. [1]. From another point of view, crash faults are classified as communication faults, since under those conditions a sensor cannot communicate with others because its communication module has failed or the link is down. All other faults are viewed as data faults, which means the faulty sensors can communicate with each other, but the sensed or transmitted data are not correct [2,3]. To avoid erroneous judgments due to faults, broken-down nodes should be detected and isolated from the functioning nodes. Fault detection should make an unambiguous decision about whether the behavior of a sensor deviates from the common measurements. Sensors form a local view of the fault states around them by collecting measurements from their one-hop neighbors. Neighbor cooperation is one approach to fault detection, whereby a sensor uses neighbor measurements to decide its own fault state collaboratively [4-6]. This has been demonstrated to be efficacious for fault information collection and diagnosis because it alleviates the overhead on sink nodes or base stations and thereby avoids network bottlenecks. Accordingly, a novel challenge for fault detection in wireless sensor networks (WSNs) is how to reduce energy consumption when message exchange is the main means of fault detection in a distributed environment. Moreover, the dynamic network topology and the signal loss caused by long propagation delays and signal fading influence the efficiency of fault detection in advanced medical care or battlefield response applications.
In the majorty votng algorthm based on neghbor cooperaton detecton, the normal measurements of sensors that are located close to each other are assumed to be spatally correlated, whle the fault data are uncorrelated. The tendency state of a sensor s determned as possbly faulty (LF) or possbly good (LG) by comparng ts own readngs wth those of ts one-hop neghbors n a votng process. If the number of LG neghbors that have correlated readngs s greater than or equal to half, then t s fault-free, otherwse t s deemed faulty. The weghted votng approach uses geographc dstance or degree of trust as the decdng factor when calculatng the sensor state, but these methods perform n WSNs better based on the hypothess of hgher average connectvty degree. Actually, sensors are usually deployed n a lower connectvty envronment, n whch exchanged readngs are too few to make an approprate comparson and decson (e.g., n Fgure 1a fronter node only has one neghbor). Then the detecton accuracy rate decreases as the fault rate ncreases. Moreover, the faults caused by attacks are unevenly dstrbuted n the case of ntruson montorng because the hostle sgnals wthout a fxed routng wll randomly affect or tamper wth readngs. In ths paper, we manly focus on sensng faults other than communcaton faults. After analyzng the defects of tradtonal algorthms, we present an Uncertanty-based Dstrbuted Fault Detecton (udfd) mechansm for wreless sensor networks. The man contrbutons of udfd are as follows: (1) Propose the uncertanty-based dstrbuted fault detecton algorthm, whch can avod decreasng fault detecton accuracy when the falure rato becomes hgher. In addton, the accuracy of fault detecton remans at a hgh level regardless of a lower connectvty scene;

(2) Data loss influences fault judgment, because each sensor determines its own state step by step according to its neighbors' measurements. The paper presents a data forecast model based on Markov decision processes that fills in lost data to provide reference data for other nodes' state determinations; (3) We classify two types of sensor tendency states: Possibly Good (LG) and Undetermined (Un). LG nodes contribute to judging nodes' ultimate states. Un nodes are in an uncertain status, so we must determine the ultimate status of each Un node. Here we design belief probability assignment (BPA) functions for the different pieces of evidence that reflect the states of Un nodes. Furthermore, an evidence fusion rule based on information entropy theory is used to avoid evidence conflicts.

Figure 1. Fault detection illustration: (a) and (b) show example topologies of good (G) and faulty (F) nodes.

The rest of the paper is organized as follows: Section 2 describes related work in the area of fault detection in WSNs. Section 3 introduces our Uncertainty-based Distributed Fault Detection algorithm (udfd) and the concrete mechanisms involved. Section 4 presents the simulation results in comparison with typical fault detection algorithms like DFD and IDFD, and demonstrates our algorithm's efficiency and superiority. Section 5 concludes the paper.

2. Related Works

In this section, we briefly review related work in the area of distributed and centralized fault detection in WSNs. The authors in [4] proposed and evaluated a localized fault detection scheme (DFD) to identify faulty sensors. An improved DFD scheme was proposed by Jiang in [5]. Neighbors exchange sensing measurements periodically, and a sensor judges its own state (good or faulty) according to its neighbors' values. A fault identification algorithm reported in [7] is completely localized and requires low computational overhead, and it can easily be scaled to large sensor networks. In that algorithm, the reading of a sensor is compared with its neighbors' median readings.
If the difference is large, whether positive or negative, the sensor is deemed faulty. If half of the neighbors are faulty and the number of neighbors is even, the algorithm cannot detect faults. Krishnamachari and co-workers proposed in [8] a distributed solution for the canonical task of binary detection of interesting environmental events. They explicitly take into account the possibility of

measurement faults and develop a distributed Bayesian scheme for detecting and correcting faults. Each sensor node identifies its own status based on local comparisons of sensed data with some thresholds and dissemination of the test results [9]. Time redundancy is used to tolerate transient sensing and communication faults. To eliminate the delay involved in the time redundancy scheme, a sliding window is employed with some data storage for comparison with previous results. The MANNA scheme [10] creates a manager located externally to the WSN. It has a global vision of the network and can perform complex tasks that would not be possible inside the network. Management activities take place while sensor nodes are collecting and sending temperature data. Every node checks its energy level and sends a message to the manager/agent whenever there is a state change. The manager can then obtain the coverage map and energy level of all sensors from the collected information. To detect node failures, the manager sends GET operations to retrieve the node state. If it does not hear from a node, the manager consults the energy map to check the node's residual energy. In this way, the MANNA architecture is able to locate faulty sensor nodes. However, this approach requires an external manager to perform the centralized diagnosis, and the communication between nodes and the manager is too expensive for WSNs. Tsang-Yi et al. [11] proposed a distributed fault-tolerant decision fusion in the presence of sensor faults. The collaborative sensor fault detection (CSFD) scheme is proposed to eliminate unreliable local decisions. In this approach, the local sensors send their decisions sequentially to a fusion center. The scheme establishes an upper bound on the fusion error probability based on a pre-designed fusion rule. This upper bound assumes identical local decision rules and fault-free environments. The authors propose a criterion based on this error bound to search for the faulty sensor nodes.
Once the fusion center identifies the faulty sensor nodes, all corresponding local decisions are removed from the computation of the likelihood ratios that are adopted to make the final decision. This approach considers crash and incorrect computation faults. In [12], a taxonomy for the classification of faults in sensor networks and the first on-line model-based testing technique are introduced. The technique considers the impact of the readings of a particular sensor on the consistency of multi-sensor fusion. A sensor is most likely to be faulty if its elimination significantly improves the consistency of the results. A way to distinguish random noise is to run a maximum likelihood or Bayesian approach on the multi-sensor fusion measurements. If the accuracy of the final fusion results improves after running these procedures, random noise is present. To get a consistent mapping of the sensed phenomena, different sensors' measurements need to be combined in a model. This cross-validation-based technique can be applied to a broad set of fault models. It is generic and can be applied to an arbitrary system of sensors that uses an arbitrary type of data fusion. However, the technique is centralized: sensor node information must be collected and sent to the base station to conduct the on-line fault detection. Miao et al. [13] presented an online lightweight failure detection scheme named Agnostic Diagnosis (AD). The approach is motivated by the fact that the system metrics of sensors (e.g., radio-on time, number of packets transmitted) usually exhibit certain correlation patterns. It collects metrics that are classified into four categories: (1) timing metrics (e.g., RadioOnTimeCounter), which denote the accumulative radio-on time; (2) traffic metrics (e.g., TransmitCounter), which record the accumulative number of packets transmitted by a sensor node; (3) task metrics (e.g., TaskExecCounter), the accumulative number of tasks executed; (4) other metrics,

such as ParentChangeCounter, which counts the number of parent changes. AD exploits the correlations between the metrics of each sensor using a correlation graph that describes the status of the sensor node. By mining the periodically updated correlation graphs, abnormal correlations are detected in time. Specifically, in addition to predefined faults (i.e., with known types and symptoms), silent failures caused by Byzantine faults are considered. Exchanging too many messages for fault detection causes not only a degradation of network quality of service, but also a huge burden on the limited energy of sensors. Hence, we design an uncertainty-based distributed fault detection based on neighbor cooperation in WSNs. It adopts auto-correlated test results to describe the different sensing states from day to day, and information entropy-based D-S evidence theory is introduced to deduce the actual states of undetermined nodes.

3. Uncertainty-Based Fault Detection Mechanism

3.1. The DFD and IDFD Schemes and Their Drawbacks

This section presents the DFD algorithm proposed by Chen [4] and the IDFD algorithm described by Jiang [5] to give an overview of distributed fault detection, and then analyzes these algorithms' drawbacks. Chen [4] introduced a localized fault detection method based on exchanging measurements in WSNs. It is assumed that x_i is the measurement of node i. We define d_ij^t to represent the measured difference between nodes i and j at time t, while Δd_ij^{t_l} is the measurement difference from time t_l to t_{l+1}:

d_ij^t = x_i^t − x_j^t (1)

Δd_ij^{t_l} = d_ij^{t_{l+1}} − d_ij^{t_l} = (x_i^{t_{l+1}} − x_i^{t_l}) − (x_j^{t_{l+1}} − x_j^{t_l}) (2)

When |d_ij^t| is less than or equal to a predefined threshold θ1, the test result c_ij is set to 0; otherwise the node continues by calculating Δd_ij^{t_l}. If |Δd_ij^{t_l}| > θ2 (θ2 is also a predefined threshold), then c_ij = 1, otherwise c_ij = 0. Here c_ij = 1 means that nodes i and j are possibly in different states.
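The pairwise test of Equations (1) and (2) can be sketched as follows (a minimal illustration; the threshold values θ1 and θ2 used here are arbitrary placeholders, not values from the paper):

```python
def pairwise_test(x_i_t, x_j_t, x_i_next, x_j_next, theta1=1.0, theta2=0.5):
    """Eqs. (1)-(2): c_ij = 0 when the instantaneous difference is small;
    otherwise examine how that difference changes from t_l to t_{l+1}."""
    d_t = x_i_t - x_j_t                    # Eq. (1)
    if abs(d_t) <= theta1:
        return 0
    delta_d = (x_i_next - x_j_next) - d_t  # Eq. (2)
    return 1 if abs(delta_d) > theta2 else 0
```

For instance, two nodes reading 20.0 and 20.2 yield c_ij = 0, while a persistent and widening gap between readings yields c_ij = 1.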
Next, the tendency status (possibly faulty, LF, or possibly good, LG) is determined according to the following formula [14]:

T_i = LF, if Σ_{j∈N_i} c_ij ≥ ⌈|N_i|/2⌉; LG, otherwise (3)

where N_i is the set of one-hop neighbors of node i. The formula states that a sensor is deemed possibly good only if fewer than ⌈|N_i|/2⌉ neighbors have test results of 1. In order to process the second-round test, each node sends its tendency state to its one-hop neighbors. In the DFD algorithm, the end state of node i is decided to be fault-free only if Σ_{j∈N_i, T_j=LG} (1 − 2c_ij) ≥ ⌈|N_i|/2⌉; otherwise it is undetermined. In order to improve identification efficiency for undetermined sensors, these nodes repeatedly check whether any neighbor's state has been determined as fault-free. If such a neighbor exists, then the sensor is faulty (fault-free) according to the test result 1 (0) between them. A sensor may be unable to decide its

own state because the states of its neighbors are in conflict, e.g., Z_j = Z_k = GOOD while c_ij ≠ c_ik. Then Z_i is GOOD if T_i = LG, or else Z_i is FAULT. Jiang [5] considers the determinant condition Σ_{j∈N_i, T_j=LG} (1 − 2c_ij) ≥ ⌈|N_i|/2⌉ in the DFD algorithm too harsh, leading some normal nodes to be misdiagnosed as faulty, so the determinant condition for a normal node is amended as:

Σ_{j∈N_i & T_j=LG} (1 − 2c_ij) ≥ N_LG/2 (4)

where N_LG is the number of neighbors whose tendency status is LG. If no neighbor has tendency status LG, then the final determinant status is set as normal (faulty) based on T_i = LG (T_i = LF). Although simulations demonstrate that this mechanism improves fault detection accuracy to a certain extent, it has no clear way to resolve the conflicts and erroneous judgments illustrated in Figure 1. In Figure 1a, node 1 calculates c_12 = 0, c_13 = 0 and c_14 = 0. Then T_1 is set as LG according to Equation (3). In the same way, we get T_2 = LF, T_3 = LF, T_4 = LF. Node 1 has no neighbor whose tendency status is LG, so its final status is set as normal based on the rule for T_1 = LG. This is an obvious erroneous judgment. The tendency states in Figure 1b are calculated as follows: T_1 = LF, T_2 = LG, T_3 = LF, T_4 = LG. For node 1, the votes of its LG neighbors split evenly, and node 1 is decided to be faulty according to Equation (4). Actually, node 1 is a normal sensor. Node 1 will be misjudged whenever the number of normal neighbors equals the number of faulty neighbors, provided their initial detection tendency states are LG. By analyzing the misjudgment conditions of the traditional algorithms, one defect is the indeterminacy that occurs under the "=" condition in Equation (4), which leaves the node irreducible to good or faulty. Another is that these algorithms ignore the effect of a sensor's own measurements, which are approximately equal at the same time on adjacent days (e.g., 8 June and 9 June). The analogous historical readings of the same node help to determine the faulty state under vague conditions.
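The tendency rule of Equation (3) and the neighbor vote of Equation (4) can be sketched as follows (a simplified majority reading; the exact handling of the boundary case is precisely the "=" indeterminacy criticized above):

```python
import math

def tendency_state(c_results):
    """Eq. (3): LF when at least ceil(N/2) pairwise tests fire, else LG."""
    return "LF" if sum(c_results) >= math.ceil(len(c_results) / 2) else "LG"

def idfd_decision(c_to_lg, tendency):
    """Simplified reading of Eq. (4): decide over LG neighbors only.
    With no LG neighbor, fall back to the node's own tendency. A tie at
    exactly half is the indeterminate '=' case discussed in the text."""
    if not c_to_lg:
        return "GOOD" if tendency == "LG" else "FAULT"
    agree = sum(1 for c in c_to_lg if c == 0)  # LG neighbors that match
    return "GOOD" if agree >= len(c_to_lg) / 2 else "FAULT"
```

With two of three LG neighbors agreeing, the node is decided GOOD; with an even split (the Figure 1b situation), the ">=" comparison silently favors GOOD, which is exactly the kind of arbitrary tie-break the proposed algorithm replaces.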
Moreover, most distributed fault detection mechanisms assume that sensors have the ability to acquire every measurement and to judge each other's states cooperatively. When a sensor's communication module has a failure but its acquisition module is active, its readings cannot be perceived by other sensors. In a distributed collaborative process, nodes diagnose data faults based primarily on neighbors' data. Once a neighbor's data is missing, the accuracy of fault diagnosis suffers; e.g., in Figure 1b, node 4 cannot determine its own status when node 1 has no data.

3.2. Uncertainty-Based Distributed Fault Detection Algorithm

In this paper, we mainly resolve the following problems: (1) data missing before exchanging readings; (2) misjudgments caused by indeterminacy conditions. Missing data due to communication faults affects the determination accuracy when comparing neighbors' measurements. To handle data loss, a sensing node should fill in the missing measurements to provide a reference. Secondly, the presented algorithm adopts auto-correlated test results to describe the status of differences between different days. Finally, undetermined nodes may appear as discussed in the preceding section. The information entropy and the degree of disagreement function, combined

in evidence fusion theory, are improved accordingly to help deduce their actual states. In addition, using information entropy in the evidence fusion reduces evidence conflicts and increases detection accuracy.

3.2.1. Definitions

We list the notations of the udfd algorithm as follows:

p: probability of fault of a sensor;
N_i: the set of neighbors of node i;
x_i^{D,t}: measurement value of node i at time t on day D;
|N_i|: number of one-hop neighbors of node i;
d_ij^t: measurement difference between nodes i and j at time t on the same day, according to Formula (1);
Δd_ij^{t_l}: measurement difference between nodes i and j from time t_l to t_{l+1} on the same day, according to Formula (2);
Δd_i^{D,t}: measurement difference of node i at the same time t on different days;
c_ij: test result between nodes i and j, c_ij ∈ {0, 1};
T_i: tendency value of a sensor, T_i ∈ {LG, Un};
Z_i: determined detection status of a sensor, Z_i ∈ {GOOD, FAULT};
θ1, θ2, θ3: predefined threshold values for d_ij^t, Δd_ij^{t_l} and Δd_i^{D,t};
Num({G}): number of good neighbors of node i;
Num({F}): number of faulty neighbors of node i.

3.2.2. Fault Detection

The main processes of the udfd algorithm based on neighbor cooperation are summarized as follows. The key techniques for solving the two problems are described in Sections 3.2.3 and 3.2.4.

Stage 1: Each sensor acquires the readings from its own sensing module. If no data is acquired, it fills in the missing data. After that, it exchanges the measurement at time t on day D with its neighbors and calculates the test result C_ij (C_ij = 0 initially):

1: If |Δd_i^{D,t}| ≤ θ3, then set C_ij = 0;
2: else C_ij = 1;
3: end if
4: If |d_ij^t| > θ1, then C_ij = 1;
5: else if |d_ij^t| ≤ θ1 && |Δd_ij^{t_l}| > θ2, then C_ij = 1;
6: else C_ij = 0;
7: end if
8: Repeat the above steps until the test results for all neighbors are obtained.

Stage 2: Node i generates the tendency value based on c_ij (j ∈ N_i):

9: If Σ_{j∈N_i} C_ij < ⌈|N_i|/2⌉, then T_i = LG;
10: else T_i = Un;
11: end if
12: Broadcast the tendency status if T_i = LG.

Stage 3: Calculate the determined status of LG nodes:

13: If T_i = LG && at least one neighbor j has T_j = LG:
14: If Σ_{j: T_j=LG} C_ij ≤ (Num({LG}) − 1)/2, then Z_i = GOOD;
15: else Z_i = FAULT;
16: end if
17: else if T_i = LG && no neighbor is LG, then T_i = Un;
18: end if
19: An LG node can thus determine its own status (good or faulty), and only good sensors broadcast their states in order to save transmission overhead.

Stage 4: A node whose tendency status is Un determines its actual state by using the entropy-based evidence combination mechanism:

20: Node i (T_i = Un) receives the evidence of its good neighbors.
21: Combine the evidences generated by the measurements by adopting information entropy-based evidence fusion, and acquire the combined BPA functions m*({G}), m*({F}) and m*(Θ);
22: Node i finds the good neighbor j that matches min_j |m_j({G}) − m*({G})|;
23: if c_ij = 1, then Z_i = FAULT, else Z_i = GOOD;
24: end if
25: The determined node broadcasts its status if it is a good sensor.

Broadcasting not only uses up node energy but also occupies channel bandwidth, so the main method of saving energy consumption in our algorithm is that only particular states in the different stages (LG and GOOD) are broadcast. In Step 12, only a node whose tendency status equals LG broadcasts its value, because only LG neighbors participate in the final state determination in Step 14. Similarly, only good sensors broadcast their states in order to save transmission energy overhead.

3.2.3. Missing Data Preprocessing Mechanism

In this paper, we mainly focus on sensing faults rather than communication faults. When missing data occurs, it affects the accuracy of fault diagnosis: if x_i^{D,t} has been lost because the communication module has failed, other sensors are deprived of reference data for their faulty-state determination. It is therefore necessary for node i to fill in the missing data and send it to its neighbors.
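The tendency generation and LG determination of Stages 2 and 3 above can be sketched as follows (a simplified reading of the pseudocode; the exact comparison bounds are assumptions based on Steps 9 and 14):

```python
def udfd_tendency(c_results):
    """Stage 2 (sketch): LG when fewer than half of the tests fire,
    otherwise Undetermined (Un) rather than LF as in classic DFD."""
    return "LG" if sum(c_results) < len(c_results) / 2 else "Un"

def udfd_lg_decision(c_to_lg):
    """Stage 3 (sketch): an LG node compares itself against LG neighbors
    only; returns None when no LG neighbor exists, i.e., the node
    degrades to Un and must proceed to the Stage 4 evidence fusion."""
    if not c_to_lg:
        return None
    agree = sum(1 for c in c_to_lg if c == 0)
    return "GOOD" if agree >= len(c_to_lg) / 2 else "FAULT"
```

Note the energy-saving asymmetry from the text: only LG tendencies and GOOD final states would be broadcast; Un and FAULT outcomes stay local.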
In this section, we use a Markov decision process based on neighbors' historical data to predict the current missing measurement values of node i. Markov theory can reflect the influence of random factors and extends naturally to a stochastic process that is dynamic and fluctuating. We therefore combine the historical data of node i with its neighbors'

historical data, and then form a fused historical data vector, which can be adaptively adjusted according to the significance of the neighbors' measurements. The state transition matrix of the Markov chain is then adopted to predict the value and sign of the reading difference between two days. The steps of the missing data preprocessing are as follows:

(1) For each node j ∈ N_i, where N_i is the set of all neighbors of node i, fetch the previous m historical measurements of node j; these correspond to an m-dimensional vector V_j = (x_j^{D−m,t}, x_j^{D−m+1,t}, ..., x_j^{D−1,t});

(2) Calculate the reputation value C_j for each neighbor of node i: for each node j ∈ N_i, C_j = e^{−ξ}, where ξ = (1/m) Σ_{k=1}^{m} |x_j^{D−k,t} − x_i^{D−k,t}|. Note that for a different node i, node j has different reputation values, and a smaller ξ increases the reputation value of node j;

(3) We introduce the Mahalanobis distance to evaluate the similarity between node i and its neighbors; the prediction results should keep the Mahalanobis distance changes within a predefined threshold. For each node j ∈ N_i, calculate the Mahalanobis distance D(V_i, V_j) = sqrt((V_i − V_j)^T Σ^{−1} (V_i − V_j)) between the vectors V_i and V_j, where Σ is the covariance matrix of V_i and V_j, in order to evaluate the similarity of node i and all its neighbors;

(4) Let V* be the fusion of the historical measurements of node i and all its neighbors, which is used in the Markov decision process to predict the current measurement of node i. It is also an m-dimensional vector and is calculated as follows:

V* = α V_i + β Σ_{j=1}^{|N_i|} (C_j / Σ_{k=1}^{|N_i|} C_k) V_j (5)

In this data-fusion formula, the historical measurement vector V_j is weighted by the reputation value of node j, and the factors α and β (α + β = 1) indicate to what extent a node trusts itself and its neighbors. Here α = β = 0.5.
(5) According to the result of the fusion in Step 4, use the Markov decision process to predict the current measurement of node i, obtaining x_i^{D,t};

(6) For each node j ∈ N_i, recalculate the Mahalanobis distance D(V_i', V_j') between the extended vectors, where V_i' = (x_i^{D−m,t}, ..., x_i^{D−1,t}, x_i^{D,t}) and V_j' = (x_j^{D−m,t}, ..., x_j^{D−1,t}, x_j^{D,t}) are (m+1)-dimensional vectors and Σ is their covariance matrix;

(7) If |D(V_i', V_j') − D(V_i, V_j)| ≤ θ for every j ∈ N_i, where θ is a predefined threshold, then there is no need to adjust the fusion factor and the predicted value can be adopted. Otherwise, the predicted value increases the differences between node i and node j, so the fusion factor β needs to be reduced appropriately in order to decrease the proportion of the neighbors' measurements in the calculation of V*;

(8) If the fusion factor has been adjusted in Step 7, return to Step 4. Otherwise, the algorithm ends.
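Steps 2 and 4 of the preprocessing can be sketched as follows (the absolute-deviation form of ξ and α = β = 0.5 follow the text; the function name and argument layout are illustrative):

```python
import math

def fuse_history(v_i, neighbor_vectors, alpha=0.5):
    """Steps 2 and 4 (sketch): reputation-weighted fusion of node i's
    m historical readings with its neighbors' histories."""
    beta = 1.0 - alpha
    # Step 2: reputation C_j = exp(-mean absolute deviation from node i)
    reps = [math.exp(-sum(abs(a - b) for a, b in zip(v_j, v_i)) / len(v_i))
            for v_j in neighbor_vectors]
    total = sum(reps)
    fused = []
    for k in range(len(v_i)):
        # Step 4 (Eq. (5)): V* = alpha*V_i + beta * weighted neighbor mean
        neigh = sum(c / total * v_j[k] for c, v_j in zip(reps, neighbor_vectors))
        fused.append(alpha * v_i[k] + beta * neigh)
    return fused
```

A neighbor whose history tracks node i closely receives reputation close to 1 and dominates the weighted neighbor term; a divergent neighbor's weight decays exponentially.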

To predict the missing data in Step 5 of the above algorithm, we draw on Markov decision processes [15,16]. First, from the vector V* calculated in Step 4 we obtain the corresponding difference vector (Δ^{D−m,t}, Δ^{D−m+1,t}, ..., Δ^{D−1,t}); it is an (m−1)-dimensional vector and can be considered an independent and identically distributed Markov chain. Then we classify the state of each component of the vector by an average-standard deviation classification method. Assume that state s can be expressed as E_s = [min_s, max_s), where min_s and max_s indicate the lower and upper bounds of state s. The sample average is:

x̄ = (1/(m−1)) Σ_j Δ^{D−j,t} (6)

The standard deviation is:

S = sqrt((1/(m−1)) Σ_j (Δ^{D−j,t} − x̄)²) (7)

According to the central-limit theorem [17], we divide the sliding interval of the historical difference data into five states: E1 = [x̄ − 3S, x̄ − S), E2 = [x̄ − S, x̄ − 0.5S), E3 = [x̄ − 0.5S, x̄ + 0.5S), E4 = [x̄ + 0.5S, x̄ + S) and E5 = [x̄ + S, x̄ + 3S). Each component of the difference vector is assigned a state depending on which sliding interval it belongs to. The transition probability matrix P(1) is calculated as follows. Assume that M_st(1) indicates the number of samples for which state E_s transfers to state E_t in one step, and M_s indicates the number of samples in state E_s before the transfer. Then p_st(1) = M_st(1)/M_s, where p_st(1) is the transition probability of shifting from state E_s to state E_t in one step. Therefore the 5 × 5 transition probability matrix is P(1) = [p_st(1)], s, t = 1, ..., 5. For any component Δ^{D−j,t}, the probability distribution vector is:

π(D−j) = (π_1^{D−j}, π_2^{D−j}, π_3^{D−j}, π_4^{D−j}, π_5^{D−j}) (8)

Assume that Δ^{D−1,t} is in state E3; then its probability distribution vector is π(D−1) = (0, 0, 1, 0, 0). As the probability distribution vector π(D−1) and the transition probability matrix P(1) are known, the probability distribution vector of Δ^{D,t} is π(D) = π(D−1) P(1), and the state corresponding to max{π_s(D), s ∈ {1, 2, 3, 4, 5}} is the state Δ^{D,t} belongs to. If Δ^{D,t} is in state s, then the specific value of Δ^{D,t} is determined

by interpolating between min_s and max_s according to the probabilities π_{s−1}(D), π_s(D) and π_{s+1}(D) of the neighboring states (Equation (9)).

We then apply the Markov decision process again to predict the sign (positive or negative) of Δ^{D,t}. For the vector (Δ^{D−m,t}, Δ^{D−m+1,t}, ..., Δ^{D−1,t}), we define state E1 as positive and state E2 as negative. We then obtain the transition probabilities p_st(1) = M_st(1)/M_s and the 2 × 2 transition probability matrix P(1) = [p_11, p_12; p_21, p_22], which reflects the probability of transitions between positive and negative. For any component Δ^{D−j,t}, the probability distribution vector is π(D−j) = (π_1^{D−j}, π_2^{D−j}). Assume that Δ^{D−1,t} is positive; then π(D−1) = (1, 0). As the probability distribution vector π(D−1) and the transition probability matrix P(1) are known, the probability distribution vector of the sign of Δ^{D,t} is π(D) = π(D−1) P(1), and the state corresponding to max{π_s(D), s ∈ {1, 2}} indicates the sign of Δ^{D,t}.

3.2.4. Information Entropy-Based Evidence Fusion

As the Un nodes are in an uncertain status, we need a mechanism to determine their statuses. Dempster-Shafer evidence theory is an effective method for dealing with uncertainty problems, but the results obtained are counterintuitive when the evidences highly conflict with each other [18,19]. In the improved evidence fusion algorithm we propose, the possible events are depicted as evidences. Through combination rules, the evidences are aggregated into a comprehensive belief probability assignment under uncertainty conditions. It is assumed that the set of hypotheses about node status is denoted by the frame of discernment Ω = {G, F}. The symbol G represents a good sensor, and F a faulty one. The power set 2^Ω includes all subsets of Ω: 2^Ω = {∅, {G}, {F}, Ω}, whose elements respectively represent the hypotheses impossible, good, faulty and uncertain.
The belief probability assignment (BPA) functions of node i are defined as follows. A BPA is a function:

m: 2^Ω → [0, 1] (10)

with

m(∅) = 0 (11)

The BPA function for good status, m_i({G}) (Equation (12)), decreases monotonically from 1 toward 0 as the normalized offset x grows; the BPA function for faulty status, m_i({F}) (Equation (13)), rises correspondingly; and the BPA function for uncertain status, m_i({Θ}) (Equation (14)), has a Gaussian-shaped form e^{−(x−μ1)²/(2σ1²)} centered at x = μ1. Here we design an expectation deviation function. It is assumed that the measurement value of the nodes at time t on day D is a random variable with expectation E and variance σ². Define x = |x_i − E|/σ, the multiple relation between σ and the difference between x_i and E; x indicates the data offset between node i and the average of its good neighbors. The larger x is, the more probable it is that the node is faulty. As x increases, m_i({G}) decreases and, on the contrary, m_i({F}) rises. In Section 3.1, we discussed that one defect of the traditional algorithms is the indeterminacy that occurs under the "=" condition in Equation (4), which leaves the node irreducible to good or faulty. Therefore, we define the range (μ1 − 3σ1, μ1 + 3σ1) within which the status of the node has high uncertainty (the probability of the node being faulty is moderate), and m_i({Θ}) ~ N(μ1, σ1) when x ∈ (μ1 − 3σ1, μ1 + 3σ1). When x = μ1, m_i({Θ}) = 1, which means the uncertainty reaches its peak (depicted in Figure 2). The definitions of m({G}), m({F}) and m({Θ}) express this meaning and provide a good description of the influence of a changing x on the evidence.
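A hypothetical set of BPA curves with the qualitative shape described above might look like the following sketch. The exact functional forms are not the paper's Equations (12)-(14): σ1 = 0.1 follows the text, but μ1 = 1.349 and the exponential shapes of the G/F curves are illustrative assumptions, and the masses are normalized so they sum to 1.

```python
import math

def bpa_functions(x, mu1=1.349, sigma1=0.1):
    """Illustrative BPAs over {G, F, Theta}: m({G}) shrinks and m({F})
    grows with the normalized offset x = |x_i - E| / sigma, while
    m({Theta}) is a Gaussian bump centered at x = mu1 (assumed values)."""
    g = math.exp(-x)                                      # support for GOOD
    f = 1.0 - math.exp(-x)                                # support for FAULT
    u = math.exp(-((x - mu1) ** 2) / (2 * sigma1 ** 2))   # uncertainty bump
    total = g + f + u
    return {"G": g / total, "F": f / total, "Theta": u / total}
```

At x = 0 nearly all mass falls on {G}; at large x the mass shifts to {F}; near x = μ1 the {Θ} mass peaks, reproducing the moderate-offset "high uncertainty" band.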

In Equation (14), when x ∉ (μ1 − 3σ1, μ1 + 3σ1), m_i({Θ}) is close to 0 (Equation (15)). According to the Chebyshev inequality:

P{|x_i − E| ≥ ε} ≤ σ²/ε² (16)

Let ε = e^{1/8}σ, so that σ²/ε² = e^{−1/4}. The formula then expands as follows:

P{|x_i − E|/σ ≥ e^{1/8}} ≤ e^{−1/4} (17)

Figure 2. The BPA functions.

According to Equations (15)-(17), the boundary of the high-uncertainty range is obtained. Here, we define σ1 = 0.1, and μ1 is then fixed accordingly. With these parameters, m({G}), m({F}) and m({Θ}) can be calculated by Equations (12)-(14), respectively. In D-S evidence theory, if there are more than two BPAs to combine, the combination rule is defined as follows:

m(A) = (1/(1 − K)) Σ_{∩B_i = A} Π_{i=1}^{N} m_i(B_i) (18)

where K is the mass assigned to the empty set ∅: K = Σ_{∩B_i = ∅} Π_{i=1}^{N} m_i(B_i). However, traditional Dempster-Shafer evidence combination has an obvious disadvantage when used in our fault detection algorithm. For example, take m1: m(G) = 0.8, m(F) = 0.2, m(Θ) = 0 and m2: m(G) = 0.8, m(F) = 0.2, m(Θ) = 0. The fused result is m(G) = 0.94, m(F) = 0.06, m(Θ) = 0. The result extremely negates F: roughly obliterating the conflict and then normalizing leads to extreme differences between G and F. This causes errors in the judgment of sensor states in udfd, because node i finds the node j that matches min_j |m_j({G}) − m*({G})|. Overly extreme evidence will distort the

comprehensive evidence. Based on this, we propose a new evidence fusion rule combined with information entropy theory. According to each evidence's conflict with the whole, as measured by information divergence, we classify the evidences into several sets; by fusing the results from the different sets, we prevent an extreme widening of the differences between G and F. With this evidence fusion algorithm, we can finally determine a node's status. In classical information theory, Shannon entropy measures the amount of information, and the amount of information reflects the uncertainty of random events. Different evidences should be assigned different fusion weights according to their amounts of information, so in this section the entropy and the degree of disagreement function, which measures information discrepancy, are introduced into the combination rules to handle evidence conflicts and increase the accuracy of fault determination for Un nodes. First, we introduce some definitions. The information divergence D(p‖q) between discrete random variables p and q is defined as [20]:

D(p‖q) = Σ_x p(x) log(p(x)/q(x)) (19)

It is obvious that D(p‖q) ≥ 0. Assume that M_l indicates the l-th evidence and M_l = (m_1l, m_2l, ..., m_nl), where m_jl is called a focal element, n is the number of focal elements in each evidence and s indicates the number of evidences. D_l is defined as follows:

D_l = (1/(s−1)) Σ_{j=1, j≠l}^{s} D(M_l‖M_j), with Σ_j m_jl = 1, m_jl ≥ 0, 1 ≤ l ≤ s (20)

It indicates the degree of difference between M_l and the whole body of evidence, determined as the average information divergence between M_l and each other evidence. After this, we define δ_l as the share of the whole difference degree that M_l occupies:

δ_l = D_l / Σ_{l=1}^{s} D_l (21)

According to δ_l, the evidences are classified into several subsets: evidences with similar δ_l are aggregated in the same subset.
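The classic combination rule of Equation (18), including the over-polarized two-evidence example above, and the divergence measures of Equations (19)-(21) can be sketched as follows (the eps guard against zero masses is an implementation detail, not part of the paper):

```python
import math

def dempster_combine(m1, m2):
    """Classic two-source Dempster rule over {G}, {F} and Theta (Eq. (18));
    K is the mass falling on conflicting (empty) intersections."""
    theta = "Theta"
    out = {"G": 0.0, "F": 0.0, theta: 0.0}
    K = 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            if a == b:
                out[a] += ma * mb
            elif a == theta:
                out[b] += ma * mb
            elif b == theta:
                out[a] += ma * mb
            else:
                K += ma * mb  # {G} vs {F}: empty intersection
    return {h: v / (1.0 - K) for h, v in out.items()}, K

def kl_divergence(p, q, eps=1e-12):
    """Eq. (19): D(p || q) = sum_x p(x) * log(p(x) / q(x))."""
    return sum(pi * math.log((pi + eps) / (qi + eps)) for pi, qi in zip(p, q))

def difference_degrees(evidences):
    """Eqs. (20)-(21): D_l as the average divergence of M_l from the other
    evidences, then delta_l as its share of the total."""
    s = len(evidences)
    D = [sum(kl_divergence(evidences[l], evidences[j])
             for j in range(s) if j != l) / (s - 1)
         for l in range(s)]
    total = sum(D)
    return [d / total if total else 1.0 / s for d in D]
```

Combining m(G) = 0.8, m(F) = 0.2 with itself reproduces the polarized 0.94/0.06 result from the text, while `difference_degrees` assigns a clearly larger δ to an outlier evidence, which is what drives the subset classification below.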
Before classification, the demarcation point Δ is confirmed as below; it is the average difference between the δ_l values:

Δ = (1/(s(s−1))) Σ_{l=1}^{s} Σ_{j=1}^{s} |δ_l − δ_j|   (22)

Assume that C_r is a resultant subset and P is the collection of the δ_l. The pseudocode of the classification algorithm is as follows:

1: r = 0;
2: While P is not empty, do
3:   Randomly choose one element from P and put it into C_r. Denote this element C_r1;
4:   Remove C_r1 from P;
5:   Loop1, for l = 1 to s
6:     Loop2, for j = 1 to |C_r| (|C_r| is the cardinality of C_r)

7:       If |δ_l − C_rj| ≥ Δ (where C_rj is the j-th element of C_r), then continue Loop1;
8:     End if;
9:   End Loop2;
10:  Put δ_l into C_r and remove it from P;
11: End Loop1;
12: r++;
13: End while.

After classification, the difference between the δ_l of any two evidences in the same subset is less than Δ; that is, evidences within each subset have a small extent of conflict. Assume that there are m subsets, and define the weight of each subset as CW_r = |C_r|/s, where |C_r| is the number of evidences in subset C_r. The evidences in each subset are fused with weights to obtain the aggregative center of C_r, and the weighted fusion is based on information entropy.

Information entropy indicates the amount of information an evidence carries: the larger the information entropy, the less information the evidence has. The information entropy of evidence M_l = (m_1l, m_2l, …, m_nl) is calculated as follows [21]:

H_l = −Σ_{j=1}^{n} m_jl ln m_jl   (23)

If Θ is a focal element in M_l, then a larger m_l(Θ) means that M_l carries less information, so the amount of information of M_l is calculated according to the following formula:

V_l = (1 − m_l(Θ)) e^(−H_l)   (24)

A smaller weight is assigned to an evidence carrying less information, so the weight allocation of each evidence is as follows:

W_l = V_l / Σ_{l=1}^{|C_r|} V_l, if Σ_l V_l ≠ 0;  W_l = 1/|C_r|, if V_l = 0 for all l   (25)

Then the aggregative center of C_r is obtained with an improved D-S formula. For any focal element A, the result of fusion is n_r(A) = p(A) + K·q(A), where:

p(A) = Σ_{∩Aᵢ = A} ∏_{l=1}^{|C_r|} m_l(Aᵢ)   (26)

q(A) = Σ_{l=1}^{|C_r|} W_l m_l(A)   (27)

K = Σ_{∩Aᵢ = Φ} ∏_{l=1}^{|C_r|} m_l(Aᵢ)   (28)

Here, p(A) represents the traditional (unnormalized) way to fuse evidences and q(A) represents the weighted average support degree of the evidences for A. When K is large enough, the influence of q(A) increases. Assume that there are m subsets and that n_r is the aggregative center of C_r; then the final result of fusion is:

m(A) = Σ_{k=1}^{m} CW_k n_k(A)   (29)
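A runnable sketch of the classification pseudocode above (Python; choosing the first remaining element instead of a random one, and a strict comparison against the demarcation point, are assumptions):

```python
def demarcation(deltas):
    """Average pairwise difference between the delta values (the demarcation point)."""
    s = len(deltas)
    return sum(abs(a - b) for a in deltas for b in deltas) / (s * (s - 1))

def classify(deltas):
    """Group evidence indices so that members of one subset have delta values
    that differ from each other by less than the demarcation point."""
    gap = demarcation(deltas)
    pool = list(range(len(deltas)))       # unclassified evidence indices (set P)
    subsets = []                          # the resulting subsets C_r
    while pool:
        group = [pool.pop(0)]             # seed a new subset
        for l in pool[:]:
            # admit l only if it is close to every current member of the subset
            if all(abs(deltas[l] - deltas[j]) < gap for j in group):
                group.append(l)
                pool.remove(l)
        subsets.append(group)
    return subsets
```

For δ values (0.30, 0.32, 0.38) the demarcation point is about 0.053, so the first two evidences share a subset and the third forms its own.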

4. Simulation Analysis

4.1. Simulation Setting

We use MATLAB to demonstrate our model. As shown in Figure 3, a square with a side length of 100 m is constructed, in which sensors are deployed and form the network. Ten temperature sources are deployed in the square as the sensing objects of the sensors; the distance between any two sources is no less than L. Every temperature source randomly generates temperature data x ranging from 5 to 40 °C. These readings simulate the temperature variation of the four seasons, which means the data have regularity and smoothness. Second, n sensors are deployed in this square and each of them selects the nearest temperature source within its sensing range. If no temperature source exists within sensing range, the sensor is set to not work, which means no sensing from a temperature source and no communication with neighbor nodes.

Figure 3. Topology description.

Each working node establishes the variation of its sensed data according to its distance to the temperature source, described by the formulas below:

x_max = x + d/10   (30)

x_min = x − d/10   (31)

x_max and x_min are the upper and lower bounds of the data range, respectively; x is the temperature generated by the temperature source; d is the distance between a sensor and its temperature source. At every sensing moment, a sensor reads a value uniformly distributed in (x_min, x_max) as its sensing data.

Each sensor node selects as neighbor nodes the other nodes which are within its communication range (the communication radius is denoted R) and share the same temperature source. After this, each node creates a set of neighbor nodes and the wireless sensor network is formed. Two cases, uniformly distributed fault nodes and intensively distributed fault nodes, are simulated. The first case is used for comparison as the number of nodes varies. In the second case, we set squares

which are located at random coordinates as fault regions. We compare the detection effects for different scales of intensive faults by changing the area of the square. According to the sensing data designation, data ranging from x_min to x_max are treated as good; otherwise, data are treated as faulty. Fault data are set to x_max + 5d/r or x_min − 5d/r. Parameters are initialized as L = 30 m, r = 20 m, R = 20 m. In our simulation, each final data point is the average of the results of 30 repeats.

4.2. Simulation Result Analysis

4.2.1. Effect of Data Loss

At each data-collection moment, some nodes are chosen to be unable to sense data, simulating a data loss scenario. The data-missing preprocessing mechanism proposed in this paper is compared with the Data Filling method based on the k-Nearest Neighbor algorithm (df-kNN). The main idea of df-kNN is to select the k neighbors with the shortest distances, weight the data of these k nodes according to the distances, and sum the weighted data as the interpolation result. Here, the data loss rate is set to 5%, 10%, 15%, 20%, 25%, 30% and 35%, respectively.

Figure 4. Effects of data filling.

As shown in Figure 4, the data loss rate, i.e., the ratio of the number of data-loss nodes to the total number of working nodes, is plotted on the horizontal axis. The mean residual, i.e., the average difference between the interpolated data and the pre-established data, is plotted on the vertical axis and reflects the final accuracy of the algorithms. The mean residual grows as the loss rate grows. When the loss rate is low, the mean residual of udfd is 0.1 lower than that of df-kNN, an unremarkable improvement, but as the loss rate grows, the improvement becomes larger. Approximately, when the loss rate is high enough, the improvement reaches 0.5, which means udfd is more suitable in large-scale data loss situations.
As the data loss rate grows, the number of neighbors with available data falls, which means less information can be collected; eventually this makes the interpolation results unreliable. In comparison with df-kNN, udfd fully involves the historical data of neighbors for prediction, countering the loss of credibility caused by less available data, which leads to better results.
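The df-kNN baseline can be sketched as below (a Python sketch; normalizing the inverse-distance weights to sum to one is an assumption, since the text only states that the k nearest neighbors' data are weighted by distance and summed):

```python
def df_knn(neighbors, k):
    """Interpolate a missing reading from the k nearest neighbors.
    `neighbors` is a list of (distance, reading) pairs."""
    nearest = sorted(neighbors)[:k]              # k shortest distances
    weights = [1.0 / d for d, _ in nearest]      # closer neighbors weigh more
    total = sum(weights)
    # weighted sum of the k readings is the interpolation result
    return sum(w / total * x for w, (_, x) in zip(weights, nearest))
```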

4.2.2. Evidence Fusion

In this paper, information-theory-based evidence reasoning is used to fuse the collected evidences before the status judgment of nodes. The original D-S evidence reasoning and an improved version proposed by Qiang Ma et al. [22] are used for comparison. The improved D-S is described below. Define the distance between evidences m₁ and m₂:

d(m₁, m₂) = sqrt((1/2)(m₁ − m₂)ᵀ(m₁ − m₂))   (32)

Define the similarity of m₁ and m₂:

S(m₁, m₂) = 1 − d(m₁, m₂)   (33)

Define the basic credibility of mᵢ:

τᵢ = Σ_{j=1, j≠i}^{N} S(mᵢ, mⱼ)   (34)

Here, N is the number of evidences. Define the weight of mⱼ:

ωⱼ = τⱼ / max_{1≤j≤N} τⱼ   (35)

Amend all evidences:

m′(A) = ω m(A)   (36)

m′(ψ) = 1 − ω(1 − m(ψ))   (37)

A is an established focal element and ψ is the uncertain one. The amended evidences are then fused through the original D-S rule. This algorithm measures the degree of conflict among evidences by involving distance and amends the evidences before fusion. The simulation results are shown in Table 1. The belief function and plausibility function are used to evaluate the fusion results. The BPA-based belief function on the frame of discernment Θ is defined as:

Bel(A) = Σ_{B⊆A} m(B)   (38)

The BPA-based plausibility function on the frame of discernment Θ is defined as:

Pl(A) = Σ_{B∩A≠Φ} m(B)   (39)

The belief interval is defined as [Bel(A), Pl(A)], shown in Figure 5. The hypothesis that A is true is accepted in [0, Bel(A)], is uncertain in [Bel(A), Pl(A)] and is refused in [Pl(A), 1]. The length of each interval represents the possibility of drawing the corresponding conclusion about the hypothesis.

Figure 5. Belief interval.
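A Python sketch of the compared amendment procedure (taking the basic credibility as the summed similarity to all other evidences is an assumption where the transcription is unclear):

```python
import math

def amend_evidences(evs):
    """Credibility-weighted discounting from the compared method.
    Each evidence is a dict over {'G', 'F', 'psi'}; 'psi' is the uncertain element."""
    keys = ('G', 'F', 'psi')
    def dist(m1, m2):                       # evidence distance
        return math.sqrt(0.5 * sum((m1[k] - m2[k]) ** 2 for k in keys))
    n = len(evs)
    cred = [sum(1.0 - dist(evs[i], evs[j]) for j in range(n) if j != i)
            for i in range(n)]              # similarity summed over the others
    w = [c / max(cred) for c in cred]       # weights relative to the most credible
    amended = []
    for wi, m in zip(w, evs):
        a = {k: wi * m[k] for k in keys if k != 'psi'}   # discount focal elements
        a['psi'] = 1.0 - wi * (1.0 - m['psi'])           # leftover mass goes to psi
        amended.append(a)
    return amended
```

The discounting keeps each amended evidence a valid BPA (its masses still sum to one), and the most credible evidence is left unchanged.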

Table 1. Results of evidence fusion.

Evidence sets:
  Evidence set 1: m₁: m(G) = 0.1, m(F) = 0.2, m(ψ) = 0.7; m₂: m(G) = 0.2, m(F) = 0.2, m(ψ) = 0.6
  Evidence set 2: set 1 plus m₃: m(G) = 0.1, m(F) = 0.1, m(ψ) = 0.8
  Evidence set 3: set 1 plus m₃: m(G) = 0.6, m(F) = 0.2, m(ψ) = 0.2

Fusion results:
  Original D-S:
    Set 1: m(G) = 0.2340, m(F) = 0.3191, m(ψ) = 0.4468
    Set 2: m(G) = 0.2703, m(F) = 0.3514, m(ψ) = 0.3784
    Set 3: m(G) = 0.5978, m(F) = 0.2849, m(ψ) = 0.1173
  Improved D-S by Qiang Ma:
    Set 1: m(G) = 0.2340, m(F) = 0.3191, m(ψ) = 0.4468
    Set 2: m(G) = 0.2561, m(F) = —, m(ψ) = —
    Set 3: m(G) = 0.2881, m(F) = 0.2909, m(ψ) = 0.4210
  udfd:
    Set 1: m(G) = 0.271, m(F) = —, m(ψ) = —
    Set 2: m(G) = —, m(F) = 0.27, m(ψ) = —
    Set 3: m(G) = 0.3351, m(F) = 0.2679, m(ψ) = 0.3970

The belief functions and plausibility functions corresponding to the fusion results are shown in Table 2.

Table 2. Belief functions and plausibility functions of the fusion results.

  Original D-S:
    Set 1: Bel(G) = 0.2340, Pl(G) = 0.6808; Bel(F) = 0.3191, Pl(F) = 0.7660; Bel(ψ) = 0.4468, Pl(ψ) = 1
    Set 2: Bel(G) = 0.2703, Pl(G) = 0.6486; Bel(F) = 0.3514, Pl(F) = 0.7297; Bel(ψ) = 0.3784, Pl(ψ) = 1
    Set 3: Bel(G) = 0.5978, Pl(G) = 0.7151; Bel(F) = 0.2849, Pl(F) = 0.4022; Bel(ψ) = 0.1173, Pl(ψ) = 1
  Improved D-S by Qiang Ma:
    Set 1: Bel(G) = 0.2340, Pl(G) = 0.6808; Bel(F) = 0.3191, Pl(F) = 0.7660; Bel(ψ) = 0.4468, Pl(ψ) = 1
    Set 2: Bel(G) = 0.2561, Pl(G) = —; Bel(F) = —, Pl(F) = —; Bel(ψ) = —, Pl(ψ) = 1
    Set 3: Bel(G) = 0.2881, Pl(G) = 0.7091; Bel(F) = 0.2909, Pl(F) = 0.7119; Bel(ψ) = 0.4210, Pl(ψ) = 1
  udfd:
    Set 1: Bel(G) = 0.271, Pl(G) = —; Bel(F) = —, Pl(F) = —; Bel(ψ) = —, Pl(ψ) = 1
    Set 2: Bel(G) = —, Pl(G) = —; Bel(F) = 0.27, Pl(F) = —; Bel(ψ) = —, Pl(ψ) = 1
    Set 3: Bel(G) = 0.3351, Pl(G) = 0.7321; Bel(F) = 0.2679, Pl(F) = 0.6649; Bel(ψ) = 0.3970, Pl(ψ) = 1

The results of the three algorithms are similar to each other when the two evidences of evidence set 1, which have a low degree of conflict, are fused; there, the results of the original D-S and the improved D-S proposed by Qiang Ma [22] are identical. A similar evidence is added to set 1 to form set 2.
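The original D-S entries for evidence set 1 can be reproduced directly; a Python sketch over the frame {G, F}, with ψ standing for total ignorance:

```python
from itertools import product

def dempster(m1, m2):
    """Dempster's rule over the frame {G, F}, with 'psi' the whole frame."""
    def meet(a, b):
        # intersection of focal elements: psi acts as the full set
        if a == b: return a
        if a == 'psi': return b
        if b == 'psi': return a
        return None                      # conflicting singletons -> empty set
    out, K = {}, 0.0
    for (a, pa), (b, pb) in product(m1.items(), m2.items()):
        c = meet(a, b)
        if c is None:
            K += pa * pb                 # conflict mass
        else:
            out[c] = out.get(c, 0.0) + pa * pb
    return {k: v / (1.0 - K) for k, v in out.items()}

def bel_pl(m, h):
    """Belief and plausibility of a singleton hypothesis h."""
    return m[h], m[h] + m['psi']

fused = dempster({'G': 0.1, 'F': 0.2, 'psi': 0.7},
                 {'G': 0.2, 'F': 0.2, 'psi': 0.6})
bel_g, pl_g = bel_pl(fused, 'G')   # approx. 0.2340 and 0.6808, the set-1 entry
```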
Analyzing the belief and plausibility functions of the three algorithms on evidence set 2: the possibility of the original D-S to accept that G is true is 0.2703 and the possibility to refuse it is 0.3514; the possibility of the improved D-S to accept that G is true is 0.2561; the possibility of udfd to accept that G is true stays close to those of the individual evidences, and its possibility to refuse is 0.27. For evidence set 2, the possibilities of the previous two algorithms to accept and refuse that G is true are too high to reflect the actual situation of each evidence (whose possibilities to accept and refuse are both lower than 0.2). The algorithm proposed in this paper is closer: in udfd, the possibilities to accept and to refuse are not raised by reducing the uncertainty, which makes the fusion result more credible. A completely different evidence is added to set 1 to form set 3, which has a high degree of conflict. It is obvious that the possibility (0.5978) of the original D-S to accept that G is true is so high that it approaches that of the last evidence added to set 3, while the possibility (0.2881) of the improved D-S is too low to reflect the

effect caused by the high degree of conflict. The result of udfd lies between those of the previous two algorithms, which balances the influences of all the evidences and is more credible.

4.2.3. Detection Accuracy

DFD, IDFD and udfd are compared on the constructed wireless sensor network model. The measures involved are detection accuracy (the ratio of the number of correctly judged nodes to the total number of working nodes), false alarm rate (the ratio of the number of nodes misjudged from good to faulty to the total number of working nodes) and missing alarm rate (the ratio of the number of nodes misjudged from faulty to good to the total number of working nodes). In this simulation, each final data point is the average of the results of 30 repeats.

First, we analyze the effects of the three algorithms as the fault rate changes when fault nodes are uniformly randomly distributed. Considering that different distribution densities have different influences, cases with 40, 80 and 120 working nodes are simulated.

Figure 6 shows the detection effects of the three algorithms with 40 working nodes. In this case, nodes are distributed sparsely in the simulation region. It can be seen from the figure that with increasing fault rate, detection accuracy shows an approximately linear downward trend, while the false alarm rate and the missing alarm rate show the opposite trend. Through further calculation, when the fault rate ranges from 5% to 50%, the average detection accuracy of udfd is 9.33 percentage points higher than that of DFD and 6.5 points higher than that of IDFD; the average false alarm rate of udfd is 6.93 points lower than that of DFD and 5.33 points lower than that of IDFD; the average missing alarm rate of udfd is 2.49 points lower than that of DFD and 1.20 points lower than that of IDFD. Thus udfd brings better detection accuracy under sparse distribution conditions.

Figure 6. Detection effects under 40 working node conditions.
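The three detection measures defined above can be sketched as follows (Python; the function name and the set-based bookkeeping are illustrative assumptions):

```python
def detection_metrics(truly_faulty, judged_faulty, n_working):
    """Detection accuracy, false alarm rate and missing alarm rate for one round,
    given the sets of truly faulty and judged-as-faulty working node ids."""
    false_alarm = len(judged_faulty - truly_faulty)    # good nodes judged faulty
    missing_alarm = len(truly_faulty - judged_faulty)  # faulty nodes judged good
    correct = n_working - false_alarm - missing_alarm  # all remaining judgments
    return (correct / n_working,
            false_alarm / n_working,
            missing_alarm / n_working)
```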

Figure 7 shows the detection effects of the three algorithms with 80 working nodes. In this case, nodes are moderately densely distributed in the simulation region. As shown by the figure, with increasing fault rate, detection accuracy shows an approximately linear downward trend, while the false alarm rate and the missing alarm rate show the opposite trend. Through further calculation, when the fault rate ranges from 5% to 50%, the average detection accuracy of udfd is 7.31 percentage points higher than that of DFD and 5.44 points higher than that of IDFD; the average false alarm rate of udfd is 4.1 points lower than that of DFD and 3.4 points lower than that of IDFD; the average missing alarm rate of udfd is 3.2 points lower than that of DFD and 2.04 points lower than that of IDFD. Thus udfd provides better detection accuracy in this condition. Meanwhile, as the fault rate grows, the superiority of the detection effect of udfd continues to increase. When the fault rate is 50%, the detection accuracy of udfd is 10.08 points higher than that of DFD and 7.68 points higher than that of IDFD, which indicates that udfd adapts better to high fault rate conditions.

Figure 7. Detection effects under 80 working node conditions.

Figure 8 shows the detection effects of the three algorithms with 120 working nodes. In this case, nodes are densely distributed in the simulation region. It can be seen from the figure that with increasing fault rate, detection accuracy shows an approximately linear downward trend, while the false alarm rate and the missing alarm rate show the opposite trend. Through further calculation, when the fault rate ranges from 5% to 50%, the average detection accuracy of udfd is 3.75 points higher than that of DFD and 1.74 points higher than that of IDFD; the average false alarm rate of udfd is 1.69 points lower than that of DFD and 1.14 points lower than that of IDFD; the average missing alarm rate of udfd is 2.09 points lower than that of DFD and 0.6 points lower than that of IDFD. Thus udfd provides better detection accuracy under dense distribution conditions.
Furthermore, as the fault rate grows, the superiority of the detection effect of udfd increases continually, which indicates that udfd performs better under high fault rate conditions.

Figure 8. Detection effects under 120 working node conditions.

Through comprehensive analysis of Figures 6–8, we can see that the detection accuracy of all three algorithms increases with the distribution density of nodes. This is due to the increase of information available for judgment as the number of neighbor nodes grows. By comparison, the advantage in detection accuracy of udfd increases as the distribution density of nodes decreases, which indicates that the detection accuracy of udfd degrades less than that of DFD and IDFD. Thanks to evidence fusion based on the statuses of neighbor nodes before judgment, udfd works better when the number of neighbor nodes is small.

Second, we analyze the detection accuracy of the three algorithms when fault nodes are intensively distributed. The intensive distribution scheme sets squares located at random coordinates with side lengths of 20, 25, 30, 35 and 40 m as the fault regions. In a fault region, all nodes are set to faulty. By changing the size of the fault region, we can observe the detection accuracy for different scales of faulty nodes. Here, the number of working nodes is 80. Figure 9 shows the detection effects of the three algorithms with different fault region sizes. When the fault rate ranges from 5% to 50%, the average detection accuracy of udfd is 2.08 percentage points higher than that of DFD and 1.55 points higher than that of IDFD; the average false alarm rate of udfd is 0.84 points lower than that of DFD and 0.45 points lower than that of IDFD; the average missing alarm rate of udfd is 1.4 points lower than that of DFD and 1.09 points lower than that of IDFD. It is easy to conclude that udfd achieves better detection effects when intensive faults occur. The udfd takes in more information from the neighborhood for judgment when dealing with intensive fault situations, which reduces the influence of mutual cheating among faulty nodes.
In udfd, θ₁ is the threshold for data from different nodes at the same moment, θ₂ is the threshold for data from the same node at different moments, and θ₃ is the threshold for data from the same node collected at

the same moment on different days. In our model, a value of θ₂ within the interval (0, 2) has little influence on the detection effect, so it is set to the fixed value 1. On that basis, the best combination of θ₁ and θ₃ is explored. The selection of θ₁ and θ₃ is directly related to the judgment results; thus, finding the best combination of θ₁ and θ₃ is the key to the best detection effect of udfd. Figure 10 shows a 3D map of the detection accuracy for different combinations of θ₁ and θ₃. Here, the number of working nodes is 80. It can be seen from the curved surface that the combination θ₁ = 5, θ₃ = 5.4 approaches the peak, where the detection accuracy reaches its maximum.

Figure 9. Detection effects under intensive fault conditions.

Figure 10. Detection accuracy with different combinations of θ₁ and θ₃.

4.2.4. Communication Energy Consumption

In our model, communication between nodes is simulated using the ZigBee protocol. ZigBee, a personal area network protocol based on IEEE 802.15.4, supports short-distance, low-complexity, self-organizing, low-power and low-cost wireless communication and applies well in WSNs. The brief frame structure of ZigBee is shown in Figure 11. The frame head is constructed from the bits of the application layer, network layer, MAC layer and physical layer. In our model, the messages transmitted between nodes include prejudged statuses, evidences and final judged statuses. This information can be encapsulated in the payload field of the application layer frame by analyzing the ZigBee frame structure. When statuses are to be transmitted, the payload is only 4 bits (2 bits represent the message type and 2 bits represent the status, namely LG/G) and the total length of a frame is 47 bits (the header length is 43 bits).

Figure 11. Brief frame structure of ZigBee.

The radio energy dissipation model is calculated as indicated below:

E_Tx(l, d) = E_Tx-elec(l) + E_Tx-amp(l, d) = { l·E_elec + l·ε_fs·d², if d < d₀ ; l·E_elec + l·ε_mp·d⁴, if d ≥ d₀ }   (40)

E_Rx(l) = E_Rx-elec(l) = l·E_elec   (41)

E_Tx(l, d) is the transmission energy, which includes the electronic energy E_Tx-elec(l) and the amplifier energy E_Tx-amp(l, d); l is the length of a message in bits. E_elec is determined by digital coding, modulation and filtering and is fixed at 50 nJ/bit; ε_fs = 10 pJ/bit/m² and ε_mp (in pJ/bit/m⁴) are the amplifier coefficients. d is the transmission distance and d₀ is the threshold on d, set to 20 m. If d is smaller than d₀, energy dissipation is dominated by free-space power loss (d²); otherwise, it is dominated by multipath power loss (d⁴).

First, we analyze the number of messages transmitted in the simulated network under 40 working node conditions, depicted in Figure 12. When transmitting messages, nodes exchange statuses by radio broadcast. The accumulated number of messages is recorded over the course of 30 tests. With the growth of test rounds, the number of messages shows an approximately linear upward trend.
Through further calculation, the approximate slopes for DFD and udfd are 266.7 and 103.3, respectively, with IDFD in between. The rate of increase of udfd is clearly the lowest, which means a minimum of messages transmitted during fault detection. This is because the nodes in DFD and IDFD have to exchange all prejudged statuses and final judged statuses whether they are good or faulty, which leads to more interactions, while udfd only exchanges a message if the tendency status is LG or the final status is good.
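The radio energy dissipation model of Equations (40) and (41) can be sketched as follows (Python; the multipath amplifier coefficient was lost in the transcription, so the common literature value of 0.0013 pJ/bit/m⁴ is assumed below):

```python
E_ELEC = 50e-9        # electronics energy, 50 nJ/bit
EPS_FS = 10e-12       # free-space amplifier energy, 10 pJ/bit/m^2
EPS_MP = 0.0013e-12   # multipath amplifier energy (assumed value), pJ/bit/m^4
D0 = 20.0             # distance threshold d0 in meters

def tx_energy(l, d):
    """Energy to transmit an l-bit message over distance d, Equation (40)."""
    if d < D0:
        return l * E_ELEC + l * EPS_FS * d ** 2    # free-space loss, d^2
    return l * E_ELEC + l * EPS_MP * d ** 4        # multipath loss, d^4

def rx_energy(l):
    """Energy to receive an l-bit message, Equation (41)."""
    return l * E_ELEC
```

Under these assumptions, broadcasting one 47-bit status frame over 10 m costs about 2.4 µJ.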

The comparison of the average communication energy consumption of each node after 30 tests is shown in Figure 13. The average energy consumption per node in DFD and udfd is 0.35 and 0.08 mJ, respectively, with IDFD between the two. By counting and comparing the energy consumption of each node specifically, we find that udfd has the best energy-saving performance, which is caused by the reduction of interactions.

Figure 12. Number of messages accumulated during 30 tests under 40 working node conditions.

Figure 13. Energy consumption of each node under 40 working node conditions.

Further figures show the energy consumption of each detection under conditions of 40, 80 and 120 working nodes. Through further calculation, under 40 working node conditions, the average energy consumption of all detections in udfd is 2.87 mJ lower than that of IDFD and 5.47 mJ lower than that of DFD. Under 80 working node conditions, the average energy consumption of all detections in udfd is 4.77 mJ lower than that of IDFD and 10.6 mJ lower than that of DFD.


More information

Application research on rough set -neural network in the fault diagnosis system of ball mill

Application research on rough set -neural network in the fault diagnosis system of ball mill Avalable onlne www.ocpr.com Journal of Chemcal and Pharmaceutcal Research, 2014, 6(4):834-838 Research Artcle ISSN : 0975-7384 CODEN(USA) : JCPRC5 Applcaton research on rough set -neural network n the

More information

Generalized Linear Methods

Generalized Linear Methods Generalzed Lnear Methods 1 Introducton In the Ensemble Methods the general dea s that usng a combnaton of several weak learner one could make a better learner. More formally, assume that we have a set

More information

Lecture 3: Shannon s Theorem

Lecture 3: Shannon s Theorem CSE 533: Error-Correctng Codes (Autumn 006 Lecture 3: Shannon s Theorem October 9, 006 Lecturer: Venkatesan Guruswam Scrbe: Wdad Machmouch 1 Communcaton Model The communcaton model we are usng conssts

More information

VQ widely used in coding speech, image, and video

VQ widely used in coding speech, image, and video at Scalar quantzers are specal cases of vector quantzers (VQ): they are constraned to look at one sample at a tme (memoryless) VQ does not have such constrant better RD perfomance expected Source codng

More information

arxiv:cs.cv/ Jun 2000

arxiv:cs.cv/ Jun 2000 Correlaton over Decomposed Sgnals: A Non-Lnear Approach to Fast and Effectve Sequences Comparson Lucano da Fontoura Costa arxv:cs.cv/0006040 28 Jun 2000 Cybernetc Vson Research Group IFSC Unversty of São

More information

LINEAR REGRESSION ANALYSIS. MODULE IX Lecture Multicollinearity

LINEAR REGRESSION ANALYSIS. MODULE IX Lecture Multicollinearity LINEAR REGRESSION ANALYSIS MODULE IX Lecture - 30 Multcollnearty Dr. Shalabh Department of Mathematcs and Statstcs Indan Insttute of Technology Kanpur 2 Remedes for multcollnearty Varous technques have

More information

Markov Chain Monte Carlo Lecture 6

Markov Chain Monte Carlo Lecture 6 where (x 1,..., x N ) X N, N s called the populaton sze, f(x) f (x) for at least one {1, 2,..., N}, and those dfferent from f(x) are called the tral dstrbutons n terms of mportance samplng. Dfferent ways

More information

Chapter 8 Indicator Variables

Chapter 8 Indicator Variables Chapter 8 Indcator Varables In general, e explanatory varables n any regresson analyss are assumed to be quanttatve n nature. For example, e varables lke temperature, dstance, age etc. are quanttatve n

More information

Bayesian predictive Configural Frequency Analysis

Bayesian predictive Configural Frequency Analysis Psychologcal Test and Assessment Modelng, Volume 54, 2012 (3), 285-292 Bayesan predctve Confgural Frequency Analyss Eduardo Gutérrez-Peña 1 Abstract Confgural Frequency Analyss s a method for cell-wse

More information

Notes on Frequency Estimation in Data Streams

Notes on Frequency Estimation in Data Streams Notes on Frequency Estmaton n Data Streams In (one of) the data streamng model(s), the data s a sequence of arrvals a 1, a 2,..., a m of the form a j = (, v) where s the dentty of the tem and belongs to

More information

Lecture 12: Discrete Laplacian

Lecture 12: Discrete Laplacian Lecture 12: Dscrete Laplacan Scrbe: Tanye Lu Our goal s to come up wth a dscrete verson of Laplacan operator for trangulated surfaces, so that we can use t n practce to solve related problems We are mostly

More information

EEE 241: Linear Systems

EEE 241: Linear Systems EEE : Lnear Systems Summary #: Backpropagaton BACKPROPAGATION The perceptron rule as well as the Wdrow Hoff learnng were desgned to tran sngle layer networks. They suffer from the same dsadvantage: they

More information

More metrics on cartesian products

More metrics on cartesian products More metrcs on cartesan products If (X, d ) are metrc spaces for 1 n, then n Secton II4 of the lecture notes we defned three metrcs on X whose underlyng topologes are the product topology The purpose of

More information

Uncertainty and auto-correlation in. Measurement

Uncertainty and auto-correlation in. Measurement Uncertanty and auto-correlaton n arxv:1707.03276v2 [physcs.data-an] 30 Dec 2017 Measurement Markus Schebl Federal Offce of Metrology and Surveyng (BEV), 1160 Venna, Austra E-mal: markus.schebl@bev.gv.at

More information

Uncertainty as the Overlap of Alternate Conditional Distributions

Uncertainty as the Overlap of Alternate Conditional Distributions Uncertanty as the Overlap of Alternate Condtonal Dstrbutons Olena Babak and Clayton V. Deutsch Centre for Computatonal Geostatstcs Department of Cvl & Envronmental Engneerng Unversty of Alberta An mportant

More information

Stanford University CS359G: Graph Partitioning and Expanders Handout 4 Luca Trevisan January 13, 2011

Stanford University CS359G: Graph Partitioning and Expanders Handout 4 Luca Trevisan January 13, 2011 Stanford Unversty CS359G: Graph Parttonng and Expanders Handout 4 Luca Trevsan January 3, 0 Lecture 4 In whch we prove the dffcult drecton of Cheeger s nequalty. As n the past lectures, consder an undrected

More information

Supplementary Notes for Chapter 9 Mixture Thermodynamics

Supplementary Notes for Chapter 9 Mixture Thermodynamics Supplementary Notes for Chapter 9 Mxture Thermodynamcs Key ponts Nne major topcs of Chapter 9 are revewed below: 1. Notaton and operatonal equatons for mxtures 2. PVTN EOSs for mxtures 3. General effects

More information

Design and Optimization of Fuzzy Controller for Inverse Pendulum System Using Genetic Algorithm

Design and Optimization of Fuzzy Controller for Inverse Pendulum System Using Genetic Algorithm Desgn and Optmzaton of Fuzzy Controller for Inverse Pendulum System Usng Genetc Algorthm H. Mehraban A. Ashoor Unversty of Tehran Unversty of Tehran h.mehraban@ece.ut.ac.r a.ashoor@ece.ut.ac.r Abstract:

More information

Inductance Calculation for Conductors of Arbitrary Shape

Inductance Calculation for Conductors of Arbitrary Shape CRYO/02/028 Aprl 5, 2002 Inductance Calculaton for Conductors of Arbtrary Shape L. Bottura Dstrbuton: Internal Summary In ths note we descrbe a method for the numercal calculaton of nductances among conductors

More information

Correlation and Regression. Correlation 9.1. Correlation. Chapter 9

Correlation and Regression. Correlation 9.1. Correlation. Chapter 9 Chapter 9 Correlaton and Regresson 9. Correlaton Correlaton A correlaton s a relatonshp between two varables. The data can be represented b the ordered pars (, ) where s the ndependent (or eplanator) varable,

More information

Regularized Discriminant Analysis for Face Recognition

Regularized Discriminant Analysis for Face Recognition 1 Regularzed Dscrmnant Analyss for Face Recognton Itz Pma, Mayer Aladem Department of Electrcal and Computer Engneerng, Ben-Guron Unversty of the Negev P.O.Box 653, Beer-Sheva, 845, Israel. Abstract Ths

More information

An Improved multiple fractal algorithm

An Improved multiple fractal algorithm Advanced Scence and Technology Letters Vol.31 (MulGraB 213), pp.184-188 http://dx.do.org/1.1427/astl.213.31.41 An Improved multple fractal algorthm Yun Ln, Xaochu Xu, Jnfeng Pang College of Informaton

More information

Department of Quantitative Methods & Information Systems. Time Series and Their Components QMIS 320. Chapter 6

Department of Quantitative Methods & Information Systems. Time Series and Their Components QMIS 320. Chapter 6 Department of Quanttatve Methods & Informaton Systems Tme Seres and Ther Components QMIS 30 Chapter 6 Fall 00 Dr. Mohammad Zanal These sldes were modfed from ther orgnal source for educatonal purpose only.

More information

Psychology 282 Lecture #24 Outline Regression Diagnostics: Outliers

Psychology 282 Lecture #24 Outline Regression Diagnostics: Outliers Psychology 282 Lecture #24 Outlne Regresson Dagnostcs: Outlers In an earler lecture we studed the statstcal assumptons underlyng the regresson model, ncludng the followng ponts: Formal statement of assumptons.

More information

Turbulence classification of load data by the frequency and severity of wind gusts. Oscar Moñux, DEWI GmbH Kevin Bleibler, DEWI GmbH

Turbulence classification of load data by the frequency and severity of wind gusts. Oscar Moñux, DEWI GmbH Kevin Bleibler, DEWI GmbH Turbulence classfcaton of load data by the frequency and severty of wnd gusts Introducton Oscar Moñux, DEWI GmbH Kevn Blebler, DEWI GmbH Durng the wnd turbne developng process, one of the most mportant

More information

Annexes. EC.1. Cycle-base move illustration. EC.2. Problem Instances

Annexes. EC.1. Cycle-base move illustration. EC.2. Problem Instances ec Annexes Ths Annex frst llustrates a cycle-based move n the dynamc-block generaton tabu search. It then dsplays the characterstcs of the nstance sets, followed by detaled results of the parametercalbraton

More information

Problem Set 9 Solutions

Problem Set 9 Solutions Desgn and Analyss of Algorthms May 4, 2015 Massachusetts Insttute of Technology 6.046J/18.410J Profs. Erk Demane, Srn Devadas, and Nancy Lynch Problem Set 9 Solutons Problem Set 9 Solutons Ths problem

More information

18.1 Introduction and Recap

18.1 Introduction and Recap CS787: Advanced Algorthms Scrbe: Pryananda Shenoy and Shjn Kong Lecturer: Shuch Chawla Topc: Streamng Algorthmscontnued) Date: 0/26/2007 We contnue talng about streamng algorthms n ths lecture, ncludng

More information

Chapter 5. Solution of System of Linear Equations. Module No. 6. Solution of Inconsistent and Ill Conditioned Systems

Chapter 5. Solution of System of Linear Equations. Module No. 6. Solution of Inconsistent and Ill Conditioned Systems Numercal Analyss by Dr. Anta Pal Assstant Professor Department of Mathematcs Natonal Insttute of Technology Durgapur Durgapur-713209 emal: anta.bue@gmal.com 1 . Chapter 5 Soluton of System of Lnear Equatons

More information

Week3, Chapter 4. Position and Displacement. Motion in Two Dimensions. Instantaneous Velocity. Average Velocity

Week3, Chapter 4. Position and Displacement. Motion in Two Dimensions. Instantaneous Velocity. Average Velocity Week3, Chapter 4 Moton n Two Dmensons Lecture Quz A partcle confned to moton along the x axs moves wth constant acceleraton from x =.0 m to x = 8.0 m durng a 1-s tme nterval. The velocty of the partcle

More information

Queueing Networks II Network Performance

Queueing Networks II Network Performance Queueng Networks II Network Performance Davd Tpper Assocate Professor Graduate Telecommuncatons and Networkng Program Unversty of Pttsburgh Sldes 6 Networks of Queues Many communcaton systems must be modeled

More information

A New Evidence Combination Method based on Consistent Strength

A New Evidence Combination Method based on Consistent Strength TELKOMNIKA, Vol.11, No.11, November 013, pp. 697~6977 e-issn: 087-78X 697 A New Evdence Combnaton Method based on Consstent Strength Changmng Qao, Shul Sun* Insttute of Electronc Engneerng, He Longang

More information

Chapter 9: Statistical Inference and the Relationship between Two Variables

Chapter 9: Statistical Inference and the Relationship between Two Variables Chapter 9: Statstcal Inference and the Relatonshp between Two Varables Key Words The Regresson Model The Sample Regresson Equaton The Pearson Correlaton Coeffcent Learnng Outcomes After studyng ths chapter,

More information

Hidden Markov Models

Hidden Markov Models CM229S: Machne Learnng for Bonformatcs Lecture 12-05/05/2016 Hdden Markov Models Lecturer: Srram Sankararaman Scrbe: Akshay Dattatray Shnde Edted by: TBD 1 Introducton For a drected graph G we can wrte

More information

Pulse Coded Modulation

Pulse Coded Modulation Pulse Coded Modulaton PCM (Pulse Coded Modulaton) s a voce codng technque defned by the ITU-T G.711 standard and t s used n dgtal telephony to encode the voce sgnal. The frst step n the analog to dgtal

More information

10-701/ Machine Learning, Fall 2005 Homework 3

10-701/ Machine Learning, Fall 2005 Homework 3 10-701/15-781 Machne Learnng, Fall 2005 Homework 3 Out: 10/20/05 Due: begnnng of the class 11/01/05 Instructons Contact questons-10701@autonlaborg for queston Problem 1 Regresson and Cross-valdaton [40

More information

Lecture 6: Introduction to Linear Regression

Lecture 6: Introduction to Linear Regression Lecture 6: Introducton to Lnear Regresson An Manchakul amancha@jhsph.edu 24 Aprl 27 Lnear regresson: man dea Lnear regresson can be used to study an outcome as a lnear functon of a predctor Example: 6

More information

Motion Perception Under Uncertainty. Hongjing Lu Department of Psychology University of Hong Kong

Motion Perception Under Uncertainty. Hongjing Lu Department of Psychology University of Hong Kong Moton Percepton Under Uncertanty Hongjng Lu Department of Psychology Unversty of Hong Kong Outlne Uncertanty n moton stmulus Correspondence problem Qualtatve fttng usng deal observer models Based on sgnal

More information

3.1 Expectation of Functions of Several Random Variables. )' be a k-dimensional discrete or continuous random vector, with joint PMF p (, E X E X1 E X

3.1 Expectation of Functions of Several Random Variables. )' be a k-dimensional discrete or continuous random vector, with joint PMF p (, E X E X1 E X Statstcs 1: Probablty Theory II 37 3 EPECTATION OF SEVERAL RANDOM VARIABLES As n Probablty Theory I, the nterest n most stuatons les not on the actual dstrbuton of a random vector, but rather on a number

More information

Linear Approximation with Regularization and Moving Least Squares

Linear Approximation with Regularization and Moving Least Squares Lnear Approxmaton wth Regularzaton and Movng Least Squares Igor Grešovn May 007 Revson 4.6 (Revson : March 004). 5 4 3 0.5 3 3.5 4 Contents: Lnear Fttng...4. Weghted Least Squares n Functon Approxmaton...

More information

x = , so that calculated

x = , so that calculated Stat 4, secton Sngle Factor ANOVA notes by Tm Plachowsk n chapter 8 we conducted hypothess tests n whch we compared a sngle sample s mean or proporton to some hypotheszed value Chapter 9 expanded ths to

More information

Evaluation of Validation Metrics. O. Polach Final Meeting Frankfurt am Main, September 27, 2013

Evaluation of Validation Metrics. O. Polach Final Meeting Frankfurt am Main, September 27, 2013 Evaluaton of Valdaton Metrcs O. Polach Fnal Meetng Frankfurt am Man, September 7, 013 Contents What s Valdaton Metrcs? Valdaton Metrcs evaluated n DynoTRAIN WP5 Drawbacks of Valdaton Metrcs Conclusons

More information

Topic 23 - Randomized Complete Block Designs (RCBD)

Topic 23 - Randomized Complete Block Designs (RCBD) Topc 3 ANOVA (III) 3-1 Topc 3 - Randomzed Complete Block Desgns (RCBD) Defn: A Randomzed Complete Block Desgn s a varant of the completely randomzed desgn (CRD) that we recently learned. In ths desgn,

More information

Difference Equations

Difference Equations Dfference Equatons c Jan Vrbk 1 Bascs Suppose a sequence of numbers, say a 0,a 1,a,a 3,... s defned by a certan general relatonshp between, say, three consecutve values of the sequence, e.g. a + +3a +1

More information

This column is a continuation of our previous column

This column is a continuation of our previous column Comparson of Goodness of Ft Statstcs for Lnear Regresson, Part II The authors contnue ther dscusson of the correlaton coeffcent n developng a calbraton for quanttatve analyss. Jerome Workman Jr. and Howard

More information

Chapter 15 - Multiple Regression

Chapter 15 - Multiple Regression Chapter - Multple Regresson Chapter - Multple Regresson Multple Regresson Model The equaton that descrbes how the dependent varable y s related to the ndependent varables x, x,... x p and an error term

More information

/ n ) are compared. The logic is: if the two

/ n ) are compared. The logic is: if the two STAT C141, Sprng 2005 Lecture 13 Two sample tests One sample tests: examples of goodness of ft tests, where we are testng whether our data supports predctons. Two sample tests: called as tests of ndependence

More information

Lecture 10 Support Vector Machines II

Lecture 10 Support Vector Machines II Lecture 10 Support Vector Machnes II 22 February 2016 Taylor B. Arnold Yale Statstcs STAT 365/665 1/28 Notes: Problem 3 s posted and due ths upcomng Frday There was an early bug n the fake-test data; fxed

More information

A Bayes Algorithm for the Multitask Pattern Recognition Problem Direct Approach

A Bayes Algorithm for the Multitask Pattern Recognition Problem Direct Approach A Bayes Algorthm for the Multtask Pattern Recognton Problem Drect Approach Edward Puchala Wroclaw Unversty of Technology, Char of Systems and Computer etworks, Wybrzeze Wyspanskego 7, 50-370 Wroclaw, Poland

More information

The optimal delay of the second test is therefore approximately 210 hours earlier than =2.

The optimal delay of the second test is therefore approximately 210 hours earlier than =2. THE IEC 61508 FORMULAS 223 The optmal delay of the second test s therefore approxmately 210 hours earler than =2. 8.4 The IEC 61508 Formulas IEC 61508-6 provdes approxmaton formulas for the PF for smple

More information

Dr. Shalabh Department of Mathematics and Statistics Indian Institute of Technology Kanpur

Dr. Shalabh Department of Mathematics and Statistics Indian Institute of Technology Kanpur Analyss of Varance and Desgn of Experment-I MODULE VII LECTURE - 3 ANALYSIS OF COVARIANCE Dr Shalabh Department of Mathematcs and Statstcs Indan Insttute of Technology Kanpur Any scentfc experment s performed

More information

Grover s Algorithm + Quantum Zeno Effect + Vaidman

Grover s Algorithm + Quantum Zeno Effect + Vaidman Grover s Algorthm + Quantum Zeno Effect + Vadman CS 294-2 Bomb 10/12/04 Fall 2004 Lecture 11 Grover s algorthm Recall that Grover s algorthm for searchng over a space of sze wors as follows: consder the

More information

The Geometry of Logit and Probit

The Geometry of Logit and Probit The Geometry of Logt and Probt Ths short note s meant as a supplement to Chapters and 3 of Spatal Models of Parlamentary Votng and the notaton and reference to fgures n the text below s to those two chapters.

More information

Joint Statistical Meetings - Biopharmaceutical Section

Joint Statistical Meetings - Biopharmaceutical Section Iteratve Ch-Square Test for Equvalence of Multple Treatment Groups Te-Hua Ng*, U.S. Food and Drug Admnstraton 1401 Rockvlle Pke, #200S, HFM-217, Rockvlle, MD 20852-1448 Key Words: Equvalence Testng; Actve

More information

Lecture 14 (03/27/18). Channels. Decoding. Preview of the Capacity Theorem.

Lecture 14 (03/27/18). Channels. Decoding. Preview of the Capacity Theorem. Lecture 14 (03/27/18). Channels. Decodng. Prevew of the Capacty Theorem. A. Barg The concept of a communcaton channel n nformaton theory s an abstracton for transmttng dgtal (and analog) nformaton from

More information

Errors for Linear Systems

Errors for Linear Systems Errors for Lnear Systems When we solve a lnear system Ax b we often do not know A and b exactly, but have only approxmatons  and ˆb avalable. Then the best thng we can do s to solve ˆx ˆb exactly whch

More information

Chapter 5 Multilevel Models

Chapter 5 Multilevel Models Chapter 5 Multlevel Models 5.1 Cross-sectonal multlevel models 5.1.1 Two-level models 5.1.2 Multple level models 5.1.3 Multple level modelng n other felds 5.2 Longtudnal multlevel models 5.2.1 Two-level

More information

j) = 1 (note sigma notation) ii. Continuous random variable (e.g. Normal distribution) 1. density function: f ( x) 0 and f ( x) dx = 1

j) = 1 (note sigma notation) ii. Continuous random variable (e.g. Normal distribution) 1. density function: f ( x) 0 and f ( x) dx = 1 Random varables Measure of central tendences and varablty (means and varances) Jont densty functons and ndependence Measures of assocaton (covarance and correlaton) Interestng result Condtonal dstrbutons

More information

SIO 224. m(r) =(ρ(r),k s (r),µ(r))

SIO 224. m(r) =(ρ(r),k s (r),µ(r)) SIO 224 1. A bref look at resoluton analyss Here s some background for the Masters and Gubbns resoluton paper. Global Earth models are usually found teratvely by assumng a startng model and fndng small

More information

Lecture 4. Instructor: Haipeng Luo

Lecture 4. Instructor: Haipeng Luo Lecture 4 Instructor: Hapeng Luo In the followng lectures, we focus on the expert problem and study more adaptve algorthms. Although Hedge s proven to be worst-case optmal, one may wonder how well t would

More information

Uncertainty in measurements of power and energy on power networks

Uncertainty in measurements of power and energy on power networks Uncertanty n measurements of power and energy on power networks E. Manov, N. Kolev Department of Measurement and Instrumentaton, Techncal Unversty Sofa, bul. Klment Ohrdsk No8, bl., 000 Sofa, Bulgara Tel./fax:

More information

UNIVERSITY OF TORONTO Faculty of Arts and Science. December 2005 Examinations STA437H1F/STA1005HF. Duration - 3 hours

UNIVERSITY OF TORONTO Faculty of Arts and Science. December 2005 Examinations STA437H1F/STA1005HF. Duration - 3 hours UNIVERSITY OF TORONTO Faculty of Arts and Scence December 005 Examnatons STA47HF/STA005HF Duraton - hours AIDS ALLOWED: (to be suppled by the student) Non-programmable calculator One handwrtten 8.5'' x

More information

DUE: WEDS FEB 21ST 2018

DUE: WEDS FEB 21ST 2018 HOMEWORK # 1: FINITE DIFFERENCES IN ONE DIMENSION DUE: WEDS FEB 21ST 2018 1. Theory Beam bendng s a classcal engneerng analyss. The tradtonal soluton technque makes smplfyng assumptons such as a constant

More information