Consensus Algorithms and Distributed Structure Estimation in Wireless Sensor Networks. Sai Zhang


Consensus Algorithms and Distributed Structure Estimation in Wireless Sensor Networks

by

Sai Zhang

A Dissertation Presented in Partial Fulfillment of the Requirements for the Degree Doctor of Philosophy

Approved April 2017 by the Graduate Supervisory Committee:

Cihan Tepedelenlioglu, Co-Chair
Andreas Spanias, Co-Chair
Kostas Tsakalis
Daniel Bliss

ARIZONA STATE UNIVERSITY

May 2017

ABSTRACT

Distributed wireless sensor networks (WSNs) have attracted researchers recently due to their advantages such as low power consumption, scalability and robustness to link failures. In sensor networks with no fusion center, consensus is a process where all the sensors in the network achieve global agreement using only local transmissions. In this dissertation, several consensus and consensus-based algorithms in WSNs are studied.

Firstly, a distributed consensus algorithm for estimating the maximum and minimum value of the initial measurements in a sensor network in the presence of communication noise is proposed. In the proposed algorithm, a soft-max approximation together with a non-linear average consensus algorithm is used. A design parameter controls the trade-off between the soft-max error and convergence speed. An analysis of this trade-off gives guidelines towards how to choose the design parameter for the max estimate. It is also shown that if some prior knowledge of the initial measurements is available, the consensus process can be accelerated.

Secondly, a distributed system size estimation algorithm is proposed. The proposed algorithm is based on distributed average consensus and L2 norm estimation. Different sources of error are explicitly discussed, and the distribution of the final estimate is derived. The CRBs for system size estimators with average and max consensus strategies are also considered, and different consensus based system size estimation approaches are compared.

Then, a consensus-based network center and radius estimation algorithm is described. The center localization problem is formulated as a convex optimization problem with a summation form by using the soft-max approximation with exponential functions. Distributed optimization methods such as stochastic gradient descent and diffusion adaptation are used to estimate the center. Then, max consensus is used to compute the radius of the network area.

Finally, two average consensus based distributed estimation algorithms are introduced: a distributed degree distribution estimation algorithm and an algorithm for tracking the dynamics of the desired parameter. Simulation results for all proposed algorithms are provided.

To My Family.

ACKNOWLEDGMENTS

I would like to express my sincere gratitude to my advisors, Dr. Cihan Tepedelenlioglu and Dr. Andreas Spanias, for the continuous support of my Ph.D. studies and related research, and for their motivation, encouragement and constant support. Their guidance helped me in shaping this research work. Special thanks for their enthusiasm, extraordinary patience and serious attitude towards research, which proved to be an immense help to me, all the time. I could not have imagined having better advisors and mentors for my Ph.D.

Besides my advisors, I am grateful to Dr. Konstantinos Tsakalis and Dr. Daniel Bliss for their precious time serving on my thesis committee and for their insightful comments and valuable feedback.

I would like to thank Dr. Mahesh Banavar for his advice and many discussions. Without his precious support it would not be possible to conduct this research.

I would like to extend my appreciation to the School of Electrical, Computer and Energy Engineering at Arizona State University for providing me the opportunity to pursue my Ph.D. degree.

I would like to thank all my friends and current and former colleagues in the SenSIP center, Sivaraman Dasarathan, Jongmin Lee, Xue Zhang, Xiaofeng Li, Ruochen Zeng, Ahmed Ewaisha, Jayaraman Thiagarajan, Karthikeyan Natesan Ramamurthy, Huan Song, Jie Fan, David Ramirez, Henry Braun, Abhinav Dixit, Uday Shankar and Sunil Rao, for their kindness, help and support.

Most importantly, I would like to thank my parents for their unconditional love and support, without whom I could not have completed this work.

TABLE OF CONTENTS

LIST OF FIGURES

CHAPTER

1 INTRODUCTION
  1.1 Wireless Sensor Networks
    1.1.1 Wireless Sensor Networks with Fusion Center
    1.1.2 Wireless Sensor Network with no Fusion Center
    1.1.3 Applications
  1.2 Consensus in Wireless Sensor Networks
    1.2.1 Average Consensus
    1.2.2 Max Consensus
  1.3 Contributions of the Dissertation
  1.4 Outline of the Dissertation

2 MAX CONSENSUS USING SOFT-MAX
  2.1 System Model
    2.1.1 Graph Representation
    2.1.2 Assumptions on Wireless Sensor Network Model
  2.2 Review of Average Consensus
  2.3 Max Consensus using the Soft-max
    2.3.1 Problem Statement
    2.3.2 Proof of Convergence
  2.4 Analysis of the Max Consensus Algorithm
    2.4.1 Sources of Error
    2.4.2 Bound on Convergence Time
  2.5 Shifted Non-linear Bounded Function Used in Max Consensus
  2.6 Simulations
    2.6.1 Performance of Max Consensus
    2.6.2 Performance of Max Consensus with Shifted Non-linear Bounded Function

3 DISTRIBUTED NODE COUNTING IN WIRELESS SENSOR NETWORKS
  3.1 System Model
  3.2 Node Counting using Average Consensus
    3.2.1 Problem Statement
    3.2.2 Node Counting Algorithm
    3.2.3 Special Case: Equal x_i(0)
  3.3 Performance Analysis
    3.3.1 Sources of Error
    3.3.2 Distribution of N̂
    3.3.3 Fisher Information
  3.4 Discussion: Fisher Information for Consensus Based Distributed System Size Estimation
    3.4.1 CRB for System Size Estimation in the Absence of Noise
    3.4.2 CRB for System Size Estimation in the Presence of Noise
  3.5 Simulation Results
    3.5.1 Convergence of the Algorithm
    3.5.2 PDF of N̂
    3.5.3 Special Initial Values x_i as in (3.29)
    3.5.4 Small Network with N = ...

4 DISTRIBUTED NETWORK CENTER AND RADIUS ESTIMATION
  4.1 System Model
  4.2 Review of Mathematical Background
    4.2.1 Review of Soft-max Approximation
    4.2.2 Review of Distributed Optimization
    4.2.3 Review of Max Consensus
  4.3 Estimation of Network Center and Radius
    4.3.1 Problem Statement
    4.3.2 Distributed Center Estimation
    4.3.3 Distributed Radius Estimation
  4.4 Discussion
    4.4.1 Steady State Error for Center Estimation
    4.4.2 Convergence Speed for Center and Radius Estimation
  4.5 Simulations

5 CONSENSUS BASED DISTRIBUTED ESTIMATION ALGORITHMS
  5.1 Distributed Estimation of the Degree Distribution in Wireless Sensor Networks
    5.1.1 Estimation of Degree Distribution
    5.1.2 Estimation of Degree Matrix
    5.1.3 Performance Analysis
    5.1.4 Discussions
    5.1.5 Simulations
  5.2 Running Consensus Over Distributed Networks: Non-Stationary Data and Tracking Ability
    5.2.1 System Model
    5.2.2 Running Consensus with Non-Stationary Data
    5.2.3 Simulations

6 FUTURE WORK
  6.1 Distributed Function Computation in WSNs
  6.2 Distributed Network Structure Estimation

7 CONCLUSIONS

REFERENCES

APPENDIX
  A PROOF OF OPTIMAL ASYMPTOTIC COVARIANCE MATRIX FOR MAX CONSENSUS IN CHAPTER 2
  B PROOF OF THEOREM ...
  C PROOF OF THEOREM ...
  D PROOF OF THEOREM ...
  E PROOF OF CONVEXITY FOR OBJECTIVE FUNCTION IN DISTRIBUTED CENTER ESTIMATION IN CHAPTER 4

LIST OF FIGURES

1.1 An Example of a Wireless Sensor Network with a Fusion Center
1.2 An Example of a Distributed Wireless Sensor Network with No Fusion Center
2.1 Bounded Transmission Functions
2.2 Graph Representation of the Sensor Network, N = ...
2.3 Entries of the Traditional Max Consensus Result Versus Iterations t (Keep the Largest Measurement at Each Iteration)
2.4 Entries of the Consensus Soft Max Result Versus Iterations t, β = 5, ω = 0.015, h(x) = γ tanh(ωx), α(t) = a/(t+1)
2.5 Entries of the Consensus Soft Max Result Versus Iterations t, β = 7, ω = 0.015, h(x) = γ tanh(ωx), α(t) = a/(t+1)
2.6 Entries of the Consensus Soft Max Result Versus Iterations t, β = 30, ω = 10^-11, h(x) = γ tanh(ωx), α(t) = a/(t+1)
2.7 Entries of the Consensus Soft Max Result Versus Iterations t, β = 7, ω = 0.01, N = 75, h(x) = γ tanh(ωx), α(t) = 12/(t+1)
2.8 Entries of the Consensus Soft Max Result Versus Iterations t, β = 7, ω = 0.01, N = 75, h(x) = γ tanh(ω(x − T)), T = ..., α(t) = 12/(t+1)
3.1 Simulation Result for the Uniform + Maximum + ML Algorithm in [1]: Node Counting Result Versus Number of Iterations t, σ_n^2 = ... and K = ...
3.2 Simulation Result for the Bernoulli Trial Algorithm in [2]: Node Counting Result at Node 1 Versus Number of Iterations t, σ_n^2 = 1 and K = ...
3.3 Entries of Node Counting Result Versus Number of Iterations t, x_i(0) ~ N(0, 25), σ_n^2 = 1 and r_i(k) Bernoulli Distributed with ±1, α(t) = 0.1/(t+1) and K = ...
3.4 Entries of Node Counting Result Versus Number of Iterations t, x_i(0) = a = 5, σ_n^2 = 1 and r_i(k) Bernoulli Distributed with ±1, α(t) = 0.1/(t+1) and K = ...
3.5 Entries of Node Counting Result Versus Number of Iterations t, x_i(0) = a = 5, σ_n^2 = 1 and r_i(k) ~ N(0, 1), α(t) = 0.1/(t+1) and K = ...
3.6 MSE Versus t, Noisy, σ_n^2 = 1, K = ...
3.7 PDF of N̂ with Different K Values, SNR = 13.98 dB, α(t) = 0.1/t
3.8 PDF of N̂ with Different SNR Values, K = 100, α(t) = 0.1/t
3.9 N̂(t) at Different Nodes, K = 1000, r_i(k) Bernoulli Distributed
3.10 N̂(t) at Different Nodes, K = 1000, r_i(k) Gaussian Distributed
3.11 MSE Versus t (4-Node Network with Star Topology), x_1 = 5, x_i = 0 for i ≠ 1, σ_n^2 = 0 and K = ...
3.12 MSE Versus t (4-Node Network with Star Topology), Noisy, σ_n^2 = 1, K = ...
4.1 A Distributed Network (2-D) with N = 6 Nodes, with the Network Center at the Origin and Radius ...
4.2 Graph Representation of the Sensor Network, N = ...
4.3 Estimate of the x Coordinate Value of the Center, x_i(t), Versus Iteration t Using Algorithm 1, η = 10^-4 and Starting Point x_i(0) = ...
4.4 Estimate of the y Coordinate Value of the Center, y_i(t), Versus Iteration t Using Algorithm 1, η = 10^-4 and Starting Point y_i(0) = ...
4.5 Error Versus t at Node 1 with Algorithm 1, Where O(x_O, y_O) Is the True Center and x_O = 0, y_O = ...
4.6 Estimate of the x Coordinate Value of the Center, x_i(t), Versus Iteration t Using Diffusion Adaptation, η = 10^-4 and Starting Point Uniformly Distributed U(−0.5, 0.5)
4.7 Estimate of the y Coordinate Value of the Center, y_i(t), Versus Iteration t Using Diffusion Adaptation, η = 10^-4 and Starting Point Uniformly Distributed U(−0.5, 0.5)
4.8 Average Error Versus t Using Diffusion Adaptation, Where O(x_O, y_O) Is the True Center and x_O = 0, y_O = ...
4.9 Radius Estimate Versus t Using Max Consensus in Section ..., with the Initial Value at Each Node Set to the Distance Between the Estimated Center and Its Own Location
4.10 Estimated Network Area at Node 1 at t = ...
5.1 True Degree Distribution
5.2 Estimate of the Degree Distribution at Time t = 100 at Node 1 in the Absence of Noise, σ_n^2 = 0 and α(t) = 0.1/t
5.3 Estimate of the Degree Distribution at Time t = 100 at Node 1 in the Presence of Noise, σ_n^2 = 0.1 and α(t) = 0.1/t
5.4 Estimate of the Degree Distribution at Time t = 100 at Node 1 in the Presence of Noise, σ_n^2 = 0.01 and α(t) = 0.1/t
5.5 Error Versus t
5.6 Simulation Results for Post Processing as in Equation (5.9): Error Versus t
5.7 Degree Distribution Estimation at Node 1 (in the Presence of Noise and K = ...)
5.8 Entries of the Estimation Result Versus Iteration Time t (Using Running Consensus with k = 19)
5.9 Entries of the Estimation Result Versus Iteration Time t (Using Running Consensus with k = 99)
5.10 Entries of the Estimation Result Versus Iteration Time t (Using Running Consensus with k = t)
5.11 Entries of the Estimation Result Versus Iteration Time t (Using Diffusion LMS in [3] with µ = 0.01 and u_{k,t} = 1)
5.12 Entries of the Estimation Result Versus Iteration Time t (Using Diffusion LMS in [3] with µ = 0.05 and u_{k,t} = 1)
5.13 Entries of the Estimation Result Versus Iteration Time t (Using Running Consensus with k = 19)
5.14 Entries of the Estimation Result Versus Iteration Time t (Using Running Consensus with k = 99)
5.15 Entries of the Estimation Result Versus Iteration Time t (Using Running Consensus with k = t)

Chapter 1

INTRODUCTION

1.1 Wireless Sensor Networks

A wireless sensor network (WSN) is a group of specialized spatially distributed sensors used to monitor and record quantities such as temperature, pressure, speed, chemical concentration, pollutant levels and so on [4-6]. Sensors in wireless sensor networks are usually small, inexpensive, memory-limited, lightweight, power efficient and portable devices [4]. Therefore, wireless sensor networks usually have many advantages such as scalability and low power consumption. The development of wireless sensor networks was motivated by military applications such as battlefield surveillance. Currently, sensor networks are used widely in many industrial and consumer applications such as environmental and habitat monitoring, disaster management, and emergency response applications [7].

1.1.1 Wireless Sensor Networks with Fusion Center

In a wireless sensor network with a fusion center, the spatially distributed sensor nodes are used to monitor physical or environmental conditions and pass their data through the network to the fusion center [5, 6, 8]. An example of a wireless sensor network with a fusion center is given in Figure 1.1.

In a centralized wireless sensor network, the fusion center has all the data from the sensor nodes. Therefore, functions of the data or measurements from the sensor nodes, such as the average, the maximum or the minimum of the initial measurements, can easily be computed at the fusion center. However, there are also disadvantages of using a centralized wireless sensor network.

If a centralized architecture is used, the entire network will collapse if the fusion center crashes. Moreover, centralized wireless sensor networks usually require a large bandwidth since the sensor nodes in the network need to communicate with a common fusion center [9].

Figure 1.1: An Example of a Wireless Sensor Network with a Fusion Center.

1.1.2 Wireless Sensor Network with no Fusion Center

In a distributed network without a fusion center, sensor nodes communicate and exchange data with each other. Usually it is assumed that there is a link between two nodes if their physical distance is smaller than the communication radius, and that two nodes can communicate with each other if there is a link between them. An example of a distributed wireless sensor network with no fusion center is given in Figure 1.2. A wireless sensor network without a fusion center can function autonomously, without a central node controlling the entire network.

Compared to the centralized network, there are many advantages of using a distributed network without a fusion center: a distributed system is more scalable than a centralized system with a fusion center and it is more robust to link failures.

Since the nodes in a decentralized network communicate only with their neighbors, the sensors require low power [10-12]. However, there are also disadvantages. Function computation in a distributed wireless sensor network is usually more complicated than in a centralized network. For example, system size estimation can easily be done in a centralized network by letting each node transmit a fixed constant value to the fusion center, but the problem is not straightforward in a network without a fusion center [9, 13]. Moreover, convergence of the states of the nodes is slow in a distributed sensor network.

Figure 1.2: An Example of a Distributed Wireless Sensor Network with No Fusion Center.

1.1.3 Applications

Wireless sensor networks are widely used in both military and industrial applications [8, 14]. A comprehensive review of wireless sensor network applications is given in [8, 15].

In military applications, wireless sensor networks are mainly used for tracking enemies. In [16], based on collaborative signal processing in WSNs, an approach for tracking multiple targets is presented. Improved moving vehicle target classification in battlefields using WSNs is introduced in [17], where multimodal fusion in a WSN is used.

Wireless sensor networks are also widely used in industrial and commercial applications. For example, distributed sensing, detection and estimation applications can be found in [18-22]. Sensors equipped with solar cells for environmental protection are mentioned in [8]; such a sensor network can be used to protect a forest without human action for months or even years. WSNs can be used in extreme environments [23-25], for example near a volcano or a flood area, and can function autonomously without manual control. WSNs can also be used in the area of health and medicine [8, 26, 27]. In an application for telemonitoring of human physiological data, a WSN is used to sense and store human physiological data, and the data is used to explore and diagnose medical and health problems. The advantage of using a WSN in health applications is that the sensors are usually small in size; therefore the sensor devices will not affect the everyday lives of patients, and they allow doctors to identify symptoms earlier or even in real time [28].

1.2 Consensus in Wireless Sensor Networks

The consensus problem has a long history in distributed computing and multi-agent systems [29-31]. In distributed wireless sensor networks, consensus is a process where all the sensors in the network achieve global agreement using only local transmissions. The problem of consensus in WSNs has attracted great interest among researchers in recent years since it is useful in diverse applications, especially in the computer science, control and communication areas [32-35]. One of the most popular applications of consensus is distributed sensor fusion in sensor networks [36].

Distributed average consensus is used in [36] for distributed sensor fusion, and the linear least-squares estimator can be obtained at the nodes in a distributed way by running average consensus. Max consensus and average consensus can also be used to estimate environmental data, such as the average temperature or the maximum pollution level. In [37], max consensus is used to compensate for clock drift and to time-synchronize wireless sensor network nodes. A more comprehensive review of consensus applications is given in [30], where applications including synchronization of coupled oscillators [38], flocking for mobile agents [39] and distributed formation control are discussed.

In the following, the two most widely used consensus approaches in WSNs are introduced. A review of average consensus is given in Section 1.2.1 and max consensus is described in Section 1.2.2.

1.2.1 Average Consensus

Average consensus is widely used and well studied in the literature [35, 40]. By running average consensus, the states of the nodes converge to the average of the initial values. In [35], a linear average consensus algorithm in the absence of communication noise is introduced. At each iteration time, each node updates its state based on its own state at the previous time and data from its neighbors. During the iterative update step, received data is weighted with a constant weight, and it is shown in [35] that the convergence rate is related to the weights; the optimal weight matrix is calculated by solving a convex optimization problem. In [32], it is assumed that the topology of the network changes over time and an average consensus algorithm for switching topology is proposed. Delays in the network are also considered. Convergence of the algorithm is proved and it is shown in [32] that the convergence speed is related to the algebraic connectivity of the graph.

Average consensus in the presence of communication noise and link failures is considered in [34]. Two algorithms are provided: i) the first algorithm, named the A-ND algorithm, uses a decaying step size to control the effect of communication noise; and ii) the second algorithm, named the A-NC algorithm, uses the traditional constant weight as in [35]. The iterative updating algorithm only runs for a fixed number of iterations, and the algorithm is restarted and rerun multiple times. Finally, the sample mean of the results from the multiple consensus runs is taken as the final result.

A distributed nonlinear average consensus algorithm in the presence of communication noise is proposed in [41]. A nonlinear sigmoid function is used to bound the transmit power and a decreasing step size is used to control the effect of communication noise. It is shown in [41] that the nonlinear average consensus converges more slowly than the linear average consensus, and there is a trade-off between the transmit power and the convergence rate: larger transmit power results in faster convergence. In [42], average consensus with impulsive noise is considered and a receive nonlinear function is used to ensure convergence of the algorithm.

The above mentioned works all assume that the sensors first sense the environment and then average consensus is applied. In [43], it is assumed that the sensing and averaging stages are simultaneous and each node has a new measurement at each iteration time. A time dependent step size is used and the average of all the initial measurements can be obtained at the nodes.

In [9], the problem of computing a certain function of the sensed data is considered. The proposed algorithm is based on the average consensus algorithm and the universal approximation theorem. A pre-processing function is used to map the sensed data at the sensor nodes and a post-processing function is used to process the final average consensus results at the nodes. It is proved in [9] that any continuous function of the initial sensed data can be approximated.

There are many works and applications using the results of average consensus algorithms. For example, average consensus is used for system size estimation in [2]. In [44] and [45], average consensus is used to estimate the probability mass function of the initial measurements.

1.2.2 Max Consensus

While average consensus is well studied in the literature, estimating the average is not always the goal. In various applications, estimating the maximum measured value in the network is necessary [9], [46]. For example, spectrum sensing algorithms that use the OR-rule for cognitive radio applications can be implemented using max consensus [47]. Also, max consensus can be used to estimate the maximum and minimum degrees of the network graph, which are useful in optimizing consensus algorithms [35]. In [48], it is also mentioned that max consensus and min consensus have a broad range of applications in distributed decision-making for multi-agent systems. In [37], max consensus is used to compensate for clock drift and to time-synchronize wireless sensor network nodes.

To deal with the problem of finding a unique leader in a group of agents in a distributed way, a max consensus algorithm in a noise free environment is proposed in [48], where each node in the network collects data from all of its neighbors and finds the largest received value. At each iteration, after comparing its own state and the largest received value, each node updates its state with the max of the two. Max consensus algorithms using a similar approach as in [48] are proposed in [46, 49-52]. At each iteration time, every sensor in the network updates its state with the largest measurement it has received so far. Reference [46] considers both pairwise and broadcast communications, and analyzes the convergence time. A max-plus algebra is used in [50] to analyze the max consensus algorithm in a directed graph.

Time dependent graphs are considered in [51], where it is shown that strong connectivity is required for reaching max consensus. A general class of algorithms which can be used for both average and min consensus is also mentioned in [49]. In [53], the authors extend the work on the weighted power mean algorithm originally proposed in [54] and show that this class of algorithms can also be used to calculate the maximum of the initial measurements when the design parameter is chosen to be infinity. A similar max approximation algorithm is also mentioned in [9] to compute the maximum of the initial measurements in a centralized sensor network with a fusion center. Reference [53] also describes another distributed coordination algorithm for max consensus.

Rumor spreading algorithms mentioned in [55, 56], while not designed specifically for max consensus, may be helpful in max consensus problems. In this setup, one or several nodes know that they have the maximum and can spread the rumor (max) to all the other nodes. If nodes do not know whether they have the maximum or not, a natural way to use rumor spreading for max consensus is to use the max operator. Unfortunately, such an extension of rumor spreading is susceptible to noise on the communication links.

1.3 Contributions of the Dissertation

Here we summarize the main contributions of this dissertation.

We consider distributed max consensus in the presence of communication noise. The contribution is in both the design and analysis of a max consensus algorithm in wireless sensor networks in the presence of communication noise. Regarding design, the soft maximum, together with non-linear bounded transmissions, is proposed. In the proposed max consensus algorithm, every sensor in the network evaluates a function of its initial observation, and a non-linear average consensus algorithm such as those in [41] can be used with a judicious choice of a design parameter β.

Regarding analysis, the sources of error in the proposed max consensus algorithm are presented. We show that the parameter of the soft-max function that makes the soft-max approximation accurate also makes the convergence slow. The technical novelty in the analysis is the analytical study of this trade-off. By bounding the sources of error, the needed convergence time is calculated. We also introduce a shifted non-linear bounded function for faster convergence. The analyses provide guidelines for nonlinear transmission design and for algorithm parameter settings that trade off estimation error against faster convergence.

We design a fully distributed node counting algorithm for any connected distributed network with communication noise. The algorithm is based on L2 norm estimation and the average consensus algorithm. A linear iterative average consensus algorithm is used with pre-processed initial values. Then, by applying average consensus and post-processing, each node reaches consensus on an estimate of the number of nodes. A performance analysis in the presence of noise is provided and shows that the choice of the initial values at the nodes affects performance. The sources of error between the states of the nodes and the desired convergence result are quantified. The Fisher information and the distribution of the estimate of N at each node are also derived. The analysis not only shows how the performance of the algorithm is affected by the number of iterations, the noise variance, and the structure of the graph, but also provides guidelines towards choosing the design parameters. The algorithm is fully distributed, and nodes do not have to be labeled or know the structure of the graph.

We consider the system size estimation problem using different consensus algorithms such as average consensus and max consensus. We derive the Fisher information and Cramer-Rao bounds for consensus-based system size estimators considering different noise conditions. It is shown that in the absence of noise, the max consensus approach results in a lower Cramer-Rao bound than the average consensus approach. In the presence of communication noise, we demonstrate how the signal-to-noise ratio affects the Fisher information and Cramer-Rao bounds. The results not only present the best estimation variance the algorithms can achieve, but also provide guidelines on how to choose consensus algorithms and initial values for system size estimation.

We describe the design of a fully distributed network area estimation algorithm. In the proposed algorithm, we assume that nodes only know their own locations, and the network center and radius are estimated. The main contribution is that we formulate the network center estimation problem as an optimization problem. By rewriting the objective function using the soft-max approximation, the problem can be turned into a convex optimization problem with a summation form. Therefore, distributed optimization methods such as stochastic gradient descent and the diffusion adaptation method can be used to solve the convex optimization problem in a distributed manner. It can be shown that the algorithm converges to an estimate of the center of the network. Then max consensus is used to estimate the radius, and the network area is obtained at all nodes. The proposed algorithm is fully distributed and hence nodes do not need to be labeled; two nodes communicate with each other only if they are neighbors.

We describe the design of a fully distributed degree distribution estimation algorithm in wireless sensor networks. We formulate the degree distribution estimation problem as an empirical PMF estimation using consensus in the presence of communication noise.

The proposed algorithm is fully distributed: sensor nodes do not need to be labeled and each node in the network only needs to know its own degree. How the communication noise affects the performance is also discussed. Finally, we show that the properties of the degree distribution can be used to improve the proposed algorithm.

We design a running consensus algorithm for tracking the dynamics of a desired estimator in a distributed wireless sensor network. A design parameter is used to control the sensitivity of the algorithm, and there is a trade-off between the sensitivity to the dynamics of the estimator and the convergence of the states at the nodes. We also compare the proposed algorithm with the existing diffusion method.

1.4 Outline of the Dissertation

The rest of this dissertation is organized as follows. In Chapter 2, a brief review of graph theory is provided. Later in the chapter, we describe max consensus using the soft-max approach. The estimation error and convergence speed of the algorithm are also analyzed in Chapter 2. In Chapter 3, we focus on distributed node counting to estimate the system size of the network (the number of active nodes in the network) in the presence of communication noise. A performance analysis of the algorithm is given, and different sources of error are explicitly discussed. The overall performance of the system size estimator is given at the end of Chapter 3, where the distribution and the Fisher information of the estimator are calculated, and simulations corroborating the analysis are given. In Chapter 4, a distributed network center and radius estimation algorithm is introduced. Discussion of the performance of the algorithm and simulation results are given.

In Chapter 5, two distributed estimation algorithms based on consensus algorithms are presented. We first introduce a network degree distribution estimation algorithm based on average consensus and probability mass function estimation. Then, a running consensus algorithm for tracking the dynamics of a desired estimator is described. Finally, future work and conclusions are given in Chapters 6 and 7.

Chapter 2

MAX CONSENSUS USING SOFT-MAX

In this chapter, a distributed consensus algorithm for estimating the maximum value of the initial measurements in a sensor network with communication noise is described. In the absence of communication noise, max estimation can be done by updating the state value with the largest received measurement in every iteration at each sensor. In the presence of communication noise, however, the maximum estimate will incorrectly drift and the estimate at each sensor will diverge. As a result, a soft-max approximation together with a non-linear consensus algorithm is used in our work. Note that part of the work in this chapter can be found in our published papers [57, 58].

The rest of this chapter is organized as follows. First, a brief review of graph theory and the assumptions on the system model are given in Section 2.1. A brief review of existing average consensus algorithms is given in Section 2.2. Then, in Section 2.3, the proposed max consensus algorithm is described. The performance of the algorithm is given in Section 2.4. Different sources of error are explicitly discussed, and we show that there is a trade-off between the soft-max error and convergence speed. We also show that if some prior knowledge of the initial measurements is available, the convergence speed can be made faster by using an optimal step size in the iterative algorithm. In Section 2.5, a shifted non-linear bounded transmit function is introduced for faster convergence when sensor nodes have some prior knowledge of the initial measurements. Finally, simulation results corroborating the theory are provided in Section 2.6.

2.1 System Model

2.1.1 Graph Representation

The structure of a distributed wireless sensor network is modeled as an undirected graph G = (N, E), containing a set of nodes N = {1, ..., N} and a set of edges E. The set of neighbors of node i is denoted by N_i, i.e., N_i = {j : {i, j} ∈ E}. Two nodes can communicate with each other only if they are neighbors. The number of neighbors of node i is d_i. We use a degree matrix D = diag[d_1, d_2, ..., d_N], which is a diagonal matrix containing the degrees of each node. The connectivity structure of the graph is characterized by the adjacency matrix A = {a_ij}, such that a_ij = 1 if {i, j} ∈ E and a_ij = 0 otherwise. The graph Laplacian L of the network is defined as L = D − A. The Laplacian matrix is basically a matrix representation of a special case of the discrete Laplacian operator, and many properties of a graph can be inferred from its Laplacian matrix, for example the number of spanning trees of the graph [59]. For a connected graph, the smallest eigenvalue of the graph Laplacian is always zero, i.e., λ_1(L) = 0 and λ_i(L) > 0, i = 2, ..., N. The zero eigenvalue λ_1(L) = 0 corresponds to the eigenvector with all entries one, i.e., L1 = 0. The performance of consensus algorithms often depends on λ_2(L), which is also known as the algebraic connectivity [32]. The algebraic connectivity of simple and weighted graphs is discussed in [60], where several upper and lower bounds on λ_2(L) are also given.
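To make these definitions concrete, the following is a minimal numerical sketch (not from the dissertation; the topology and all values are made up) that builds D, A, and L = D − A for a small graph and checks the spectral properties just quoted:

    import numpy as np

    # Hypothetical 4-node topology for illustration only.
    edges = [(0, 1), (1, 2), (2, 3), (3, 0), (1, 3)]
    N = 4

    A = np.zeros((N, N))
    for i, j in edges:
        A[i, j] = A[j, i] = 1.0           # a_ij = 1 iff {i,j} is an edge

    D = np.diag(A.sum(axis=1))            # degrees d_i on the diagonal
    L = D - A                             # graph Laplacian

    eigvals = np.sort(np.linalg.eigvalsh(L))
    print(eigvals[0])                     # lambda_1(L) = 0 for any graph
    print(eigvals[1])                     # lambda_2(L) > 0 iff the graph is connected
    print(L @ np.ones(N))                 # L1 = 0: the all-ones vector spans the null space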

2.1.2 Assumptions on Wireless Sensor Network Model

In distributed sensor network applications and algorithms, the two most commonly used ways of disseminating information are i) pairwise communications and ii) broadcast communications. In pairwise communications, every node chooses a random neighbor in each iteration and the two nodes exchange information [46, 55, 56]. The broadcast communication model is more commonly used when a wireless channel is considered [34, 35, 41]. In this dissertation, we assume broadcast communications, where each node broadcasts its state to its neighbors at each iteration.

Sensors may use either analog or digital methods to transmit information between neighbors. Digital methods quantize the information and use digital modulation [61-64]. The bandwidth of the inter-sensor communication channel is directly related to the number of quantization levels: the bandwidth is large when the number of quantization levels is large. Analog transmission methods convey information using amplitude or phase modulation. Analog modulation is also widely considered in consensus algorithms and sensor network applications [31, 35, 65]. We assume analog transmissions in this dissertation.

Noisy communication between nodes is considered in this manuscript. In wireless sensor networks, noisy communication models are widely used in average consensus problems, such as [34, 41, 66], and detection and estimation over a multiple access channel in the presence of communication noise is considered in [3]. Therefore, it is standard practice to adopt noisy communication models between sensor nodes.

To conclude, we make the following assumptions on the system model: i) nodes in the distributed sensor network have their own initial measurements, and the nodes do not know if they have the maximum; ii) the communications in the network are synchronized, and at each iteration, nodes broadcast their state values to their neighbors; iii) communications between nodes are analog, following [31, 35, 65], and are subject to additive noise; and iv) each node updates its state based on the received data.
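As an illustration of this network model, the following sketch (all names and values are assumptions for illustration, not taken from the text) generates a network in which two nodes are neighbors whenever their physical distance is below a communication radius r:

    import numpy as np

    rng = np.random.default_rng(0)
    N, r = 20, 0.35                        # number of nodes and radius (assumed)
    pos = rng.uniform(0, 1, size=(N, 2))   # node locations in the unit square

    # Pairwise distances; a link {i,j} exists iff dist(i,j) < r.
    dist = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=-1)
    A = ((dist < r) & (dist > 0)).astype(float)

    neighbors = [np.flatnonzero(A[i]) for i in range(N)]   # the sets N_i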

2.2 Review of Average Consensus

Distributed average consensus is well studied in the literature. In [35], distributed linear average consensus is considered. It is assumed that the communication between nodes is perfect, without noise. To compute the average of the initial states x(0) = [x_1(0), ..., x_N(0)]^T, the iterative updating algorithm can be expressed as

x_i(t+1) = W_ii x_i(t) + Σ_{j ∈ N_i} W_ij x_j(t),    (2.1)

where i = 1, ..., N is the node index and t = 0, 1, 2, ... is the discrete time index. W ∈ R^{N×N} is the weight matrix and W_ij is its element in the ith row and jth column. In the algorithm in equation (2.1), node i updates its state at time t+1 based on its state at the previous time and the data received from its neighbors, j ∈ N_i. It is shown in [35] that convergence of the algorithm is guaranteed if the following conditions are satisfied:

1^T W = 1^T,  W1 = 1,    (2.2)

ρ(W − 11^T/N) < 1,    (2.3)

where ρ(·) is the spectral radius of a matrix. The choice of the weight matrix W affects the convergence speed of the algorithm, and an optimal W for fastest distributed linear averaging is calculated in [35] by solving an optimization problem.

In real world applications of wireless sensor networks, communication between nodes is usually noisy. In [34], a linear iterative averaging algorithm in the presence of communication noise is introduced. To compute the average of the initial states x(0) = [x_1(0), ..., x_N(0)]^T, the average consensus algorithm can be expressed as

x_i(t+1) = [1 − α(t) d_i] x_i(t) + α(t) Σ_{j ∈ N_i} [x_j(t) + n_ij(t)],    (2.4)

where i = 1, 2, ..., N, and t = 0, 1, 2, ... is the time index. The value x_i(t+1) is the state update of node i at time t+1, and n_ij(t) is the noise associated with the reception of x_j(t).

We assume n_ij(t) is Gaussian distributed, n_ij(t) ~ N(0, σ_n^2), and is independent across time and space. α(t) is a positive weight factor used to bound the variance of the communication noise, and is a decreasing function of t. To ensure convergence, we make the following assumptions on the system model:

Assumptions:

A1) Connected Graph: The graph is connected, i.e., λ_1(L) = 0 and λ_i(L) > 0, i = 2, ..., N.

A2) Independent Noise Sequence: The reception noise is an independent sequence and we assume the noise is Gaussian distributed, i.e.,

n_ij(t) ~ N(0, σ_n^2),  σ_n^2 < ∞.    (2.5)

A3) Persistence Condition: The positive weight step α(t) is a decreasing function of t, and satisfies the conditions:

α(t) > 0,  Σ_{t=0}^∞ α(t) = ∞,  Σ_{t=0}^∞ α^2(t) < ∞.    (2.6)

The following theorem characterizes the convergence of the average consensus algorithm in the presence of communication noise:

Theorem 1. Assume assumptions A1), A2) and A3) hold. Let x(t) = [x_1(t), ..., x_N(t)]^T be the vector containing the states of the nodes at time t. Then, by running the iterative algorithm in equation (2.4), there exists a real random variable θ such that

Pr[ lim_{t→∞} x(t) = θ1 ] = 1.    (2.7)

Let x̄ = (1/N) Σ_{i=1}^N x_i(0) be the average of the initial measurements. Define ξ = E[(θ − x̄)^2] to be the mean square error. As t → ∞, we have

ξ = ( Σ_{i=1}^N d_i / N^2 ) σ_n^2 Σ_{t=0}^∞ α^2(t).    (2.8)

As a result, for finite t, ξ is bounded as

ξ ≤ ( (N − 1) σ_n^2 / N ) Σ_{t=0}^∞ α^2(t).    (2.9)

Proof. The proof is similar to the proofs of Theorem 4 and Lemma 5 in [34]. Equation (2.8) can be obtained by assuming the initial measurements are 0, so that the nodes in the network converge to the average of the scaled noise samples received at the nodes. Equation (2.9) holds since d_i ≤ N − 1.

In wireless sensor networks, sensors are usually low-cost and low-power devices. Therefore, in [41, 67, 68], nonlinear distributed average consensus is considered and a nonlinear function is used to bound the transmit power. The nonlinear average consensus algorithm will be used in our max consensus algorithm and is described in more detail in the rest of this chapter.
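A minimal simulation of the noisy update (2.4) is sketched below, assuming a step size α(t) = a/(t+1) that satisfies the persistence condition (2.6); the constants and the noise level are illustrative only:

    import numpy as np

    def average_consensus(x0, A, sigma_n=1.0, a=0.5, iters=2000, seed=0):
        # Noisy linear average consensus update (2.4) with decaying step size.
        rng = np.random.default_rng(seed)
        x = np.asarray(x0, dtype=float).copy()
        N = len(x)
        for t in range(iters):
            alpha = a / (t + 1)
            x_new = x.copy()
            for i in range(N):
                nbrs = np.flatnonzero(A[i])
                rx = x[nbrs] + rng.normal(0, sigma_n, size=len(nbrs))  # noisy receptions
                x_new[i] = (1 - alpha * len(nbrs)) * x[i] + alpha * rx.sum()
            x = x_new
        return x   # states converge (a.s.) to a random variable whose mean is x-bar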

2.3 Max Consensus using the Soft-max

2.3.1 Problem Statement

Consider a wireless sensor network with N sensor nodes, each with a real-valued initial measurement x_i, i = 1, 2, ..., N. It is desired that the nodes reach consensus on the maximum value of the initial measurements, x_max := max_i x_i, under the assumption that the sensors have a single state that they update based on locally received measurements. Max consensus in the absence of noise is straightforward: the nodes update their states with the largest received measurement thus far in each iteration. Consider the following algorithm at each node:

x_i(t+1) = max{ x_i(t), max_{j ∈ N_i} x_j(t) },  x̂_max,i(t+1) = x_i(t+1).    (2.10)

However, in the presence of noise, such algorithms will diverge due to positive noise samples. An intuitive explanation is that any positive noise sample will always make the maximum larger if the max operator is used in the max consensus algorithm.

Figure 2.1: Bounded Transmission Functions (tanh(x), x/(1 + |x|), (2/π) arctan((π/2)x), and u(x)).

Average consensus is well studied in the literature. Existing average consensus algorithms converge to the sample mean of the initial measurements. As a result, the soft-max can be used to calculate the maximum. To relate the soft-max to the sample mean of {e^{β x_i}}, we have

ȳ = (1/N) Σ_{i=1}^N e^{β x_i} = (1/N) Σ_{i=1}^N y_i(0),    (2.11)

where ȳ is the sample mean of the mapped initial measurements and y_i(0) := e^{β x_i}. The quantity ȳ is computed using an iterative distributed algorithm, in which each sensor communicates only with its neighbors. If the states of all the sensor nodes converge to ȳ, then the network is said to have reached consensus on the sample average of the mapped initial measurements. The relation between ȳ and the soft-max value is given by

smax(x) = (1/β) log Σ_{i=1}^N e^{β x_i} = (1/β)(log N + log ȳ).    (2.12)

Average consensus algorithms like those in [34, 41] can be used to achieve consensus in the sensor network.
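The sketch below evaluates the soft-max (2.12) in a numerically stable log-sum-exp form and illustrates the bias bound discussed in Section 2.4.1, x_max ≤ smax(x) ≤ x_max + (log N)/β; the measurement vector is made up:

    import numpy as np

    def smax(x, beta):
        # (1/beta) * log(sum_i exp(beta * x_i)), computed stably.
        x = np.asarray(x, dtype=float)
        m = x.max()
        return m + np.log(np.exp(beta * (x - m)).sum()) / beta

    x = np.array([0.3, 1.7, 2.0, 2.1])            # example measurements (assumed)
    for beta in (2.0, 10.0, 50.0):
        print(beta, smax(x, beta) - x.max())      # bias shrinks like log(N)/beta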

Sensors may adopt either a digital or an analog method for transmitting their information to their neighbors. One such method is the linear amplify-and-forward (AF) scheme, in which sensors transmit scaled versions of their measurements to their neighbors, where the iterative algorithm may be chosen as the linear consensus algorithm of [34]. However, using the AF technique is not a viable option for consensus on the soft-max. The reason is that accurate approximation of the max value using the soft-max method requires the parameter β to be large, which can result in a large dynamic range of the mapped initial measurements and large transmit power. Moreover, using a linear transmit amplifier is power-inefficient. As a result, a non-linear consensus (NLC) algorithm can be implemented [41]. Consensus on the soft maximum is achieved by letting each sensor map its state value at time t through a bounded function h(·) before transmission, to ensure bounded transmit power.

To describe the communications between nodes, we use the standard Gaussian MAC, so that each node receives a noisy version of the superposition of the transmitted signals from its neighbors. This is because the step sizes are the same across different network links and there is no need to recover the transmitted data separately. Consider the following algorithm with additive noise at the receiver:

y_i(t+1) = y_i(t) − α(t) [ d_i h(y_i(t)) − Σ_{j ∈ N_i} h(y_j(t)) + n_i(t) ],    (2.13)

where i = 1, 2, ..., N, and t = 0, 1, 2, ... is the time index. The value y_i(t+1) is the state update of node i at time t+1, y_j(t) is the state value of the jth neighbor of node i at time t, and n_i(t) is the additive noise at node i, which is assumed to be independent across time and space with zero mean and variance σ_n^2. α(t) is a positive step size which satisfies Σ_{t=0}^∞ α^2(t) < ∞ and Σ_{t=0}^∞ α(t) = ∞. Node j transmits its information y_j(t) by mapping it through the non-linear function h(·) to constrain the transmitted power. We assume that

h(x) = γ u(ωx),    (2.14)

where u(x) is a normalized non-linear bounded function as in Figure 2.1, and we make the following assumptions on u(x):

Assumptions

(A1): u(0) = 0 and u(x) = −u(−x).

(A2): max_x u(x) = 1.

(A3): The function u(·) is differentiable and invertible, u′(0) = 1 and 0 < du(x)/dx ≤ 1.

The parameter γ controls the maximum transmit power and ω is a scale parameter that controls how fast h(·) reaches the maximum. The values of γ and ω affect the performance of the algorithm; for example, a larger γ value results in faster convergence. Note that invertibility of h(·) is needed for convergence; however, there is no need to apply the inverse of h(·) in equation (2.13). Node i receives a noisy version of the superposition Σ_{j ∈ N_i} h(y_j(t)). The recursion in equation (2.13) can be expressed in vector form as

y(t+1) = y(t) − α(t) [ L h(y(t)) + n(t) ],    (2.15)

where y(t) = [y_1(t), y_2(t), ..., y_N(t)]^T and h(y(t)) = [h(y_1(t)), h(y_2(t)), ..., h(y_N(t))]^T. L is the Laplacian matrix of the graph and n(t) is the vector containing the additive reception noise at the nodes. Since the noise is i.i.d. with variance σ_n^2, the covariance of n(t) is σ_n^2 I. Since (2.13) converges to a value that approximates (2.11), the consensus estimate of the maximum at node i can be written using (2.12) as

x̂_max,i(t*) = (1/β)(log N + log y_i(t*)),    (2.16)

where t* is the iteration at which the algorithm is stopped.

2.3.2 Proof of Convergence

Since the non-linear average consensus approach is used in the max consensus algorithm, the convergence proof follows the proof in [41], which uses a discrete time Markov process approach [69] (also see Theorem 5 in [41]). Therefore, there exists a finite real random variable θ* such that

Pr[ lim_{t→∞} y(t) = θ* 1 ] = 1,    (2.17)

where 1 is a column vector with all entries one.
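Putting the pieces together, here is a minimal sketch of the overall algorithm: map the initial measurements through y_i(0) = e^{β x_i}, run the recursion in its vector form (2.15) with h(x) = γ tanh(ωx), and read off the estimate (2.16). All parameter values are illustrative, not the tuned settings used in the simulations of Section 2.6, and the states are assumed to remain positive so that the logarithm in (2.16) is defined:

    import numpy as np

    def max_consensus(x0, A, beta=5.0, gamma=1.0, omega=0.015,
                      a=1.0, sigma_n=0.1, iters=5000, seed=0):
        rng = np.random.default_rng(seed)
        y = np.exp(beta * np.asarray(x0, dtype=float))   # mapped initial states y_i(0)
        N = len(y)
        L = np.diag(A.sum(axis=1)) - A                   # graph Laplacian
        h = lambda z: gamma * np.tanh(omega * z)         # bounded transmit function (2.14)
        for t in range(iters):
            alpha = a / (t + 1)                          # decaying step size
            noise = rng.normal(0, sigma_n, size=N)
            y = y - alpha * (L @ h(y) + noise)           # vector recursion (2.15)
        return (np.log(N) + np.log(y)) / beta            # estimate (2.16) at each node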

Equation (2.17) shows that convergence is reached as t → ∞. The following theorem characterizes the random variable θ*.

Theorem 2. θ* in (2.17) is an unbiased estimate of ȳ, E[θ*] = ȳ. Its mean square error ξ_N = E[(θ* − ȳ)^2] is finite and can be expressed as

ξ_N = (σ_n^2 / N) Σ_{t=0}^∞ α^2(t).    (2.18)

Proof: The proof is a straightforward adaptation of Theorem 3 in [41].

The nodes in the sensor network reach consensus on the random variable θ*, which is an unbiased estimator of the average of the mapped initial measurements, E[θ*] = ȳ. The soft-max of the initial measurements can then be obtained using equation (2.12).

2.4 Analysis of the Max Consensus Algorithm

2.4.1 Sources of Error

Let θ_0 be a realization of θ*. From (2.17) we have that the states of the nodes in the sensor network converge to θ_0 as t → ∞. However, in practice, we need to stop the algorithm at a finite iteration time t*. There are three sources of error between the true maximum x_max and x̂_max,i(t*) in (2.16): i) (smax(x) − x_max) = (1/β)(log N + log ȳ) − x_max, due to the fact that the soft-max approximation is always larger than the true max; ii) (θ_0 − ȳ), caused by communication noise; and iii) (y_i(t*) − θ_0), caused by the finite number of iterations. In the following, we characterize and analyze these errors.

Soft-max error

This is a deterministic error which depends on β, N, and the values of x_i. We have

x_max ≤ smax(x) ≤ x_max + (1/β) log N.    (2.19)
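For completeness, a one-line derivation of (2.19), which is implicit in the text: since e^{β x_max} ≤ Σ_{i=1}^N e^{β x_i} ≤ N e^{β x_max}, taking (1/β) log(·) throughout gives

x_max ≤ smax(x) = (1/β) log Σ_{i=1}^N e^{β x_i} ≤ (1/β) log( N e^{β x_max} ) = x_max + (1/β) log N.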

Both inequalities are clearly tight for large β.

MSE of the algorithm

The second term, (θ_0 − ȳ), is due to the presence of communication noise: the states of the sensors do not converge to the sample mean of the mapped initial measurements; instead they converge to a random variable θ* whose expectation is the sample mean ȳ of the mapped initial measurements from (2.11). This occurs also in linear average consensus in the presence of noise. The mean square error of θ* is defined as ξ_N = E[(θ* − ȳ)^2] and is characterized as (2.18) in Theorem 2. From (2.18), we see that the mean square error is finite and is small when Σ_{t=0}^∞ α^2(t) or σ_n^2 is small.

Convergence speed

The third cause of error is due to the finite number of iterations: even though lim_{t→∞} y(t) = θ_0 1, y(t*) ≠ θ_0 1. However, with a judicious choice of the non-linear function h(·) and step size α(t), one can reduce the convergence time. In the rest of this chapter, we will assume that α(t) = a/(t+1), a > 0, which satisfies Σ_{t=0}^∞ α^2(t) < ∞ and Σ_{t=0}^∞ α(t) = ∞. The convergence speed is analyzed by establishing that √t (y(t) − θ_0 1) is asymptotically normal with zero mean and some covariance matrix C. The next theorem further quantifies the convergence speed.

Theorem 3. Let 2aλ_2(L)h′(θ_0) > 1, so that the matrix [−a h′(θ_0) B + I/2] is stable (every eigenvalue of the square matrix has strictly negative real part), where I is the identity matrix and B is a diagonal matrix containing all the non-zero eigenvalues of L. Define U = [N^{-1/2} 1, Φ], a unitary matrix whose columns are the eigenvectors of L. Let [ñ_1(t), ñ(t)^T]^T = U^T n(t) and let Cñ = E[ñ ñ^T] be a diagonal matrix, Cñ ∈ R^{(N−1)×(N−1)}.

Then, as t → ∞,

√t (y(t) − θ_0 1) → N(0, C),    (2.20)

where the asymptotic covariance matrix is C = N^{-1} a^2 σ_n^2 11^T + N^{-1} Φ S_{θ_0} Φ^T, with

S_{θ_0} = a^2 ∫_0^∞ e^{(−a h′(θ_0) B + I/2) t} Cñ e^{(−a h′(θ_0) B + I/2) t} dt.

The proof is the same as that of Theorem 5 in [41]. The convergence speed is quantified by ‖C‖, which is defined to be the largest eigenvalue of the covariance matrix. We show in the Appendix that the l_2 norm of the covariance matrix can be expressed as

‖C‖ = max_{‖x‖=1} x^T C x = max{ a^2 σ_n^2,  (1/N) a^2 σ_n^2 / (2a h′(θ_0) λ_2(L) − 1) }.    (2.21)

This norm, ‖C‖, can be optimized with respect to a, and the value that minimizes ‖C‖ is a* = (N + 1)/[2N λ_2(L) h′(θ_0)]. The optimal value of the l_2 norm of the covariance matrix, denoted ‖C*‖, can be expressed as

‖C*‖ = ( (N + 1)/(2N) )^2 ( σ_n^2 / λ_2^2(L) ) ( 1/h′(θ_0) )^2 = ( (N + 1)/(2N) )^2 ( σ_n^2 / λ_2^2(L) ) ( 1/(γ ω u′(ω θ_0)) )^2,    (2.22)

which is proved in the Appendix. The interpretation is that convergence is slower when ‖C‖ is larger. It is clear from equation (2.22) that convergence will be fast if λ_2(L) is large, which implies faster convergence in a more connected graph. Also, the value of ‖C*‖ decreases as h′(θ_0) increases, which shows that the convergence speed depends on the non-linear function and the convergence point. We see from equation (2.22) that a larger maximum transmit power γ results in faster convergence. Also note that a larger ω u′(ω θ_0) value results in faster convergence, as shown in equation (2.22). Therefore, if θ_0 is approximately known from prior runs of the algorithm and γ is fixed, the value of ω can be set to the solution of the optimization problem: maximize_ω ω u′(ω θ_0).
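As a small worked example (the values of N, λ_2(L), and θ_0 below are assumptions, e.g., θ_0 taken from a prior run), the optimal step-size scale a* for h(x) = γ tanh(ωx), whose derivative is h′(x) = γω(1 − tanh^2(ωx)), can be computed as:

    import numpy as np

    def optimal_a(N, lambda2, theta0, gamma=1.0, omega=0.015):
        # a* = (N + 1) / (2 N lambda_2(L) h'(theta_0)) from the text above
        h_prime = gamma * omega * (1.0 - np.tanh(omega * theta0) ** 2)
        return (N + 1) / (2 * N * lambda2 * h_prime)

    print(optimal_a(N=75, lambda2=1.2, theta0=10.0))   # illustrative numbers only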

By observing the three sources of error mentioned above, we find that there is a trade-off between the convergence speed and the soft-max error. To see this, recall that the convergence speed is quantified by ‖C‖. From the analysis of the sources of error, choosing a larger β reduces the deterministic bias caused by the soft-max mapping (2.19), but degrades the variance term in (2.22). The reason is that h(·) is chosen to be an odd bounded transmission function as in Figure 2.1, with a zero-crossing and steepest slope at the origin, and with h′(x) decreasing for x ≥ 0. Since θ_0 ≥ 0, h′(θ_0) will be small when θ_0 is large, which increases the value of ‖C‖ and makes the convergence slower. The convergence point θ_0 will be large when β is chosen large, since θ_0 ≈ (1/N) Σ_i e^{β x_i}. Therefore, a trade-off between the convergence speed and the soft-max error exists: a more accurate soft-max can be obtained by choosing a large β, but this degrades the convergence speed.

2.4.2 Bound on Convergence Time

The convergence speed of the max consensus algorithm is quantified by the asymptotic covariance matrix. If some prior knowledge about the distribution of the initial measurements is available, the step size can be set based on the expression a* = (N + 1)/[2N λ_2(L) h′(θ_0)] and α(t) = a*/(t+1) to make the convergence fast. In this section, we assume that the step size is set to a* as mentioned. The trade-off controlled by β balances soft-max error and convergence speed. How much time is needed for the nodes to reach consensus is an important question. In this subsection, we show that by upper bounding the three sources of error in Section 2.4.1, an approximation of the iteration time needed for reaching consensus can be calculated.


More information

Comparison of the Population Variance Estimators. of 2-Parameter Exponential Distribution Based on. Multiple Criteria Decision Making Method

Comparison of the Population Variance Estimators. of 2-Parameter Exponential Distribution Based on. Multiple Criteria Decision Making Method Appled Mathematcal Scences, Vol. 7, 0, no. 47, 07-0 HIARI Ltd, www.m-hkar.com Comparson of the Populaton Varance Estmators of -Parameter Exponental Dstrbuton Based on Multple Crtera Decson Makng Method

More information

For now, let us focus on a specific model of neurons. These are simplified from reality but can achieve remarkable results.

For now, let us focus on a specific model of neurons. These are simplified from reality but can achieve remarkable results. Neural Networks : Dervaton compled by Alvn Wan from Professor Jtendra Malk s lecture Ths type of computaton s called deep learnng and s the most popular method for many problems, such as computer vson

More information

Week 5: Neural Networks

Week 5: Neural Networks Week 5: Neural Networks Instructor: Sergey Levne Neural Networks Summary In the prevous lecture, we saw how we can construct neural networks by extendng logstc regresson. Neural networks consst of multple

More information

Errors for Linear Systems

Errors for Linear Systems Errors for Lnear Systems When we solve a lnear system Ax b we often do not know A and b exactly, but have only approxmatons  and ˆb avalable. Then the best thng we can do s to solve ˆx ˆb exactly whch

More information

Logistic Regression. CAP 5610: Machine Learning Instructor: Guo-Jun QI

Logistic Regression. CAP 5610: Machine Learning Instructor: Guo-Jun QI Logstc Regresson CAP 561: achne Learnng Instructor: Guo-Jun QI Bayes Classfer: A Generatve model odel the posteror dstrbuton P(Y X) Estmate class-condtonal dstrbuton P(X Y) for each Y Estmate pror dstrbuton

More information

Some modelling aspects for the Matlab implementation of MMA

Some modelling aspects for the Matlab implementation of MMA Some modellng aspects for the Matlab mplementaton of MMA Krster Svanberg krlle@math.kth.se Optmzaton and Systems Theory Department of Mathematcs KTH, SE 10044 Stockholm September 2004 1. Consdered optmzaton

More information

Lecture 21: Numerical methods for pricing American type derivatives

Lecture 21: Numerical methods for pricing American type derivatives Lecture 21: Numercal methods for prcng Amercan type dervatves Xaoguang Wang STAT 598W Aprl 10th, 2014 (STAT 598W) Lecture 21 1 / 26 Outlne 1 Fnte Dfference Method Explct Method Penalty Method (STAT 598W)

More information

Chapter - 2. Distribution System Power Flow Analysis

Chapter - 2. Distribution System Power Flow Analysis Chapter - 2 Dstrbuton System Power Flow Analyss CHAPTER - 2 Radal Dstrbuton System Load Flow 2.1 Introducton Load flow s an mportant tool [66] for analyzng electrcal power system network performance. Load

More information

Chapter Newton s Method

Chapter Newton s Method Chapter 9. Newton s Method After readng ths chapter, you should be able to:. Understand how Newton s method s dfferent from the Golden Secton Search method. Understand how Newton s method works 3. Solve

More information

Clock Synchronization in WSN: from Traditional Estimation Theory to Distributed Signal Processing

Clock Synchronization in WSN: from Traditional Estimation Theory to Distributed Signal Processing Clock Synchronzaton n WS: from Tradtonal Estmaton Theory to Dstrbuted Sgnal Processng Yk-Chung WU The Unversty of Hong Kong Emal: ycwu@eee.hku.hk, Webpage: www.eee.hku.hk/~ycwu Applcatons requre clock

More information

Lecture 4: Universal Hash Functions/Streaming Cont d

Lecture 4: Universal Hash Functions/Streaming Cont d CSE 5: Desgn and Analyss of Algorthms I Sprng 06 Lecture 4: Unversal Hash Functons/Streamng Cont d Lecturer: Shayan Oves Gharan Aprl 6th Scrbe: Jacob Schreber Dsclamer: These notes have not been subjected

More information

U.C. Berkeley CS294: Beyond Worst-Case Analysis Luca Trevisan September 5, 2017

U.C. Berkeley CS294: Beyond Worst-Case Analysis Luca Trevisan September 5, 2017 U.C. Berkeley CS94: Beyond Worst-Case Analyss Handout 4s Luca Trevsan September 5, 07 Summary of Lecture 4 In whch we ntroduce semdefnte programmng and apply t to Max Cut. Semdefnte Programmng Recall that

More information

Lossy Compression. Compromise accuracy of reconstruction for increased compression.

Lossy Compression. Compromise accuracy of reconstruction for increased compression. Lossy Compresson Compromse accuracy of reconstructon for ncreased compresson. The reconstructon s usually vsbly ndstngushable from the orgnal mage. Typcally, one can get up to 0:1 compresson wth almost

More information

A Hybrid Variational Iteration Method for Blasius Equation

A Hybrid Variational Iteration Method for Blasius Equation Avalable at http://pvamu.edu/aam Appl. Appl. Math. ISSN: 1932-9466 Vol. 10, Issue 1 (June 2015), pp. 223-229 Applcatons and Appled Mathematcs: An Internatonal Journal (AAM) A Hybrd Varatonal Iteraton Method

More information

Distributed parameter estimation in wireless sensor networks using fused local observations

Distributed parameter estimation in wireless sensor networks using fused local observations Dstrbuted parameter estmaton n wreless sensor networks usng fused local observatons Mohammad Fanae, Matthew C. Valent, Natala A. Schmd, and Marwan M. Alkhweld Lane Department of Computer Scence and Electrcal

More information

2E Pattern Recognition Solutions to Introduction to Pattern Recognition, Chapter 2: Bayesian pattern classification

2E Pattern Recognition Solutions to Introduction to Pattern Recognition, Chapter 2: Bayesian pattern classification E395 - Pattern Recognton Solutons to Introducton to Pattern Recognton, Chapter : Bayesan pattern classfcaton Preface Ths document s a soluton manual for selected exercses from Introducton to Pattern Recognton

More information

Problem Set 9 Solutions

Problem Set 9 Solutions Desgn and Analyss of Algorthms May 4, 2015 Massachusetts Insttute of Technology 6.046J/18.410J Profs. Erk Demane, Srn Devadas, and Nancy Lynch Problem Set 9 Solutons Problem Set 9 Solutons Ths problem

More information

Changing Topology and Communication Delays

Changing Topology and Communication Delays Prepared by F.L. Lews Updated: Saturday, February 3, 00 Changng Topology and Communcaton Delays Changng Topology The graph connectvty or topology may change over tme. Let G { G, G,, G M } wth M fnte be

More information

Maximizing the number of nonnegative subsets

Maximizing the number of nonnegative subsets Maxmzng the number of nonnegatve subsets Noga Alon Hao Huang December 1, 213 Abstract Gven a set of n real numbers, f the sum of elements of every subset of sze larger than k s negatve, what s the maxmum

More information

STATS 306B: Unsupervised Learning Spring Lecture 10 April 30

STATS 306B: Unsupervised Learning Spring Lecture 10 April 30 STATS 306B: Unsupervsed Learnng Sprng 2014 Lecture 10 Aprl 30 Lecturer: Lester Mackey Scrbe: Joey Arthur, Rakesh Achanta 10.1 Factor Analyss 10.1.1 Recap Recall the factor analyss (FA) model for lnear

More information

General viscosity iterative method for a sequence of quasi-nonexpansive mappings

General viscosity iterative method for a sequence of quasi-nonexpansive mappings Avalable onlne at www.tjnsa.com J. Nonlnear Sc. Appl. 9 (2016), 5672 5682 Research Artcle General vscosty teratve method for a sequence of quas-nonexpansve mappngs Cuje Zhang, Ynan Wang College of Scence,

More information

Grover s Algorithm + Quantum Zeno Effect + Vaidman

Grover s Algorithm + Quantum Zeno Effect + Vaidman Grover s Algorthm + Quantum Zeno Effect + Vadman CS 294-2 Bomb 10/12/04 Fall 2004 Lecture 11 Grover s algorthm Recall that Grover s algorthm for searchng over a space of sze wors as follows: consder the

More information

The Gaussian classifier. Nuno Vasconcelos ECE Department, UCSD

The Gaussian classifier. Nuno Vasconcelos ECE Department, UCSD he Gaussan classfer Nuno Vasconcelos ECE Department, UCSD Bayesan decson theory recall that we have state of the world X observatons g decson functon L[g,y] loss of predctng y wth g Bayes decson rule s

More information

MATH 829: Introduction to Data Mining and Analysis The EM algorithm (part 2)

MATH 829: Introduction to Data Mining and Analysis The EM algorithm (part 2) 1/16 MATH 829: Introducton to Data Mnng and Analyss The EM algorthm (part 2) Domnque Gullot Departments of Mathematcal Scences Unversty of Delaware Aprl 20, 2016 Recall 2/16 We are gven ndependent observatons

More information

NON-CENTRAL 7-POINT FORMULA IN THE METHOD OF LINES FOR PARABOLIC AND BURGERS' EQUATIONS

NON-CENTRAL 7-POINT FORMULA IN THE METHOD OF LINES FOR PARABOLIC AND BURGERS' EQUATIONS IJRRAS 8 (3 September 011 www.arpapress.com/volumes/vol8issue3/ijrras_8_3_08.pdf NON-CENTRAL 7-POINT FORMULA IN THE METHOD OF LINES FOR PARABOLIC AND BURGERS' EQUATIONS H.O. Bakodah Dept. of Mathematc

More information

Section 8.3 Polar Form of Complex Numbers

Section 8.3 Polar Form of Complex Numbers 80 Chapter 8 Secton 8 Polar Form of Complex Numbers From prevous classes, you may have encountered magnary numbers the square roots of negatve numbers and, more generally, complex numbers whch are the

More information

Design and Optimization of Fuzzy Controller for Inverse Pendulum System Using Genetic Algorithm

Design and Optimization of Fuzzy Controller for Inverse Pendulum System Using Genetic Algorithm Desgn and Optmzaton of Fuzzy Controller for Inverse Pendulum System Usng Genetc Algorthm H. Mehraban A. Ashoor Unversty of Tehran Unversty of Tehran h.mehraban@ece.ut.ac.r a.ashoor@ece.ut.ac.r Abstract:

More information

Lecture 10 Support Vector Machines II

Lecture 10 Support Vector Machines II Lecture 10 Support Vector Machnes II 22 February 2016 Taylor B. Arnold Yale Statstcs STAT 365/665 1/28 Notes: Problem 3 s posted and due ths upcomng Frday There was an early bug n the fake-test data; fxed

More information

Communication-efficient Distributed Solutions to a System of Linear Equations with Laplacian Sparse Structure

Communication-efficient Distributed Solutions to a System of Linear Equations with Laplacian Sparse Structure Communcaton-effcent Dstrbuted Solutons to a System of Lnear Equatons wth Laplacan Sparse Structure Peng Wang, Yuanq Gao, Nanpeng Yu, We Ren, Janmng Lan, and D Wu Abstract Two communcaton-effcent dstrbuted

More information

Tracking with Kalman Filter

Tracking with Kalman Filter Trackng wth Kalman Flter Scott T. Acton Vrgna Image and Vdeo Analyss (VIVA), Charles L. Brown Department of Electrcal and Computer Engneerng Department of Bomedcal Engneerng Unversty of Vrgna, Charlottesvlle,

More information

Chapter 7 Channel Capacity and Coding

Chapter 7 Channel Capacity and Coding Wreless Informaton Transmsson System Lab. Chapter 7 Channel Capacty and Codng Insttute of Communcatons Engneerng atonal Sun Yat-sen Unversty Contents 7. Channel models and channel capacty 7.. Channel models

More information

DUE: WEDS FEB 21ST 2018

DUE: WEDS FEB 21ST 2018 HOMEWORK # 1: FINITE DIFFERENCES IN ONE DIMENSION DUE: WEDS FEB 21ST 2018 1. Theory Beam bendng s a classcal engneerng analyss. The tradtonal soluton technque makes smplfyng assumptons such as a constant

More information

Research Article Green s Theorem for Sign Data

Research Article Green s Theorem for Sign Data Internatonal Scholarly Research Network ISRN Appled Mathematcs Volume 2012, Artcle ID 539359, 10 pages do:10.5402/2012/539359 Research Artcle Green s Theorem for Sgn Data Lous M. Houston The Unversty of

More information

Notes on Frequency Estimation in Data Streams

Notes on Frequency Estimation in Data Streams Notes on Frequency Estmaton n Data Streams In (one of) the data streamng model(s), the data s a sequence of arrvals a 1, a 2,..., a m of the form a j = (, v) where s the dentty of the tem and belongs to

More information

Appendix B. The Finite Difference Scheme

Appendix B. The Finite Difference Scheme 140 APPENDIXES Appendx B. The Fnte Dfference Scheme In ths appendx we present numercal technques whch are used to approxmate solutons of system 3.1 3.3. A comprehensve treatment of theoretcal and mplementaton

More information

Solutions to exam in SF1811 Optimization, Jan 14, 2015

Solutions to exam in SF1811 Optimization, Jan 14, 2015 Solutons to exam n SF8 Optmzaton, Jan 4, 25 3 3 O------O -4 \ / \ / The network: \/ where all lnks go from left to rght. /\ / \ / \ 6 O------O -5 2 4.(a) Let x = ( x 3, x 4, x 23, x 24 ) T, where the varable

More information

A PROBABILITY-DRIVEN SEARCH ALGORITHM FOR SOLVING MULTI-OBJECTIVE OPTIMIZATION PROBLEMS

A PROBABILITY-DRIVEN SEARCH ALGORITHM FOR SOLVING MULTI-OBJECTIVE OPTIMIZATION PROBLEMS HCMC Unversty of Pedagogy Thong Nguyen Huu et al. A PROBABILITY-DRIVEN SEARCH ALGORITHM FOR SOLVING MULTI-OBJECTIVE OPTIMIZATION PROBLEMS Thong Nguyen Huu and Hao Tran Van Department of mathematcs-nformaton,

More information

DO NOT DO HOMEWORK UNTIL IT IS ASSIGNED. THE ASSIGNMENTS MAY CHANGE UNTIL ANNOUNCED.

DO NOT DO HOMEWORK UNTIL IT IS ASSIGNED. THE ASSIGNMENTS MAY CHANGE UNTIL ANNOUNCED. EE 539 Homeworks Sprng 08 Updated: Tuesday, Aprl 7, 08 DO NOT DO HOMEWORK UNTIL IT IS ASSIGNED. THE ASSIGNMENTS MAY CHANGE UNTIL ANNOUNCED. For full credt, show all work. Some problems requre hand calculatons.

More information

Using T.O.M to Estimate Parameter of distributions that have not Single Exponential Family

Using T.O.M to Estimate Parameter of distributions that have not Single Exponential Family IOSR Journal of Mathematcs IOSR-JM) ISSN: 2278-5728. Volume 3, Issue 3 Sep-Oct. 202), PP 44-48 www.osrjournals.org Usng T.O.M to Estmate Parameter of dstrbutons that have not Sngle Exponental Famly Jubran

More information

Computing MLE Bias Empirically

Computing MLE Bias Empirically Computng MLE Bas Emprcally Kar Wa Lm Australan atonal Unversty January 3, 27 Abstract Ths note studes the bas arses from the MLE estmate of the rate parameter and the mean parameter of an exponental dstrbuton.

More information

Information Weighted Consensus

Information Weighted Consensus Informaton Weghted Consensus A. T. Kamal, J. A. Farrell and A. K. Roy-Chowdhury Unversty of Calforna, Rversde, CA-92521 Abstract Consensus-based dstrbuted estmaton schemes are becomng ncreasngly popular

More information

Lecture 10: May 6, 2013

Lecture 10: May 6, 2013 TTIC/CMSC 31150 Mathematcal Toolkt Sprng 013 Madhur Tulsan Lecture 10: May 6, 013 Scrbe: Wenje Luo In today s lecture, we manly talked about random walk on graphs and ntroduce the concept of graph expander,

More information

Markov Chain Monte Carlo Lecture 6

Markov Chain Monte Carlo Lecture 6 where (x 1,..., x N ) X N, N s called the populaton sze, f(x) f (x) for at least one {1, 2,..., N}, and those dfferent from f(x) are called the tral dstrbutons n terms of mportance samplng. Dfferent ways

More information

COMPARISON OF SOME RELIABILITY CHARACTERISTICS BETWEEN REDUNDANT SYSTEMS REQUIRING SUPPORTING UNITS FOR THEIR OPERATIONS

COMPARISON OF SOME RELIABILITY CHARACTERISTICS BETWEEN REDUNDANT SYSTEMS REQUIRING SUPPORTING UNITS FOR THEIR OPERATIONS Avalable onlne at http://sck.org J. Math. Comput. Sc. 3 (3), No., 6-3 ISSN: 97-537 COMPARISON OF SOME RELIABILITY CHARACTERISTICS BETWEEN REDUNDANT SYSTEMS REQUIRING SUPPORTING UNITS FOR THEIR OPERATIONS

More information

Week3, Chapter 4. Position and Displacement. Motion in Two Dimensions. Instantaneous Velocity. Average Velocity

Week3, Chapter 4. Position and Displacement. Motion in Two Dimensions. Instantaneous Velocity. Average Velocity Week3, Chapter 4 Moton n Two Dmensons Lecture Quz A partcle confned to moton along the x axs moves wth constant acceleraton from x =.0 m to x = 8.0 m durng a 1-s tme nterval. The velocty of the partcle

More information

Time-Varying Systems and Computations Lecture 6

Time-Varying Systems and Computations Lecture 6 Tme-Varyng Systems and Computatons Lecture 6 Klaus Depold 14. Januar 2014 The Kalman Flter The Kalman estmaton flter attempts to estmate the actual state of an unknown dscrete dynamcal system, gven nosy

More information

CSci 6974 and ECSE 6966 Math. Tech. for Vision, Graphics and Robotics Lecture 21, April 17, 2006 Estimating A Plane Homography

CSci 6974 and ECSE 6966 Math. Tech. for Vision, Graphics and Robotics Lecture 21, April 17, 2006 Estimating A Plane Homography CSc 6974 and ECSE 6966 Math. Tech. for Vson, Graphcs and Robotcs Lecture 21, Aprl 17, 2006 Estmatng A Plane Homography Overvew We contnue wth a dscusson of the major ssues, usng estmaton of plane projectve

More information

A Robust Method for Calculating the Correlation Coefficient

A Robust Method for Calculating the Correlation Coefficient A Robust Method for Calculatng the Correlaton Coeffcent E.B. Nven and C. V. Deutsch Relatonshps between prmary and secondary data are frequently quantfed usng the correlaton coeffcent; however, the tradtonal

More information

Basically, if you have a dummy dependent variable you will be estimating a probability.

Basically, if you have a dummy dependent variable you will be estimating a probability. ECON 497: Lecture Notes 13 Page 1 of 1 Metropoltan State Unversty ECON 497: Research and Forecastng Lecture Notes 13 Dummy Dependent Varable Technques Studenmund Chapter 13 Bascally, f you have a dummy

More information

Transfer Functions. Convenient representation of a linear, dynamic model. A transfer function (TF) relates one input and one output: ( ) system

Transfer Functions. Convenient representation of a linear, dynamic model. A transfer function (TF) relates one input and one output: ( ) system Transfer Functons Convenent representaton of a lnear, dynamc model. A transfer functon (TF) relates one nput and one output: x t X s y t system Y s The followng termnology s used: x y nput output forcng

More information

Vector Norms. Chapter 7 Iterative Techniques in Matrix Algebra. Cauchy-Bunyakovsky-Schwarz Inequality for Sums. Distances. Convergence.

Vector Norms. Chapter 7 Iterative Techniques in Matrix Algebra. Cauchy-Bunyakovsky-Schwarz Inequality for Sums. Distances. Convergence. Vector Norms Chapter 7 Iteratve Technques n Matrx Algebra Per-Olof Persson persson@berkeley.edu Department of Mathematcs Unversty of Calforna, Berkeley Math 128B Numercal Analyss Defnton A vector norm

More information

Global Sensitivity. Tuesday 20 th February, 2018

Global Sensitivity. Tuesday 20 th February, 2018 Global Senstvty Tuesday 2 th February, 28 ) Local Senstvty Most senstvty analyses [] are based on local estmates of senstvty, typcally by expandng the response n a Taylor seres about some specfc values

More information

Lecture 12: Classification

Lecture 12: Classification Lecture : Classfcaton g Dscrmnant functons g The optmal Bayes classfer g Quadratc classfers g Eucldean and Mahalanobs metrcs g K Nearest Neghbor Classfers Intellgent Sensor Systems Rcardo Guterrez-Osuna

More information

APPENDIX A Some Linear Algebra

APPENDIX A Some Linear Algebra APPENDIX A Some Lnear Algebra The collecton of m, n matrces A.1 Matrces a 1,1,..., a 1,n A = a m,1,..., a m,n wth real elements a,j s denoted by R m,n. If n = 1 then A s called a column vector. Smlarly,

More information

Random Walks on Digraphs

Random Walks on Digraphs Random Walks on Dgraphs J. J. P. Veerman October 23, 27 Introducton Let V = {, n} be a vertex set and S a non-negatve row-stochastc matrx (.e. rows sum to ). V and S defne a dgraph G = G(V, S) and a drected

More information

Feb 14: Spatial analysis of data fields

Feb 14: Spatial analysis of data fields Feb 4: Spatal analyss of data felds Mappng rregularly sampled data onto a regular grd Many analyss technques for geophyscal data requre the data be located at regular ntervals n space and/or tme. hs s

More information

Additional Codes using Finite Difference Method. 1 HJB Equation for Consumption-Saving Problem Without Uncertainty

Additional Codes using Finite Difference Method. 1 HJB Equation for Consumption-Saving Problem Without Uncertainty Addtonal Codes usng Fnte Dfference Method Benamn Moll 1 HJB Equaton for Consumpton-Savng Problem Wthout Uncertanty Before consderng the case wth stochastc ncome n http://www.prnceton.edu/~moll/ HACTproect/HACT_Numercal_Appendx.pdf,

More information

Solutions Homework 4 March 5, 2018

Solutions Homework 4 March 5, 2018 1 Solutons Homework 4 March 5, 018 Soluton to Exercse 5.1.8: Let a IR be a translaton and c > 0 be a re-scalng. ˆb1 (cx + a) cx n + a (cx 1 + a) c x n x 1 cˆb 1 (x), whch shows ˆb 1 s locaton nvarant and

More information

1 GSW Iterative Techniques for y = Ax

1 GSW Iterative Techniques for y = Ax 1 for y = A I m gong to cheat here. here are a lot of teratve technques that can be used to solve the general case of a set of smultaneous equatons (wrtten n the matr form as y = A), but ths chapter sn

More information

Psychology 282 Lecture #24 Outline Regression Diagnostics: Outliers

Psychology 282 Lecture #24 Outline Regression Diagnostics: Outliers Psychology 282 Lecture #24 Outlne Regresson Dagnostcs: Outlers In an earler lecture we studed the statstcal assumptons underlyng the regresson model, ncludng the followng ponts: Formal statement of assumptons.

More information

princeton univ. F 17 cos 521: Advanced Algorithm Design Lecture 7: LP Duality Lecturer: Matt Weinberg

princeton univ. F 17 cos 521: Advanced Algorithm Design Lecture 7: LP Duality Lecturer: Matt Weinberg prnceton unv. F 17 cos 521: Advanced Algorthm Desgn Lecture 7: LP Dualty Lecturer: Matt Wenberg Scrbe: LP Dualty s an extremely useful tool for analyzng structural propertes of lnear programs. Whle there

More information

Supporting Information

Supporting Information Supportng Informaton The neural network f n Eq. 1 s gven by: f x l = ReLU W atom x l + b atom, 2 where ReLU s the element-wse rectfed lnear unt, 21.e., ReLUx = max0, x, W atom R d d s the weght matrx to

More information

Estimating the Fundamental Matrix by Transforming Image Points in Projective Space 1

Estimating the Fundamental Matrix by Transforming Image Points in Projective Space 1 Estmatng the Fundamental Matrx by Transformng Image Ponts n Projectve Space 1 Zhengyou Zhang and Charles Loop Mcrosoft Research, One Mcrosoft Way, Redmond, WA 98052, USA E-mal: fzhang,cloopg@mcrosoft.com

More information

Inexact Newton Methods for Inverse Eigenvalue Problems

Inexact Newton Methods for Inverse Eigenvalue Problems Inexact Newton Methods for Inverse Egenvalue Problems Zheng-jan Ba Abstract In ths paper, we survey some of the latest development n usng nexact Newton-lke methods for solvng nverse egenvalue problems.

More information

SIO 224. m(r) =(ρ(r),k s (r),µ(r))

SIO 224. m(r) =(ρ(r),k s (r),µ(r)) SIO 224 1. A bref look at resoluton analyss Here s some background for the Masters and Gubbns resoluton paper. Global Earth models are usually found teratvely by assumng a startng model and fndng small

More information

CIS526: Machine Learning Lecture 3 (Sept 16, 2003) Linear Regression. Preparation help: Xiaoying Huang. x 1 θ 1 output... θ M x M

CIS526: Machine Learning Lecture 3 (Sept 16, 2003) Linear Regression. Preparation help: Xiaoying Huang. x 1 θ 1 output... θ M x M CIS56: achne Learnng Lecture 3 (Sept 6, 003) Preparaton help: Xaoyng Huang Lnear Regresson Lnear regresson can be represented by a functonal form: f(; θ) = θ 0 0 +θ + + θ = θ = 0 ote: 0 s a dummy attrbute

More information