Distributed parameter estimation in wireless sensor networks using fused local observations


Distributed parameter estimation in wireless sensor networks using fused local observations

Mohammad Fanaei, Matthew C. Valenti, Natalia A. Schmid, and Marwan M. Alkhweldi
Lane Department of Computer Science and Electrical Engineering, West Virginia University, Morgantown, WV, USA
E-mail: mfanaei@mix.wvu.edu, {matthew.valenti and natalia.schmid}@mail.wvu.edu, and malkhweldi@mix.wvu.edu

ABSTRACT

The goal of this paper is to reliably estimate a vector of unknown deterministic parameters associated with an underlying function at a fusion center of a wireless sensor network based on its noisy samples made at distributed local sensors. A set of noisy samples of a deterministic function characterized by a finite set of unknown parameters to be estimated is observed by distributed sensors. The parameters to be estimated can be some attributes associated with the underlying function, such as its height, its center, its variances in different directions, or even the weights of its specific components over a predefined basis set. Each local sensor processes its observation and sends its processed sample to a fusion center through parallel impaired communication channels. Two local processing schemes, namely analog and digital, are considered. In the analog local processing scheme, each sensor transmits an amplified version of its local analog noisy observation to the fusion center, acting like a relay in a wireless network. In the digital local processing scheme, each sensor quantizes its noisy observation before transmitting it to the fusion center. A flat-fading channel model is considered between the local sensors and the fusion center. The fusion center combines all of the received locally processed observations and estimates the vector of unknown parameters of the underlying function. Two different well-known estimation techniques, namely maximum likelihood (ML), for both analog and digital local processing schemes, and expectation maximization (EM), for the digital local processing scheme, are considered at the fusion center. The performance of the proposed distributed parameter estimation system is investigated through simulation of practical scenarios for a sample underlying function.

Keywords: Distributed parameter estimation, maximum likelihood (ML), expectation maximization (EM), analog/digital local processing, local quantization, integrated mean-squared error, flat-fading channel, fusion center, wireless sensor networks.

1. INTRODUCTION

Wireless sensor networks (WSNs) are typically formed by a large number of densely deployed, geographically distributed sensors with limited capabilities that cooperate with each other to achieve a common goal. One of the most important applications of such networks is distributed parameter estimation, which is the first step in a wider range of applications such as event detection and classification. In a WSN performing distributed parameter estimation, distributed local sensors observe the conditions of their surrounding environment, process their local observations, and send their processed data to a fusion center, which then combines all of the locally processed samples to perform the ultimate global estimation. This paper studies the problem of estimating a vector of unknown parameters associated with a deterministic function at the fusion center of a WSN from its distributed noisy samples observed by local sensors and communicated through parallel flat-fading channels. Two local processing schemes, namely analog and digital, will be considered. In the analog local processing scheme, each sensor acts as a pure relay and transmits an amplified version of its raw analog local observation to the fusion center. In the digital local processing method, each sensor quantizes its local noisy observation and sends its quantized sample to the fusion center using a digital modulation format.
A major application of this work could be in cases where the estimated parameters can then be used in detection and classification of the underlying object that has created the observed influence-field intensity function. The main contribution of this paper is a generalized formulation of distributed parameter estimation in the context of wireless sensor networks, where local observations are not linearly dependent on the underlying parameters to be estimated and no specific observation model has been assumed in the analysis.

Xiao et al.[1] have considered the linear coherent decentralized mean-squared error (MSE) estimation of an unknown vector under stringent bandwidth and power constraints, where the local observation models, the compression functions at the local sensors, and the fusion rule at the fusion center are all linear. As a result of the bandwidth constraint, each sensor in their proposed framework transmits to the fusion center a fixed number of real-valued messages per observation. The power constraint in their model limits the strength of the transmitted signals. Based on their proposed algorithm, each sensor linearly encodes its observations using a linear amplify-and-forward coding strategy, and the fusion center applies a linear mapping to estimate the unknown vector based on the received messages from the local sensors through a coherent multiple-access channel (MAC), assuming perfect synchronization between the sensors and the fusion center so that the transmitted messages from the local sensors can be coherently combined at the fusion center. It is shown in [2] that if the sensor statistics are Gaussian, i.e., the parameter to be estimated and the independent observation noises are Gaussian, and the communication channels between the local sensors and the fusion center are standard Gaussian multiple-access channels, a simple uncoded analog-and-forward scheme, in which each sensor's channel input is merely a scaled version of its noisy observation, drastically outperforms the separate source-channel coding approach in the sense of mean-squared error. Therefore, the decentralized estimation algorithm proposed in [1] can perform optimally in applications that satisfy the requirements summarized in [2]. In this paper, a general non-linear model for the distributed local observations is considered and analyzed.

Ishwar et al.[3] have considered the problem of estimation with unreliable communication links and have derived an information-theoretic achievable rate-distortion region characterizing the per-sample sensor bit-rate R versus the estimation error D. In particular, they have considered two cases: fully distributed estimation schemes with no inter-sensor collaboration, and localized or collaborative estimation schemes in which it is assumed that the network is divided into a number of sensor clusters, where collaboration is allowed among sensors within the same cluster but not across clusters. Nowak et al.[4] have studied the theoretical upper and lower bounds and tradeoffs between the estimation error and energy consumption in WSNs as functions of the local sensor density. They have proposed practical in-network processing approaches based on hierarchical data handling and multiscale partitioning methods for the estimation of two-dimensional inhomogeneous fields composed of two or more homogeneous, smoothly varying regions separated by smooth boundaries. Ribeiro and Giannakis[5] have proposed a distributed bandwidth-constrained maximum-likelihood (ML) estimation method for estimating a scalar deterministic parameter in the presence of zero-mean additive white Gaussian observation noise using only quantized versions of the original local observations, perfectly available at the fusion center. In a sequel work,[6] they have considered more realistic signal models such as known univariate but generally non-Gaussian noise probability density functions (pdfs), known noise pdfs with a finite number of unknown parameters, completely unknown noise pdfs, and practical generalizations to vector parameters and multivariate and possibly correlated noise pdfs.
The observation model in these works could in general be a non-linear function of the vector parameter to be estimated, but it still ignores the effects of imperfect wireless channels between the local sensors and the fusion center. It is shown in these works that transmitting a few bits, or even a single bit, per sensor can approach the performance of the estimator based on unquantized data under realistic conditions. In this paper, a general observation model will be considered in which local observations are not necessarily linear functions of the vector of parameters to be estimated. Furthermore, impaired flat-fading channels between the local sensors and the fusion center will be considered in the analysis.

Niu and Varshney[7] have proposed a signal intensity- or energy-based ML target location estimation scheme to estimate the coordinates of an energy-emitting source using quantized local observations, which are assumed to be perfectly received by the fusion center. They have considered an isotropic signal intensity attenuation model in which the intensity of the signal is assumed to be inversely proportional to the square of the distance from its source. An optimal but infeasible design method, as well as two heuristic practical design methods, for the local quantization thresholds are proposed that minimize the sum of the estimation error variances for the target's two coordinates. In a sequel work, Ozdemir et al.[8] have added the effects of imperfect fading and noisy wireless communication channels between the local sensors and the fusion center to their problem formulation and have developed similar results. Maşazade et al.[9] have considered a similar problem in which the quantized versions

of sensor measurements are transmitted to the fusion center over error-free channels. They have proposed an energy-efficient iterative source localization scheme in which the algorithm begins with a coarse location estimate obtained from measurement data from a set of anchor sensors. Based on the accumulated information at each iteration, the posterior pdf of the source location is approximated using an importance-sampling-based Monte Carlo method, which is then used to activate a number of non-anchor sensors selected to improve the accuracy of the source location estimate the most. Distributed compression of measurement data prior to transmission is also employed at the non-anchor sensors to further reduce the energy consumption. It is shown that this iterative scheme reduces the communication requirements by selecting only the most informative sensors and compressing their data prior to transmission to the fusion center. An iterative non-linear least-squares received signal strength (RSS)-based location estimation technique is also proposed in [10] for joint estimation of the unknown location coordinates and the distance-power gradient, which is a parameter of the radio propagation path-loss model. Salman et al.[11] have proposed a low-complexity version of this approach. Although the illustrative case study for our numerical simulation results in this paper considers estimating the location of the center of a generic Gaussian function, among its other parameters, its scope is not limited to estimating only the location. In other words, this paper considers a more general framework than the ones considered in the aforementioned works and is not based on the signal propagation model in the observation environment.

The rest of this paper is organized as follows: Section 2 describes the model of the distributed parallel-fusion WSN that will be considered in our analysis and defines the precise problem that we are considering in this paper. In Section 3, the ML estimate of a vector of unknown parameters associated with a deterministic two-dimensional function is derived for the case of the analog local processing scheme. Section 4 considers the same problem for the case of the digital local processing method. As shown in that section, the ML estimate in this case is infeasible to find efficiently in a distributed manner. Therefore, a linearized version of the expectation-maximization (EM) algorithm will be proposed in Section 5 to numerically find the ML estimate of the vector of unknown parameters in the case of the digital local processing scheme. Section 6 presents the numerical results of our simulations to study the performance of the proposed distributed estimation framework for a special two-dimensional Gaussian-shaped function, whose associated parameters to be estimated are its height and center. The effects of different parameters of the WSN on the performance of the proposed system will be studied in that section. Finally, in Section 7 we conclude our discussions and summarize the main achievements of this work.

Notation. Throughout this paper, the following notation is used. Boldface upper-case letters denote matrices, boldface lower-case letters denote column vectors, and standard lower-case letters denote scalars. The superscripts $T$ and $-1$ denote the matrix transpose and matrix inverse operators, respectively. The operator $\mathrm{diag}(\cdot)$ either extracts the diagonal elements of a matrix or puts a column vector into a diagonal matrix. $|\cdot|$ is the absolute value of a complex number or the determinant of a square matrix.

2. SYSTEM MODEL AND PROBLEM STATEMENT

Consider a set of K spatially distributed sensors forming a WSN, as shown in Fig. 1.
Suppose that the sensors are located within the domain of a two-dimensional function g(x, y) that is completely known except for a set of unknown deterministic parameters denoted by $\boldsymbol{\theta} = (\theta_1, \theta_2, \ldots, \theta_p)^T$. The ultimate goal of the underlying WSN is to reliably estimate the vector of unknown parameters θ using distributed noisy samples of the function g(x, y), provided by the local sensors to a fusion center through parallel impaired flat-fading channels. Assume that the $i$-th sensor, $i = 1, 2, \ldots, K$, observes a noisy version of a sample of the function g(x, y) at its location as

$$ r_i = g(x_i, y_i) + w_i, \qquad (1) $$

where $r_i$ is the local sensor's noisy observation, $(x_i, y_i)$ is the location of the $i$-th sensor in the network, $g_i = g(x_i, y_i)$ is the sample of the function g(x, y) at location $(x_i, y_i)$, and $w_i$ is zero-mean, spatially uncorrelated, additive white Gaussian noise with variance $\sigma_O^2$, i.e., $w_i \sim \mathcal{N}(0, \sigma_O^2)$. Sensors can be placed on a uniform lattice or distributed at random over the observation area covered by the WSN. Throughout our discussions in this paper, we assume that the locations of the distributed sensors are known at the fusion center.
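To make the sampling model of Equation (1) concrete, the following Python sketch draws K random sensor locations over a rectangular area and generates the noisy samples $r_i = g(x_i, y_i) + w_i$. The function `g_example`, the parameter values, and the area bounds are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def g_example(x, y, theta):
    # Illustrative underlying function; the paper's case study (Section 6) uses a
    # 2-D Gaussian shape with unknown height and center.
    h, xc, yc = theta
    return h * np.exp(-((x - xc) ** 2 / (2 * 4.0) + (y - yc) ** 2 / (2 * 1.0)))

K = 100                                   # number of sensors (assumed)
theta_true = np.array([1.0, 0.0, 0.0])    # (h, x_c, y_c), assumed for illustration
sigma_O = 0.1                             # observation-noise standard deviation (assumed)

# Random sensor locations over the observation area, then Equation (1).
x = rng.uniform(-6.0, 6.0, K)
y = rng.uniform(-3.0, 3.0, K)
r = g_example(x, y, theta_true) + sigma_O * rng.normal(size=K)
```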

[Figure 1: System model of a typical WSN for distributed estimation of a deterministic parameter vector θ.]

Note that the observation model introduced in Equation (1) covers a number of variations in different applications. For example, it is a generalization of the well-studied linear observation model in which $r_i = \mathbf{a}_i^T \boldsymbol{\theta} + w_i$, where $\mathbf{a}_i = (a_{i1}, a_{i2}, \ldots, a_{ip})^T$ is the local observation gain vector for sensor $i$, considered for example by Ribeiro and Giannakis.[5] Furthermore, it is also a generalized version of the observation model considered by Niu and Varshney[7] and Ozdemir et al.[8]

Each sensor processes its local observation based on its local processing rule $\gamma_i$ and sends its result, denoted by $u_i = \gamma_i(r_i)$, to the fusion center through an impaired communication channel. In this paper, two local processing schemes, namely analog and digital, are considered. In the analog local processing scheme, each sensor acts as a pure relay and transmits an amplified version of its raw analog local observation to the fusion center. In the digital local processing method, each sensor quantizes its local observation and sends its quantized sample to the fusion center using a digital modulation format. We will formulate the detailed analysis of the analog local processing scheme in Section 3 and that of the digital local processing scheme in Sections 4 and 5.

Suppose that the locally processed samples are transmitted to the fusion center over parallel independent flat-fading channels. In other words, assume that the received signal from sensor $i$ at the fusion center is

$$ z_i = h_i u_i + n_i, \qquad i = 1, 2, \ldots, K, \qquad (2) $$

where $h_i$ is the fading gain of the channel between sensor $i$ and the fusion center, and $n_i$ is zero-mean, spatially uncorrelated, additive white Gaussian noise with variance $\sigma_C^2$, i.e., $n_i \sim \mathcal{N}(0, \sigma_C^2)$. It is assumed that the channel fading gains are uncorrelated and completely known at the fusion center. Upon receiving the vector of locally processed observations from the distributed sensors, $\mathbf{z} = (z_1, z_2, \ldots, z_K)^T$, the fusion center combines them to estimate the vector of unknown deterministic parameters θ. In the following three sections, we will formulate the ML estimation of θ at the fusion center for the two cases of analog and digital local processing schemes, and the EM algorithm to numerically find the ML estimate of θ at the fusion center for the digital local processing scheme.
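As a rough illustration of the channel model of Equation (2) with analog relaying, the sketch below (continuing the variables of the previous sketch) passes amplified observations through independent flat Rayleigh-fading channels with additive Gaussian noise. The normalization $E[|h_i|^2] = 1$ follows the simulation setup of Section 6; the gain and noise values are assumptions.

```python
# Analog relaying over parallel flat-fading channels (continues the sampling sketch above;
# reuses rng, K, r).
alpha = np.ones(K)                       # local amplification gains, normalized (assumed)
sigma_C = 0.1                            # channel-noise standard deviation (assumed)

# Rayleigh fading gains with E[h_i^2] = 1 (each Gaussian component has variance 1/2).
h_fade = np.abs(rng.normal(scale=np.sqrt(0.5), size=K)
                + 1j * rng.normal(scale=np.sqrt(0.5), size=K))

u = alpha * r                            # analog local processing, u_i = alpha_i r_i
z = h_fade * u + sigma_C * rng.normal(size=K)   # received signals, Equation (2)
```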

3. ML ESTIMATION OF θ: ANALOG LOCAL PROCESSING CASE

Suppose that each sensor acts as a pure relay and transmits an amplified version of its local raw analog observation to the fusion center. In other words,

$$ u_i = \alpha_i r_i, \qquad i = 1, 2, \ldots, K, \qquad (3) $$

where $\alpha_i$ is the local amplification gain at sensor $i$, assumed to be known at the fusion center. Note that in this paper, we do not try to optimize the local amplification gains, as is considered by Banavar et al.[12] for the problem of distributed scalar random parameter estimation. The received signal from sensor $i$ at the fusion center can then be represented as

$$ z_i = h_i \alpha_i r_i + n_i = h_i \alpha_i g_i + h_i \alpha_i w_i + n_i, \qquad i = 1, 2, \ldots, K. \qquad (4) $$

Assuming complete knowledge of the local observation amplification gains and channel fading gains, the fusion center can apply a linear transformation to the received signal from each local sensor to find

$$ z'_i = \frac{z_i}{h_i \alpha_i} = g_i + w_i + \frac{n_i}{h_i \alpha_i} = g_i + v_i, \qquad i = 1, 2, \ldots, K, \qquad (5) $$

where $v_i = w_i + \frac{n_i}{h_i \alpha_i}$ is a zero-mean white Gaussian random variable with variance $\sigma_i^2 = \sigma_O^2 + \frac{\sigma_C^2}{|h_i \alpha_i|^2}$. The estimation process at the fusion center can then be performed based on the vector $\mathbf{z}' = (z'_1, z'_2, \ldots, z'_K)^T$. The probability density function of the linearly processed received vector of local observations from the distributed sensors at the fusion center is given by

$$ f_{\mathbf{z}'}(\mathbf{z}' : \boldsymbol{\theta}) = \frac{1}{\sqrt{(2\pi)^K |\boldsymbol{\Sigma}|}} \exp\!\left( -\frac{1}{2} (\mathbf{z}' - \mathbf{g})^T \boldsymbol{\Sigma}^{-1} (\mathbf{z}' - \mathbf{g}) \right), \qquad (6) $$

where $\boldsymbol{\Sigma} = \mathrm{diag}(\sigma_1^2, \sigma_2^2, \ldots, \sigma_K^2)$ is the diagonal matrix of cumulative noise variances and $\mathbf{g} = (g_1, g_2, \ldots, g_K)^T$ is the vector of samples of the function g(x, y) at the sensor locations. The joint log-likelihood function of the vector of unknown parameters θ can then be found as

$$ l(\boldsymbol{\theta}) = \ln f_{\mathbf{z}'}(\mathbf{z}' : \boldsymbol{\theta}) \propto -\ln |\boldsymbol{\Sigma}| - (\mathbf{z}' - \mathbf{g})^T \boldsymbol{\Sigma}^{-1} (\mathbf{z}' - \mathbf{g}) \overset{(a)}{=} -(\mathbf{z}' - \mathbf{g})^T \boldsymbol{\Sigma}^{-1} (\mathbf{z}' - \mathbf{g}) = -\sum_{i=1}^{K} \frac{(z'_i - g_i)^2}{\sigma_i^2}, \qquad (7) $$

where (a) is based on the fact that Σ is independent of θ. The maximum-likelihood estimate of the vector of unknown parameters θ is the maximizer of its log-likelihood function given in Equation (7). In other words,

$$ \hat{\boldsymbol{\theta}}_{\mathrm{ML}} = \arg\max_{\boldsymbol{\theta}} \; l(\boldsymbol{\theta}) = \arg\min_{\boldsymbol{\theta}} \; (\mathbf{z}' - \mathbf{g})^T \boldsymbol{\Sigma}^{-1} (\mathbf{z}' - \mathbf{g}) = \arg\min_{\boldsymbol{\theta}} \; \sum_{i=1}^{K} \frac{(z'_i - g_i)^2}{\sigma_i^2}. \qquad (8) $$

Based on Equation (8), the ML estimate $\hat{\boldsymbol{\theta}}_{\mathrm{ML}}$ can be found as the solution of the following system of equations:

$$ \nabla_{\boldsymbol{\theta}} \, l(\boldsymbol{\theta}) \big|_{\hat{\boldsymbol{\theta}}_{\mathrm{ML}}} = \mathbf{0}, \qquad (9) $$

where $\nabla_{\boldsymbol{\theta}} = \left( \frac{\partial}{\partial \theta_1}, \frac{\partial}{\partial \theta_2}, \ldots, \frac{\partial}{\partial \theta_p} \right)^T$ is the gradient operator with respect to θ. This system of equations can be simplified in vector form, by substituting l(θ) from Equation (7), as

$$ \frac{\partial \mathbf{g}}{\partial \boldsymbol{\theta}} \, \boldsymbol{\Sigma}^{-1} (\mathbf{z}' - \mathbf{g}) = \mathbf{0}, \qquad (10) $$

where $\frac{\partial \mathbf{g}}{\partial \boldsymbol{\theta}}$ is a $p \times K$ matrix of partial derivatives of the components of the vector $\mathbf{g}$ with respect to the components of θ, defined as

$$ \frac{\partial \mathbf{g}}{\partial \boldsymbol{\theta}} = \begin{pmatrix} \frac{\partial g_1}{\partial \theta_1} & \frac{\partial g_2}{\partial \theta_1} & \cdots & \frac{\partial g_K}{\partial \theta_1} \\ \frac{\partial g_1}{\partial \theta_2} & \frac{\partial g_2}{\partial \theta_2} & \cdots & \frac{\partial g_K}{\partial \theta_2} \\ \vdots & \vdots & \ddots & \vdots \\ \frac{\partial g_1}{\partial \theta_p} & \frac{\partial g_2}{\partial \theta_p} & \cdots & \frac{\partial g_K}{\partial \theta_p} \end{pmatrix}. \qquad (11) $$

The system of equations derived in Equation (10) can be rewritten in scalar form as

$$ \sum_{i=1}^{K} \frac{1}{\sigma_i^2} \, \frac{\partial g_i}{\partial \theta_j} \, (z'_i - g_i) = 0, \qquad j = 1, 2, \ldots, p. \qquad (12) $$

This system of equations is highly non-linear in the unknown parameters for most practical applications and does not have a closed-form solution. In Section 6, this system will be solved using numerical methods to estimate a vector of unknown parameters associated with a specific two-dimensional Gaussian-shaped function of interest g(x, y).
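Equation (8) is a weighted non-linear least-squares problem, so any standard numerical optimizer can stand in for the Newton iteration described later in the paper. The sketch below minimizes the cost of Equation (8) with `scipy.optimize.minimize`; it reuses the variables of the previous sketches and is only a minimal illustration under assumed values, not the authors' implementation.

```python
from scipy.optimize import minimize

# Fusion-center processing for the analog scheme (Equations (5)-(8)).
z_prime = z / (h_fade * alpha)                                    # Equation (5)
sigma_i2 = sigma_O ** 2 + sigma_C ** 2 / (h_fade * alpha) ** 2    # per-sensor variance

def weighted_ls_cost(theta):
    # Equation (8): sum_i (z'_i - g_i(theta))^2 / sigma_i^2
    resid = z_prime - g_example(x, y, theta)
    return np.sum(resid ** 2 / sigma_i2)

theta0 = np.array([0.5, 1.0, -1.0])        # arbitrary initial guess (assumed)
res = minimize(weighted_ls_cost, theta0, method="Nelder-Mead")
theta_ml_analog = res.x
```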

4. ML ESTIMATION OF θ: DIGITAL LOCAL PROCESSING CASE

Suppose that sensor $i$ quantizes its local observation $r_i$ to $b_i = \log_2 M_i$ bits and sends the index of its quantized multi-bit sample to the fusion center, where $M_i$ is the number of quantization levels for sensor $i$. Let $\mathcal{L}_i = \{\beta_i^0, \beta_i^1, \ldots, \beta_i^{M_i}\}$ be the set of quantization thresholds for sensor $i$, where $\beta_i^l$ is the $l$-th quantization threshold of the $i$-th sensor, $\beta_i^0 = -\infty$, and $\beta_i^{M_i} = \infty$, $i = 1, 2, \ldots, K$. The local processing rule of sensor $i$ is then defined as a function $\gamma_i : \mathbb{R} \to \{0, 1, \ldots, M_i - 1\}$, whose values are determined as

$$ u_i = l \quad \Longleftrightarrow \quad \beta_i^l \le r_i < \beta_i^{l+1}, \qquad l = 0, 1, \ldots, M_i - 1, \quad i = 1, 2, \ldots, K. \qquad (13) $$

Based on the local observation model introduced in Equation (1), the probability density function of each sensor's quantized output sample can be found as

$$ f_{U_i}(u_i : \boldsymbol{\theta}) = \sum_{l=0}^{M_i - 1} Q_i(l) \, \delta(u_i - l), \qquad u_i = 0, 1, \ldots, M_i - 1, \quad i = 1, 2, \ldots, K, \qquad (14) $$

where δ(·) denotes the discrete Dirac delta function, $Q_i(l)$ is defined as

$$ Q_i(l) = Q\!\left( \frac{\beta_i^l - g_i}{\sigma_O} \right) - Q\!\left( \frac{\beta_i^{l+1} - g_i}{\sigma_O} \right), \qquad l = 0, 1, \ldots, M_i - 1, \quad i = 1, 2, \ldots, K, \qquad (15) $$

and Q(·) is the complementary distribution function of the standard Gaussian random variable, defined as

$$ Q(x) = \frac{1}{\sqrt{2\pi}} \int_x^{\infty} \exp\!\left( -\frac{t^2}{2} \right) dt. \qquad (16) $$
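A minimal sketch of the uniform quantizer of Equation (13) and of the level probabilities $Q_i(l)$ of Equation (15), using $Q(x) = \tfrac{1}{2}\,\mathrm{erfc}(x/\sqrt{2})$ for the Gaussian tail function of Equation (16). The threshold choice $\beta^l = l h / M$ mirrors the simulation setup of Section 6.1; everything else is an assumption for illustration.

```python
import numpy as np
from scipy.special import erfc

def gaussian_Q(x):
    # Complementary CDF of the standard Gaussian, Equation (16).
    return 0.5 * erfc(x / np.sqrt(2.0))

def make_thresholds(M, h):
    # beta^0 = -inf, beta^l = l*h/M for l = 1..M-1, beta^M = +inf (Section 6.1 choice).
    inner = np.arange(1, M) * h / M
    return np.concatenate(([-np.inf], inner, [np.inf]))

def quantize(r, beta):
    # Equation (13): u = l  iff  beta^l <= r < beta^(l+1).
    return np.clip(np.searchsorted(beta, r, side="right") - 1, 0, len(beta) - 2)

def level_probs(g_i, beta, sigma_O):
    # Equation (15): Q_i(l) = Q((beta^l - g_i)/sigma_O) - Q((beta^(l+1) - g_i)/sigma_O).
    lo = gaussian_Q((beta[:-1] - g_i) / sigma_O)
    hi = gaussian_Q((beta[1:] - g_i) / sigma_O)
    return lo - hi
```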

As in the case of analog local processing, assuming complete knowledge of the channel fading gains, the fusion center can apply a linear transformation to the received signal from each local sensor to find

$$ z'_i = \frac{z_i}{h_i} = u_i + \frac{n_i}{h_i} = u_i + v_i, \qquad i = 1, 2, \ldots, K, \qquad (17) $$

where $v_i = \frac{n_i}{h_i}$ is a zero-mean Gaussian random variable with variance $\sigma_i^2 = \frac{\sigma_C^2}{|h_i|^2}$. The estimation process at the fusion center can then be performed based on the vector $\mathbf{z}' = (z'_1, z'_2, \ldots, z'_K)^T$. The probability density function of the linearly processed received vector of local observations from the distributed sensors at the fusion center is given by

$$
\begin{aligned}
f_{\mathbf{z}'}(\mathbf{z}' : \boldsymbol{\theta}) \;&\overset{(a)}{=}\; \prod_{i=1}^{K} f_{Z'_i}(z'_i : \boldsymbol{\theta})
\;\overset{(b)}{=}\; \prod_{i=1}^{K} \sum_{u_i=0}^{M_i-1} f_{Z'_i \mid U_i}(z'_i \mid u_i) \, f_{U_i}(u_i : \boldsymbol{\theta}) \\
&= \prod_{i=1}^{K} \sum_{u_i=0}^{M_i-1} \frac{1}{\sqrt{2\pi\sigma_i^2}} \exp\!\left( -\frac{(z'_i - u_i)^2}{2\sigma_i^2} \right) \sum_{l=0}^{M_i-1} Q_i(l)\,\delta(u_i - l) \\
&\overset{(c)}{=}\; \prod_{i=1}^{K} \frac{1}{\sqrt{2\pi\sigma_i^2}} \sum_{l=0}^{M_i-1} Q_i(l) \exp\!\left( -\frac{(z'_i - l)^2}{2\sigma_i^2} \right), \qquad (18)
\end{aligned}
$$

where (a) is based on the fact that the received local observations from the distributed sensors are spatially uncorrelated, the summation in (b) is based on the theorem of total probability over all $M_i$ possible realizations of each $u_i$, and (c) is based on the fact that the inner summation over $u_i$ can be simplified, using the properties of the discrete Dirac delta function, as

$$ \sum_{u_i=0}^{M_i-1} \exp\!\left( -\frac{(z'_i - u_i)^2}{2\sigma_i^2} \right) Q_i(l)\,\delta(u_i - l) = \exp\!\left( -\frac{(z'_i - l)^2}{2\sigma_i^2} \right) Q_i(l). \qquad (19) $$

Therefore, the joint log-likelihood function of the vector of unknown parameters θ can be found as

$$ l(\boldsymbol{\theta}) = \ln f_{\mathbf{z}'}(\mathbf{z}' : \boldsymbol{\theta}) \propto \sum_{i=1}^{K} \ln \left[ \sum_{l=0}^{M_i-1} Q_i(l) \exp\!\left( -\frac{(z'_i - l)^2}{2\sigma_i^2} \right) \right]. \qquad (20) $$

The maximum-likelihood estimate of the vector of unknown parameters θ is the maximizer of its log-likelihood function given in Equation (20). In other words,

$$ \hat{\boldsymbol{\theta}}_{\mathrm{ML}} = \arg\max_{\boldsymbol{\theta}} \; l(\boldsymbol{\theta}). \qquad (21) $$

Therefore, $\hat{\boldsymbol{\theta}}_{\mathrm{ML}}$ can be found as the solution of the following system of equations:

$$ \nabla_{\boldsymbol{\theta}} \, l(\boldsymbol{\theta}) \big|_{\hat{\boldsymbol{\theta}}_{\mathrm{ML}}} = \mathbf{0}, \qquad (22) $$

which can be simplified, by substitution of l(θ) from Equation (20), as

$$ \sum_{i=1}^{K} \frac{ \sum_{l=0}^{M_i-1} A_{i,j}(l) \exp\!\left( -\frac{(z'_i - l)^2}{2\sigma_i^2} \right) }{ \sum_{l=0}^{M_i-1} Q_i(l) \exp\!\left( -\frac{(z'_i - l)^2}{2\sigma_i^2} \right) } = 0, \qquad j = 1, 2, \ldots, p, \qquad (23) $$

where $A_{i,j}(l)$ is the partial derivative of $Q_i(l)$ with respect to $\theta_j$, defined as

$$ A_{i,j}(l) = \frac{\partial Q_i(l)}{\partial \theta_j} = \frac{1}{\sqrt{2\pi\sigma_O^2}} \, \frac{\partial g_i}{\partial \theta_j} \exp\!\left( -\frac{g_i^2}{2\sigma_O^2} \right) B_i(l), \qquad (24) $$

where $B_i(l)$ is defined as

$$ B_i(l) = \exp\!\left( \frac{2 g_i \beta_i^l - (\beta_i^l)^2}{2\sigma_O^2} \right) - \exp\!\left( \frac{2 g_i \beta_i^{l+1} - (\beta_i^{l+1})^2}{2\sigma_O^2} \right). \qquad (25) $$

The system of equations given in Equation (23) is highly non-linear in the unknown parameters for most practical applications and does not have a closed-form solution. In practice, an efficient numerical approach has to be devised to find the ML estimate formulated in this section. In the next section, the expectation-maximization algorithm[13] will be developed as an efficient iterative approach to numerically find the ML estimate of the vector of unknown parameters for the case of the digital local processing scheme.
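The log-likelihood of Equation (20) can be evaluated directly once the channel outputs have been rescaled by the known fading gains (Equation (17)). The sketch below assumes the helper functions `g_example` and `level_probs` defined in the earlier sketches and uses illustrative argument names; it is a sanity-check implementation, not the authors' code.

```python
import numpy as np

def digital_log_likelihood(theta, z_prime, x, y, beta, sigma_O, sigma_i2):
    # Equation (20): sum_i log sum_l Q_i(l) * exp(-(z'_i - l)^2 / (2 sigma_i^2)).
    M = len(beta) - 1
    levels = np.arange(M)
    g_vals = g_example(x, y, theta)
    total = 0.0
    for zi, gi, s2 in zip(z_prime, g_vals, sigma_i2):
        Q_l = level_probs(gi, beta, sigma_O)
        total += np.log(np.sum(Q_l * np.exp(-(zi - levels) ** 2 / (2 * s2))))
    return total
```

For small K and M this function can be maximized directly over a parameter grid or with a generic optimizer, which is a useful cross-check for the EM iterations developed in Section 5.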

5. LINEARIZED EM SOLUTION: DIGITAL LOCAL PROCESSING CASE

Based on the expectation-maximization algorithm introduced by Dempster et al.,[13] we consider the linearly processed received vector of local observations from the distributed sensors at the fusion center, $\mathbf{z}'$, defined in Equation (17), as the incomplete data set, and the pair of the vector of local observations $\mathbf{r} = (r_1, r_2, \ldots, r_K)^T$, defined in Equation (1), and the vector of linearly transformed channel additive white Gaussian noise variables $\mathbf{v} = (v_1, v_2, \ldots, v_K)^T$, i.e., $\{\mathbf{r}, \mathbf{v}\}$, as the complete data set. The mapping $\mathbf{z}' = \mathbf{u} + \mathbf{v}$ relates the incomplete and complete data spaces, where $\mathbf{u} = (u_1, u_2, \ldots, u_K)^T$ is the vector of locally quantized observations of the distributed sensors, based on $u_i = \gamma_i(r_i)$ as the known local quantization rule of sensor $i$ defined in Equation (13). The joint probability density function of the complete data set, parametrized by the vector of unknown parameters θ, is found as

$$ f_{\mathrm{CD}}(\mathbf{r}, \mathbf{v} : \boldsymbol{\theta}) \overset{(a)}{=} f_{\mathbf{r}}(\mathbf{r} : \boldsymbol{\theta}) \, f_{\mathbf{v}}(\mathbf{v}) = \frac{\exp\!\left( -\frac{1}{2} (\mathbf{r} - \mathbf{g})^T \boldsymbol{\Sigma}_O^{-1} (\mathbf{r} - \mathbf{g}) \right)}{\sqrt{(2\pi)^K |\boldsymbol{\Sigma}_O|}} \, \frac{\exp\!\left( -\frac{1}{2} \mathbf{v}^T \boldsymbol{\Sigma}_C^{-1} \mathbf{v} \right)}{\sqrt{(2\pi)^K |\boldsymbol{\Sigma}_C|}}, \qquad (26) $$

where (a) is based on the fact that $\mathbf{r}$ and $\mathbf{v}$ are independent Gaussian random vectors, and $\boldsymbol{\Sigma}_O = \mathrm{diag}(\sigma_{O,1}^2, \sigma_{O,2}^2, \ldots, \sigma_{O,K}^2)$ and $\boldsymbol{\Sigma}_C = \mathrm{diag}(\sigma_1^2, \sigma_2^2, \ldots, \sigma_K^2)$ are the diagonal matrices of the observation noise variances and the transformed channel noise variances $\sigma_i^2 = \frac{\sigma_C^2}{|h_i|^2}$, respectively. The joint log-likelihood function of the complete data set is defined as

$$ l_{\mathrm{CD}}(\mathbf{r}, \mathbf{v} : \boldsymbol{\theta}) = \ln f_{\mathrm{CD}}(\mathbf{r}, \mathbf{v} : \boldsymbol{\theta}) \propto -(\mathbf{r} - \mathbf{g})^T \boldsymbol{\Sigma}_O^{-1} (\mathbf{r} - \mathbf{g}) = -\sum_{i=1}^{K} \frac{(r_i - g_i)^2}{\sigma_O^2}, \qquad (27) $$

where the terms independent of the vector of unknown parameters θ are omitted.

Let $\hat{\boldsymbol{\theta}}^{(k)}$ be the estimate of the unknown vector of parameters at the $k$-th iteration of the expectation-maximization algorithm. To further refine and update the estimates of the unknown parameters, we alternate between the expectation and maximization steps, defined as follows:

Expectation Step (E-Step): During the expectation step, the conditional expectation of the joint log-likelihood function of the complete data set, given the incomplete data set and $\hat{\boldsymbol{\theta}}^{(k)}$, is found as

$$ F\!\left(\boldsymbol{\theta} \mid \hat{\boldsymbol{\theta}}^{(k)}\right) = \mathbb{E}\!\left[ l_{\mathrm{CD}}(\mathbf{r}, \mathbf{v} : \boldsymbol{\theta}) \,\middle|\, \mathbf{z}', \hat{\boldsymbol{\theta}}^{(k)} \right] \propto -\sum_{i=1}^{K} \frac{1}{\sigma_O^2} \, \mathbb{E}\!\left[ (r_i - g_i)^2 \,\middle|\, \mathbf{z}', \hat{\boldsymbol{\theta}}^{(k)} \right], \qquad (28) $$

where $\mathbb{E}[\cdot]$ denotes the expectation operation with respect to the conditional pdf of the complete data set, given the incomplete data set and the estimate of the vector of parameters at the $k$-th iteration,

$$ f_{\mathrm{CD} \mid \mathrm{ID}}\!\left(\mathbf{r}, \mathbf{v} : \hat{\boldsymbol{\theta}}^{(k)} \,\middle|\, \mathbf{z}'\right) = f_{\mathbf{r} \mid \mathbf{z}'}\!\left(\mathbf{r} : \hat{\boldsymbol{\theta}}^{(k)} \,\middle|\, \mathbf{z}'\right) f_{\mathbf{v} \mid \mathbf{z}'}\!\left(\mathbf{v} : \hat{\boldsymbol{\theta}}^{(k)} \,\middle|\, \mathbf{z}'\right), \qquad (29) $$

where $\mathbf{r}$ and $\mathbf{v}$ are two independent Gaussian random vectors, $f_{\mathbf{v} \mid \mathbf{z}'}(\mathbf{v} : \hat{\boldsymbol{\theta}}^{(k)} \mid \mathbf{z}')$ is independent of the argument inside the expectation operation, and, based on Bayes' rule,

$$ f_{\mathbf{r} \mid \mathbf{z}'}\!\left(\mathbf{r} : \hat{\boldsymbol{\theta}}^{(k)} \,\middle|\, \mathbf{z}'\right) = \frac{ f_{\mathbf{z}' \mid \mathbf{r}}\!\left(\mathbf{z}' : \hat{\boldsymbol{\theta}}^{(k)} \,\middle|\, \mathbf{r}\right) f_{\mathbf{r}}\!\left(\mathbf{r} : \hat{\boldsymbol{\theta}}^{(k)}\right) }{ f_{\mathbf{z}'}\!\left(\mathbf{z}' : \hat{\boldsymbol{\theta}}^{(k)}\right) }. \qquad (30) $$

Based on the theorem of total probability,

$$ f_{\mathbf{z}'}\!\left(\mathbf{z}' : \hat{\boldsymbol{\theta}}^{(k)}\right) = \underbrace{\int \cdots \int}_{K} f_{\mathbf{z}' \mid \mathbf{r}}\!\left(\mathbf{z}' : \hat{\boldsymbol{\theta}}^{(k)} \,\middle|\, \mathbf{r}\right) f_{\mathbf{r}}\!\left(\mathbf{r} : \hat{\boldsymbol{\theta}}^{(k)}\right) d\mathbf{r}. \qquad (31) $$

Note that $f_{\mathbf{z}'}(\mathbf{z}' : \hat{\boldsymbol{\theta}}^{(k)})$ is independent of the argument inside the expectation operation in Equation (28).

Maximization Step (M-Step): During the maximization step, the next estimate of the vector of unknown parameters is found as the maximizer of the result of the expectation step. In other words,

$$ \hat{\boldsymbol{\theta}}^{(k+1)} = \arg\max_{\boldsymbol{\theta}} \; F\!\left(\boldsymbol{\theta} \mid \hat{\boldsymbol{\theta}}^{(k)}\right), \qquad (32) $$

which can be rewritten as

$$ \frac{\partial}{\partial \theta_j} F\!\left(\boldsymbol{\theta} \mid \hat{\boldsymbol{\theta}}^{(k)}\right) \Big|_{\hat{\boldsymbol{\theta}}^{(k+1)}} = 0, \qquad j = 1, 2, \ldots, p. \qquad (33) $$

This system of equations can be specified more precisely, by substitution of $F(\boldsymbol{\theta} \mid \hat{\boldsymbol{\theta}}^{(k)})$ from Equation (28), as

$$ \mathbb{E}\!\left[ \sum_{i=1}^{K} \frac{1}{\sigma_O^2} (r_i - g_i) \frac{\partial g_i}{\partial \theta_j} \,\middle|\, \mathbf{z}', \hat{\boldsymbol{\theta}}^{(k)} \right] \Bigg|_{\hat{\boldsymbol{\theta}}^{(k+1)}} = 0, \qquad j = 1, 2, \ldots, p. \qquad (34) $$

The above conditional expectation can be found using the probability density functions defined in Equations (29)–(31) as

$$
\begin{aligned}
\mathbb{E}\!\left[ \sum_{i=1}^{K} \frac{1}{\sigma_O^2} (r_i - g_i) \frac{\partial g_i}{\partial \theta_j} \,\middle|\, \mathbf{z}', \hat{\boldsymbol{\theta}}^{(k)} \right]
&\overset{(a)}{=} \sum_{i=1}^{K} \frac{1}{\sigma_O^2} \, \mathbb{E}\!\left[ (r_i - g_i) \frac{\partial g_i}{\partial \theta_j} \,\middle|\, \mathbf{z}', \hat{\boldsymbol{\theta}}^{(k)} \right] \\
&= \sum_{i=1}^{K} \frac{1}{\sigma_O^2} \int\!\!\int (r_i - g_i) \frac{\partial g_i}{\partial \theta_j} \, f_{\mathrm{CD} \mid \mathrm{ID}}\!\left(\mathbf{r}, \mathbf{v} : \hat{\boldsymbol{\theta}}^{(k)} \,\middle|\, \mathbf{z}'\right) d\mathbf{r}\, d\mathbf{v} \\
&= \sum_{i=1}^{K} \frac{1}{\sigma_O^2 \, f_{Z'_i}\!\left(z'_i : \hat{\boldsymbol{\theta}}^{(k)}\right)} \, T_i^{(k,j)}(z'_i), \qquad (35)
\end{aligned}
$$

where (a) is based on Equation (29) and the independence of $\mathbf{v}$ from the argument of the expectation operation, and $T_i^{(k,j)}(z'_i)$ is defined as

$$ T_i^{(k,j)}(z'_i) = \int_{\mathbb{R}} (r_i - g_i) \frac{\partial g_i}{\partial \theta_j} \, f_{Z'_i \mid R_i}\!\left(z'_i : \hat{\boldsymbol{\theta}}^{(k)} \,\middle|\, r_i\right) f_{R_i}\!\left(r_i : \hat{\boldsymbol{\theta}}^{(k)}\right) dr_i. \qquad (36) $$

Note that based on Equation (17), the conditional pdf of $Z'_i$, given $R_i$, is Gaussian with mean $u_i = \gamma_i(r_i)$ and variance $\sigma_i^2 = \frac{\sigma_C^2}{|h_i|^2}$, i.e., $Z'_i \mid R_i \sim \mathcal{N}(u_i, \sigma_i^2)$. Moreover, based on Equation (1), the pdf of $R_i$ is Gaussian with mean $g_i^{(k)}$ and variance $\sigma_O^2$, i.e., $R_i \sim \mathcal{N}(g_i^{(k)}, \sigma_O^2)$, where $g_i^{(k)} = g(x_i, y_i : \hat{\boldsymbol{\theta}}^{(k)})$ is the estimate of the underlying function g(x, y) at location $(x_i, y_i)$ with the vector of unknown parameters θ replaced by its estimate at the $k$-th iteration of the EM algorithm. Using these two probability density functions, $T_i^{(k,j)}(z'_i)$ can be found as

$$
\begin{aligned}
T_i^{(k,j)}(z'_i) &= \int_{\mathbb{R}} (r_i - g_i) \frac{\partial g_i}{\partial \theta_j} \frac{1}{\sqrt{2\pi\sigma_i^2}} \exp\!\left( -\frac{(z'_i - u_i)^2}{2\sigma_i^2} \right) \frac{1}{\sqrt{2\pi\sigma_O^2}} \exp\!\left( -\frac{(r_i - g_i^{(k)})^2}{2\sigma_O^2} \right) dr_i \\
&= \sum_{l=0}^{M_i-1} \int_{\beta_i^l}^{\beta_i^{l+1}} (r_i - g_i) \frac{\partial g_i}{\partial \theta_j} \frac{1}{2\pi\sqrt{\sigma_i^2 \sigma_O^2}} \exp\!\left( -\frac{(z'_i - l)^2}{2\sigma_i^2} \right) \exp\!\left( -\frac{(r_i - g_i^{(k)})^2}{2\sigma_O^2} \right) dr_i \\
&= \frac{1}{\sqrt{2\pi\sigma_i^2}} \sum_{l=0}^{M_i-1} \exp\!\left( -\frac{(z'_i - l)^2}{2\sigma_i^2} \right) \frac{\partial g_i}{\partial \theta_j} \, \Gamma_i^{(k)}(l), \qquad (37)
\end{aligned}
$$

where $\Gamma_i^{(k)}(l)$ is defined as

$$
\begin{aligned}
\Gamma_i^{(k)}(l) &= \int_{\beta_i^l}^{\beta_i^{l+1}} (r_i - g_i) \frac{1}{\sqrt{2\pi\sigma_O^2}} \exp\!\left( -\frac{(r_i - g_i^{(k)})^2}{2\sigma_O^2} \right) dr_i \\
&= \sqrt{\frac{\sigma_O^2}{2\pi}} \exp\!\left( -\frac{(g_i^{(k)})^2}{2\sigma_O^2} \right) \Lambda_i^{(k)}(l) + \left( g_i^{(k)} - g_i \right) Q_i^{(k)}(l), \qquad (38)
\end{aligned}
$$

where $Q_i^{(k)}(l)$ is $Q_i(l)$, defined in Equation (15), evaluated with $g_i$ replaced by $g_i^{(k)}$, and $\Lambda_i^{(k)}(l)$ is defined as

$$ \Lambda_i^{(k)}(l) = \exp\!\left( \frac{2 g_i^{(k)} \beta_i^l - (\beta_i^l)^2}{2\sigma_O^2} \right) - \exp\!\left( \frac{2 g_i^{(k)} \beta_i^{l+1} - (\beta_i^{l+1})^2}{2\sigma_O^2} \right). \qquad (39) $$
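The closed form of $\Gamma_i^{(k)}(l)$ in Equation (38) is just a truncated first moment of a Gaussian, so it can be verified numerically. The standalone sketch below compares the closed form with direct numerical integration of the defining integral over one quantization cell, under assumed parameter values.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import erfc

def gaussian_Q(x):
    return 0.5 * erfc(x / np.sqrt(2.0))

def gamma_closed_form(beta_l, beta_lp1, g_i, g_k, sigma_O):
    # Equation (38): sqrt(sigma_O^2/(2*pi)) * exp(-g_k^2/(2 sigma_O^2)) * Lambda
    #                + (g_k - g_i) * Q^(k)(l), with Lambda from Equation (39).
    lam = (np.exp((2 * g_k * beta_l - beta_l ** 2) / (2 * sigma_O ** 2))
           - np.exp((2 * g_k * beta_lp1 - beta_lp1 ** 2) / (2 * sigma_O ** 2)))
    q_k = gaussian_Q((beta_l - g_k) / sigma_O) - gaussian_Q((beta_lp1 - g_k) / sigma_O)
    return (np.sqrt(sigma_O ** 2 / (2 * np.pi))
            * np.exp(-g_k ** 2 / (2 * sigma_O ** 2)) * lam
            + (g_k - g_i) * q_k)

def gamma_numerical(beta_l, beta_lp1, g_i, g_k, sigma_O):
    # Direct integration of the defining integral in Equation (38).
    pdf = lambda r: (np.exp(-(r - g_k) ** 2 / (2 * sigma_O ** 2))
                     / np.sqrt(2 * np.pi * sigma_O ** 2))
    val, _ = quad(lambda r: (r - g_i) * pdf(r), beta_l, beta_lp1)
    return val

# Illustrative check with assumed values; the two results should agree closely.
print(gamma_closed_form(0.2, 0.4, 0.35, 0.3, 0.1))
print(gamma_numerical(0.2, 0.4, 0.35, 0.3, 0.1))
```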

Combining Equations (35) through (39) and replacing them in Equation (34) will result in a non-linear system of equations in terms of $\hat{\boldsymbol{\theta}}^{(k+1)}$. In other words, Equation (34) can be rewritten as follows to give a new update of the vector of unknown parameters at the $(k+1)$-th step of the EM algorithm, using its values derived in the $k$-th step:

$$ \sum_{i=1}^{K} \frac{1}{\sigma_O^2 \sqrt{2\pi\sigma_i^2} \; f_{Z'_i}\!\left(z'_i : \hat{\boldsymbol{\theta}}^{(k)}\right)} \left[ \frac{\partial g_i^{(k+1)}}{\partial \theta_j} \, M_i^{(k)}(z'_i) - g_i^{(k+1)} \frac{\partial g_i^{(k+1)}}{\partial \theta_j} \, N_i^{(k)}(z'_i) \right] = 0, \qquad j = 1, 2, \ldots, p, \qquad (40) $$

where $M_i^{(k)}(z'_i)$ and $N_i^{(k)}(z'_i)$ are defined as

$$ M_i^{(k)}(z'_i) = \sum_{l=0}^{M_i-1} \exp\!\left( -\frac{(z'_i - l)^2}{2\sigma_i^2} \right) \left[ \sqrt{\frac{\sigma_O^2}{2\pi}} \exp\!\left( -\frac{(g_i^{(k)})^2}{2\sigma_O^2} \right) \Lambda_i^{(k)}(l) + g_i^{(k)} Q_i^{(k)}(l) \right], \qquad (41a) $$

$$ N_i^{(k)}(z'_i) = \sum_{l=0}^{M_i-1} \exp\!\left( -\frac{(z'_i - l)^2}{2\sigma_i^2} \right) Q_i^{(k)}(l). \qquad (41b) $$

It can easily be seen that the system of equations given in Equation (40) is highly non-linear with respect to the components of the vector of unknown parameters to be estimated, $\hat{\boldsymbol{\theta}}^{(k+1)}$. Furthermore, the solution set for this system of equations is non-convex. Therefore, a numerical solution should be found for this system of equations at each iteration of the EM algorithm. In Section 6, this approach will be utilized to numerically find the ML estimate of a vector of unknown parameters associated with a specific two-dimensional Gaussian-shaped function of interest g(x, y). Newton's method is applied to linearize the system of equations given in (40), as briefly described in the following.

Let $\mathbf{f}(\hat{\boldsymbol{\theta}}^{(k+1)}) = \mathbf{0}$ be the vector form of the system of equations given in (40), where each component of the vector $\mathbf{f}$ is a non-linear function of the components of the vector of unknown parameters $\hat{\boldsymbol{\theta}}^{(k+1)}$. At each step $n$ of Newton's linearization method, the following system of linear equations is solved to find a new update $\hat{\boldsymbol{\theta}}^{(k+1)}_{n+1}$ based on the previous estimate $\hat{\boldsymbol{\theta}}^{(k+1)}_{n}$:

$$ \mathbf{J}\!\left(\hat{\boldsymbol{\theta}}^{(k+1)}_{n}\right) \left( \hat{\boldsymbol{\theta}}^{(k+1)}_{n+1} - \hat{\boldsymbol{\theta}}^{(k+1)}_{n} \right) = -\mathbf{f}\!\left(\hat{\boldsymbol{\theta}}^{(k+1)}_{n}\right), \qquad n = 0, 1, 2, \ldots, \qquad (42) $$

where $\mathbf{J}(\hat{\boldsymbol{\theta}}^{(k+1)}_{n})$ is the $p \times p$ Jacobian matrix of the vector function $\mathbf{f}$ evaluated at $\hat{\boldsymbol{\theta}}^{(k+1)}_{n}$. The element at the $r$-th row and $c$-th column of the Jacobian matrix $\mathbf{J}$ evaluated at $\hat{\boldsymbol{\theta}}^{(k+1)}_{n}$ is defined as

$$ J_{r,c}\!\left(\hat{\boldsymbol{\theta}}^{(k+1)}_{n}\right) = \frac{\partial f_r(\boldsymbol{\theta})}{\partial \theta_c} \bigg|_{\boldsymbol{\theta} = \hat{\boldsymbol{\theta}}^{(k+1)}_{n}}, \qquad r, c = 1, 2, \ldots, p, \qquad (43) $$

where $f_r$ is the $r$-th component of the vector function $\mathbf{f}$.
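The Newton linearization of Equations (42)–(43) applies to any smooth vector equation $\mathbf{f}(\boldsymbol{\theta}) = \mathbf{0}$, so it can be sketched generically with a finite-difference Jacobian standing in for the analytic one. The tolerance, step size, and iteration cap below are assumptions, not values from the paper.

```python
import numpy as np

def newton_solve(f, theta0, tol=1e-8, max_iter=50, eps=1e-6):
    """Solve f(theta) = 0 by Newton's method, Equations (42)-(43)."""
    theta = np.asarray(theta0, dtype=float)
    p = len(theta)
    for _ in range(max_iter):
        f0 = f(theta)
        jac = np.zeros((p, p))
        # Finite-difference Jacobian, J[r, c] = d f_r / d theta_c (Equation (43)).
        for c in range(p):
            dtheta = np.zeros(p)
            dtheta[c] = eps
            jac[:, c] = (f(theta + dtheta) - f0) / eps
        step = np.linalg.solve(jac, -f0)        # Equation (42)
        theta = theta + step
        if np.linalg.norm(step) < tol:
            break
    return theta

# Usage note: for the analog scheme, f(theta) is the gradient system of Equation (12);
# for each EM iteration of the digital scheme, f(theta) is the system of Equation (40).
```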

6. CASE STUDY AND NUMERICAL ANALYSIS

In this section, we will numerically evaluate the performance of the ML estimation framework developed in the previous sections, for both analog and digital local processing schemes, for a specific function of interest g(x, y).

6.1 Simulation Setup, Parameter Specification, and Performance Measure Definition

Suppose that a two-dimensional Gaussian-shaped function defined as

$$ g(x, y) = h \exp\!\left( -\left[ \frac{(x - x_c)^2}{2\sigma_x^2} + \frac{(y - y_c)^2}{2\sigma_y^2} \right] \right) \qquad (44) $$

is being sampled by the WSN under study, where h is the maximum intensity or height of the function, $(x_c, y_c)$ is the location of its center, and $\sigma_x^2$ and $\sigma_y^2$ are the known spreads of the function in the x and y directions, respectively. In our simulations, we fix these values to be $\sigma_x^2 = 4$ and $\sigma_y^2 = 1$. Let $\boldsymbol{\theta} = (h, x_c, y_c)^T$ be the vector of unknown deterministic parameters associated with the function g(x, y). The goal is to estimate these p = 3 deterministic parameters using distributed (possibly sparse) noisy samples of the function, acquired by a WSN and transmitted to a fusion center through impaired parallel flat-fading channels.

Suppose that K sensors are randomly distributed in the observation environment, which is assumed to at least cover the area $\mathcal{A} = [x_c - 3\sigma_x, x_c + 3\sigma_x] \times [y_c - 3\sigma_y, y_c + 3\sigma_y]$, where $X \times Y$ denotes the Cartesian product between sets X and Y. This choice of the observation environment guarantees that all of the sensors are within the domain of the underlying function g(x, y), and that almost all of the domain of this function is covered by the distributed sensors. In our simulations, we assume that the observations made by the distributed sensors are homogeneous. In other words, it is assumed that the variances of the additive white Gaussian observation noises for all sensors are the same, i.e.,

$$ \sigma_{O,1}^2 = \sigma_{O,2}^2 = \cdots = \sigma_{O,K}^2 = \sigma_O^2. \qquad (45) $$

In our simulations, the observation signal-to-noise ratio (SNR) is defined as

$$ \mathrm{SNR}_O = \frac{1}{2\sigma_O^2}. \qquad (46) $$

Note that this definition of SNR_O is the SNR at a reference distance from the center of the function $(x_c, y_c)$ at which the strength of the function samples is unity.

As mentioned in Section 2, there are two main classes of local processing schemes, namely analog and digital. Both of these schemes are considered in our simulations. For the analog local processing scheme, it is assumed that the observation amplification gain is absorbed into the channel fading gain and normalized to one, as described in the following paragraph. For the digital local processing scheme, it is assumed that the quantization rule at all of the sensors is the same. Therefore, all sensors quantize their local analog observations to $b = b_1 = b_2 = \cdots = b_K$ bits and use the same number of quantization levels $M = M_1 = M_2 = \cdots = M_K$. Each sensor uses a deterministic uniform scalar quantizer, whose set of quantization thresholds is known and is the same for all sensors. In other words, $\beta_i^l = \beta^l$ for all $i$ and $l = 0, 1, \ldots, M$. In our simulations, the set of quantization thresholds for all sensors is chosen as $\beta^l = \frac{l h}{M}$, $l = 1, 2, \ldots, M - 1$.

The parallel independent channels between the distributed sensors and the fusion center are assumed to be flat Rayleigh-fading channels. The observation and channel SNRs and the channel fading gains are normalized to ensure that $\mathbb{E}[|h_i|^2] = 1$ (or, more generally, $\mathbb{E}[|\alpha_i h_i|^2] = 1$, when applicable), $i = 1, 2, \ldots, K$. When the sensors are located close to each other and the fusion center is far away from them, the distance between the sensors and the fusion center is approximately the same for all sensors, and this assumption is valid. In our simulations, it is assumed that the parallel channels between the distributed sensors and the fusion center have homogeneous noises. In other words, it is assumed that the variances of the additive white Gaussian channel noises in the channels between all sensors and the fusion center are the same, i.e.,

$$ \sigma_{C,1}^2 = \sigma_{C,2}^2 = \cdots = \sigma_{C,K}^2 = \sigma_C^2. \qquad (47) $$

In our simulations, the channel signal-to-noise ratio (SNR) is defined as

$$ \mathrm{SNR}_C = \frac{1}{2\sigma_C^2}. \qquad (48) $$

Again, this definition of SNR_C is the SNR at a reference distance from each sensor at which the strength of the faded transmitted signal is unity.
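The case-study function of Equation (44), the threshold rule of Section 6.1, and the SNR definitions of Equations (46) and (48) translate directly into a few lines of setup code. The dB-to-variance conversion below assumes SNRs are specified in decibels, as in Figs. 2–4; the parameter values are placeholders.

```python
import numpy as np

SIGMA_X2, SIGMA_Y2 = 4.0, 1.0            # known spreads (Section 6.1)

def g_gauss(x, y, theta):
    # Equation (44): two-dimensional Gaussian-shaped function.
    h, xc, yc = theta
    return h * np.exp(-((x - xc) ** 2 / (2 * SIGMA_X2)
                        + (y - yc) ** 2 / (2 * SIGMA_Y2)))

def snr_db_to_sigma(snr_db):
    # Equations (46) and (48): SNR = 1/(2 sigma^2), so sigma = sqrt(1/(2 SNR)).
    snr = 10.0 ** (snr_db / 10.0)
    return np.sqrt(1.0 / (2.0 * snr))

theta_true = np.array([1.0, 0.0, 0.0])   # (h, x_c, y_c), assumed for illustration
sigma_O = snr_db_to_sigma(10.0)          # observation-noise std for SNR_O = 10 dB (assumed)
sigma_C = snr_db_to_sigma(5.0)           # channel-noise std for SNR_C = 5 dB (assumed)

# Uniform thresholds beta^l = l*h/M (Section 6.1), with open-ended outer cells.
M = 8
beta = np.concatenate(([-np.inf], np.arange(1, M) * theta_true[0] / M, [np.inf]))
```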

For the analog local processing scheme, the ML estimate of the vector of unknown parameters θ is found based on the system of non-linear equations given in Equation (12). This system of equations is linearized and solved iteratively using Newton's method, as briefly described at the end of Section 5. For the digital local processing scheme, the linearized EM algorithm given in Equations (40)–(41) is used as an iterative, efficient approach to numerically find the ML estimate of θ. To solve this system of non-linear equations, Newton's linearization method is applied, as briefly described at the end of Section 5, based on Equation (42).

We have chosen the estimation integrated mean-squared error (IMSE), defined as

$$ \mathrm{IMSE} = \int_{\mathcal{A}} \mathrm{MSE}(x, y) \, dx \, dy, \qquad (49) $$

as the performance measure to evaluate the ML estimation techniques developed in this paper, where $\mathcal{A}$ is the two-dimensional observation environment and MSE(x, y) is the location-dependent estimation mean-squared error at location $(x, y) \in \mathcal{A}$, defined as

$$ \mathrm{MSE}(x, y) = \mathbb{E}_{h, w, n}\!\left[ \left( g(x, y) - \hat{g}(x, y) \right)^2 \right], \qquad (50) $$

where $\mathbb{E}_{h,w,n}[\cdot]$ denotes the expectation with respect to observation noise, channel noise, and channel fading gain realizations, g(x, y) is the sample of the underlying function at location (x, y), and $\hat{g}(x, y)$ is its estimated value based on the estimate of the vector of unknown parameters. In our simulations, the result of MSE(x, y) is also averaged with respect to the random locations of the distributed sensors in the observation environment.
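Equation (49) can be approximated by averaging the squared reconstruction error over a dense grid covering the observation area A and over independent Monte Carlo realizations of the noises, fading gains, and sensor locations. The sketch below is a minimal version of that procedure; the grid resolution, trial count, and the estimator callback are assumptions.

```python
import numpy as np

def integrated_mse(estimate_theta, theta_true, g_func, area,
                   n_trials=200, n_grid=60, rng=np.random.default_rng(1)):
    """Monte Carlo approximation of the IMSE of Equations (49)-(50).

    estimate_theta(rng) must run one full simulation trial (sensor placement,
    noises, fading, fusion-center estimation) and return an estimate of theta.
    """
    (x_lo, x_hi), (y_lo, y_hi) = area
    xs = np.linspace(x_lo, x_hi, n_grid)
    ys = np.linspace(y_lo, y_hi, n_grid)
    grid_x, grid_y = np.meshgrid(xs, ys)
    g_true = g_func(grid_x, grid_y, theta_true)

    mse = np.zeros_like(g_true)
    for _ in range(n_trials):
        theta_hat = estimate_theta(rng)
        mse += (g_true - g_func(grid_x, grid_y, theta_hat)) ** 2
    mse /= n_trials                                  # Equation (50), empirical average

    cell = (xs[1] - xs[0]) * (ys[1] - ys[0])
    return np.sum(mse) * cell                        # Equation (49), Riemann-sum approximation
```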
6.2 Effects of K, SNR_O, and SNR_C on Distributed Parameter Estimation Performance

The performance of the proposed distributed parameter estimation framework is a function of, in part, the number of distributed sensors in the observation environment, the observation SNR, and the channel SNR. In Fig. 2, the integrated mean-squared error at the fusion center versus the number of distributed sensors in the observation environment, K, is shown for different values of the observation SNR (SNR_O) and the channel SNR (SNR_C). Figure 2(a) shows this performance measure when the ML estimation technique is applied at the fusion center and the analog local processing scheme is used. Figure 2(b) shows the IMSE as the estimation performance measure when the linearized EM estimation algorithm is applied at the fusion center and the digital local processing scheme is used with M = 8 quantization levels. As can be seen in this figure, as the number of local sensors in the observation environment increases, the IMSE decreases monotonically. This conclusion is valid for both analog and digital local processing schemes. Furthermore, it can be observed from Fig. 2 that the performance improvement due to the increase in the density of the local sensors in the observation environment is more considerable when there is a small number of sensors. As the number of sensors increases, the percentage of this performance improvement decreases, and the distributed parameter estimation system achieves an acceptable performance in terms of the IMSE at a moderate number of local sensors in the observation environment.

Figure 3 shows the integrated mean-squared error at the fusion center versus the observation SNR (SNR_O) for different values of the number of distributed sensors in the observation environment (K) and the channel SNR (SNR_C). Figure 3(a) shows the IMSE as the performance measure when the ML estimation technique is applied at the fusion center and the analog local processing scheme is used. Figure 3(b) shows the IMSE as the estimation performance measure when the linearized EM estimation algorithm is applied at the fusion center and the digital local processing scheme is used with M = 8 quantization levels. Similar to the detailed discussion provided in analyzing Fig. 2, as the observation SNR increases, the IMSE decreases monotonically for both the analog and digital local processing schemes. Furthermore, it can be observed from Fig. 3 that the performance improvement due to the increase in the observation SNR is more considerable at low SNRs. As the observation SNR increases, the percentage of this performance improvement decreases, and the distributed parameter estimation system achieves an acceptable performance in terms of the IMSE at a moderate observation SNR. Similar results and discussions could be provided for analyzing the effects of the channel SNR on the performance of the distributed parameter estimation framework, which are omitted because of limited space.

One of the most important points to be noticed in Figs. 2 and 3 is that, for the same values of K, SNR_O, and SNR_C, the ML estimation at the fusion center based on the analog local processing scheme outperforms the linearized EM estimation at the fusion center based on the digital local processing scheme with M = 8 quantization levels. This observation is expected for two main reasons. First, the linearized EM algorithm is an efficient iterative algorithm for numerical calculation of the ML estimate and, therefore, it shows a degraded performance compared to the exact ML estimate. Second, by performing digital signal processing and quantizing their noisy observations, the local sensors introduce some form of quantization noise into the processed samples that become available at the fusion center. Therefore, the results of estimation at the fusion center based on the received analog samples are more reliable than those based on the quantized versions of the local samples.

[Figure 2: Integrated mean-squared error (IMSE) versus the number of distributed sensors in the observation environment (K) for different values of observation SNR (SNR_O) and channel SNR (SNR_C). (a) ML estimation at the fusion center based on analog local processing. (b) Linearized EM estimation at the fusion center based on digital local processing with M = 8 quantization levels.]

6.3 Effects of M on Distributed Linearized EM Parameter Estimation Performance

Besides K, SNR_O, and SNR_C, one of the major parameters that affects the performance of the linearized EM estimation at the fusion center, when the digital local processing scheme is used, is the number of quantization levels at the local sensors, i.e., M. Figure 4 shows the integrated mean-squared error at the fusion center versus the number of quantization levels at the local sensors (M) for different values of the number of distributed sensors in the observation environment (K), the observation SNR (SNR_O), and the channel SNR (SNR_C), when the linearized EM estimation algorithm is applied at the fusion center and the digital local processing scheme is used at the local sensors.

[Figure 3: Integrated mean-squared error (IMSE) versus observation SNR (SNR_O) for different values of the number of distributed sensors in the observation environment (K) and channel SNR (SNR_C). (a) ML estimation at the fusion center based on analog local processing. (b) Linearized EM estimation at the fusion center based on digital local processing with M = 8 quantization levels.]

As can be seen in Fig. 4, as the number of quantization levels at the local sensors increases, the IMSE decreases monotonically. Again, it can be observed from Fig. 4 that the performance improvement due to the increase in the number of quantization levels at the local sensors is more considerable for small values of M. As the number of quantization levels at the local sensors increases, the percentage of this performance improvement decreases, and the distributed parameter estimation system achieves an acceptable performance in terms of the IMSE at a reasonably low number of quantization levels. It is worth mentioning that Fig. 4 shows that even M = 4 quantization levels at the local sensors can achieve a very good performance in terms of the IMSE. In other words, even if the local sensors quantize their noisy observations to just two bits, the system shows an acceptable performance. This conclusion emphasizes the energy efficiency of the proposed parameter estimation framework, which could lead to a longer lifetime of the distributed sensors in the network. In other words, local sensors do not need to waste a lot of energy to send very high-resolution observations, quantized to a large number of quantization levels, in order to achieve an acceptable estimation error performance at the fusion center in terms of the IMSE.

[Figure 4: Integrated mean-squared error (IMSE) versus the number of quantization levels at the local sensors (M) for linearized EM estimation at the fusion center based on digital local processing, for different values of the number of distributed sensors in the observation environment (K), observation SNR (SNR_O), and channel SNR (SNR_C).]

7. CONCLUSIONS

In this paper, the problem of distributed estimation of a vector of unknown deterministic parameters associated with a two-dimensional function was considered in the context of WSNs. Each sensor observes a sample of the underlying function at its location, corrupted by additive white Gaussian observation noise whose samples are spatially uncorrelated across sensors. After local processing, each sensor transmits its locally processed sample to the fusion center of the WSN through parallel flat-fading channels. Two local processing schemes were considered, namely analog and digital. In the analog local processing scheme, each sensor transmits an amplified version of its local analog noisy observation to the fusion center, acting like a relay in a wireless network. In the digital local processing scheme, each sensor quantizes its noisy observation using a deterministic uniform scalar quantizer before transmitting its digitally modulated version to the fusion center. The ML estimate of the vector of unknown parameters at the fusion center was derived for both analog and digital local processing schemes. Since the ML estimate for the case of the digital local processing scheme was too complicated to be implemented efficiently, an efficient iterative EM algorithm was proposed to numerically find the ML estimate in this case. Numerical simulation results were provided to evaluate the performance of the proposed distributed parameter estimation framework in a typical WSN application scenario. As shown in the results of these simulations, the proposed distributed estimation framework can achieve a very good performance in terms of the integrated mean-squared error for reasonable values of the parameters of the system, including the number of distributed local sensors in the observation environment, the observation SNR, the channel SNR, and the number of quantization levels for the digital local processing scheme. In particular, the numerical performance analysis showed that even with a low number of quantization levels at the distributed sensors, i.e., high energy efficiency, the estimation framework could provide a very good performance in terms of the integrated mean-squared error.

REFERENCES

1. J.-J. Xiao, S. Cui, Z.-Q. Luo, and A. J. Goldsmith, "Linear coherent decentralized estimation," IEEE Transactions on Signal Processing 56, February 2008.
2. M. Gastpar, "Uncoded transmission is exactly optimal for a simple Gaussian sensor network," IEEE Transactions on Information Theory 54, November 2008.
3. P. Ishwar, R. Puri, K. Ramchandran, and S. Pradhan, "On rate-constrained distributed estimation in unreliable sensor networks," IEEE Journal on Selected Areas in Communications 23, April 2005.
4. R. Nowak, U. Mitra, and R. Willett, "Estimating inhomogeneous fields using wireless sensor networks," IEEE Journal on Selected Areas in Communications 22, August 2004.

5. A. Ribeiro and G. B. Giannakis, "Bandwidth-constrained distributed estimation for wireless sensor networks — part I: Gaussian case," IEEE Transactions on Signal Processing 54, March 2006.
6. A. Ribeiro and G. B. Giannakis, "Bandwidth-constrained distributed estimation for wireless sensor networks — part II: unknown probability density function," IEEE Transactions on Signal Processing 54, July 2006.
7. R. Niu and P. K. Varshney, "Target location estimation in sensor networks with quantized data," IEEE Transactions on Signal Processing 54, December 2006.
8. O. Ozdemir, R. Niu, and P. K. Varshney, "Channel aware target localization with quantized data in wireless sensor networks," IEEE Transactions on Signal Processing 57, March 2009.
9. E. Maşazade, R. Niu, P. K. Varshney, and M. Keskinoz, "Energy aware iterative source localization for wireless sensor networks," IEEE Transactions on Signal Processing 58, September 2010.
10. X. Li, "RSS-based location estimation with unknown path-loss model," IEEE Transactions on Wireless Communications 5, December 2006.
11. N. Salman, M. Ghogho, and A. H. Kemp, "On the joint estimation of the RSS-based location and path-loss exponent," IEEE Wireless Communications Letters 1, February 2012.
12. M. K. Banavar, C. Tepedelenlioglu, and A. Spanias, "Estimation over fading channels with limited feedback using distributed sensing," IEEE Transactions on Signal Processing 58, January 2010.
13. A. P. Dempster, N. M. Laird, and D. B. Rubin, "Maximum likelihood from incomplete data via the EM algorithm," Journal of the Royal Statistical Society, Series B (Methodological) 39, pp. 1–38, December 1977.


More information

Multigradient for Neural Networks for Equalizers 1

Multigradient for Neural Networks for Equalizers 1 Multgradent for Neural Netorks for Equalzers 1 Chulhee ee, Jnook Go and Heeyoung Km Department of Electrcal and Electronc Engneerng Yonse Unversty 134 Shnchon-Dong, Seodaemun-Ku, Seoul 1-749, Korea ABSTRACT

More information

Appendix B: Resampling Algorithms

Appendix B: Resampling Algorithms 407 Appendx B: Resamplng Algorthms A common problem of all partcle flters s the degeneracy of weghts, whch conssts of the unbounded ncrease of the varance of the mportance weghts ω [ ] of the partcles

More information

COGNITIVE RADIO NETWORKS BASED ON OPPORTUNISTIC BEAMFORMING WITH QUANTIZED FEEDBACK

COGNITIVE RADIO NETWORKS BASED ON OPPORTUNISTIC BEAMFORMING WITH QUANTIZED FEEDBACK COGNITIVE RADIO NETWORKS BASED ON OPPORTUNISTIC BEAMFORMING WITH QUANTIZED FEEDBACK Ayman MASSAOUDI, Noura SELLAMI 2, Mohamed SIALA MEDIATRON Lab., Sup Com Unversty of Carthage 283 El Ghazala Arana, Tunsa

More information

What would be a reasonable choice of the quantization step Δ?

What would be a reasonable choice of the quantization step Δ? CE 108 HOMEWORK 4 EXERCISE 1. Suppose you are samplng the output of a sensor at 10 KHz and quantze t wth a unform quantzer at 10 ts per sample. Assume that the margnal pdf of the sgnal s Gaussan wth mean

More information

Tracking with Kalman Filter

Tracking with Kalman Filter Trackng wth Kalman Flter Scott T. Acton Vrgna Image and Vdeo Analyss (VIVA), Charles L. Brown Department of Electrcal and Computer Engneerng Department of Bomedcal Engneerng Unversty of Vrgna, Charlottesvlle,

More information

A NEW DISCRETE WAVELET TRANSFORM

A NEW DISCRETE WAVELET TRANSFORM A NEW DISCRETE WAVELET TRANSFORM ALEXANDRU ISAR, DORINA ISAR Keywords: Dscrete wavelet, Best energy concentraton, Low SNR sgnals The Dscrete Wavelet Transform (DWT) has two parameters: the mother of wavelets

More information

The Expectation-Maximization Algorithm

The Expectation-Maximization Algorithm The Expectaton-Maxmaton Algorthm Charles Elan elan@cs.ucsd.edu November 16, 2007 Ths chapter explans the EM algorthm at multple levels of generalty. Secton 1 gves the standard hgh-level verson of the algorthm.

More information

Transfer Functions. Convenient representation of a linear, dynamic model. A transfer function (TF) relates one input and one output: ( ) system

Transfer Functions. Convenient representation of a linear, dynamic model. A transfer function (TF) relates one input and one output: ( ) system Transfer Functons Convenent representaton of a lnear, dynamc model. A transfer functon (TF) relates one nput and one output: x t X s y t system Y s The followng termnology s used: x y nput output forcng

More information

RELIABILITY ASSESSMENT

RELIABILITY ASSESSMENT CHAPTER Rsk Analyss n Engneerng and Economcs RELIABILITY ASSESSMENT A. J. Clark School of Engneerng Department of Cvl and Envronmental Engneerng 4a CHAPMAN HALL/CRC Rsk Analyss for Engneerng Department

More information

P R. Lecture 4. Theory and Applications of Pattern Recognition. Dept. of Electrical and Computer Engineering /

P R. Lecture 4. Theory and Applications of Pattern Recognition. Dept. of Electrical and Computer Engineering / Theory and Applcatons of Pattern Recognton 003, Rob Polkar, Rowan Unversty, Glassboro, NJ Lecture 4 Bayes Classfcaton Rule Dept. of Electrcal and Computer Engneerng 0909.40.0 / 0909.504.04 Theory & Applcatons

More information

Time-Varying Systems and Computations Lecture 6

Time-Varying Systems and Computations Lecture 6 Tme-Varyng Systems and Computatons Lecture 6 Klaus Depold 14. Januar 2014 The Kalman Flter The Kalman estmaton flter attempts to estmate the actual state of an unknown dscrete dynamcal system, gven nosy

More information

A Particle Filter Algorithm based on Mixing of Prior probability density and UKF as Generate Importance Function

A Particle Filter Algorithm based on Mixing of Prior probability density and UKF as Generate Importance Function Advanced Scence and Technology Letters, pp.83-87 http://dx.do.org/10.14257/astl.2014.53.20 A Partcle Flter Algorthm based on Mxng of Pror probablty densty and UKF as Generate Importance Functon Lu Lu 1,1,

More information

Maximum Likelihood Estimation

Maximum Likelihood Estimation Maxmum Lkelhood Estmaton INFO-2301: Quanttatve Reasonng 2 Mchael Paul and Jordan Boyd-Graber MARCH 7, 2017 INFO-2301: Quanttatve Reasonng 2 Paul and Boyd-Graber Maxmum Lkelhood Estmaton 1 of 9 Why MLE?

More information

ANSWERS. Problem 1. and the moment generating function (mgf) by. defined for any real t. Use this to show that E( U) var( U)

ANSWERS. Problem 1. and the moment generating function (mgf) by. defined for any real t. Use this to show that E( U) var( U) Econ 413 Exam 13 H ANSWERS Settet er nndelt 9 deloppgaver, A,B,C, som alle anbefales å telle lkt for å gøre det ltt lettere å stå. Svar er gtt . Unfortunately, there s a prntng error n the hnt of

More information

Power Allocation/Beamforming for DF MIMO Two-Way Relaying: Relay and Network Optimization

Power Allocation/Beamforming for DF MIMO Two-Way Relaying: Relay and Network Optimization Power Allocaton/Beamformng for DF MIMO Two-Way Relayng: Relay and Network Optmzaton Je Gao, Janshu Zhang, Sergy A. Vorobyov, Ha Jang, and Martn Haardt Dept. of Electrcal & Computer Engneerng, Unversty

More information

Low Complexity Soft-Input Soft-Output Hamming Decoder

Low Complexity Soft-Input Soft-Output Hamming Decoder Low Complexty Soft-Input Soft-Output Hammng Der Benjamn Müller, Martn Holters, Udo Zölzer Helmut Schmdt Unversty Unversty of the Federal Armed Forces Department of Sgnal Processng and Communcatons Holstenhofweg

More information

Generalized Linear Methods

Generalized Linear Methods Generalzed Lnear Methods 1 Introducton In the Ensemble Methods the general dea s that usng a combnaton of several weak learner one could make a better learner. More formally, assume that we have a set

More information

CIS526: Machine Learning Lecture 3 (Sept 16, 2003) Linear Regression. Preparation help: Xiaoying Huang. x 1 θ 1 output... θ M x M

CIS526: Machine Learning Lecture 3 (Sept 16, 2003) Linear Regression. Preparation help: Xiaoying Huang. x 1 θ 1 output... θ M x M CIS56: achne Learnng Lecture 3 (Sept 6, 003) Preparaton help: Xaoyng Huang Lnear Regresson Lnear regresson can be represented by a functonal form: f(; θ) = θ 0 0 +θ + + θ = θ = 0 ote: 0 s a dummy attrbute

More information

Comparison of the Population Variance Estimators. of 2-Parameter Exponential Distribution Based on. Multiple Criteria Decision Making Method

Comparison of the Population Variance Estimators. of 2-Parameter Exponential Distribution Based on. Multiple Criteria Decision Making Method Appled Mathematcal Scences, Vol. 7, 0, no. 47, 07-0 HIARI Ltd, www.m-hkar.com Comparson of the Populaton Varance Estmators of -Parameter Exponental Dstrbuton Based on Multple Crtera Decson Makng Method

More information

Application of Nonbinary LDPC Codes for Communication over Fading Channels Using Higher Order Modulations

Application of Nonbinary LDPC Codes for Communication over Fading Channels Using Higher Order Modulations Applcaton of Nonbnary LDPC Codes for Communcaton over Fadng Channels Usng Hgher Order Modulatons Rong-Hu Peng and Rong-Rong Chen Department of Electrcal and Computer Engneerng Unversty of Utah Ths work

More information

CSci 6974 and ECSE 6966 Math. Tech. for Vision, Graphics and Robotics Lecture 21, April 17, 2006 Estimating A Plane Homography

CSci 6974 and ECSE 6966 Math. Tech. for Vision, Graphics and Robotics Lecture 21, April 17, 2006 Estimating A Plane Homography CSc 6974 and ECSE 6966 Math. Tech. for Vson, Graphcs and Robotcs Lecture 21, Aprl 17, 2006 Estmatng A Plane Homography Overvew We contnue wth a dscusson of the major ssues, usng estmaton of plane projectve

More information

Hidden Markov Models

Hidden Markov Models Hdden Markov Models Namrata Vaswan, Iowa State Unversty Aprl 24, 204 Hdden Markov Model Defntons and Examples Defntons:. A hdden Markov model (HMM) refers to a set of hdden states X 0, X,..., X t,...,

More information

Classification as a Regression Problem

Classification as a Regression Problem Target varable y C C, C,, ; Classfcaton as a Regresson Problem { }, 3 L C K To treat classfcaton as a regresson problem we should transform the target y nto numercal values; The choce of numercal class

More information

MACHINE APPLIED MACHINE LEARNING LEARNING. Gaussian Mixture Regression

MACHINE APPLIED MACHINE LEARNING LEARNING. Gaussian Mixture Regression 11 MACHINE APPLIED MACHINE LEARNING LEARNING MACHINE LEARNING Gaussan Mture Regresson 22 MACHINE APPLIED MACHINE LEARNING LEARNING Bref summary of last week s lecture 33 MACHINE APPLIED MACHINE LEARNING

More information

Econ107 Applied Econometrics Topic 3: Classical Model (Studenmund, Chapter 4)

Econ107 Applied Econometrics Topic 3: Classical Model (Studenmund, Chapter 4) I. Classcal Assumptons Econ7 Appled Econometrcs Topc 3: Classcal Model (Studenmund, Chapter 4) We have defned OLS and studed some algebrac propertes of OLS. In ths topc we wll study statstcal propertes

More information

Lecture 14 (03/27/18). Channels. Decoding. Preview of the Capacity Theorem.

Lecture 14 (03/27/18). Channels. Decoding. Preview of the Capacity Theorem. Lecture 14 (03/27/18). Channels. Decodng. Prevew of the Capacty Theorem. A. Barg The concept of a communcaton channel n nformaton theory s an abstracton for transmttng dgtal (and analog) nformaton from

More information

A Robust Method for Calculating the Correlation Coefficient

A Robust Method for Calculating the Correlation Coefficient A Robust Method for Calculatng the Correlaton Coeffcent E.B. Nven and C. V. Deutsch Relatonshps between prmary and secondary data are frequently quantfed usng the correlaton coeffcent; however, the tradtonal

More information

CS 2750 Machine Learning. Lecture 5. Density estimation. CS 2750 Machine Learning. Announcements

CS 2750 Machine Learning. Lecture 5. Density estimation. CS 2750 Machine Learning. Announcements CS 750 Machne Learnng Lecture 5 Densty estmaton Mlos Hauskrecht mlos@cs.ptt.edu 539 Sennott Square CS 750 Machne Learnng Announcements Homework Due on Wednesday before the class Reports: hand n before

More information

APPROXIMATE PRICES OF BASKET AND ASIAN OPTIONS DUPONT OLIVIER. Premia 14

APPROXIMATE PRICES OF BASKET AND ASIAN OPTIONS DUPONT OLIVIER. Premia 14 APPROXIMAE PRICES OF BASKE AND ASIAN OPIONS DUPON OLIVIER Prema 14 Contents Introducton 1 1. Framewor 1 1.1. Baset optons 1.. Asan optons. Computng the prce 3. Lower bound 3.1. Closed formula for the prce

More information

Logistic Regression. CAP 5610: Machine Learning Instructor: Guo-Jun QI

Logistic Regression. CAP 5610: Machine Learning Instructor: Guo-Jun QI Logstc Regresson CAP 561: achne Learnng Instructor: Guo-Jun QI Bayes Classfer: A Generatve model odel the posteror dstrbuton P(Y X) Estmate class-condtonal dstrbuton P(X Y) for each Y Estmate pror dstrbuton

More information

Global Sensitivity. Tuesday 20 th February, 2018

Global Sensitivity. Tuesday 20 th February, 2018 Global Senstvty Tuesday 2 th February, 28 ) Local Senstvty Most senstvty analyses [] are based on local estmates of senstvty, typcally by expandng the response n a Taylor seres about some specfc values

More information

Errors for Linear Systems

Errors for Linear Systems Errors for Lnear Systems When we solve a lnear system Ax b we often do not know A and b exactly, but have only approxmatons  and ˆb avalable. Then the best thng we can do s to solve ˆx ˆb exactly whch

More information

An Upper Bound on SINR Threshold for Call Admission Control in Multiple-Class CDMA Systems with Imperfect Power-Control

An Upper Bound on SINR Threshold for Call Admission Control in Multiple-Class CDMA Systems with Imperfect Power-Control An Upper Bound on SINR Threshold for Call Admsson Control n Multple-Class CDMA Systems wth Imperfect ower-control Mahmoud El-Sayes MacDonald, Dettwler and Assocates td. (MDA) Toronto, Canada melsayes@hotmal.com

More information

The Concept of Beamforming

The Concept of Beamforming ELG513 Smart Antennas S.Loyka he Concept of Beamformng Generc representaton of the array output sgnal, 1 where w y N 1 * = 1 = w x = w x (4.1) complex weghts, control the array pattern; y and x - narrowband

More information

OPTIMAL COMBINATION OF FOURTH ORDER STATISTICS FOR NON-CIRCULAR SOURCE SEPARATION. Christophe De Luigi and Eric Moreau

OPTIMAL COMBINATION OF FOURTH ORDER STATISTICS FOR NON-CIRCULAR SOURCE SEPARATION. Christophe De Luigi and Eric Moreau OPTIMAL COMBINATION OF FOURTH ORDER STATISTICS FOR NON-CIRCULAR SOURCE SEPARATION Chrstophe De Lug and Erc Moreau Unversty of Toulon LSEET UMR CNRS 607 av. G. Pompdou BP56 F-8362 La Valette du Var Cedex

More information

Resource Allocation with a Budget Constraint for Computing Independent Tasks in the Cloud

Resource Allocation with a Budget Constraint for Computing Independent Tasks in the Cloud Resource Allocaton wth a Budget Constrant for Computng Independent Tasks n the Cloud Wemng Sh and Bo Hong School of Electrcal and Computer Engneerng Georga Insttute of Technology, USA 2nd IEEE Internatonal

More information

princeton univ. F 17 cos 521: Advanced Algorithm Design Lecture 7: LP Duality Lecturer: Matt Weinberg

princeton univ. F 17 cos 521: Advanced Algorithm Design Lecture 7: LP Duality Lecturer: Matt Weinberg prnceton unv. F 17 cos 521: Advanced Algorthm Desgn Lecture 7: LP Dualty Lecturer: Matt Wenberg Scrbe: LP Dualty s an extremely useful tool for analyzng structural propertes of lnear programs. Whle there

More information

Asymptotic Quantization: A Method for Determining Zador s Constant

Asymptotic Quantization: A Method for Determining Zador s Constant Asymptotc Quantzaton: A Method for Determnng Zador s Constant Joyce Shh Because of the fnte capacty of modern communcaton systems better methods of encodng data are requred. Quantzaton refers to the methods

More information

4 Analysis of Variance (ANOVA) 5 ANOVA. 5.1 Introduction. 5.2 Fixed Effects ANOVA

4 Analysis of Variance (ANOVA) 5 ANOVA. 5.1 Introduction. 5.2 Fixed Effects ANOVA 4 Analyss of Varance (ANOVA) 5 ANOVA 51 Introducton ANOVA ANOVA s a way to estmate and test the means of multple populatons We wll start wth one-way ANOVA If the populatons ncluded n the study are selected

More information

Quantifying Uncertainty

Quantifying Uncertainty Partcle Flters Quantfyng Uncertanty Sa Ravela M. I. T Last Updated: Sprng 2013 1 Quantfyng Uncertanty Partcle Flters Partcle Flters Appled to Sequental flterng problems Can also be appled to smoothng problems

More information

MATH 829: Introduction to Data Mining and Analysis The EM algorithm (part 2)

MATH 829: Introduction to Data Mining and Analysis The EM algorithm (part 2) 1/16 MATH 829: Introducton to Data Mnng and Analyss The EM algorthm (part 2) Domnque Gullot Departments of Mathematcal Scences Unversty of Delaware Aprl 20, 2016 Recall 2/16 We are gven ndependent observatons

More information

A linear imaging system with white additive Gaussian noise on the observed data is modeled as follows:

A linear imaging system with white additive Gaussian noise on the observed data is modeled as follows: Supplementary Note Mathematcal bacground A lnear magng system wth whte addtve Gaussan nose on the observed data s modeled as follows: X = R ϕ V + G, () where X R are the expermental, two-dmensonal proecton

More information

EGR 544 Communication Theory

EGR 544 Communication Theory EGR 544 Communcaton Theory. Informaton Sources Z. Alyazcoglu Electrcal and Computer Engneerng Department Cal Poly Pomona Introducton Informaton Source x n Informaton sources Analog sources Dscrete sources

More information

Why Bayesian? 3. Bayes and Normal Models. State of nature: class. Decision rule. Rev. Thomas Bayes ( ) Bayes Theorem (yes, the famous one)

Why Bayesian? 3. Bayes and Normal Models. State of nature: class. Decision rule. Rev. Thomas Bayes ( ) Bayes Theorem (yes, the famous one) Why Bayesan? 3. Bayes and Normal Models Alex M. Martnez alex@ece.osu.edu Handouts Handoutsfor forece ECE874 874Sp Sp007 If all our research (n PR was to dsappear and you could only save one theory, whch

More information

NUMERICAL DIFFERENTIATION

NUMERICAL DIFFERENTIATION NUMERICAL DIFFERENTIATION 1 Introducton Dfferentaton s a method to compute the rate at whch a dependent output y changes wth respect to the change n the ndependent nput x. Ths rate of change s called the

More information

A Hybrid Variational Iteration Method for Blasius Equation

A Hybrid Variational Iteration Method for Blasius Equation Avalable at http://pvamu.edu/aam Appl. Appl. Math. ISSN: 1932-9466 Vol. 10, Issue 1 (June 2015), pp. 223-229 Applcatons and Appled Mathematcs: An Internatonal Journal (AAM) A Hybrd Varatonal Iteraton Method

More information

PHYS 705: Classical Mechanics. Calculus of Variations II

PHYS 705: Classical Mechanics. Calculus of Variations II 1 PHYS 705: Classcal Mechancs Calculus of Varatons II 2 Calculus of Varatons: Generalzaton (no constrant yet) Suppose now that F depends on several dependent varables : We need to fnd such that has a statonary

More information

the absence of noise and fading and also show the validity of the proposed algorithm with channel constraints through simulations.

the absence of noise and fading and also show the validity of the proposed algorithm with channel constraints through simulations. 1 Channel Aware Decentralzed Detecton n Wreless Sensor Networks Mlad Kharratzadeh mlad.kharratzadeh@mal.mcgll.ca Department of Electrcal & Computer Engneerng McGll Unversty Montreal, Canada Abstract We

More information

Chapter 8 SCALAR QUANTIZATION

Chapter 8 SCALAR QUANTIZATION Outlne Chapter 8 SCALAR QUANTIZATION Yeuan-Kuen Lee [ CU, CSIE ] 8.1 Overvew 8. Introducton 8.4 Unform Quantzer 8.5 Adaptve Quantzaton 8.6 Nonunform Quantzaton 8.7 Entropy-Coded Quantzaton Ch 8 Scalar

More information

Parametric fractional imputation for missing data analysis

Parametric fractional imputation for missing data analysis Secton on Survey Research Methods JSM 2008 Parametrc fractonal mputaton for mssng data analyss Jae Kwang Km Wayne Fuller Abstract Under a parametrc model for mssng data, the EM algorthm s a popular tool

More information

Mathematical Preparations

Mathematical Preparations 1 Introducton Mathematcal Preparatons The theory of relatvty was developed to explan experments whch studed the propagaton of electromagnetc radaton n movng coordnate systems. Wthn expermental error the

More information

Power law and dimension of the maximum value for belief distribution with the max Deng entropy

Power law and dimension of the maximum value for belief distribution with the max Deng entropy Power law and dmenson of the maxmum value for belef dstrbuton wth the max Deng entropy Bngy Kang a, a College of Informaton Engneerng, Northwest A&F Unversty, Yanglng, Shaanx, 712100, Chna. Abstract Deng

More information

Uncertainty as the Overlap of Alternate Conditional Distributions

Uncertainty as the Overlap of Alternate Conditional Distributions Uncertanty as the Overlap of Alternate Condtonal Dstrbutons Olena Babak and Clayton V. Deutsch Centre for Computatonal Geostatstcs Department of Cvl & Envronmental Engneerng Unversty of Alberta An mportant

More information

Chapter Newton s Method

Chapter Newton s Method Chapter 9. Newton s Method After readng ths chapter, you should be able to:. Understand how Newton s method s dfferent from the Golden Secton Search method. Understand how Newton s method works 3. Solve

More information