Diffusion Maximum Correntropy Criterion Algorithms for Robust Distributed Estimation


Wentao Ma a, Badong Chen b,*, Jiandong Duan a, Haiquan Zhao c
a Department of Electrical Engineering, Xi'an University of Technology, Xi'an 710048, China
b School of Electronic and Information Engineering, Xi'an Jiaotong University, Xi'an 710049, China
c School of Electrical Engineering, Southwest Jiaotong University, Chengdu, China
*Corresponding author: Badong Chen, chenbd@mail.xjtu.edu.cn

Abstract: Robust diffusion adaptive estimation algorithms based on the maximum correntropy criterion (MCC), including adapt-then-combine (ATC) and combine-then-adapt (CTA) versions, are developed to deal with distributed estimation over networks in impulsive (long-tailed) noise environments. The cost functions used in distributed estimation are in general based on the mean square error (MSE) criterion, which is desirable when the measurement noise is Gaussian. In non-Gaussian situations, such as the impulsive-noise case, MCC-based methods may achieve much better performance than the MSE methods since they take into account higher-order statistics of the error distribution. The proposed methods can also outperform the robust diffusion least mean p-power (DLMP) and diffusion minimum error entropy (DMEE) algorithms. The mean and mean square convergence analyses of the new algorithms are also carried out.

Keywords: Correntropy; maximum correntropy criterion; diffusion; robust distributed adaptive estimation.

1. Introduction

As an important issue in the field of distributed networks, distributed estimation over networks plays a key role in many applications, including environment monitoring, disaster relief management, source localization, and so on [1-4]. It aims to estimate some parameters of interest from noisy measurements through cooperation between nodes. Much progress has been made in the past few years.

In particular, the diffusion mode of cooperation for distributed network estimation (DNE) has attracted more and more attention among researchers; it keeps the nodes exchanging their estimates with neighbors and fuses the collected estimates via linear combination. So far a number of diffusion-mode algorithms have been developed, such as the diffusion least mean square (DLMS) [5-8], the diffusion recursive least squares (DRLS) [9], and their variants [10-13]. These algorithms are derived under the popular mean square error (MSE) criterion, whose optimization is well understood and efficient. It is well known that the optimality of MSE relies heavily on the Gaussian and linear assumptions. In practice, however, the data distributions are usually non-Gaussian, and in these situations the MSE is possibly no longer an appropriate criterion, especially in the presence of heavy-tailed non-Gaussian noise [14]. In distributed networks, impulsive noises are usually unavoidable. Recently, some researchers have focused on improving the robustness of DNE methods. The efforts are mainly directed at searching for a more robust cost function to replace the MSE cost (which is sensitive to large outliers due to the square operator). To address this problem, the diffusion least mean p-power (DLMP) algorithm based on the p-norm error criterion was proposed to estimate the parameters of wireless sensor networks [15]. For non-Gaussian cases, Information Theoretic Learning (ITL) [16] provides a more general framework and can also achieve desirable performance. The diffusion minimum error entropy (DMEE) algorithm was proposed in [17]. Under the MEE criterion, the entropy of a batch of the L most recent error samples is used as a cost function to be minimized to adapt the weights. The evaluation of the error entropy involves a double sum over the samples, which is computationally expensive especially when the window length L is large.
In recent years, correntropy, a nonlinear similarity measure in ITL, has been successfully used as a robust and efficient cost function for non-Gaussian signal processing [18]. Adaptive algorithms under the maximum correntropy criterion (MCC) are shown to be very robust with respect to impulsive noises, since correntropy is a measure of local similarity and is insensitive to outliers [19]. Moreover, MCC-based algorithms are, in general, computationally much simpler than MEE-based algorithms. Research results on dimensionality reduction [20], feature selection [21], robust regression [22] and adaptive filtering [23-28] have demonstrated the effectiveness of MCC when dealing with occlusion and corruption problems. Motivated by the desirable features of correntropy, we propose in this work a novel diffusion scheme, called diffusion MCC (DMCC), for robust distributed estimation in impulsive noise environments. The main contributions of the paper are threefold: (i) a correntropy-based diffusion scheme is proposed to solve distributed estimation over networks; (ii) two MCC-based diffusion algorithms, namely the adapt-then-combine (ATC) and combine-then-adapt (CTA) diffusion MCC algorithms, are developed, which can combat impulsive noises effectively; (iii) the mean and mean square performances are analyzed. Moreover, simulations are conducted to illustrate the performance of the proposed methods under impulsive noise disturbances.

The remainder of the paper is organized as follows. In Section 2, we give a brief review of MCC. In Section 3, we propose the DMCC method and present two adaptive combination versions. The mean and mean square analyses are performed in Section 4. Simulation results are presented in Section 5. Finally, the conclusion is given in Section 6.

2. Maximum correntropy criterion

The correntropy between two random variables x and y is defined by

V(x, y) = E[κ(x, y)] = ∫ κ(x, y) dF_XY(x, y)    (1)

where E[·] denotes the expectation operator, κ(·,·) is a shift-invariant Mercer kernel, and F_XY(x, y) denotes the joint distribution function. In practice, only a finite number of samples {x_i, y_i}, i = 1, ..., N, are available, and the joint distribution is usually unknown. In this case, the correntropy can be estimated by the sample mean

V̂(x, y) = (1/N) Σ_{i=1}^{N} κ(x_i, y_i)    (2)

The most popular kernel used in correntropy is the Gaussian kernel:

κ_σ(x, y) = G_σ(e) = exp(−e²/(2σ²))    (3)

where e = x − y, and σ denotes the kernel size. With the Gaussian kernel, the instantaneous MCC cost is [18]

J(i) = G_σ(e(i)) = exp(−e²(i)/(2σ²))    (4)

where i denotes the time instant (or iteration number). MCC (with the Gaussian kernel) has some desirable properties [19]: 1) it is always bounded for any distribution; 2) it contains all even-order moments, and the weights of the higher-order moments are determined by the kernel size; 3) it is a local similarity measure and is robust to outliers. Based on these excellent properties, we develop the diffusion MCC algorithms in the next section.

3. Diffusion MCC algorithms

3.1 General diffusion MCC

Consider a network composed of N nodes distributed over a geographic area to estimate an unknown vector w_o of size (M × 1) from measurements collected at the N nodes. At each time instant i (i = 1, ..., I), each node k has access to the realization of a scalar measurement d_k(i) and a regression vector u_k(i) of size (M × 1), related as

d_k(i) = w_o^T u_k(i) + n_k(i)    (5)

where n_k(i) denotes the measurement noise, and (·)^T denotes transposition. Given the above model, for each node k, the DMCC seeks to estimate w_o by maximizing a linear combination of the local correntropy within the node's neighborhood N_k.
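As a quick numerical illustration of the sample estimator (2) with the Gaussian kernel (3), the following Python sketch (the function name is ours, not from the paper) shows that correntropy is bounded and changes only slightly when a few large outliers are injected, which is the outlier insensitivity exploited below:

```python
import numpy as np

def correntropy(x, y, sigma=1.0):
    """Sample estimate of correntropy, Eq. (2), with the Gaussian kernel
    G_sigma(e) = exp(-e^2 / (2 sigma^2)), e = x - y, Eq. (3)."""
    e = np.asarray(x) - np.asarray(y)
    return np.mean(np.exp(-e**2 / (2.0 * sigma**2)))

rng = np.random.default_rng(0)
x = rng.normal(size=1000)
# y is x plus small Gaussian perturbation: correntropy is close to 1
clean = correntropy(x, x + 0.1 * rng.normal(size=1000))
# inject large outliers into 2% of the samples: each outlier's kernel value
# collapses toward 0, so the estimate drops only by roughly that fraction
y_outlier = x + 0.1 * rng.normal(size=1000)
y_outlier[::50] += 100.0
robust = correntropy(x, y_outlier)
```

Because each Gaussian kernel evaluation is bounded by 1, a single huge outlier can reduce the sample correntropy by at most 1/N, in contrast to the MSE, which a single outlier can dominate.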
The cost function of the DMCC for each node k can therefore be expressed as

J_k^local(w) = Σ_{l ∈ N_k} c_{l,k} G_σ(e_{l,k}(i))    (6)

where w is the estimate of w_o, e_{l,k}(i) = d_l(i) − w^T u_l(i), and the c_{l,k} are non-negative combination coefficients satisfying Σ_{l ∈ N_k} c_{l,k} = 1 and c_{l,k} = 0 if l ∉ N_k. Taking the derivative of (6) yields

∂J_k^local(w)/∂w = Σ_{l ∈ N_k} c_{l,k} ∂/∂w { exp(−(d_l(i) − w^T u_l(i))²/(2σ²)) }    (7)

= (1/σ²) Σ_{l ∈ N_k} c_{l,k} G_σ(e_{l,k}(i)) e_{l,k}(i) u_l(i)    (8)

A gradient-based algorithm for estimating w_o at node k can thus be derived as

w_k(i) = w_k(i−1) + μ_k Σ_{l ∈ N_k} c_{l,k} G_σ(e_{l,k}(i)) e_{l,k}(i) u_l(i)    (9)

where w_k(i) stands for the estimate of w_o at node k at time instant i, and μ_k is the step size for node k (the factor 1/σ² is absorbed into μ_k). There are mainly two different schemes for diffusion estimation in the literature, the adapt-then-combine (ATC) scheme and the combine-then-adapt (CTA) scheme [6,8]. The ATC scheme first updates the local estimates using the adaptive algorithm and then fuses the estimates of the neighbors, while the CTA scheme [7] performs the operations of the ATC scheme in reverse order. Below we give these two versions of the DMCC algorithm. For each node k, we calculate the intermediate estimate by

ψ_k(i−1) = Σ_{l ∈ N_k} a^(1)_{l,k} w_l(i−1)    (10)

where ψ_k(i−1) denotes an intermediate estimate at node k at instant i−1, and a^(1)_{l,k} denotes the weight with which node l shares its estimate w_l(i−1) with node k. With the intermediate estimates, the nodes update their estimates by

φ_k(i) = ψ_k(i−1) + μ_k Σ_{l ∈ N_k} c_{l,k} G_σ(e_{l,k}(i)) e_{l,k}(i) u_l(i)    (11)

The iteration in (11) is referred to as the incremental step. The coefficients {c_{l,k}} determine which nodes l should share their measurements {d_l(i), u_l(i)} with node k. The combination is then performed as

w_k(i) = Σ_{l ∈ N_k} a^(2)_{l,k} φ_l(i)    (12)

The result in (12) represents a convex combination of estimates from the incremental step (11) fed by spatially distinct data {d_l(i), u_l(i)}, and it is referred to as the diffusion step. The coefficients {a^(2)_{l,k}} determine which nodes l should share their intermediate estimates φ_l(i) with node k. According to the above analysis, one can obtain the following general diffusion MCC method by combining (9), (10) and (12):

ψ_k(i−1) = Σ_{l ∈ N_k} a^(1)_{l,k} w_l(i−1)    (diffusion I)
φ_k(i) = ψ_k(i−1) + μ_k Σ_{l ∈ N_k} c_{l,k} G_σ(d_l(i) − u_l^T(i) ψ_k(i−1)) (d_l(i) − u_l^T(i) ψ_k(i−1)) u_l(i)    (incremental)    (13)
w_k(i) = Σ_{l ∈ N_k} a^(2)_{l,k} φ_l(i)    (diffusion II)

where k = 1, ..., N. Details on the selection of the weights a^(1)_{l,k}, c_{l,k} and a^(2)_{l,k} can be found in [8].
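The adapt and combine steps above can be sketched for a single node as follows; this is our own minimal Python illustration of the ATC form of (13) (function name and calling convention are hypothetical, not from the paper):

```python
import numpy as np

def atc_dmcc_step(w, d, u, mu, sigma, psi_neighbors, a):
    """One ATC-DMCC iteration at a single node.

    Adapt:   psi = w + mu * G_sigma(e) * e * u,  with  e = d - w^T u
    Combine: w_new = a[0]*psi + sum_l a[l]*psi_neighbors[l-1]
    """
    e = d - w @ u
    g = np.exp(-e**2 / (2.0 * sigma**2))    # correntropy-induced scaling factor
    psi = w + mu * g * e * u                # incremental (adapt) step
    stacked = np.vstack([psi] + list(psi_neighbors))
    return a @ stacked                      # convex combination over neighborhood

# zero error leaves the estimate unchanged (no neighbors, self-weight 1)
w = np.array([1.0, 2.0])
u = np.array([1.0, 0.0])
w_same = atc_dmcc_step(w, d=1.0, u=u, mu=0.1, sigma=2.0,
                       psi_neighbors=[], a=np.array([1.0]))
# a huge outlier barely moves the estimate: G_sigma(e) -> 0 for large |e|
w_outlier = atc_dmcc_step(w, d=1e6, u=u, mu=0.1, sigma=2.0,
                          psi_neighbors=[], a=np.array([1.0]))
```

The second call shows the outlier-rejection mechanism discussed in Remark 1 below: the exponential factor multiplying the LMS-type correction vanishes for large errors, so a gross outlier produces essentially no update.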
Remark 1: One can see that equation (13) contains an extra scaling factor G_σ(e_{l,k}(i)), which is an exponential function of the error. When a large error occurs (possibly caused by an outlier), this scaling factor approaches zero, which endows the DMCC with an outlier-rejection property and significantly improves the adaptation performance in impulsive noise.

Remark 2: The kernel size has a significant influence on the performance of the DMCC, similar to most kernel methods. In general, a larger kernel size makes the algorithm less robust to outliers, while a smaller kernel size may make the algorithm stall.

3.2 ATC and CTA diffusion MCC

The non-negative real coefficients {a^(1)_{l,k}}, {c_{l,k}} and {a^(2)_{l,k}} in (13) correspond to the {l,k} entries of the matrices P_1, P_2 and P_3, respectively, and satisfy P_1^T 1 = 1, P_2^T 1 = 1, P_3^T 1 = 1, where 1 denotes the N × 1 vector with unit entries. Below we develop the ATC and CTA diffusion MCC algorithms.

ATC diffusion MCC: When P_1 = I and P_2 = I, algorithm (13) reduces to the uncomplicated ATC diffusion MCC (ATC-DMCC) version:

φ_k(i) = w_k(i−1) + μ_k G_σ(d_k(i) − u_k^T(i) w_k(i−1)) (d_k(i) − u_k^T(i) w_k(i−1)) u_k(i)
w_k(i) = Σ_{l ∈ N_k} a^(2)_{l,k} φ_l(i)    (14)

CTA diffusion MCC: Similar to the ATC version, one can get a simple CTA diffusion MCC (CTA-DMCC) algorithm by choosing P_2 = I and P_3 = I:

ψ_k(i−1) = Σ_{l ∈ N_k} a^(1)_{l,k} w_l(i−1)
w_k(i) = ψ_k(i−1) + μ_k G_σ(d_k(i) − u_k^T(i) ψ_k(i−1)) (d_k(i) − u_k^T(i) ψ_k(i−1)) u_k(i)    (15)

The equations in (14) and (15) are similar to the ATC diffusion LMS (ATC-DLMS) [8] and the CTA diffusion LMS (CTA-DLMS) [6], respectively. Clearly, the ATC-DMCC and CTA-DMCC can be viewed as the ATC-DLMS and CTA-DLMS with a variable step size μ_k exp(−e_k²/(2σ²)), where e_k is d_k(i) − u_k^T(i) w_k(i−1) and d_k(i) − u_k^T(i) ψ_k(i−1) for the ATC and CTA versions, respectively. Further, as the kernel size σ → ∞, we have G_σ(e_k(i)) → 1, which leads to the ATC and CTA diffusion LMS with fixed step size μ_k. In addition, no exchange of data is needed during the adaptation of the step size, which keeps the communication cost relatively low.

Remark 3: The ATC version usually outperforms the CTA version [7]. Similarly, the ATC-DMCC algorithm tends to outperform the CTA-DMCC. According to (14) and (15), we know that for computing a new estimate, the ATC-DMCC uses the measurements from all nodes m in the neighborhoods of the nodes l that are neighbors of k. Thus, the ATC version effectively uses data from nodes that are two hops away in every iteration, while the CTA version uses data from nodes that are one hop away. This will be illustrated in the simulation part.

Remark 4: The number of nodes connected to node k is denoted by N_k. The computational complexity of the ATC-DLMS for node k at each time instant includes (N_k + 1)M multiplications and (N_k + 1)M additions [9].
For the proposed ATC-DMCC, the extra computational cost is the evaluation of the exponential function of the error, which is not expensive. Thus the new methods are also computationally efficient for the DNE problem.

4. Performance analysis

In the following, we study the convergence performance of the proposed ATC-DMCC algorithm (14). The analysis of the CTA-DMCC algorithm is similar and is not pursued here. For tractable analysis, we adopt the following assumptions:

Assumption 1: All regressors u_k(i) arise from zero-mean Gaussian sources and are spatially and temporally independent.

Assumption 2: The error nonlinearity G_σ(e_{l,k}(i)) is independent of the regressors u_k(i).

Since nodes exchange data amongst themselves, their current updates are affected by the weighted average of the previous estimates. Therefore, to account for this inter-node dependence, it is appropriate to study the performance of the whole network, and some new variables need to be introduced. The proposed ATC-DMCC algorithm can be expressed as

φ_k(i) = w_k(i−1) + μ_k f_k(i) e_k(i) u_k(i)
w_k(i) = Σ_{l ∈ N_k} a_{l,k} φ_l(i)    (16)

where f_k(i) = G_σ(d_k(i) − u_k^T(i) w_k(i−1)), and μ_k(i) = μ_k f_k(i) acts as a new step-size factor. Furthermore, the local variables are collected into the following global variables:

W(i) = col{w_1(i), w_2(i), ..., w_N(i)}    (17)
Φ(i) = col{φ_1(i), φ_2(i), ..., φ_N(i)}    (18)
U(i) = diag{u_1(i), u_2(i), ..., u_N(i)}    (19)
Λ(i) = diag{μ_1(i), μ_2(i), ..., μ_N(i)}    (20)
D(i) = col{d_1(i), d_2(i), ..., d_N(i)}    (21)
V(i) = col{v_1(i), v_2(i), ..., v_N(i)}    (22)

With these global variables, a new set of equations representing the entire network is formed, starting with the relation between the measurements

D(i) = U^T(i) W_o + V(i)    (23)

where W_o = I_c w_o and I_c = col{I_M, I_M, ..., I_M} is an MN × M matrix. Then, the update equations can be rewritten to represent the global network:

Φ(i) = W(i−1) + Λ(i) U(i)(D(i) − U^T(i) W(i−1))    (24)
W(i) = A Φ(i)    (25)

where A = P_3 ⊗ I_M is the weighting matrix, ⊗ denotes the Kronecker product, and Λ(i) is the block-diagonal matrix

Λ(i) = diag{ μ_1 exp(−e_1²(i)/(2σ²)) I_M, μ_2 exp(−e_2²(i)/(2σ²)) I_M, ..., μ_N exp(−e_N²(i)/(2σ²)) I_M }

With the above set of equations, the mean and mean square analyses of the ATC-DMCC algorithm can be carried out. We first define the weight error vector for node k as

w̃_k(i) = w_o − w_k(i)    (26)

The mean analysis considers the stability of the algorithm and derives a bound on the step size that guarantees convergence in the mean. The mean square analysis derives transient and steady-state expressions for the mean square deviation (MSD).
The MSD at node k is defined as

MSD_k = E[‖w̃_k(i)‖²] = E[‖w_o − w_k(i)‖²]    (27)

4.1 Mean performance

Similar to [6], we define the global weight error vector as

W̃(i) = W_o − W(i)    (28)

Since A W_o = W_o, by incorporating the global weight error vector into (24)-(25), we have

W̃(i) = W_o − W(i)
     = W_o − A[W(i−1) + Λ(i) U(i)(D(i) − U^T(i) W(i−1))]
     = A W̃(i−1) − A Λ(i) U(i)(U^T(i) W_o + V(i) − U^T(i) W(i−1))
     = A W̃(i−1) − A Λ(i) U(i)(U^T(i) W̃(i−1) + V(i))
     = A[I − Λ(i) U(i) U^T(i)] W̃(i−1) − A Λ(i) U(i) V(i)    (29)

Here, we employ Assumption 2 to conclude that the matrix Λ(i) is independent of the regressor matrix U(i). Consequently, we have

E[Λ(i) U(i) U^T(i)] = E[Λ(i)] E[U(i) U^T(i)]    (30)

where R_U = E[U(i) U^T(i)] is the autocorrelation matrix of U(i). Taking the expectation on both sides of (29) gives

E[W̃(i)] = A[I − E[Λ(i)] R_U] E[W̃(i−1)] − A E[Λ(i)] E[U(i)] E[V(i)]    (31)

where, by Assumption 1, the expectation of the second term on the right-hand side of (31) is zero. Then we have

E[W̃(i)] = A[I − E[Λ(i)] R_U] E[W̃(i−1)]    (32)

From (32), to ensure stability in the mean, it should hold that

λ_max( A[I − E[Λ(i)] R_U] ) < 1    (33)

where λ_max(·) denotes the maximum eigenvalue (in magnitude) of a matrix. According to the relation ‖BZ‖ ≤ ‖B‖ ‖Z‖, we derive

λ_max(A Γ) ≤ λ_max(A) λ_max(Γ),  Γ = I − E[Λ(i)] R_U    (34)

Since A^T 1 = 1, and A = I for non-cooperative schemes, it follows that

λ_max(A Γ) ≤ λ_max(Γ)    (35)

so the cooperation mode can enhance the stability of the system [7]. The algorithm will therefore be stable in the mean if

λ_max( I − E[Λ(i)] R_{u,k} ) < 1,  k = 1, ..., N    (36)

which holds true if the mean of the step-size factor satisfies

0 < E[μ_k(i)] < 2 / λ_max(R_{u,k})    (37)

where R_{u,k} = E[u_k(i) u_k^T(i)]. As μ_k(i) = μ_k G_σ(e_k(i)), we further derive

0 < μ_k < 2 / ( λ_max(R_{u,k}) E[G_σ(e_k(i))] )    (38)

This condition guarantees the asymptotic unbiasedness of the ATC-DMCC (14). If the l_1 norm of the weight at each node is smaller than 1, we have

|e_k(i)| = |d_k(i) − w_k^T(i−1) u_k(i)| ≤ ‖w_k(i−1)‖_1 ‖u_k(i)‖_∞ + |d_k(i)| ≤ ‖u_k(i)‖_∞ + |d_k(i)|    (39)

It follows easily that [30]

0 < μ_k < 2 / ( λ_max(R_{u,k}) E[G_σ(‖u_k(i)‖_∞ + |d_k(i)|)] ),  k = 1, ..., N    (40)

As a result, the algorithm will be stable in the mean when the step size is within the bound of (40).
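The mean-stability condition above can be illustrated numerically. The following sketch (our own, under the stated independence assumptions, with the correntropy factor E[G_σ(e)] absorbed into an effective step size) iterates the mean recursion with a step size strictly inside the bound and checks that the mean weight error decays:

```python
import numpy as np

# Numerical illustration of the mean-stability condition:
# the recursion E[w~(i)] = (I - mu_eff * R_u) E[w~(i-1)] is stable when
# 0 < mu_eff < 2 / lambda_max(R_u), where mu_eff = mu * E[G_sigma(e)]
# absorbs the correntropy-induced scaling factor.
rng = np.random.default_rng(1)
U = rng.normal(size=(5000, 4))            # i.i.d. zero-mean Gaussian regressors
R_u = U.T @ U / len(U)                    # sample autocorrelation matrix
lam_max = np.linalg.eigvalsh(R_u).max()
mu_eff = 1.0 / lam_max                    # strictly inside the bound
B = np.eye(4) - mu_eff * R_u              # mean-recursion transition matrix
w_err = np.ones(4)                        # initial mean weight error
for _ in range(300):
    w_err = B @ w_err                     # iterate the mean recursion
```

Because every eigenvalue of B then lies strictly inside the unit circle, the iterated mean error vanishes, which is the stability claim of the bound.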

Remark 5: The condition (40) is similar to those in [6]. The only difference is the extra term E[G_σ(·)], namely the expectation of the error nonlinearity introduced by the MCC.

4.2 Mean square performance

Next, the mean square performance of the ATC-DMCC is studied. Computing the Σ-weighted norm of (29) and taking expectations, we have

E[‖W̃(i)‖²_Σ] = E[ ‖A[I − Λ(i)U(i)U^T(i)]W̃(i−1) − A Λ(i)U(i)V(i)‖²_Σ ]
            = E[‖W̃(i−1)‖²_Σ'] + E[V^T(i)U^T(i)Λ(i) A^T Σ A Λ(i)U(i)V(i)]    (41)

where the cross terms vanish because the noise is zero-mean and independent of the regressors, and the new weighting matrix is

Σ' = E[B^T(i) Σ B(i)],  B(i) = A[I − Λ(i)U(i)U^T(i)]    (42)

Using the data independence assumption [31] and Assumption 2, applying the expectation operator to (42) gives

Σ' = A^T Σ A − A^T Σ A E[Λ(i)] R_U − R_U E[Λ(i)] A^T Σ A + E[U(i)U^T(i)Λ(i) A^T Σ A Λ(i)U(i)U^T(i)]    (43)

Under Assumption 1, the autocorrelation matrix can be decomposed as

R_U = E[U(i)U^T(i)] = Q Λ_R Q^T    (44)

where Λ_R is a diagonal matrix containing the eigenvalues for the entire network and Q contains the corresponding eigenvectors. Using this decomposition, we define the transformed variables

W̄(i) = Q^T W̃(i),  Ū(i) = Q^T U(i),  Ā = Q^T A Q,  Σ̄ = Q^T Σ Q    (45)

where the input regressors are considered independent of each other at each node; the step-size matrix Λ(i) is block diagonal, so it is not transformed since Q^T Q = I. Then one can rewrite (41) as

E[‖W̄(i)‖²_Σ̄] = E[‖W̄(i−1)‖²_Σ̄'] + E[V^T(i)Ū^T(i)Λ(i) Ā^T Σ̄ Ā Λ(i)Ū(i)V(i)]    (46)

where

Σ̄' = Ā^T Σ̄ Ā − Ā^T Σ̄ Ā E[Λ(i)] Λ_R − Λ_R E[Λ(i)] Ā^T Σ̄ Ā + E[Ū(i)Ū^T(i)Λ(i) Ā^T Σ̄ Ā Λ(i)Ū(i)Ū^T(i)]    (47)

It can be seen that Λ_R = E[Ū(i)Ū^T(i)]. Using the bvec operator, we define σ̄ = bvec{Σ̄}, where the bvec{·} operator divides the matrix into smaller blocks and then applies the vec operator to each of the blocks. Let R_V = Σ_V ⊗_b I_M be the block diagonal noise covariance matrix for the entire network, where ⊗_b denotes the block Kronecker product and Σ_V is the diagonal noise variance matrix of the network. Hence, the second term on the right-hand side of (46) is

E[V^T(i)Ū^T(i)Λ(i) Ā^T Σ̄ Ā Λ(i)Ū(i)V(i)] = χ^T(i) σ̄    (48)

where χ(i) = bvec{ Ā E[Λ(i) R_V Λ(i)] Ā^T } collects the noise and step-size statistics. The fourth-order moment E[Ū(i)Ū^T(i)Λ(i) Ā^T Σ̄ Ā Λ(i)Ū(i)Ū^T(i)] in (47) remains to be evaluated. Using the step-size independence assumption and the bvec operator, we have

bvec{ E[Ū(i)Ū^T(i)Λ(i) Ā^T Σ̄ Ā Λ(i)Ū(i)Ū^T(i)] } = (E[Λ(i) ⊗_b Λ(i)]) A_d (Ā ⊗_b Ā) σ̄    (49)

in which the matrix A_d is given by

A_d = diag{A_1, A_2, ..., A_N}    (50)

A_k = diag{Λ_1, Λ_2, ..., Λ_N}    (51)

where Λ_k defines a diagonal eigenvalue matrix and λ_k is the eigenvalue vector for node k. The matrix E[Λ(i) ⊗_b Λ(i)] can be written as

E[Λ(i) ⊗_b Λ(i)] = E[diag{ μ_1(i)I_M ⊗ μ_1(i)I_M, ..., μ_1(i)I_M ⊗ μ_N(i)I_M, ..., μ_N(i)I_M ⊗ μ_N(i)I_M }]
= diag{ E[μ_1(i)]E[μ_1(i)]I_{M²}, ..., E[μ_1(i)]E[μ_N(i)]I_{M²}, ..., E[μ_N(i)]E[μ_N(i)]I_{M²} }    (52)

where the cross terms factor because the step-size factors at distinct nodes are independent. Now, applying the bvec operator to the weighting matrix Σ̄ via σ̄ = bvec{Σ̄} (the original Σ̄ is recovered through Σ̄ = bvec{σ̄}), relation (47) becomes

σ̄' = F(i) σ̄    (53)

where

F(i) = [ I_{M²N²} − (I ⊗_b E[Λ(i)]Λ_R) − (E[Λ(i)]Λ_R ⊗_b I) + (E[Λ(i) ⊗_b Λ(i)]) A_d ] (Ā ⊗_b Ā)    (54)

Then (46) takes the following form:

E[‖W̄(i)‖²_σ̄] = E[‖W̄(i−1)‖²_{F(i)σ̄}] + χ^T(i) σ̄    (55)

which characterizes the transient behavior of the network. Although (55) does not explicitly show the performance of the ATC-DMCC, that performance is in fact subsumed in the weighting matrix F(i), which varies with each iteration. Moreover, (54) clearly shows the effect of the proposed algorithm on the performance through the presence of the diagonal step-size matrix Λ(i).

5. Simulation results

In order to verify the performance of the proposed DMCC algorithms in the distributed network estimation setting, a network topology with N = 20 nodes is generated as a realization of the random geometric graph model, as shown in Figure 1. The location coordinates of the agents lie in the square region [0, 1.2] × [0, 1.2]. The unknown parameter vector is set to randn(M, 1) with M = 20, where randn(·) generates zero-mean unit-variance Gaussian random numbers. The input regressors are zero-mean Gaussian, independent in time and space.
For each simulation, the number of iterations is set at 500 and all the results are obtained by taking the ensemble average of the network MSD over 100 independent Monte Carlo runs. To illustrate the robustness of the proposed algorithms, the noise at each node is assumed to be independent of the noises at other nodes, and is generated by the multiplicative model n_k(i) = a_k(i) A_k(i), where a_k(i) is a binary independent identically distributed occurrence process with

P[a_k(i) = 1] = c, P[a_k(i) = 0] = 1 − c, where c is the arrival probability (AP), whereas A_k(i) is a process uncorrelated with a_k(i). The variance of A_k(i) is chosen to be substantially greater (possibly infinite) than that of a_k(i) to represent the impulsive noise. In this paper, we consider A_k(i) to be an alpha-stable noise. The alpha-stable distribution is widely applied as an impulsive noise model in the literature [14-15]. The characteristic function of an alpha-stable process is defined by

f(t) = exp{ jδt − γ|t|^α [1 + jβ sgn(t) S(t, α)] }    (56)

in which

S(t, α) = tan(απ/2)  if α ≠ 1;  (2/π) log|t|  if α = 1    (57)

where α ∈ (0, 2] is the characteristic factor, δ is the location parameter, β ∈ [−1, 1] is the symmetry parameter, and γ > 0 is the dispersion parameter. The characteristic factor α measures the tail heaviness of the distribution: the smaller α is, the heavier the tail. In addition, γ measures the dispersion of the distribution, playing a role similar to the variance of the Gaussian distribution. The parameter vector of the noise model is then defined as V_stable = (α, β, γ, δ). Unless otherwise mentioned, we set the AP at 0.1 and V_stable = (1.2, 0, 1, 0) in the simulations below. Furthermore, we set the linear combination coefficients using the Metropolis rule [33].

Figure 1. Network topology with N = 20 nodes

5.1 Performance comparison among the new methods and other algorithms

First, the proposed algorithms (ATC-DMCC and CTA-DMCC) are compared with some existing algorithms, including the non-cooperative LMS, the ATC and CTA DLMS, the DRLS, the DLMP (including ATC-DLMP and CTA-DLMP), and the DMEE. Among these algorithms, the DLMP and DMEE algorithms can also address the DNE problem in an impulsive noise environment. To guarantee almost the same initial convergence rate, we set the step sizes at 0.03, 0.06 and 0.06 for the LMS-based diffusion, DMCC and DMEE algorithms, respectively. The order p is 1.2 for the DLMP algorithm. Further, the kernel size is chosen as 2.0 for the DMCC and DMEE algorithms. The window length is L = 8 for the DMEE.
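The impulsive noise used in these simulations can be reproduced with a short sketch. The sampler below is our own illustration (names are hypothetical, not the paper's code): it draws symmetric alpha-stable amplitudes (the β = 0 case of (56), matching V_stable = (1.2, 0, 1, 0)) via the classical Chambers-Mallows-Stuck construction, then gates them with the Bernoulli occurrence process of the multiplicative model:

```python
import numpy as np

def symmetric_alpha_stable(alpha, gamma, size, rng):
    """Chambers-Mallows-Stuck sampler for symmetric alpha-stable noise
    (beta = 0, location delta = 0, dispersion gamma); a sketch of (56)."""
    V = rng.uniform(-np.pi / 2, np.pi / 2, size)   # uniform phase
    W = rng.exponential(1.0, size)                 # unit-mean exponential
    X = (np.sin(alpha * V) / np.cos(V) ** (1.0 / alpha)
         * (np.cos(V - alpha * V) / W) ** ((1.0 - alpha) / alpha))
    return gamma ** (1.0 / alpha) * X

rng = np.random.default_rng(2)
c = 0.1                                    # arrival probability of the impulses
a = rng.random(10000) < c                  # Bernoulli occurrence process a_k(i)
A = symmetric_alpha_stable(1.2, 1.0, 10000, rng)
n = a * A                                  # multiplicative impulsive noise n_k(i)
```

With α = 1.2 the amplitude process has infinite variance, so the gated sequence n contains occasional very large spikes against a background of zeros, which is exactly the long-tailed disturbance the DMCC is designed to reject.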
All parameters are set by scanning for the best results. Figure 2 shows the convergence curves in terms of MSD. One can observe that the DLMP, DMEE and DMCC algorithms work well when large outliers occur, while the other

mentioned algorithms fluctuate dramatically due to their sensitivity to the impulsive noises. As can be seen from the results, the proposed DMCC algorithm has excellent performance in convergence rate and accuracy compared with the other methods. The results confirm that the proposed algorithm exhibits a significant improvement in robustness in impulsive noise environments. The steady-state MSDs at each node are shown in Figure 3. As expected, the ATC diffusion MCC algorithm performs better than all the other algorithms. Although the performance of the DMCC is very close to that of the DMEE, its computational complexity is much lower. For this reason, we conclude that the proposed DMCC makes more sense than the DMEE for practical applications. In the subsequent simulations, we omit the results of the ATC-DLMS, CTA-DLMS, DRLS and non-cooperative LMS because they often do not converge in an impulsive noise environment.

Figure 2. Convergence curves in terms of MSD

Figure 3. MSD at steady-state for 20 nodes
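The network MSD metric plotted in Figures 2 and 3 follows directly from definition (27); a minimal sketch (function name is ours) averages the squared deviation over the nodes and converts to decibels:

```python
import numpy as np

def network_msd_db(w_o, W_est):
    """Network MSD in dB: the average over nodes of ||w_o - w_k(i)||^2,
    computed from one snapshot; rows of W_est are the node estimates."""
    dev = W_est - w_o                    # broadcast w_o against every node estimate
    return 10.0 * np.log10(np.mean(np.sum(dev**2, axis=1)))

# four nodes, each estimate off by the all-ones vector in R^3:
# per-node squared deviation is 3, so MSD = 3, i.e. about 4.77 dB
msd = network_msd_db(np.zeros(3), np.ones((4, 3)))
```

In the simulations this quantity would additionally be averaged over the independent Monte Carlo runs before plotting.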

Second, we compare the performance of the proposed DMCC with that of the DLMP under different p values in terms of the MSD, to show the robustness. The p values of the DLMP are selected as 1, 1.1, 1.2 and 1.4, respectively. The other parameters of the algorithms are kept the same as in the first simulation. The convergence curves in terms of MSD are shown in Figure 4. One can observe that the DLMP and DMCC work well under the impulsive noise disturbances. The results confirm that the DLMP (with smaller p values) and the DMCC are robust to the impulsive noises (especially with large outliers). Furthermore, the steady-state MSDs of the DLMP and DMCC algorithms are shown in Figure 5. As expected, the ATC and CTA diffusion MCC algorithms perform better than the ATC and CTA DLMP algorithms. We see that the DMCC outperforms the DLMP algorithms in that it achieves a lower steady-state MSD at each node. This result can be explained by the fact that the MCC contains an exponential term, which significantly reduces the influence of large outliers.

Figure 4. Convergence curves in terms of MSD

Figure 5. MSD at steady-state for 20 nodes

Third, we show how the characteristic exponent α of the noise model affects the performance. From the above simulation results, we know that the ATC diffusion algorithms are better than the CTA versions, so we compare only the performance of the ATC-DLMP and ATC-DMCC. We set the characteristic exponent α at 1.1, 1.2, 1.3, 1.4, 1.5, 1.6, 1.7 and 1.8, respectively. The other experimental settings are the same as in the previous simulation. The steady-state MSDs, averaged over the last 100 iterations, for the different α values are plotted in Figure 6. It is evident that the ATC-DMCC is consistently robust across the different α values. The performance of the ATC-DLMP (p = 2) becomes better and better as α increases from 1.0 to 1.8. This is because the alpha-stable distribution approaches the Gaussian distribution when α is close to 2.

Figure 6. Steady-state MSD of different algorithms

Fourth, we compare the performance of the ATC-DMCC algorithm with the DMEE with different window lengths (5, 6, 8, 10, 12). We set M = 5. To keep the same initial convergence rate, we set the step size at 0.05 for the DMEE (L = 5, 6, 8, 10) and 0.06 for the DMEE (L = 12) and ATC-DMCC. Figure 7 shows the convergence curves of the DMEE with different values of L and of the DMCC. We observe that the ATC-DMCC algorithm exhibits better performance than the DMEE (L = 6, 8, 10, 12), while they achieve almost the same performance when L = 5 for the DMEE. From the results we can see that the window length has an important effect on the performance of the DMEE (see also the detailed analysis in [17]), which raises a hard parameter-selection problem. Thus, the DMCC has more advantages in addressing DNE in impulsive noise environments.

Figure 7. Convergence curves of ATC-DMCC and DMEE with different window lengths L

5.2 Performance of DMCC with different parameters

First, we show how the kernel size affects the performance. The kernel size σ is a key parameter for the proposed diffusion algorithms (ATC and CTA DMCC). The step sizes of the proposed algorithms are set to the same value at each node. Figure 8 shows the convergence curves of each algorithm in terms of the network MSD with different kernel sizes (σ = 1, 2, 3, 4, 5). One can observe that in this example, when the kernel size is 2.0, both the ATC and CTA version algorithms perform very well.

Figure 8. Convergence curves of DMCC with different kernel sizes σ: (a) ATC, (b) CTA

Second, we investigate how the parameter c in the noise model affects the performance of the DMCC. We set the c value at 0.1, 0.2, 0.4, 0.5 and 0.8, respectively. The step size and kernel size are 0.8 and 2.0, respectively. The convergence curves with different c values are shown in Figure 9. As one can see, the steady-state MSD increases as the c value increases. This is because the outliers occur more and more frequently as the c value becomes larger.

Figure 9. Convergence curves of DMCC with different c values: (a) ATC, (b) CTA

Finally, we show the joint effects of the kernel size (σ = 1, 2, 3, 4, 5, 6) and the noise power, in terms of different α (α = 1, 1.2, 1.4, 1.6, 1.8, 2), on the performance. We mainly evaluate the ATC diffusion MCC algorithm in the remaining simulations. The other parameters are the same as those in the above simulations. The steady-state MSDs are shown in Figure 10, from which one can see that a smaller kernel size is particularly useful for a noise with smaller α.

Figure 10. Steady-state MSD of the ATC-DMCC

6. Conclusions

In this paper, two robust MCC-based diffusion algorithms, namely the ATC and CTA diffusion MCC algorithms, are developed to improve the performance of distributed estimation over networks in impulsive noise environments. The new algorithms show strong robustness against impulsive

16 disturbances as is very effective to handle non-gaussian noises with large outliers. Mean and mean square convergence analysis has been carried out, and a sufficient condition for ensuring the mean square stability is obtained. Simulation results illustrate that the based diffusion algorithms perform very well. Especially, the ACD can achieve better performance than the robust DLMP algorithm in terms of the MSD. Although DMEE with proper L can achieve almost the same performance as that of AC, its computational complexity is much higher. Acnowledgments his wor was supported by the 973 Program (05CB35703) and the National Natural Science Foundation of China (No. 6375, No ). References [] D. Estrin, G. Pottie, and M. Srivastava., Instrumenting the world with wireless sensor networs, In Proc. IEEE Int. Conf. Acoustics, Speech, signal Processing (ICASSP), Salt Lae City, U, May 00, pp [] D. Li, K. D. Wong, Y. H. Hu, and A. M. Sayeed., Detection, classification, and tracing of targets, IEEE Signal Processing Magazine, 9()(00) 7-9. [3] I. Ayildiz, W. Su, Y. Sanarasubramaniam, and E. Cayirci., A survey on sensor networs, IEEE Communications magazine, 40(8)(00) 0 4. [4] L. A. Rossi, B. Krishnamachari, and C.-C. J. Kuo., Distributed parameter estimation for monitoring diffusion phenomena using physical models, In Proc. IEEE Conf. Sensor Ad Hoc Comm. Networs, Santa Clara, CA, Oct. 004, pp [5] F. Cattivelli and A. H. Sayed., Diffusion LMS strategies for distributed estimation, IEEE ransactions on Signal Processing, 58(3)(00) [6] C. G. Lopes and A. H. Sayed., Diffusion least-mean squares over adaptive networs: formulation and performance analysis, IEEE ransactions on Signal Processing, 56(7)(008) [7] N. aahashi, I. Yamada, A H. Sayed, Diffusion least-mean squares with adaptive combiners. IEEE International Conference on Acoustics, Speech and Signal Processing, 009. ICASSP 009. 
[8] N. Takahashi, I. Yamada, and A. H. Sayed, Diffusion least-mean squares with adaptive combiners: formulation and performance analysis, IEEE Transactions on Signal Processing, 58(9)(2010) 4795-4810.
[9] F. S. Cattivelli, C. G. Lopes, and A. H. Sayed, Diffusion recursive least-squares for distributed estimation over adaptive networks, IEEE Transactions on Signal Processing, 56(5)(2008) 1865-1877.
[10] M. O. B. Saeed, A. Zerguine, and S. A. Zummo, A variable step-size strategy for distributed estimation over adaptive networks, EURASIP Journal on Advances in Signal Processing, 2013(1).
[11] H. S. Lee, S. E. Kim, J. W. Lee, et al., A variable step-size diffusion LMS algorithm for distributed estimation, IEEE Transactions on Signal Processing, 63(7)(2015).
[12] Y. Liu, C. Li, and Z. Zhang, Diffusion sparse least-mean squares over networks, IEEE Transactions on Signal Processing, 60(8)(2012) 4480-4485.
[13] M.-L. Cao, Q.-H. Meng, M. Zeng, B. Sun, W. Li, and C.-J. Ding, Distributed least-squares estimation of a remote chemical source via convex combination in wireless sensor networks, Sensors, 14(2014).
[14] M. Shao and C. L. Nikias, Signal processing with fractional lower order moments: stable processes and their applications, Proceedings of the IEEE, 81(7)(1993) 986-1010.
[15] F. Wen, Diffusion least-mean P-power algorithms for distributed estimation in alpha-stable noise environments, Electronics Letters, 49(21)(2013).
[16] J. C. Principe, Information Theoretic Learning: Renyi's Entropy and Kernel Perspectives, Springer, 2010.
[17] C. Li, P. Shen, Y. Liu, et al., Diffusion information theoretic learning for distributed estimation over network, IEEE Transactions on Signal Processing, 61(16)(2013) 4011-4024.
[18] W. Liu, P. P. Pokharel, and J. C. Principe, Correntropy: properties and applications in non-Gaussian signal processing, IEEE Transactions on Signal Processing, 55(11)(2007) 5286-5298.
[19] B. Chen and J. C. Príncipe, Maximum correntropy estimation is a smoothed MAP estimation, IEEE Signal Processing Letters, 19(8)(2012) 491-494.
[20] F. Zhong, D. Li, and J. Zhang, Robust locality preserving projection based on maximum correntropy criterion, Journal of Visual Communication and Image Representation, 25(7)(2014).
[21] H. J. Xing and H. R. Ren, Regularized correntropy criterion based feature extraction for novelty detection, Neurocomputing, 133(2014).

[22] X. Chen, J. Yang, J. Liang, and Q. Ye, Recursive robust least squares support vector regression based on maximum correntropy criterion, Neurocomputing, 97(2012).
[23] A. Singh and J. C. Principe, Using correntropy as cost function in adaptive filters, in Proc. International Joint Conference on Neural Networks (IJCNN), Atlanta, GA, June 2009.
[24] S. Zhao, B. Chen, and J. C. Principe, Kernel adaptive filtering with maximum correntropy criterion, in Proc. International Joint Conference on Neural Networks (IJCNN), San Jose, CA, Aug. 2011.
[25] H. Qu, W. Ma, J. Zhao, and T. Wang, A new learning algorithm based on MCC for colored noise interference cancellation, Journal of Information & Computational Science, 10(7)(2013).
[26] W. Ma, H. Qu, G. Gui, et al., Maximum correntropy criterion based sparse adaptive filtering algorithms for robust channel estimation under non-Gaussian environments, Journal of the Franklin Institute, 352(7)(2015).
[27] B. Chen, L. Xing, J. Liang, N. Zheng, and J. C. Principe, Steady-state mean-square error analysis for adaptive filtering under the maximum correntropy criterion, IEEE Signal Processing Letters, 21(7)(2014) 880-884.
[28] Z. Wu, S. Peng, B. Chen, and H. Zhao, Robust Hammerstein adaptive filtering under maximum correntropy criterion, Entropy, 17(10)(2015).
[29] J. W. Lee, S. E. Kim, and W. J. Song, Data-selective diffusion LMS for reducing communication overhead, Signal Processing, 113(2015) 211-217.
[30] B. Chen, J. Wang, H. Zhao, et al., Convergence of a fixed-point algorithm under maximum correntropy criterion, IEEE Signal Processing Letters, 22(10)(2015) 1723-1727.
[31] A. H. Sayed, Fundamentals of Adaptive Filtering, Wiley, New York, 2003.
[32] A. I. Sulyman and A. Zerguine, Convergence and steady-state analysis of a variable step-size NLMS algorithm, Signal Processing, 83(6)(2003).
[33] L. Xiao and S. Boyd, Fast linear iterations for distributed averaging, Systems & Control Letters, 53(1)(2004) 65-78.
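As an illustration of the adapt-then-combine recursion summarized in the conclusions, the following is a minimal NumPy sketch of an ATC diffusion update under MCC: each node takes a local LMS-style step whose gradient is scaled by the Gaussian kernel of its own prediction error, then combines its neighbors' intermediate estimates. This is a sketch written for this transcription, not the authors' code; the function name `atc_dmcc` and the parameter values (step size `mu`, kernel size `sigma`, uniform combination matrix) are illustrative.

```python
import numpy as np

def atc_dmcc(X, d, A, mu=0.05, sigma=2.0, w0=None):
    """Adapt-then-combine diffusion MCC (ATC-DMCC) sketch.

    X : (T, N, M) regressors for T iterations, N nodes, M taps
    d : (T, N)    desired (noisy) measurements
    A : (N, N)    combination matrix; A[l, k] weights neighbor l's
                  intermediate estimate at node k (columns sum to 1)
    Returns the (N, M) array of per-node weight estimates.
    """
    T, N, M = X.shape
    W = np.zeros((N, M)) if w0 is None else np.array(w0, dtype=float)
    for t in range(T):
        psi = np.empty_like(W)
        for k in range(N):
            e = d[t, k] - X[t, k] @ W[k]          # local prediction error
            g = np.exp(-e**2 / (2 * sigma**2))    # Gaussian kernel weight:
            psi[k] = W[k] + mu * g * e * X[t, k]  # large outliers -> g ~ 0,
                                                  # so impulses barely adapt
        W = A.T @ psi                             # combine neighbor estimates
    return W
```

The kernel factor `g` is what distinguishes this from diffusion LMS: an impulsive sample produces a huge error `e`, but `g` decays as exp(-e^2 / 2 sigma^2), so the corresponding gradient step is effectively vetoed, which matches the robustness behavior reported above.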


Adaptive Filtering Part II Adaptive Filtering Part II In previous Lecture we saw that: Setting the gradient of cost function equal to zero, we obtain the optimum values of filter coefficients: (Wiener-Hopf equation) Adaptive Filtering,

More information

A METHOD OF ADAPTATION BETWEEN STEEPEST- DESCENT AND NEWTON S ALGORITHM FOR MULTI- CHANNEL ACTIVE CONTROL OF TONAL NOISE AND VIBRATION

A METHOD OF ADAPTATION BETWEEN STEEPEST- DESCENT AND NEWTON S ALGORITHM FOR MULTI- CHANNEL ACTIVE CONTROL OF TONAL NOISE AND VIBRATION A METHOD OF ADAPTATION BETWEEN STEEPEST- DESCENT AND NEWTON S ALGORITHM FOR MULTI- CHANNEL ACTIVE CONTROL OF TONAL NOISE AND VIBRATION Jordan Cheer and Stephen Daley Institute of Sound and Vibration Research,

More information

ADAPTIVE FILTER THEORY

ADAPTIVE FILTER THEORY ADAPTIVE FILTER THEORY Fifth Edition Simon Haykin Communications Research Laboratory McMaster University Hamilton, Ontario, Canada International Edition contributions by Telagarapu Prabhakar Department

More information

A Minimum Error Entropy criterion with Self Adjusting. Step-size (MEE-SAS)

A Minimum Error Entropy criterion with Self Adjusting. Step-size (MEE-SAS) Signal Processing - EURASIP (SUBMIED) A Minimum Error Entropy criterion with Self Adjusting Step-size (MEE-SAS) Seungju Han *, Sudhir Rao *, Deniz Erdogmus, Kyu-Hwa Jeong *, Jose Principe * Corresponding

More information

Set-valued Observer-based Active Fault-tolerant Model Predictive Control

Set-valued Observer-based Active Fault-tolerant Model Predictive Control Set-valued Observer-based Active Fault-tolerant Model Predictive Control Feng Xu 1,2, Vicenç Puig 1, Carlos Ocampo-Martinez 1 and Xueqian Wang 2 1 Institut de Robòtica i Informàtica Industrial (CSIC-UPC),Technical

More information

MULTICHANNEL SIGNAL PROCESSING USING SPATIAL RANK COVARIANCE MATRICES

MULTICHANNEL SIGNAL PROCESSING USING SPATIAL RANK COVARIANCE MATRICES MULTICHANNEL SIGNAL PROCESSING USING SPATIAL RANK COVARIANCE MATRICES S. Visuri 1 H. Oja V. Koivunen 1 1 Signal Processing Lab. Dept. of Statistics Tampere Univ. of Technology University of Jyväskylä P.O.

More information

ANALYSIS OF NONLINEAR PARTIAL LEAST SQUARES ALGORITHMS

ANALYSIS OF NONLINEAR PARTIAL LEAST SQUARES ALGORITHMS ANALYSIS OF NONLINEAR PARIAL LEAS SQUARES ALGORIHMS S. Kumar U. Kruger,1 E. B. Martin, and A. J. Morris Centre of Process Analytics and Process echnology, University of Newcastle, NE1 7RU, U.K. Intelligent

More information

ADMM and Fast Gradient Methods for Distributed Optimization

ADMM and Fast Gradient Methods for Distributed Optimization ADMM and Fast Gradient Methods for Distributed Optimization João Xavier Instituto Sistemas e Robótica (ISR), Instituto Superior Técnico (IST) European Control Conference, ECC 13 July 16, 013 Joint work

More information

Algorithm for Multiple Model Adaptive Control Based on Input-Output Plant Model

Algorithm for Multiple Model Adaptive Control Based on Input-Output Plant Model BULGARIAN ACADEMY OF SCIENCES CYBERNEICS AND INFORMAION ECHNOLOGIES Volume No Sofia Algorithm for Multiple Model Adaptive Control Based on Input-Output Plant Model sonyo Slavov Department of Automatics

More information

7. Variable extraction and dimensionality reduction

7. Variable extraction and dimensionality reduction 7. Variable extraction and dimensionality reduction The goal of the variable selection in the preceding chapter was to find least useful variables so that it would be possible to reduce the dimensionality

More information

A DELAY-DEPENDENT APPROACH TO DESIGN STATE ESTIMATOR FOR DISCRETE STOCHASTIC RECURRENT NEURAL NETWORK WITH INTERVAL TIME-VARYING DELAYS

A DELAY-DEPENDENT APPROACH TO DESIGN STATE ESTIMATOR FOR DISCRETE STOCHASTIC RECURRENT NEURAL NETWORK WITH INTERVAL TIME-VARYING DELAYS ICIC Express Letters ICIC International c 2009 ISSN 1881-80X Volume, Number (A), September 2009 pp. 5 70 A DELAY-DEPENDENT APPROACH TO DESIGN STATE ESTIMATOR FOR DISCRETE STOCHASTIC RECURRENT NEURAL NETWORK

More information

Dynamic Power Allocation and Routing for Time Varying Wireless Networks

Dynamic Power Allocation and Routing for Time Varying Wireless Networks Dynamic Power Allocation and Routing for Time Varying Wireless Networks X 14 (t) X 12 (t) 1 3 4 k a P ak () t P a tot X 21 (t) 2 N X 2N (t) X N4 (t) µ ab () rate µ ab µ ab (p, S 3 ) µ ab µ ac () µ ab (p,

More information

Ch6-Normalized Least Mean-Square Adaptive Filtering

Ch6-Normalized Least Mean-Square Adaptive Filtering Ch6-Normalized Least Mean-Square Adaptive Filtering LMS Filtering The update equation for the LMS algorithm is wˆ wˆ u ( n 1) ( n) ( n) e ( n) Step size Filter input which is derived from SD as an approximation

More information

DECENTRALIZED algorithms are used to solve optimization

DECENTRALIZED algorithms are used to solve optimization 5158 IEEE TRANSACTIONS ON SIGNAL PROCESSING, VOL. 64, NO. 19, OCTOBER 1, 016 DQM: Decentralized Quadratically Approximated Alternating Direction Method of Multipliers Aryan Mohtari, Wei Shi, Qing Ling,

More information

Quadratic Extended Filtering in Nonlinear Systems with Uncertain Observations

Quadratic Extended Filtering in Nonlinear Systems with Uncertain Observations Applied Mathematical Sciences, Vol. 8, 2014, no. 4, 157-172 HIKARI Ltd, www.m-hiari.com http://dx.doi.org/10.12988/ams.2014.311636 Quadratic Extended Filtering in Nonlinear Systems with Uncertain Observations

More information

Study on State Estimator for Wave-piercing Catamaran Longitudinal. Motion Based on EKF

Study on State Estimator for Wave-piercing Catamaran Longitudinal. Motion Based on EKF 5th International Conference on Civil Engineering and ransportation (ICCE 205) Study on State Estimator for Wave-piercing Catamaran Longitudinal Motion Based on EKF SHAO Qiwen, *LIU Sheng, WANG Wugui2

More information

MACHINE LEARNING. Methods for feature extraction and reduction of dimensionality: Probabilistic PCA and kernel PCA

MACHINE LEARNING. Methods for feature extraction and reduction of dimensionality: Probabilistic PCA and kernel PCA 1 MACHINE LEARNING Methods for feature extraction and reduction of dimensionality: Probabilistic PCA and kernel PCA 2 Practicals Next Week Next Week, Practical Session on Computer Takes Place in Room GR

More information

Distributed Optimization over Networks Gossip-Based Algorithms

Distributed Optimization over Networks Gossip-Based Algorithms Distributed Optimization over Networks Gossip-Based Algorithms Angelia Nedić angelia@illinois.edu ISE Department and Coordinated Science Laboratory University of Illinois at Urbana-Champaign Outline Random

More information

Adaptive Filtering. Squares. Alexander D. Poularikas. Fundamentals of. Least Mean. with MATLABR. University of Alabama, Huntsville, AL.

Adaptive Filtering. Squares. Alexander D. Poularikas. Fundamentals of. Least Mean. with MATLABR. University of Alabama, Huntsville, AL. Adaptive Filtering Fundamentals of Least Mean Squares with MATLABR Alexander D. Poularikas University of Alabama, Huntsville, AL CRC Press Taylor & Francis Croup Boca Raton London New York CRC Press is

More information

Dimension Reduction Techniques. Presented by Jie (Jerry) Yu

Dimension Reduction Techniques. Presented by Jie (Jerry) Yu Dimension Reduction Techniques Presented by Jie (Jerry) Yu Outline Problem Modeling Review of PCA and MDS Isomap Local Linear Embedding (LLE) Charting Background Advances in data collection and storage

More information