Robust RLS in the Presence of Correlated Noise Using Outlier Sparsity
IEEE TRANSACTIONS ON SIGNAL PROCESSING, VOL. 60, NO. 6, JUNE 2012

Robust RLS in the Presence of Correlated Noise Using Outlier Sparsity

Shahrokh Farahmand and Georgios B. Giannakis

Abstract: Relative to batch alternatives, the recursive least-squares (RLS) algorithm has well-appreciated merits of reduced complexity and storage requirements for online processing of stationary signals, and also for tracking slowly varying nonstationary signals. However, RLS is challenged when, in addition to noise, outliers are also present in the data due to, e.g., impulsive disturbances. Existing robust RLS approaches are resilient to outliers, but require the nominal noise to be white, an assumption that may not hold in, e.g., sensor networks where neighboring sensors are affected by correlated ambient noise. Pre-whitening with the known noise covariance is not a viable option because it spreads the outliers to non-contaminated measurements, which leads to inefficient utilization of the available data and unsatisfactory performance. In this correspondence, a robust RLS algorithm is developed that is capable of handling outliers and correlated noise simultaneously. In the proposed method, outliers are treated as nuisance variables and estimated jointly with the wanted parameters. Identifiability is ensured by exploiting the sparsity of outliers, which is effected by regularizing the least-squares (LS) criterion with the ℓ1-norm of the outlier vectors. This leads to an optimization problem whose solution yields the robust estimates. For low-complexity real-time operation, a sub-optimal online algorithm is proposed, which entails closed-form updates per time step in the spirit of RLS. Simulations demonstrate the effectiveness and improved performance of the proposed method in comparison with the non-robust RLS, and its state-of-the-art robust renditions.
I. INTRODUCTION

Online processing with reduced complexity and storage requirements relative to batch alternatives, as well as adaptability to slow parameter variations, are the main reasons behind the popularity of the recursive least-squares (RLS) algorithm for adaptive estimation of linear regression models. Despite its success in various applications, RLS is challenged by the presence of outliers, that is, measurement or noise samples not adhering to a pre-specified nominal model. One major source of outliers is impulsive noise, which appears for example as double-talk in an echo cancellation setting [3]. In another application, which motivates the present contribution, outliers may arise with low-cost unreliable sensors deployed for estimating the coefficients of a linear regression model involving also (possibly correlated) ambient noise. Outlier-resilient approaches are imperative in such applications; see, e.g., [10]. Robust RLS schemes have been reported in [3], [5], [12], [13], and [16]. Huber's M-estimation criterion was considered in [12], Hampel's three-part redescending M-estimate was advocated in [16], and a modified Huber cost was put forth in [5].

Copyright (c) 2012 IEEE. Personal use of this material is permitted. However, permission to use this material for any other purposes must be obtained from the IEEE by sending a request to pubs-permissions@ieee.org. The authors are with the Dept. of Electrical and Computer Engineering, University of Minnesota, Minneapolis, MN 55455, USA (e-mails: {shahrokh, georgios}@umn.edu). This work was supported by the AFOSR MURI Grant FA.
These approaches improve upon the non-robust RLS when outliers are present, but they also have limitations: i) some entail costs that are not convex [5], thus providing no guarantee of reaching the global optimum; ii) the optimization problem to be solved per time step has dimensionality (and hence computational and storage requirements) growing linearly with time; and iii) correlated nominal noise of known covariance cannot be handled, which is critical for estimation using a network of distributed sensors because ambient noise is typically dependent across neighboring sensors. To mitigate the curse of dimensionality, [5] provides suboptimal online recursions in the spirit of RLS, while [3] and [13] limit the effect of each new datum on the estimate, thus also ensuring low-complexity RLS-like iterations. However, none of the existing approaches can handle correlated ambient noise of known covariance. It is worth stressing that pre-whitening is not recommended because it spreads the outliers to non-contaminated measurements, rendering outlier-resilient estimation even more challenging.

In this correspondence, outliers are treated as nuisance variables that are jointly estimated with the wanted regression coefficients. The identifiability challenge arising due to the extra unknowns is addressed by exploiting the outlier sparsity. The latter is effected by regularizing the LS cost with the ℓ1-norm of the outlier vectors. Consequently, one arrives at an optimization problem which can identify outliers while at the same time accommodating correlated ambient noise. Since the dimensionality of this optimization problem also grows with time, its solution is utilized only at the initialization stage, where it provides an offline benchmark. To cope with dimensionality, an online variant is also developed, which enjoys low complexity and closed-form updates in the spirit of RLS.
Initializing real-time updates with the offline benchmark, the novel online robust RLS (OR-RLS) algorithm is compared against RLS and the robust schemes in [3] and [13] via simulations, which demonstrate its efficiency and improved performance.

II. PROBLEM STATEMENT AND PRELIMINARIES

Consider the outlier-aware linear regression model considered originally in [7], where (generally vector) measurements y_k ∈ R^{D_y} involving the unknown vector x_k ∈ R^{D_x} become available sequentially in time (indexed by k), in the presence of (possibly correlated) nominal noise v_k; that is,

  y_k = H_k x_k + v_k + o_k    (1)
where H_k denotes a known regression matrix, o_k models the outliers arising from, e.g., impulsive noise, and v_k is zero-mean with known covariance matrix R_k, uncorrelated with v_{1:k-1} := [v_1^T, v_2^T, ..., v_{k-1}^T]^T, but possibly correlated across its own entries. The wanted vector x_k is either constant or varies slowly with k. The goal is to estimate x_k and o_k, given the measurements y_{1:k} := [y_1^T, y_2^T, ..., y_k^T]^T.

To motivate the vector model in (1), consider a sensor network with each sensor measuring distances (in rectangular coordinates) between itself and the target that is to be localized and tracked. The sensor measurements are synchronized by the clock of an access point (AP), and are corrupted by outliers as well as nominal noise that is correlated across neighboring sensors.

Had o_{1:k} been equal to zero, or known for that matter, one could apply the RLS algorithm to the outlier-compensated measurements y_k − o_k. Therefore, at time step k, one would solve [11]

  x̂_k := arg min_{x_k} J_k(x_k) := arg min_{x_k} Σ_{i=1}^{k} β^{k−i} ‖y_i − H_i x_k − o_i‖²_{R_i^{−1}}    (2)

where 0 < β ≤ 1 is a pre-selected forgetting factor, and ‖x‖²_{R^{−1}} := x^T R^{−1} x. Equating to zero the gradient of J_k(x_k) in (2) yields

  ( Σ_{i=1}^{k} β^{k−i} H_i^T R_i^{−1} H_i ) x̂_k = Σ_{i=1}^{k} β^{k−i} H_i^T R_i^{−1} (y_i − o_i)

or, in compact matrix-vector notation, Φ_k x̂_k = φ_k, with obvious definitions for φ_k and Φ_k, the latter assumed invertible. This can be solved online using the recursions (see, e.g., [11])

  x̂_k = x̂_{k−1} + W_k (y_k − o_k − H_k x̂_{k−1})    (3)
  W_k = β^{−1} Φ_{k−1}^{−1} H_k^T (H_k β^{−1} Φ_{k−1}^{−1} H_k^T + R_k)^{−1}
  Φ_k^{−1} = β^{−1} Φ_{k−1}^{−1} − W_k H_k β^{−1} Φ_{k−1}^{−1}.

The challenge with o_{1:k} being unknown in addition to x_k is that (2) is under-determined, which gives rise to identifiability issues if J_k in (2) is to be jointly optimized over x_k and o_{1:k} := [o_1^T ... o_k^T]^T. Indeed, o_i = y_i for all i together with x_k = 0 is one choice of the unknown vectors that always minimizes J_k in (2). The ensuing section develops algorithms allowing joint estimation of x_k and o_{1:k} (or o_k only), by exploiting the attribute of sparsity present in the outlier vectors.
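The recursions in (3) admit a compact implementation. The following sketch (NumPy; function and variable names are illustrative, not from the paper) performs one update on the outlier-compensated datum y_k − ô_k:

```python
import numpy as np

def rls_step(x_hat, Phi_inv, H, R, y, o_hat, beta=1.0):
    """One update of the recursions in (3) on the compensated datum y - o_hat.

    x_hat   : previous estimate x_{k-1}, shape (D_x,)
    Phi_inv : previous inverse Grammian Phi_{k-1}^{-1}, shape (D_x, D_x)
    H, R    : regression matrix (D_y, D_x) and nominal noise covariance (D_y, D_y)
    """
    P = Phi_inv / beta                               # beta^{-1} Phi_{k-1}^{-1}
    W = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)     # gain W_k
    x_new = x_hat + W @ (y - o_hat - H @ x_hat)      # estimate update
    Phi_inv_new = P - W @ H @ P                      # Grammian inverse update
    return x_new, Phi_inv_new
```

With ô_k = 0 this reduces to ordinary RLS; OR-RLS developed below simply feeds it the thresholded outlier estimate.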
III. OUTLIER-SPARSITY-AWARE ROBUST RLS

To ensure uniqueness in the joint optimization over o_{1:k} and x_k, consider regularizing the cost in (2). Among candidate regularizers, popular ones include the ℓp-norms of the outlier vectors with p = 0, 1, 2. A key observation guiding the selection of p is that outliers arising from sources such as impulsive noise are sparse. The degree of sparsity is quantified by the ℓ0-(pseudo)norm of o_i, which equals the number of its nonzero entries. Unfortunately, the ℓ0-norm makes the optimization problem non-convex, and in fact NP-hard to solve. This motivates adopting the closest convex approximation of ℓ0, namely the ℓ1-norm, which is known to approach (and under certain conditions on the regressors coincide with) the solution obtained with the ℓ0-norm [4]. Different from [1], [4], which rely on the ℓ1-norm of a sparse vector x, the novelty here is the adoption of the ℓ1-norm of the sparse outlier vectors for robust RLS processing. Adopting this regularization of J_k in (2) yields

  [x̂_k, ô_{1:k}] = arg min_{x_k, o_{1:k}} Σ_{i=1}^{k} β^{k−i} [ ‖y_i − H_i x_k − o_i‖²_{R_i^{−1}} + λ_i ‖o_i‖_1 ].    (4)

Besides being convex, (4) is nicely linked with Huber's M-estimates if the nominal noise v_i is white [8]. Indeed, with R_i = I for all i, solving (4) amounts to solving [8]

  x̂_k = arg min_{x} Σ_{i=1}^{k} Σ_{d=1}^{D_y} β^{k−i} ρ_i (y_{i,d} − h_{i,d}^T x)    (5)

where y_{i,d} (h_{i,d}^T) denotes the d-th entry (row) of y_i (respectively H_i), and ρ_i is Huber's function, given by

  ρ_i(x) := x²,                |x| ≤ λ_i/2
            λ_i |x| − λ_i²/4,  |x| > λ_i/2.

While solving (4) recovers an estimate of x_k, its practicality is progressively compromised with time because the number of variables to be estimated increases with k. In addition, (4) cannot be solved recursively, and at each time instant the entire optimization must be carried out anew. This shortcoming affects Huber M-estimates too, and similar limitations have been identified also in [5] and [16]. For this reason, the offline benchmark solver of (4) will only be used as a warm start to initialize the online methods developed in the next subsection.
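The link between (4) and Huber's function can be checked numerically in the scalar white-noise case: minimizing the per-datum cost over the outlier and substituting the minimizer back reproduces ρ. In this sketch (NumPy), the λ/2 threshold placement follows from the soft-thresholding solution and is stated here as an assumption:

```python
import numpy as np

def huber(r, lam):
    """Huber's function with transition at |r| = lam/2 (cf. (5))."""
    return np.where(np.abs(r) <= lam / 2, r**2, lam * np.abs(r) - lam**2 / 4)

def profiled_outlier_cost(r, lam):
    """min over o of (r - o)^2 + lam*|o|, attained by soft-thresholding o."""
    o = np.sign(r) * np.maximum(np.abs(r) - lam / 2, 0.0)
    return (r - o) ** 2 + lam * np.abs(o)
```

Evaluating both functions on a grid of residuals r shows they coincide, so jointly minimizing (4) over x and the outliers is equivalent to M-estimation with ρ.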
A. Online Robust RLS (OR-RLS)

Since it is impractical to re-estimate past and current outliers per time instant, the main idea in this section is to estimate only the most recent outliers. Consequently, the proposed OR-RLS proceeds in two steps per k. Given x̂_{k−1} from RLS, the current o_k is estimated in the first step via

  ô_k = arg min_{o_k} [ ‖y_k − H_k x̂_{k−1} − o_k‖²_{R_k^{−1}} + λ_k ‖o_k‖_1 ].    (6)

With ô_k available, ordinary RLS is applied to the outlier-free measurements y_k − ô_k [cf. (3)] in the second step, which incurs complexity comparable to ordinary RLS. Clearly, (6) solves a problem of fixed dimension D_y per time step; hence, it offers a practical alternative to the offline benchmark in (4). If R_k is diagonal, denoted as diag(r_{k,11}, r_{k,22}, ..., r_{k,D_yD_y}), then (6) decouples into scalar sub-problems across the entries of o_k as

  ô_{k,d} = arg min_{o_{k,d}} [ (y_{k,d} − h_{k,d}^T x̂_{k−1} − o_{k,d})² / r_{k,dd} + λ_k |o_{k,d}| ],  d = 1, ..., D_y.
The solution to these sub-problems is available in closed form from the so-termed least-absolute shrinkage and selection operator (Lasso); see, e.g., [9]. In the present context, this closed-form solution is given by the soft-thresholding operator as

  ô_{k,d} = max{ |y_{k,d} − h_{k,d}^T x̂_{k−1}| − λ_k r_{k,dd}/2, 0 } · sign(y_{k,d} − h_{k,d}^T x̂_{k−1}),  d = 1, ..., D_y.    (7)

Thus, OR-RLS with white nominal noise v_k amounts to simple non-iterative closed-form evaluations per time step. However, when v_k is colored, (6) can no longer be solved in closed form. If R_k is non-diagonal, general-purpose convex solvers such as SeDuMi can be utilized to solve (6) with complexity O(D_y^{3.5}) [14]. In the next subsections, two methods based on coordinate descent (CD) and the alternating direction method of multipliers (AD-MoM) are developed to solve (6) iteratively. While both are asymptotically convergent, it suffices to run only a few iterations in practice, which reduces the complexity relative to general solvers and parallels the low complexity of OR-RLS when v_k is white.

B. CD-based OR-RLS

To iteratively solve (6) with respect to o_k, consider initializing with o_k^{(0)} = 0. Then, for j = 1, ..., J, the vector is updated one entry at a time using CD. Specifically, the d-th entry at iteration j is updated as

  o_{k,d}^{(j)} := arg min_{o_{k,d}} ‖ y_k − H_k x̂_{k−1} − [ (o_{k,1:d−1}^{(j)})^T, o_{k,d}, (o_{k,d+1:D_y}^{(j−1)})^T ]^T ‖²_{R_k^{−1}} + λ_k |o_{k,d}|.    (8)

Problem (8) can alternatively be written as a scalar Lasso problem, that is,

  o_{k,d}^{(j)} := arg min_{o_{k,d}} ( o_{k,d} − γ_{k,d}^{(j)} )² + λ̄_{k,d} |o_{k,d}|    (9)

where

  γ_{k,d}^{(j)} := ( [α_k]_d − Σ_{i=1}^{d−1} r̄_{k,di} o_{k,i}^{(j)} − Σ_{i=d+1}^{D_y} r̄_{k,di} o_{k,i}^{(j−1)} ) / r̄_{k,dd}
  α_k := R_k^{−1} (y_k − H_k x̂_{k−1}),  λ̄_{k,d} := λ_k / r̄_{k,dd}

with r̄_{k,di} := [R_k^{−1}]_{d,i} denoting the (d, i) entry of R_k^{−1}. The solution to (9) is given by [cf. (7)]

  o_{k,d}^{(j)} = max{ |γ_{k,d}^{(j)}| − λ̄_{k,d}/2, 0 } · sign( γ_{k,d}^{(j)} ).

After J iterations, one sets ô_k = o_k^{(J)}. Global convergence of the iterates o_k^{(J)} to the solution of (6) is guaranteed as J → ∞ from the results in [15]. In practice, however, a fixed small J suffices.
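The CD sweeps in (8)-(9) can be sketched as follows (NumPy; names are illustrative). Each pass soft-thresholds one coordinate against the residual weighted by R_k^{−1}:

```python
import numpy as np

def soft(z, t):
    """Soft-thresholding operator: sign(z) * max(|z| - t, 0)."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def cd_outlier_estimate(z, R, lam, n_iter=50):
    """Cyclic CD for min_o (z - o)' R^{-1} (z - o) + lam * ||o||_1.

    z = y_k - H_k x_hat_{k-1}; a few sweeps usually suffice in practice.
    """
    A = np.linalg.inv(R)              # R^{-1}, computed once per time step
    o = np.zeros_like(z)
    b = A @ z                         # alpha_k in the text
    for _ in range(n_iter):
        for d in range(len(z)):
            # gamma numerator: [alpha]_d minus cross terms, excluding entry d
            c = b[d] - A[d] @ o + A[d, d] * o[d]
            o[d] = soft(c / A[d, d], lam / (2 * A[d, d]))
    return o
```

For diagonal R_k the first sweep already reproduces the closed form (7); for non-diagonal R_k the sweeps monotonically decrease the cost.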
C. AD-MoM-based OR-RLS

To decouple the non-differentiable term of the cost in (6) from the differentiable one, consider introducing the auxiliary variable c_k, and re-writing (6) in the constrained form

  [ô_k, ĉ_k] = arg min_{o_k, c_k} [ ‖y_k − H_k x̂_{k−1} − o_k‖²_{R_k^{−1}} + λ_k ‖c_k‖_1 ]  subject to  o_k = c_k.

The augmented Lagrangian thus becomes

  L(o_k, c_k, μ) = ‖y_k − H_k x̂_{k−1} − o_k‖²_{R_k^{−1}} + λ_k ‖c_k‖_1 + μ^T (o_k − c_k) + (κ/2) ‖o_k − c_k‖²

where μ is the Lagrange multiplier, and κ is a positive constant controlling the effect of the quadratic term introduced to ensure strict convexity of the cost. After initializing with c_k^{(0)} = μ^{(0)} = 0, the AD-MoM recursions per iteration j (see, e.g., [2]) in the present context amount to

  o_k^{(j)} = ( 2 R_k^{−1} + κ I )^{−1} ( 2 R_k^{−1} (y_k − H_k x̂_{k−1}) + κ c_k^{(j−1)} − μ^{(j−1)} )    (10a)
  c_{k,d}^{(j)} = max{ |o_{k,d}^{(j)} + μ_d^{(j−1)}/κ| − λ_k/κ, 0 } · sign( o_{k,d}^{(j)} + μ_d^{(j−1)}/κ ),  d = 1, ..., D_y    (10b)
  μ^{(j)} = μ^{(j−1)} + κ ( o_k^{(j)} − c_k^{(j)} ).    (10c)

After J iterations, set ô_k = o_k^{(J)}. Global convergence of the iterates o_k^{(J)} to the solution of (6) is guaranteed as J → ∞ from the results in [2]. As with CD, a fixed small J suffices in practice. Note also that the matrix inverse in (10a) needs to be computed only once, and can be reused throughout the iterations.

D. Parameter Selection

Successful performance of OR-RLS hinges on proper selection of the sparsity-enforcing coefficients λ_k in (8) and (10). Too large a λ_k brings OR-RLS back to the ordinary RLS, while too small a λ_k creates many spurious outliers and degrades OR-RLS performance. Suppose that λ_k is constant with respect to k; that is, let λ_k = λ_y for all k. Judicious tuning of λ_y has been investigated for robust batch linear regression estimators, and two criteria to optimally select λ_y can be found in [6] and [8]. However, neither of these criteria can be run online. As a remedy, it is recommended to select λ_y during the initialization (batch) phase of OR-RLS, and retain its value for all future time instants. OR-RLS initialization is carried out by solving the offline benchmark for k = 1 up to k = k_0 using SeDuMi; that is,

  [x̂_{k_0}, ô_{1:k_0}] = arg min_{x_{k_0}, o_{1:k_0}} Σ_{i=1}^{k_0} β^{k_0−i} [ ‖y_i − H_i x_{k_0} − o_i‖²_{R_i^{−1}} + λ_y ‖o_i‖_1 ].    (11)
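The AD-MoM recursions (10a)-(10c) can be sketched in NumPy as below, with the quadratic term of the weighted norm written out explicitly (hence the factor of 2); the sign conventions follow my derivation from the augmented Lagrangian, so treat this as an illustrative sketch rather than the paper's exact code:

```python
import numpy as np

def admm_outlier_estimate(z, R, lam, kappa=10.0, n_iter=500):
    """AD-MoM for min_o (z - o)' R^{-1} (z - o) + lam * ||o||_1 (cf. (10)).

    z = y_k - H_k x_hat_{k-1}; kappa > 0 is the augmented-Lagrangian constant.
    """
    A = np.linalg.inv(R)                               # R^{-1}, computed once
    M = np.linalg.inv(2 * A + kappa * np.eye(len(z)))  # reused across iterations
    c = np.zeros_like(z)
    mu = np.zeros_like(z)
    for _ in range(n_iter):
        o = M @ (2 * A @ z + kappa * c - mu)           # (10a): minimize over o
        u = o + mu / kappa
        c = np.sign(u) * np.maximum(np.abs(u) - lam / kappa, 0.0)  # (10b)
        mu = mu + kappa * (o - c)                      # (10c): dual update
    return o
```

On a diagonal R_k the iterates converge to the closed-form soft-threshold solution (7), which provides a convenient sanity check.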
Table I. OR-RLS Algorithm

  Initialization: Collect measurements y_k for k = 1, ..., k_0. Solve (11) to obtain x̂_{k_0}, using either of the two methods in [6] to determine the best λ = λ_y.
  Repeat for k ≥ k_0 + 1:
    Solve (6) to obtain ô_k, using the CD iterations in (9) or the AD-MoM iterations in (10).
    Run RLS in (3) on the outlier-compensated measurements y_k − ô_k to obtain x̂_k.
  End for

At time k_0, either one of the two criteria in [6] is applied to (11) to find the best λ_y. Using this λ_y, (11) is solved to obtain the initial x̂_{k_0}. Then, one proceeds to time k_0 + 1 and runs the OR-RLS algorithm as detailed in the previous subsections. Table I summarizes the OR-RLS algorithm.

Remark 1. (Complexity of OR-RLS) This remark quantifies the complexity order of OR-RLS per time index k. Starting from (3), the second sub-equation requires inversion of a D_y × D_y matrix, which incurs complexity O(D_y³) once per time index k, while the third sub-equation involves matrix multiplications at complexity O(D_x² D_y). Hence, the complexity of the non-robust RLS for the vector-matrix model (1) is O(D_y³ + D_x² D_y). For OR-RLS, one also needs to solve (6) per time step. Utilizing SeDuMi to solve (6) incurs complexity O(D_y^{3.5} + D_x D_y), where the second term corresponds to evaluating H_k x̂_{k−1}. Consider now the CD or AD-MoM iterations. Per time index k, the CD iterations must compute α_k, which incurs complexity O(D_y³) for the matrix inversion and O(D_x D_y) for the matrix-vector multiplication. In each of the J CD iterations, one needs to evaluate γ_{k,d}^{(j)} D_y times, which incurs overall complexity O(J D_y²). Thus, the overall complexity of optimizing (6) via CD iterations is O(D_y³ + D_x D_y + J D_y²). In as much as J is selected on the order of O(D_y) or smaller, the CD iterations do not increase the complexity order of the non-robust RLS in (3). Furthermore, their complexity order is smaller than that of SeDuMi. With regard to AD-MoM, two D_y × D_y matrices must be inverted once per time step at complexity O(D_y³).
The matrix-vector multiplication H_k x̂_{k−1} is also performed once per time index at complexity O(D_x D_y). The matrix-vector multiplications in (10a) must be carried out J times, which incurs complexity O(J D_y²). Thus, the overall complexity of optimizing (6) via AD-MoM iterations is O(D_y³ + D_x D_y + J D_y²). Again, if J is on the order of O(D_y) or smaller, the AD-MoM iterations do not incur higher complexity than the non-robust RLS in (3).

Remark 2. (Complexity of alternative robust methods) For comparison purposes with OR-RLS, this remark discusses the complexity order of the schemes in [3], [5], [13], and [16]. Focusing for simplicity on [13], which entails non-robust RLS recursions, consider correlated measurements arriving in vectors of dimension D_y. To de-correlate, pre-whitening with the square-root matrix incurs complexity O(D_y³). For each new datum extracted after pre-whitening, the vector in [13, Eq. (5)] involves vector-matrix products at complexity O(D_x²). With D_y such data per time step, the overall complexity grows to O(D_x² D_y). Note that the complexity of the non-robust RLS in (3) is also O(D_y³ + D_x² D_y). It was further argued in the previous remark that OR-RLS does not increase this complexity order. Since the robust schemes in [3], [5], [13], and [16] incur complexity at least as high as that of the non-robust RLS, it follows that they will incur at best the same order of complexity as the proposed OR-RLS.

IV. CONVERGENCE ANALYSIS

For the stationary setup, almost sure convergence of OR-RLS to the true x_o is established in this section under ergodicity conditions on the regressors, nominal noise, and outlier vectors. The following assumptions are needed to this end:

(as1) The true parameter vector, denoted as x_o, is time invariant.

(as2) Three ergodicity conditions hold with probability one (w.p.1):

  lim_{k→∞} (1/k) Σ_{i=1}^{k} H_i^T R_i^{−1} H_i = R_H > 0    (12a)
  lim_{k→∞} (1/k) Σ_{i=1}^{k} H_i^T R_i^{−1} v_i = r_{Hv}    (12b)
  lim_{k→∞} (1/k) Σ_{i=1}^{k} H_i^T R_i^{−1} o_i = r_{Ho}.    (12c)

If the random vectors v_i, o_i, and h_i are mixing, which is the case for most stationary processes with vanishing memory in practice, then (12) is satisfied. Since H_i is uncorrelated with v_i, which is zero-mean, it follows readily that r_{Hv} = 0.

(as3) Regressors {h_i} are zero-mean and uncorrelated with the outliers {o_i}, which implies that r_{Ho} = 0.

Since the true x_o is constant, the forgetting factor is set to β = 1. The OR-RLS algorithm is also slightly modified to facilitate the convergence analysis. Specifically, for k exceeding a finite value k̄, the estimated outlier vector ô_k is deterministically set to zero. This can be achieved by selecting (cf. [6])

  λ_k = λ_y,  k = k_0 + 1, ..., k̄
  λ_k = max( λ_{k−1}, 2 ‖R_k^{−1} (y_k − H_k x̂_{k−1})‖_∞ ),  k > k̄.    (13)

Note that the OR-RLS estimate is the minimizer of J_k(x_k; ô_{1:k}) w.r.t. x_k, with J_k(·) defined in (2) and ô_{1:k} obtained from (6) with λ_k as in (13). It then follows from (as1)-(as3) that

  lim_{k→∞} (1/k) J_k(x; ô_{1:k}) = x^T R_H x − 2 x^T ( R_H x_o + r_{Hv} + r_{Ho} ) + c + lim_{k→∞} (2/k) Σ_{i=1}^{k} ô_i^T R_i^{−1} H_i x  (w.p.1)    (14a)
  = x^T R_H x − 2 x^T R_H x_o + c  (w.p.1)    (14b)

where c collects the terms that do not depend on x.
Fig. 1. Learning curves of OR-RLS (SeDuMi) and its CD-based implementation with 10, 20, 50, and 100 CD iterations.

Fig. 2. Learning curves of OR-RLS (SeDuMi) and its AD-MoM-based implementation with 10, 20, 50, and 100 AD-MoM iterations.

The last term in (14a) becomes zero because ô_i = 0 for i > k̄, and hence the summation assumes a finite value. Setting the gradient of (14b) equal to zero yields x = x_o, which means that the true x_o is recovered asymptotically. These arguments establish the following result.

Proposition 1. Under (as1)-(as3), the modified OR-RLS iterates converge almost surely to the true x_o.

The key observation in the convergence analysis is that even ordinary RLS asymptotically converges to the true x_o. Therefore, one can ensure convergence by making OR-RLS behave like ordinary RLS asymptotically. The intuition behind the asymptotic convergence of ordinary RLS lies with the observation that outliers can be seen as additional nominal noise contaminating the measurements. Therefore, outlier-contaminated measurements can be seen as non-contaminated ones, but with a larger nominal noise variance. Furthermore, (as3) ensures that no bias appears asymptotically. Hence, when infinitely many measurements are collected, the true x_o is recovered. Both RLS and OR-RLS converge almost surely to the true x_o. However, the finite-sample performance of the two algorithms is drastically different. Unlike RLS, OR-RLS maintains satisfactory performance even for a small finite k. Simulations will demonstrate the superior performance of OR-RLS compared to RLS and two robust alternatives.

V. SIMULATIONS

A setup with D_x = 10 and D_y = 10 is tested. In the stationary case, x_k = x_o is selected as the all-one vector, with zero-mean, unit-variance, Gaussian noise added.
The entries of the regressor matrix H_k are chosen as zero-mean, white, Gaussian random variables with variance σ_h² scaled to achieve SNR = D_x σ_h² = 1 in linear scale. Per time step k, the ambient zero-mean, colored, Gaussian noise v_k is simulated as the output of a first-order auto-regressive (AR) filter with pole at 0.95 driven by a unit-variance white Gaussian input. The vectors v_k are uncorrelated across k. The measurements are also contaminated by impulsive noise, which generates outliers. When an outlier occurs, which has a 10% chance of happening, the corresponding observation is drawn from a uniform distribution, independent of all other variables, with variance equal to 10,000. The way outliers are generated differs from the model assumed in (1). This serves to signify the universality of the proposed algorithm, as it demonstrates its operability even when the outliers are generated differently than modeled. The root mean-square error RMSE_k := ‖x̂_k − x_k‖ is used as the performance metric. Furthermore, k_0 = 10 and k ranges from 1 to 1,000. It is assumed that the percentage of outliers is known to all estimators. OR-RLS utilizes this knowledge by using the corresponding criterion from [6] to tune the parameter λ_y.

A. Stationary Scenario

Fig. 1 depicts the RMSE versus time for a sample realization. RLS is initialized with the batch LS solution. It can be seen that RLS performs poorly, while OR-RLS performs satisfactorily. Initialization with the batch offline estimator in (4) clearly improves the accuracy of OR-RLS. Different renditions of OR-RLS, which result from the way (6) is solved, are also compared. The best performance is obtained when (6) is solved exactly using SeDuMi. To assess the performance of the low-complexity CD-based solver, the RMSE is depicted for 10, 20, 50, and 100 iterations.

Table II. OR-RLS run-time

  Algorithm / Sub-algorithm          | Run-time in seconds
  RLS                                | 1.0549
  OR-RLS w/ SeDuMi                   | …
  OR-RLS w/ AD-MoM, 10/20/50/100 it  | …
  Offline initialization             | …
  OR-RLS w/ CD, 10/20/50/100 it      | …
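The simulated data can be generated per the description above. In the following sketch (NumPy; names are illustrative), the AR(1) coloring is applied across the entries of v_k, and the uniform outlier support is chosen so the variance equals 10,000; the support half-width sqrt(3 · 10,000) is my inference from the stated variance:

```python
import numpy as np

def make_datum(x, D_y, rng, p_out=0.1, pole=0.95, out_var=10_000.0):
    """One datum y = H x + v + o per the simulation setup (illustrative names)."""
    H = rng.standard_normal((D_y, len(x)))     # white Gaussian regressors
    v = np.empty(D_y)                          # AR(1)-colored noise, pole 0.95
    v[0] = rng.standard_normal()
    for d in range(1, D_y):
        v[d] = pole * v[d - 1] + rng.standard_normal()
    half_width = np.sqrt(3.0 * out_var)        # uniform(-a, a) has variance a^2/3
    o = np.where(rng.random(D_y) < p_out,      # 10% outlier probability per entry
                 rng.uniform(-half_width, half_width, D_y), 0.0)
    return H, H @ x + v + o, o
```

Averaging over many draws, about 10% of the entries are outliers with empirical variance near 10,000, matching the setup.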
With 100 CD iterations, the performance of OR-RLS comes very close to the SeDuMi-based one. Likewise, with 100 AD-MoM iterations, the estimation performance comes close to that of SeDuMi, as can be verified from Fig. 2. AD-MoM's regularization scale in the augmented
Lagrangian is selected as κ = 10, which being positive ensures convergence due to the convexity of (6). (The convergence rate depends on the value of κ; κ = 10 was observed to maintain a satisfactory rate.)

Fig. 3. Comparison between R1-RLS [13] and OR-RLS in a stationary setup, with Huber and all-zero initializations of R1-RLS.

Fig. 4. Comparison between R2-RLS [3] and OR-RLS in a stationary setup, with Huber and all-zero initializations of R2-RLS.

To compare the complexities of the various OR-RLS renditions, Table II lists the run-time of each algorithm depicted in Figs. 1 and 2. Caution should be exercised, though, since run-time comparisons depend on how efficiently each algorithm is programmed in MATLAB. Nonetheless, the table provides an idea of each algorithm's relative complexity. Clearly, the AD-MoM- and CD-based renditions incur lower complexity than the SeDuMi-based OR-RLS solver, even with 100 AD-MoM or CD iterations. Given the comparable performance of the AD-MoM- and CD-based solvers to that of SeDuMi, their use is well motivated. Note also that the bulk of the complexity for the AD-MoM- and CD-based OR-RLS solvers comes from the offline initialization stage, which entails multiple SeDuMi runs; one SeDuMi run is needed for each candidate value of the sparsity-promoting coefficient λ. Focusing on OR-RLS with 100 AD-MoM iterations, for example, the run-time excluding the offline initialization takes only a few seconds, which comes close to that of the non-robust RLS, which is about one second.

OR-RLS with 100 CD iterations is compared against the robust algorithm of [13], dubbed R1-RLS, and that of [3], dubbed R2-RLS, in Figs. 3 and 4, respectively. Both of these approaches were proposed for white ambient noise, and do not take noise correlations into account. While one could pre-whiten the data and then apply R1-RLS or R2-RLS, this spreads the outliers to all non-contaminated measurements, and the resulting performance is as bad as that of RLS.
Simulations suggested that the preferred option is to ignore the noise correlations, which prevents the outliers from spreading; this is the option considered in the simulations. To initialize R1-RLS and R2-RLS from k = 1 up to k = k_0 = 10, two different approaches are tested: i) solving the batch Huber cost via iteratively re-weighted LS (IRLS); and ii) starting from the all-zero initialization, and then running R1-RLS or R2-RLS. The performance of R1-RLS is depicted in Fig. 3. While better than RLS, R1-RLS is outperformed by OR-RLS because R1-RLS does not account for the noise correlations. The two initializations perform similarly. Fig. 4 depicts the performance of R2-RLS versus OR-RLS. Huber initialization provides a considerable improvement in this case, but OR-RLS still exhibits a noticeable improvement over R2-RLS.

Fig. 5. Comparison between R1-RLS [13] and OR-RLS in a non-stationary setting.

B. Non-Stationary Scenario

For the non-stationary case, all parameters, including x_1, are selected as in the stationary case. To obtain x_{k+1}, each entry of x_k is passed through a first-order AR filter with Gaussian noise input. Specifically, letting x_{k,d} denote the d-th entry of x_k, the corresponding entry of x_{k+1} is generated according to x_{k+1,d} = 0.95 x_{k,d} + e_{k,d}, where e_{k,d} is zero-mean, unit-variance, Gaussian noise assumed independent of all other noise terms. This procedure is repeated for k > 1 to obtain a slowly varying x_k. Figs. 5 and 6 depict the RMSE of OR-RLS with β = 0.95 versus those of RLS, R1-RLS of [13], and R2-RLS of [3]. It can be seen that OR-RLS markedly outperforms all three alternatives.
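The parameter drift above can be sketched directly from the recursion x_{k+1,d} = 0.95 x_{k,d} + e_{k,d}; in steady state each entry has variance 1/(1 − 0.95²) ≈ 10.26, which quantifies how slow the variation is relative to the unit-variance innovations:

```python
import numpy as np

def drift_step(x, rng, a=0.95):
    """One AR(1) drift step x_{k+1} = a * x_k + e_k, unit-variance Gaussian e_k."""
    return a * x + rng.standard_normal(x.shape)
```

Running the recursion long enough and measuring the empirical variance of one entry recovers the steady-state value, confirming the slow-drift interpretation.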
Fig. 6. Comparison between R2-RLS [3] and OR-RLS in a non-stationary setting.

VI. CONCLUSIONS AND CURRENT RESEARCH

A robust RLS algorithm was developed that is capable of simultaneously handling measurement outliers and accounting for possibly correlated nominal noise. To achieve these desirable properties, outliers were treated as nuisance variables and were estimated jointly with the parameter vector of interest. Identifiability was ensured by regularizing the LS cost with the ℓ1-norm of the outlier variables. The resulting optimization problem, dubbed the offline benchmark, was too complex to be solved per time step. For a low-complexity implementation, a sub-optimal online algorithm was developed which involves only simple closed-form updates per time step. Initialization with the offline benchmark followed by online updates was advocated as the OR-RLS algorithm of choice in practice. Numerical tests demonstrated the improved performance of OR-RLS compared to RLS and its state-of-the-art robust renditions. Current research focuses on robust online estimation of dynamical processes adhering to a Gauss-Markov model, which, in addition to the slow parameter changes that can be followed by OR-RLS, enables tracking of fast-varying parameters too. Preliminary results in this direction can be found in [6].

Acknowledgement. The authors wish to thank Dr. Daniele Angelosante of ABB, Switzerland, for helpful discussions and suggestions during the early stages of this work.

REFERENCES

[1] D. Angelosante, J.-A. Bazerque, and G. B. Giannakis, "Online adaptive estimation of sparse signals: Where RLS meets the ℓ1-norm," IEEE Trans. on Signal Processing, vol. 58, no. 7, July 2010.
[2] D. P. Bertsekas and J. N. Tsitsiklis, Parallel and Distributed Computation: Numerical Methods, 2nd ed., Athena Scientific, 1999.
[3] M. Z. A. Bhotto and A. Antoniou, "Robust recursive least-squares adaptive-filtering algorithms for impulsive-noise environments," IEEE Signal Processing Letters, vol. 18, no. 3, March 2011.
[4] E. J. Candes and T. Tao, "Decoding by linear programming," IEEE Trans. on Information Theory, vol. 51, no. 12, pp. 4203-4215, Dec. 2005.
[5] S. C. Chan and Y. X. Zou, "A recursive least M-estimate algorithm for robust adaptive filtering in impulsive noise: Fast algorithm and convergence performance analysis," IEEE Trans. on Signal Processing, vol. 52, no. 4, Apr. 2004.
[6] S. Farahmand, D. Angelosante, and G. B. Giannakis, "Doubly robust Kalman smoothing by exploiting outlier sparsity," Proc. of Asilomar Conf. on Signals, Syst., and Comp., Pacific Grove, CA, Nov. 2010.
[7] J.-J. Fuchs, "An inverse problem approach to robust regression," Proc. of Int. Conf. on Acoust., Speech, and Signal Processing, Phoenix, AZ, 1999.
[8] G. B. Giannakis, G. Mateos, S. Farahmand, V. Kekatos, and H. Zhu, "USPACOR: Universal sparsity-controlling outlier rejection," Proc. of Int. Conf. on Acoust., Speech, and Signal Processing, Prague, Czech Republic, May 2011.
[9] T. Hastie, R. Tibshirani, and J. Friedman, The Elements of Statistical Learning, 2nd ed., New York, NY: Springer, 2009.
[10] P. J. Huber and E. M. Ronchetti, Robust Statistics, 2nd ed., Hoboken, NJ: Wiley, 2009.
[11] S. M. Kay, Fundamentals of Statistical Signal Processing: Estimation Theory, Prentice Hall, 1993.
[12] P. Petrus, "Robust Huber adaptive filter," IEEE Trans. on Signal Processing, vol. 47, no. 4, April 1999.
[13] L. Rey Vega, H. Rey, J. Benesty, and S. Tressens, "A fast robust recursive least-squares algorithm," IEEE Trans. on Signal Processing, vol. 57, no. 3, March 2009.
[14] SeDuMi. [Online].
[15] P. Tseng, "Convergence of a block coordinate descent method for nondifferentiable minimization," Journal of Optimization Theory and Applications, vol. 109, no. 3, June 2001.
[16] Y. Zou, S. C. Chan, and T. S. Ng, "A recursive least M-estimate (RLM) adaptive filter for robust filtering in impulsive noise," IEEE Signal Processing Letters, vol. 7, no. 11, Nov. 2000.
Asymptotically Optimal and Bandwith-efficient Decentralized Detection Yasin Yılmaz and Xiaodong Wang Electrical Engineering Department, Columbia University New Yor, NY 10027 Email: yasin,wangx@ee.columbia.edu
More informationChapter 2 Fundamentals of Adaptive Filter Theory
Chapter 2 Fundamentals of Adaptive Filter Theory In this chapter we will treat some fundamentals of the adaptive filtering theory highlighting the system identification problem We will introduce a signal
More informationSet-Membership Constrained Particle Filter: Distributed Adaptation for Sensor Networks
IEEE TRANSACTIONS ON SIGNAL PROCESSING 1 Set-Membership Constrained Particle Filter: Distributed Adaptation for Sensor Networs Shahroh Farahmand, Stergios I. Roumeliotis, Member, IEEE, and Georgios B.
More informationCompressed Sensing and Sparse Recovery
ELE 538B: Sparsity, Structure and Inference Compressed Sensing and Sparse Recovery Yuxin Chen Princeton University, Spring 217 Outline Restricted isometry property (RIP) A RIPless theory Compressed sensing
More informationExpressions for the covariance matrix of covariance data
Expressions for the covariance matrix of covariance data Torsten Söderström Division of Systems and Control, Department of Information Technology, Uppsala University, P O Box 337, SE-7505 Uppsala, Sweden
More informationRiccati difference equations to non linear extended Kalman filter constraints
International Journal of Scientific & Engineering Research Volume 3, Issue 12, December-2012 1 Riccati difference equations to non linear extended Kalman filter constraints Abstract Elizabeth.S 1 & Jothilakshmi.R
More information1 Computing with constraints
Notes for 2017-04-26 1 Computing with constraints Recall that our basic problem is minimize φ(x) s.t. x Ω where the feasible set Ω is defined by equality and inequality conditions Ω = {x R n : c i (x)
More informationCity, University of London Institutional Repository
City Research Online City, University of London Institutional Repository Citation: Zhao, S., Shmaliy, Y. S., Khan, S. & Liu, F. (2015. Improving state estimates over finite data using optimal FIR filtering
More informationFERMENTATION BATCH PROCESS MONITORING BY STEP-BY-STEP ADAPTIVE MPCA. Ning He, Lei Xie, Shu-qing Wang, Jian-ming Zhang
FERMENTATION BATCH PROCESS MONITORING BY STEP-BY-STEP ADAPTIVE MPCA Ning He Lei Xie Shu-qing Wang ian-ming Zhang National ey Laboratory of Industrial Control Technology Zhejiang University Hangzhou 3007
More informationSPARSE signal representations have gained popularity in recent
6958 IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 57, NO. 10, OCTOBER 2011 Blind Compressed Sensing Sivan Gleichman and Yonina C. Eldar, Senior Member, IEEE Abstract The fundamental principle underlying
More informationStochastic Optimization with Inequality Constraints Using Simultaneous Perturbations and Penalty Functions
International Journal of Control Vol. 00, No. 00, January 2007, 1 10 Stochastic Optimization with Inequality Constraints Using Simultaneous Perturbations and Penalty Functions I-JENG WANG and JAMES C.
More informationQ-Learning and Stochastic Approximation
MS&E338 Reinforcement Learning Lecture 4-04.11.018 Q-Learning and Stochastic Approximation Lecturer: Ben Van Roy Scribe: Christopher Lazarus Javier Sagastuy In this lecture we study the convergence of
More informationMULTICHANNEL SIGNAL PROCESSING USING SPATIAL RANK COVARIANCE MATRICES
MULTICHANNEL SIGNAL PROCESSING USING SPATIAL RANK COVARIANCE MATRICES S. Visuri 1 H. Oja V. Koivunen 1 1 Signal Processing Lab. Dept. of Statistics Tampere Univ. of Technology University of Jyväskylä P.O.
More informationOnline Adaptive Estimation of Sparse Signals: where RLS meets the l 1 -norm
IEEE TRASACTIOS O SIGAL PROCESSIG (TO APPEAR) Online Adaptive Estimation of Sparse Signals: where meets the l -norm Daniele Angelosante, Member, IEEE, Juan-Andres Bazerque, Student Member, IEEE Georgios
More information1 Introduction 198; Dugard et al, 198; Dugard et al, 198) A delay matrix in such a lower triangular form is called an interactor matrix, and almost co
Multivariable Receding-Horizon Predictive Control for Adaptive Applications Tae-Woong Yoon and C M Chow y Department of Electrical Engineering, Korea University 1, -a, Anam-dong, Sungbu-u, Seoul 1-1, Korea
More informationNewton-like method with diagonal correction for distributed optimization
Newton-lie method with diagonal correction for distributed optimization Dragana Bajović Dušan Jaovetić Nataša Krejić Nataša Krlec Jerinić February 7, 2017 Abstract We consider distributed optimization
More informationRecovery of Low-Rank Plus Compressed Sparse Matrices with Application to Unveiling Traffic Anomalies
July 12, 212 Recovery of Low-Rank Plus Compressed Sparse Matrices with Application to Unveiling Traffic Anomalies Morteza Mardani Dept. of ECE, University of Minnesota, Minneapolis, MN 55455 Acknowledgments:
More informationStatistical and Adaptive Signal Processing
r Statistical and Adaptive Signal Processing Spectral Estimation, Signal Modeling, Adaptive Filtering and Array Processing Dimitris G. Manolakis Massachusetts Institute of Technology Lincoln Laboratory
More informationAuxiliary signal design for failure detection in uncertain systems
Auxiliary signal design for failure detection in uncertain systems R. Nikoukhah, S. L. Campbell and F. Delebecque Abstract An auxiliary signal is an input signal that enhances the identifiability of a
More informationApproximating the Covariance Matrix with Low-rank Perturbations
Approximating the Covariance Matrix with Low-rank Perturbations Malik Magdon-Ismail and Jonathan T. Purnell Department of Computer Science Rensselaer Polytechnic Institute Troy, NY 12180 {magdon,purnej}@cs.rpi.edu
More informationA STATE-SPACE APPROACH FOR THE ANALYSIS OF WAVE AND DIFFUSION FIELDS
ICASSP 2015 A STATE-SPACE APPROACH FOR THE ANALYSIS OF WAVE AND DIFFUSION FIELDS Stefano Maranò Donat Fäh Hans-Andrea Loeliger ETH Zurich, Swiss Seismological Service, 8092 Zürich ETH Zurich, Dept. Information
More informationSparse Gaussian conditional random fields
Sparse Gaussian conditional random fields Matt Wytock, J. ico Kolter School of Computer Science Carnegie Mellon University Pittsburgh, PA 53 {mwytock, zkolter}@cs.cmu.edu Abstract We propose sparse Gaussian
More informationPrioritized Sweeping Converges to the Optimal Value Function
Technical Report DCS-TR-631 Prioritized Sweeping Converges to the Optimal Value Function Lihong Li and Michael L. Littman {lihong,mlittman}@cs.rutgers.edu RL 3 Laboratory Department of Computer Science
More informationAdaptive beamforming for uniform linear arrays with unknown mutual coupling. IEEE Antennas and Wireless Propagation Letters.
Title Adaptive beamforming for uniform linear arrays with unknown mutual coupling Author(s) Liao, B; Chan, SC Citation IEEE Antennas And Wireless Propagation Letters, 2012, v. 11, p. 464-467 Issued Date
More informationStability and the elastic net
Stability and the elastic net Patrick Breheny March 28 Patrick Breheny High-Dimensional Data Analysis (BIOS 7600) 1/32 Introduction Elastic Net Our last several lectures have concentrated on methods for
More informationChapter 2 Wiener Filtering
Chapter 2 Wiener Filtering Abstract Before moving to the actual adaptive filtering problem, we need to solve the optimum linear filtering problem (particularly, in the mean-square-error sense). We start
More informationModel-based Correlation Measure for Gain and Offset Nonuniformity in Infrared Focal-Plane-Array Sensors
Model-based Correlation Measure for Gain and Offset Nonuniformity in Infrared Focal-Plane-Array Sensors César San Martin Sergio Torres Abstract In this paper, a model-based correlation measure between
More informationEE 367 / CS 448I Computational Imaging and Display Notes: Image Deconvolution (lecture 6)
EE 367 / CS 448I Computational Imaging and Display Notes: Image Deconvolution (lecture 6) Gordon Wetzstein gordon.wetzstein@stanford.edu This document serves as a supplement to the material discussed in
More informationNUMERICAL COMPUTATION OF THE CAPACITY OF CONTINUOUS MEMORYLESS CHANNELS
NUMERICAL COMPUTATION OF THE CAPACITY OF CONTINUOUS MEMORYLESS CHANNELS Justin Dauwels Dept. of Information Technology and Electrical Engineering ETH, CH-8092 Zürich, Switzerland dauwels@isi.ee.ethz.ch
More informationLecture 3. Linear Regression II Bastian Leibe RWTH Aachen
Advanced Machine Learning Lecture 3 Linear Regression II 02.11.2015 Bastian Leibe RWTH Aachen http://www.vision.rwth-aachen.de/ leibe@vision.rwth-aachen.de This Lecture: Advanced Machine Learning Regression
More informationNON-LINEAR CONTROL OF OUTPUT PROBABILITY DENSITY FUNCTION FOR LINEAR ARMAX SYSTEMS
Control 4, University of Bath, UK, September 4 ID-83 NON-LINEAR CONTROL OF OUTPUT PROBABILITY DENSITY FUNCTION FOR LINEAR ARMAX SYSTEMS H. Yue, H. Wang Control Systems Centre, University of Manchester
More informationNear Optimal Adaptive Robust Beamforming
Near Optimal Adaptive Robust Beamforming The performance degradation in traditional adaptive beamformers can be attributed to the imprecise knowledge of the array steering vector and inaccurate estimation
More informationADAPTIVE FILTER THEORY
ADAPTIVE FILTER THEORY Fourth Edition Simon Haykin Communications Research Laboratory McMaster University Hamilton, Ontario, Canada Front ice Hall PRENTICE HALL Upper Saddle River, New Jersey 07458 Preface
More informationDesign of Projection Matrix for Compressive Sensing by Nonsmooth Optimization
Design of Proection Matrix for Compressive Sensing by Nonsmooth Optimization W.-S. Lu T. Hinamoto Dept. of Electrical & Computer Engineering Graduate School of Engineering University of Victoria Hiroshima
More informationLocal Strong Convexity of Maximum-Likelihood TDOA-Based Source Localization and Its Algorithmic Implications
Local Strong Convexity of Maximum-Likelihood TDOA-Based Source Localization and Its Algorithmic Implications Huikang Liu, Yuen-Man Pun, and Anthony Man-Cho So Dept of Syst Eng & Eng Manag, The Chinese
More informationANYTIME OPTIMAL DISTRIBUTED KALMAN FILTERING AND SMOOTHING. University of Minnesota, 200 Union Str. SE, Minneapolis, MN 55455, USA
ANYTIME OPTIMAL DISTRIBUTED KALMAN FILTERING AND SMOOTHING Ioannis D. Schizas, Georgios B. Giannakis, Stergios I. Roumeliotis and Alejandro Ribeiro University of Minnesota, 2 Union Str. SE, Minneapolis,
More informationAdaptive MMSE Equalizer with Optimum Tap-length and Decision Delay
Adaptive MMSE Equalizer with Optimum Tap-length and Decision Delay Yu Gong, Xia Hong and Khalid F. Abu-Salim School of Systems Engineering The University of Reading, Reading RG6 6AY, UK E-mail: {y.gong,x.hong,k.f.abusalem}@reading.ac.uk
More informationSoft-Output Trellis Waveform Coding
Soft-Output Trellis Waveform Coding Tariq Haddad and Abbas Yongaçoḡlu School of Information Technology and Engineering, University of Ottawa Ottawa, Ontario, K1N 6N5, Canada Fax: +1 (613) 562 5175 thaddad@site.uottawa.ca
More informationDESIGN OF QUANTIZED FIR FILTER USING COMPENSATING ZEROS
DESIGN OF QUANTIZED FIR FILTER USING COMPENSATING ZEROS Nivedita Yadav, O.P. Singh, Ashish Dixit Department of Electronics and Communication Engineering, Amity University, Lucknow Campus, Lucknow, (India)
More informationLecture Notes 1: Vector spaces
Optimization-based data analysis Fall 2017 Lecture Notes 1: Vector spaces In this chapter we review certain basic concepts of linear algebra, highlighting their application to signal processing. 1 Vector
More information9 Classification. 9.1 Linear Classifiers
9 Classification This topic returns to prediction. Unlike linear regression where we were predicting a numeric value, in this case we are predicting a class: winner or loser, yes or no, rich or poor, positive
More informationWeighted Gossip: Distributed Averaging Using Non-Doubly Stochastic Matrices
Weighted Gossip: Distributed Averaging Using Non-Doubly Stochastic Matrices Florence Bénézit ENS-INRIA, France Vincent Blondel UCL, Belgium Patric Thiran EPFL, Switzerland John Tsitsilis MIT, USA Martin
More informationOn the Stability of the Least-Mean Fourth (LMF) Algorithm
XXI SIMPÓSIO BRASILEIRO DE TELECOMUNICACÕES-SBT 4, 6-9 DE SETEMBRO DE 4, BELÉM, PA On the Stability of the Least-Mean Fourth (LMF) Algorithm Vítor H. Nascimento and José Carlos M. Bermudez + Abstract We
More informationMULTI-RESOLUTION SIGNAL DECOMPOSITION WITH TIME-DOMAIN SPECTROGRAM FACTORIZATION. Hirokazu Kameoka
MULTI-RESOLUTION SIGNAL DECOMPOSITION WITH TIME-DOMAIN SPECTROGRAM FACTORIZATION Hiroazu Kameoa The University of Toyo / Nippon Telegraph and Telephone Corporation ABSTRACT This paper proposes a novel
More informationA Subspace Approach to Estimation of. Measurements 1. Carlos E. Davila. Electrical Engineering Department, Southern Methodist University
EDICS category SP 1 A Subspace Approach to Estimation of Autoregressive Parameters From Noisy Measurements 1 Carlos E Davila Electrical Engineering Department, Southern Methodist University Dallas, Texas
More informationGENERALIZED DEFLATION ALGORITHMS FOR THE BLIND SOURCE-FACTOR SEPARATION OF MIMO-FIR CHANNELS. Mitsuru Kawamoto 1,2 and Yujiro Inouye 1
GENERALIZED DEFLATION ALGORITHMS FOR THE BLIND SOURCE-FACTOR SEPARATION OF MIMO-FIR CHANNELS Mitsuru Kawamoto,2 and Yuiro Inouye. Dept. of Electronic and Control Systems Engineering, Shimane University,
More informationBlind Source Separation with a Time-Varying Mixing Matrix
Blind Source Separation with a Time-Varying Mixing Matrix Marcus R DeYoung and Brian L Evans Department of Electrical and Computer Engineering The University of Texas at Austin 1 University Station, Austin,
More informationEfficient Quasi-Newton Proximal Method for Large Scale Sparse Optimization
Efficient Quasi-Newton Proximal Method for Large Scale Sparse Optimization Xiaocheng Tang Department of Industrial and Systems Engineering Lehigh University Bethlehem, PA 18015 xct@lehigh.edu Katya Scheinberg
More informationsquares based sparse system identification for the error in variables
Lim and Pang SpringerPlus 2016)5:1460 DOI 10.1186/s40064-016-3120-6 RESEARCH Open Access l 1 regularized recursive total least squares based sparse system identification for the error in variables Jun
More informationLeveraging Machine Learning for High-Resolution Restoration of Satellite Imagery
Leveraging Machine Learning for High-Resolution Restoration of Satellite Imagery Daniel L. Pimentel-Alarcón, Ashish Tiwari Georgia State University, Atlanta, GA Douglas A. Hope Hope Scientific Renaissance
More informationSTAT 200C: High-dimensional Statistics
STAT 200C: High-dimensional Statistics Arash A. Amini May 30, 2018 1 / 57 Table of Contents 1 Sparse linear models Basis Pursuit and restricted null space property Sufficient conditions for RNS 2 / 57
More informationNumerical Methods. Rafał Zdunek Underdetermined problems (2h.) Applications) (FOCUSS, M-FOCUSS,
Numerical Methods Rafał Zdunek Underdetermined problems (h.) (FOCUSS, M-FOCUSS, M Applications) Introduction Solutions to underdetermined linear systems, Morphological constraints, FOCUSS algorithm, M-FOCUSS
More informationOptimization methods
Optimization methods Optimization-Based Data Analysis http://www.cims.nyu.edu/~cfgranda/pages/obda_spring16 Carlos Fernandez-Granda /8/016 Introduction Aim: Overview of optimization methods that Tend to
More informationGaussian Mixtures Proposal Density in Particle Filter for Track-Before-Detect
12th International Conference on Information Fusion Seattle, WA, USA, July 6-9, 29 Gaussian Mixtures Proposal Density in Particle Filter for Trac-Before-Detect Ondřej Straa, Miroslav Šimandl and Jindřich
More informationIN real-world adaptive filtering applications, severe impairments
1878 IEEE TRANSACTIONS ON SIGNAL PROCESSING, VOL. 56, NO. 5, MAY 2008 A New Robust Variable Step-Size NLMS Algorithm Leonardo Rey Vega, Hernán Rey, Jacob Benesty, Senior Member, IEEE, and Sara Tressens
More informationIterative Methods for Solving A x = b
Iterative Methods for Solving A x = b A good (free) online source for iterative methods for solving A x = b is given in the description of a set of iterative solvers called templates found at netlib: http
More informationRegularization and Variable Selection via the Elastic Net
p. 1/1 Regularization and Variable Selection via the Elastic Net Hui Zou and Trevor Hastie Journal of Royal Statistical Society, B, 2005 Presenter: Minhua Chen, Nov. 07, 2008 p. 2/1 Agenda Introduction
More informationNewton-like method with diagonal correction for distributed optimization
Newton-lie method with diagonal correction for distributed optimization Dragana Bajović Dušan Jaovetić Nataša Krejić Nataša Krlec Jerinić August 15, 2015 Abstract We consider distributed optimization problems
More informationCO-OPERATION among multiple cognitive radio (CR)
586 IEEE SIGNAL PROCESSING LETTERS, VOL 21, NO 5, MAY 2014 Sparse Bayesian Hierarchical Prior Modeling Based Cooperative Spectrum Sensing in Wideb Cognitive Radio Networks Feng Li Zongben Xu Abstract This
More informationSparsity Models. Tong Zhang. Rutgers University. T. Zhang (Rutgers) Sparsity Models 1 / 28
Sparsity Models Tong Zhang Rutgers University T. Zhang (Rutgers) Sparsity Models 1 / 28 Topics Standard sparse regression model algorithms: convex relaxation and greedy algorithm sparse recovery analysis:
More informationComparative Performance Analysis of Three Algorithms for Principal Component Analysis
84 R. LANDQVIST, A. MOHAMMED, COMPARATIVE PERFORMANCE ANALYSIS OF THR ALGORITHMS Comparative Performance Analysis of Three Algorithms for Principal Component Analysis Ronnie LANDQVIST, Abbas MOHAMMED Dept.
More informationAn algorithm for robust fitting of autoregressive models Dimitris N. Politis
An algorithm for robust fitting of autoregressive models Dimitris N. Politis Abstract: An algorithm for robust fitting of AR models is given, based on a linear regression idea. The new method appears to
More informationRECURSIVE SUBSPACE IDENTIFICATION IN THE LEAST SQUARES FRAMEWORK
RECURSIVE SUBSPACE IDENTIFICATION IN THE LEAST SQUARES FRAMEWORK TRNKA PAVEL AND HAVLENA VLADIMÍR Dept of Control Engineering, Czech Technical University, Technická 2, 166 27 Praha, Czech Republic mail:
More informationNew Coherence and RIP Analysis for Weak. Orthogonal Matching Pursuit
New Coherence and RIP Analysis for Wea 1 Orthogonal Matching Pursuit Mingrui Yang, Member, IEEE, and Fran de Hoog arxiv:1405.3354v1 [cs.it] 14 May 2014 Abstract In this paper we define a new coherence
More informationCompressibility of Infinite Sequences and its Interplay with Compressed Sensing Recovery
Compressibility of Infinite Sequences and its Interplay with Compressed Sensing Recovery Jorge F. Silva and Eduardo Pavez Department of Electrical Engineering Information and Decision Systems Group Universidad
More informationA METHOD OF ADAPTATION BETWEEN STEEPEST- DESCENT AND NEWTON S ALGORITHM FOR MULTI- CHANNEL ACTIVE CONTROL OF TONAL NOISE AND VIBRATION
A METHOD OF ADAPTATION BETWEEN STEEPEST- DESCENT AND NEWTON S ALGORITHM FOR MULTI- CHANNEL ACTIVE CONTROL OF TONAL NOISE AND VIBRATION Jordan Cheer and Stephen Daley Institute of Sound and Vibration Research,
More informationA Matrix Theoretic Derivation of the Kalman Filter
A Matrix Theoretic Derivation of the Kalman Filter 4 September 2008 Abstract This paper presents a matrix-theoretic derivation of the Kalman filter that is accessible to students with a strong grounding
More informationComparison of Modern Stochastic Optimization Algorithms
Comparison of Modern Stochastic Optimization Algorithms George Papamakarios December 214 Abstract Gradient-based optimization methods are popular in machine learning applications. In large-scale problems,
More informationSubmitted to Electronics Letters. Indexing terms: Signal Processing, Adaptive Filters. The Combined LMS/F Algorithm Shao-Jen Lim and John G. Harris Co
Submitted to Electronics Letters. Indexing terms: Signal Processing, Adaptive Filters. The Combined LMS/F Algorithm Shao-Jen Lim and John G. Harris Computational Neuro-Engineering Laboratory University
More informationA Reservoir Sampling Algorithm with Adaptive Estimation of Conditional Expectation
A Reservoir Sampling Algorithm with Adaptive Estimation of Conditional Expectation Vu Malbasa and Slobodan Vucetic Abstract Resource-constrained data mining introduces many constraints when learning from
More informationBLIND SEPARATION OF INSTANTANEOUS MIXTURES OF NON STATIONARY SOURCES
BLIND SEPARATION OF INSTANTANEOUS MIXTURES OF NON STATIONARY SOURCES Dinh-Tuan Pham Laboratoire de Modélisation et Calcul URA 397, CNRS/UJF/INPG BP 53X, 38041 Grenoble cédex, France Dinh-Tuan.Pham@imag.fr
More informationExponential Moving Average Based Multiagent Reinforcement Learning Algorithms
Exponential Moving Average Based Multiagent Reinforcement Learning Algorithms Mostafa D. Awheda Department of Systems and Computer Engineering Carleton University Ottawa, Canada KS 5B6 Email: mawheda@sce.carleton.ca
More informationHere represents the impulse (or delta) function. is an diagonal matrix of intensities, and is an diagonal matrix of intensities.
19 KALMAN FILTER 19.1 Introduction In the previous section, we derived the linear quadratic regulator as an optimal solution for the fullstate feedback control problem. The inherent assumption was that
More informationImproved least-squares-based combiners for diffusion networks
Improved least-squares-based combiners for diffusion networs Jesus Fernandez-Bes, Luis A. Azpicueta-Ruiz, Magno T. M. Silva, and Jerónimo Arenas-García Universidad Carlos III de Madrid Universidade de
More informationWavelet Footprints: Theory, Algorithms, and Applications
1306 IEEE TRANSACTIONS ON SIGNAL PROCESSING, VOL. 51, NO. 5, MAY 2003 Wavelet Footprints: Theory, Algorithms, and Applications Pier Luigi Dragotti, Member, IEEE, and Martin Vetterli, Fellow, IEEE Abstract
More informationarxiv: v1 [cs.it] 26 Oct 2018
Outlier Detection using Generative Models with Theoretical Performance Guarantees arxiv:1810.11335v1 [cs.it] 6 Oct 018 Jirong Yi Anh Duc Le Tianming Wang Xiaodong Wu Weiyu Xu October 9, 018 Abstract This
More informationMidterm. Introduction to Machine Learning. CS 189 Spring Please do not open the exam before you are instructed to do so.
CS 89 Spring 07 Introduction to Machine Learning Midterm Please do not open the exam before you are instructed to do so. The exam is closed book, closed notes except your one-page cheat sheet. Electronic
More informationCHANGE DETECTION WITH UNKNOWN POST-CHANGE PARAMETER USING KIEFER-WOLFOWITZ METHOD
CHANGE DETECTION WITH UNKNOWN POST-CHANGE PARAMETER USING KIEFER-WOLFOWITZ METHOD Vijay Singamasetty, Navneeth Nair, Srikrishna Bhashyam and Arun Pachai Kannu Department of Electrical Engineering Indian
More informationFINITE HORIZON ROBUST MODEL PREDICTIVE CONTROL USING LINEAR MATRIX INEQUALITIES. Danlei Chu, Tongwen Chen, Horacio J. Marquez
FINITE HORIZON ROBUST MODEL PREDICTIVE CONTROL USING LINEAR MATRIX INEQUALITIES Danlei Chu Tongwen Chen Horacio J Marquez Department of Electrical and Computer Engineering University of Alberta Edmonton
More informationUTILIZING PRIOR KNOWLEDGE IN ROBUST OPTIMAL EXPERIMENT DESIGN. EE & CS, The University of Newcastle, Australia EE, Technion, Israel.
UTILIZING PRIOR KNOWLEDGE IN ROBUST OPTIMAL EXPERIMENT DESIGN Graham C. Goodwin James S. Welsh Arie Feuer Milan Depich EE & CS, The University of Newcastle, Australia 38. EE, Technion, Israel. Abstract:
More informationSolving Corrupted Quadratic Equations, Provably
Solving Corrupted Quadratic Equations, Provably Yuejie Chi London Workshop on Sparse Signal Processing September 206 Acknowledgement Joint work with Yuanxin Li (OSU), Huishuai Zhuang (Syracuse) and Yingbin
More informationBattery-Aware Selective Transmitters in Energy-Harvesting Sensor Networks: Optimal Solution and Stochastic Dual Approximation
Battery-Aware Selective Transmitters in Energy-Harvesting Sensor Networs: Optimal Solution and Stochastic Dual Approximation Jesús Fernández-Bes, Carlos III University of Madrid, Leganes, 28911, Madrid,
More informationOn Mixture Regression Shrinkage and Selection via the MR-LASSO
On Mixture Regression Shrinage and Selection via the MR-LASSO Ronghua Luo, Hansheng Wang, and Chih-Ling Tsai Guanghua School of Management, Peing University & Graduate School of Management, University
More informationMachine Learning Linear Models
Machine Learning Linear Models Outline II - Linear Models 1. Linear Regression (a) Linear regression: History (b) Linear regression with Least Squares (c) Matrix representation and Normal Equation Method
More informationOptimization methods
Lecture notes 3 February 8, 016 1 Introduction Optimization methods In these notes we provide an overview of a selection of optimization methods. We focus on methods which rely on first-order information,
More informationCo-Prime Arrays and Difference Set Analysis
7 5th European Signal Processing Conference (EUSIPCO Co-Prime Arrays and Difference Set Analysis Usham V. Dias and Seshan Srirangarajan Department of Electrical Engineering Bharti School of Telecommunication
More informationTwo-Dimensional Sparse Arrays with Hole-Free Coarray and Reduced Mutual Coupling
Two-Dimensional Sparse Arrays with Hole-Free Coarray and Reduced Mutual Coupling Chun-Lin Liu and P. P. Vaidyanathan Dept. of Electrical Engineering, 136-93 California Institute of Technology, Pasadena,
More information2 Regularized Image Reconstruction for Compressive Imaging and Beyond
EE 367 / CS 448I Computational Imaging and Display Notes: Compressive Imaging and Regularized Image Reconstruction (lecture ) Gordon Wetzstein gordon.wetzstein@stanford.edu This document serves as a supplement
More informationImproved Unitary Root-MUSIC for DOA Estimation Based on Pseudo-Noise Resampling
140 IEEE SIGNAL PROCESSING LETTERS, VOL. 21, NO. 2, FEBRUARY 2014 Improved Unitary Root-MUSIC for DOA Estimation Based on Pseudo-Noise Resampling Cheng Qian, Lei Huang, and H. C. So Abstract A novel pseudo-noise
More information