Modified Leaky LMS Algorithms Applied to Satellite Positioning
J.P. Montillet, Research School of Earth Sciences, The Australian National University, Australia
Kegen Yu, School of Geodesy and Geomatics, Wuhan University, China

Abstract

With the recent advances in the theory of fractional Brownian motion (fbm), this model is used to describe the position coordinate estimates of Global Navigation Satellite System (GNSS) receivers that exhibit long-range dependencies. Modified Leaky Least Mean Squares (ML-LMS) algorithms are proposed to filter long time series of position coordinate estimates, using Hurst parameter estimates to update the filter tap weights. Results on simulated data and field measurements demonstrate that the proposed algorithms can outperform the classical LMS filter considerably in terms of accuracy (mean squared error) and convergence. We also examine a case study where the proposed algorithms outperform the leaky LMS.

Index Terms: GNSS positioning; fractional Brownian motion model; Hurst parameter; LMS; modified leaky LMS

I. INTRODUCTION

The fractional Brownian motion (fbm) model, based on the Hurst parameter (H), was developed to model long-run non-periodic statistical dependences of time series, as in [2] and [3]. The fbm model has been successfully applied in various research areas, such as telecommunications [1], and in particular to model the noise characteristics of geodetic time series [4]. Long time series of many processes cannot simply be modelled as Brownian motion or as Gaussian noise. In 1968, [3] defined the fbm model and studied the Hurst parameter.
In the case of H < 0.5, the increments of the process are negatively correlated and the noise statistics are closer to Gaussian; if H > 0.5 the process exhibits long-range dependence; and H = 0.5 corresponds to pure Brownian motion, whose increments are white noise. Thus, the fbm model gives some degree of freedom to characterize the process noise or measurement errors of a time series. Note that for time series with coloured noise, which exhibit a power-law frequency spectrum S(f) = 1/f^alpha (see [4]), it has been shown that H is directly connected to the power-law index alpha through alpha = 2H - 1. With this definition, random walk noise corresponds to alpha = 2 (H = 3/2) and white noise to alpha = 0 (H = 0.5). Random walk noise is therefore classified as a long-term dependency phenomenon.

This paper aims to improve the positioning accuracy of ground receivers based on the Global Navigation Satellite System (GNSS). The fbm model is used to model long time series of position coordinate estimates of a GNSS receiver. For simplicity, only one-dimensional coordinate data are employed in the development of the theory. The degree of fit of the fbm model to the time series of receiver coordinate estimates is studied through the choice of appropriate values of the H parameter. Knowledge of the H parameter is important, and a number of different methods can be used to estimate it [5]. The Least Mean Squares (LMS) algorithm has found a wide range of applications in adaptive signal processing and control. The classical LMS filter is optimal when the measurement noise/error is additive white noise, as explained in [6]. Recently, Leaky LMS (L-LMS) algorithms have been proposed to improve the performance of the LMS algorithm, especially regarding stability and convergence [7].
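One of the classical estimators of H surveyed in [5] is the aggregated-variance method: for a long-range dependent series, the variance of block means of size m scales as m^(2H - 2), so H can be read off the slope of a log-log fit. A minimal sketch (the block sizes and sample length here are illustrative choices, not values from the paper):

```python
import numpy as np

def hurst_aggvar(x, block_sizes=(2, 4, 8, 16, 32, 64)):
    """Estimate the Hurst parameter via the aggregated-variance method.

    Var of the block means scales as m**(2H - 2), so the slope of a
    log-log fit gives H = 1 + slope / 2.
    """
    x = np.asarray(x, dtype=float)
    ms, vs = [], []
    for m in block_sizes:
        n_blocks = len(x) // m
        if n_blocks < 2:
            continue
        means = x[:n_blocks * m].reshape(n_blocks, m).mean(axis=1)
        ms.append(m)
        vs.append(means.var())
    slope = np.polyfit(np.log(ms), np.log(vs), 1)[0]
    return 1.0 + slope / 2.0

rng = np.random.default_rng(0)
white = rng.standard_normal(100_000)       # white noise: H = 0.5 expected
print(round(hurst_aggvar(white), 2))       # ≈ 0.5
```

For white noise, Var of a block mean is 1/m exactly in expectation, so the fitted slope is close to -1 and the estimate close to 0.5; more refined estimators (e.g. wavelet-based, as in [1]) behave better for strongly dependent series.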
In this paper, we exploit the L-LMS algorithm to smooth GNSS position coordinate estimates. It follows earlier work [8] in which a modified leaky LMS was introduced and tested via simulations. Here, the cost function of this adaptive filter is revisited to improve performance on real GPS time series. In Section II, we justify the use of LMS and L-LMS adaptive filters for smoothing estimated positions. We also develop a modified L-LMS that specifically tackles the challenge of smoothing an fbm time series by introducing the H parameter into the cost function of the adaptive filter. Section III is dedicated to the results obtained when applying the proposed theory; the results are based on simulated and real data recorded in a static scenario.

II. SMOOTHING POSITION ESTIMATES WITH THE LMS FILTER

A. The fbm model and adaptive filtering

In this paper, the time series of interest is the coordinates of a ground receiver (in the World Geodetic System 1984 reference frame, also called WGS 84 [10]), either triangulated with GNSS pseudoranges in single point positioning, obtained from double-difference GNSS solutions (float ambiguities) [10], or obtained with a positioning technology called Locata [11]. Locata is a ground-based positioning technology
allowing point positioning of a rover with centimetre accuracy (using carrier-phase measurements) [12]. For a fixed receiver, it is of interest to model the long-term and short-term correlation structure of these time series [9]. This model choice gives some degree of freedom compared with the Gaussian model generally used in satellite-based positioning theory [10]. One way to constrain the variance of the receiver position estimates is to use a Kalman filter with backward filtering. Here, we develop a simpler approach using a Least Mean Squares (LMS) filter, as the case study is limited to a static receiver. The research community has already investigated adaptive filters to constrain the error of time-of-arrival measurements in sensor localization [13]. The authors of [14] developed an LMS filter to reduce the recurrent multipath error due to satellite geometry on double-difference observations for a static rover.

The LMS filter is an iterative algorithm which adapts a stationary random process v(n) to a desired output d(n) with the same stochastic properties. Let us define the discrete-time processes:

v_k = x_k - x_0
d_k = g_k - x_0     (1)

where the time series of the receiver's coordinate estimates is one-dimensional, given by x_k = [x[k], x[k+1], ..., x[k+N]]^T, and x_k, v_k, g_k and d_k are all in R^N. As v_k is the time series that we want to adapt to a desired time series d_k, it follows that sigma_d^2 < sigma_v^2. In other words, d_k can be seen as the coordinate time series of the same rover, but filtered (e.g. with a moving average) or coming from another sensor (e.g. an Inertial Measurement Unit, see [15]) with smaller variance. x_0 is the rover's true coordinate. Due to propagation phenomena (i.e.
multipath), v_k and d_k are not necessarily zero-mean Gaussian distributed, but follow an fbm model:

v_k ~ fbm(H_v, sigma_v^2, mu_v)
d_k ~ fbm(H_d, sigma_d^2, mu_d)     (2)

where H_v and H_d are the Hurst parameters, satisfying |H_v - H_d| < epsilon_1 with epsilon_1 a small positive number. To guarantee convergence of the LMS filter and an unbiased solution, it is important to have a strong hypothesis on the means: |mu_v - mu_d| < epsilon_2, where epsilon_2 is a very small number.

In the LMS algorithm described in [6], the filter output is given by y_k = w_k^T v_k, where w_k is the filter tap weight vector. The error signal is defined as e_k = d_k - y_k, so that the mean square error (MSE) is given by:

E{e_k^2} = E{d_k^2} - 2 p^T w_k + w_k^T R w_k     (3)

where E{.} is the expectation operator, p = E{d_k v_k} and R = E{v_k v_k^T}. The LMS filter is an iterative algorithm which updates the tap weight vector w_k:

w_{k+1} = w_k + 2 mu e_k v_k     (4)

where mu is a selectable step-size parameter. It has been shown that, in the case of data with white noise, w_k converges to the Wiener solution for 0 < mu < 1/(2 lambda_max), with lambda_max the largest eigenvalue of R (see [6]). Each element of the weight vector relaxes exponentially to its optimal value with a time constant inversely proportional to the associated eigenvalue. Further, the eigenvalue spread, defined as the ratio of the largest eigenvalue to the smallest one (lambda_max / lambda_min), plays a critical role in the convergence of the LMS filter.

B. Derivation of a modified Leaky LMS to smooth position estimates

It has been shown in previous works (e.g. [7]) that it is possible to improve the convergence of the LMS algorithm by acting on the eigenvalue spread: when the eigenvalue spread increases, the rate of convergence of the LMS algorithm decreases [7]. One way to speed up convergence is to employ the leaky LMS filter, which decreases the eigenvalue spread [6]. Regarding the eigenvalue spread of an fbm process, the following property was shown in [8].
Property 2: Consider two fbm processes with Hurst parameters H_a and H_b constrained by H_a < 0.5 and H_b > 0.5, and with covariance matrices R_a and R_b, respectively. Then the eigenvalue spreads of the two covariance matrices satisfy:

lambda_max^(a) / lambda_min^(a) < lambda_max^(b) / lambda_min^(b)     (5)

where the superscripts denote the corresponding covariance matrices.

From [7], the cost function of the L-LMS is defined as:

J_k = e_k^2 + gamma w_k^T w_k     (6)

where gamma >= 0 is the leak parameter selected by the user. The name stems from the fact that, when the input is turned off, the weight vector of the regular LMS algorithm stalls, whereas with the leaky LMS in the same scenario the weight vector instead leaks out. Several works (e.g. [6], [7]) showed that:

lim_{k -> inf} E{w_k} = (R + gamma I)^{-1} p     (7)

With this formulation, the L-LMS can be interpreted as adding zero-mean white noise with autocorrelation matrix gamma I to the input. The downside is that the algorithm is biased (lim_{k -> inf} E{w_k} != w_op, with w_op the optimum weight vector [7]). However, it can be shown that the leaky algorithm decreases the eigenvalue spread. If lambda_max and lambda_min are the largest and smallest eigenvalues of the input to the LMS algorithm, then the input eigenvalues seen by the leaky algorithm are gamma + lambda_max and gamma + lambda_min [7]. Because of the inequality:

(lambda_max + gamma) / (lambda_min + gamma) <= lambda_max / lambda_min,   gamma >= 0     (8)
the eigenvalue spread of the L-LMS algorithm is smaller than that of the LMS algorithm.

Similarly to the cost function defined for the variable leaky LMS algorithm, we define:

J_k = e_k^2 + f(H) w_k^T w_k     (9)

with the function f defined piecewise as:

f(H) = H^beta           if H <= 0.5,
f(H) = (H - 0.5)^beta   if H > 0.5,   with beta >= 2

where the H parameter is directly used to adjust the cost function. Note that if f(H) = H, the modified LMS filter reduces to the one tested in [8], called ML-LMS_o in the following. The tap weight vector is then updated by:

w_{k+1} = (1 - 2 mu f(H)) w_k + 2 mu e_k v_k     (10)

Clearly, the cost function in (9) is devised to achieve a good trade-off between the optimal LMS estimator when the noise is white (H < 0.5) and good convergence when the noise is fbm (H > 0.5). This can be shown by extending the inequality in Equation (8) as:

lambda_max / lambda_min >= (lambda_max + f(H)) / (lambda_min + f(H)) >= (lambda_max + gamma) / (lambda_min + gamma),   gamma >= f(H) >= 0     (11)

In other words, the inequality shows that the eigenvalue spread of the LMS algorithm may be too large, whereas that of the L-LMS algorithm may be too tight. The proposed algorithm with the cost function defined by (9) has a moderate eigenvalue spread. In the next section, the modified L-LMS associated with the cost function in equation (9) is called ML-LMS.

III. RESULTS

In this section, the performance of the ML-LMS algorithm is evaluated against the L-LMS, the classical LMS and the modified ML-LMS_o developed in [8].

A. Performance of the ML-LMS with simulated time series

According to the model in (2), we generate two signals using the Matlab library function wfbm. The first is the input signal (v_k) and the second is a reference signal (d_k).
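As a minimal sketch, the update rules (4), the L-LMS and (10) differ only in the leak term, so they can share one loop. This is a Python stand-in for the paper's Matlab experiments; the tap count, step size and the noise-free toy identification signal are illustrative assumptions, and f(H) follows the piecewise form given above with beta = 2:

```python
import numpy as np

def f_leak(H, beta=2.0):
    """Leak term f(H) of equation (9), as reconstructed from the text:
    small for near-white noise (H <= 0.5), driven by H - 0.5 otherwise."""
    return H ** beta if H <= 0.5 else (H - 0.5) ** beta

def adapt(v, d, n_taps=4, mu=0.05, leak=0.0):
    """Generic leaky LMS loop: leak = 0 gives the classical LMS update (4),
    leak = gamma the L-LMS, and leak = f(H) the proposed ML-LMS update (10)."""
    w = np.zeros(n_taps)
    e = np.zeros(len(v))
    for k in range(n_taps - 1, len(v)):
        vk = v[k - n_taps + 1:k + 1][::-1]   # regression vector, newest first
        e[k] = d[k] - w @ vk                 # error e_k = d_k - y_k
        w = (1.0 - 2.0 * mu * leak) * w + 2.0 * mu * e[k] * vk
    return e, w

# Toy check on a noise-free FIR relation d_k = 0.5 v_k - 0.3 v_{k-1}:
rng = np.random.default_rng(1)
v = rng.standard_normal(5000)
d = np.convolve(v, [0.5, -0.3])[:len(v)]

_, w_lms = adapt(v, d)                       # converges to [0.5, -0.3, 0, 0]
_, w_ml = adapt(v, d, leak=f_leak(0.4))      # slightly biased towards zero
print(np.round(w_lms[:2], 2), np.round(w_ml[:2], 2))
```

The leak both shrinks the effective eigenvalue spread, as in inequality (11), and biases the steady-state weights towards (R + f(H) I)^{-1} p, which is exactly the trade-off discussed above.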
There are then two scenarios to investigate: (Scenario A) the noisy input signal is smoothed with a reference signal with a smaller noise amplitude; (Scenario B) based on integrating two different technologies at the positioning level, the two coordinate time series have different noise characteristics. Scenario A is the classical scenario of denoising a signal by adapting it to a reference signal with similar stochastic properties. In Scenario B, the two signals are simulated with slightly different Hurst parameters. For both scenarios, mu_v and mu_d are small. In Scenario A, the H parameters of the two time series are set to the same value when simulating with the wfbm routine, but white noise is added to the input signal. In Scenario B, the two signals are simulated with the wfbm routine. In order to keep the standard deviation of d_k smaller than that of the input signal in all simulations, we arbitrarily choose sigma_d^2 = sigma_v^2 / r, with r in [1.3, 3]. The simulation testbed uses various H parameter values, and for each value the mean square error and standard deviation are averaged over 500 simulation runs.

[Table I: statistics of the mean square error (m^2) for the different adaptive filters with simulated time series (Scenario A); rows: Original, Reference, LMS, L-LMS, ML-LMS, ML-LMS_o; columns: Series 1 (0 < H < 0.5) and Series 2 (0.5 < H < 1)]

The results in Table I are produced following the setting described in Scenario A. Overall, for simulated time series with H in [0, 0.5] or in [0.5, 1], the LMS algorithm performs worst in terms of mean square error, although for H in [0, 0.5] its performance is relatively close to the other adaptive algorithms. On the other hand, the ML-LMS achieves the minimum MSE for both Series 1 and 2. In addition, the L-LMS and ML-LMS give very similar results. The previous ML-LMS_o is outperformed by the ML-LMS.
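The data generation for Scenario A can be sketched in Python as an equivalent of the wfbm call: here the fbm path is synthesised by Cholesky factorisation of the exact fGn covariance rather than wfbm's wavelet method, and the series length, Hurst value and noise amplitudes are illustrative assumptions, not the paper's settings:

```python
import numpy as np

def fbm_path(n, H, rng):
    """Sample an fbm path by Cholesky factorisation of the exact covariance
    of its increments (fractional Gaussian noise). O(n^3), so only suitable
    for short windows, unlike the wavelet synthesis used by Matlab's wfbm."""
    k = np.arange(n)
    # autocovariance of unit-variance fractional Gaussian noise
    g = 0.5 * ((k + 1.0) ** (2 * H) - 2.0 * k ** (2 * H)
               + np.abs(k - 1.0) ** (2 * H))
    C = g[np.abs(k[:, None] - k[None, :])]   # Toeplitz covariance matrix
    L = np.linalg.cholesky(C + 1e-10 * np.eye(n))
    return np.cumsum(L @ rng.standard_normal(n))

# Scenario A: same H for input and reference, extra white noise on the input
rng = np.random.default_rng(2)
n, H = 512, 0.7
x = fbm_path(n, H, rng)                  # common "true position" signal
v = x + 0.5 * rng.standard_normal(n)     # noisy input time series
d = x + 0.1 * rng.standard_normal(n)     # cleaner reference time series
```

By construction the reference d has the same long-range structure as v but a smaller noise variance, which is the condition sigma_d^2 < sigma_v^2 assumed in Section II.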
From these simulations, one can say that a tight eigenvalue spread, as in the L-LMS or the modified algorithm (ML-LMS), gives better results than the classical LMS. This is supported by the inequality (11) developed in the previous section. Furthermore, it is important to underline that for values of H smaller than 0.2 and a small noise amplitude (r close to 1) in the input signal, the input and reference signals are very similar; in this particular case, the LMS filter can outperform the other algorithms. This case is not included in the simulations, as the values of r are chosen in [1.3, 3].

[Table II: statistics of the mean square error (m^2) for the different adaptive filters with simulated time series (Scenario B); rows: Original, Reference, LMS, L-LMS, ML-LMS, ML-LMS_o; columns: Series 1 (0 < H < 0.5) and Series 2 (0.5 < H < 1)]

Now looking at the results for Scenario B shown in Table II, one can see that overall the statistics are close to 10 times larger than those of the previous scenario. However, in this case the LMS outperforms
[Fig. 1: mean and standard deviation of the error (m) between the output of the adaptive filters and the reference signal d, following Scenario A (H = 0.4, sigma_d^2 = sigma_v^2 / 4)]

[Fig. 2: mean and standard deviation of the error (m) between the output of the adaptive filters and the reference signal d, following Scenario B (H_v = 0.4, H_d = 0.37, sigma_d^2 = sigma_v^2 / 2.3)]

the L-LMS, whereas the ML-LMS outperforms the LMS algorithm. This result holds both for H in [0, 0.5] and for H in [0.5, 1]. It shows that, when dealing with time series following the fbm model, an algorithm with a larger eigenvalue spread can track (and smooth) the input signal more efficiently than an adaptive algorithm with a tighter eigenvalue spread. Another way of studying the various adaptive filters is to observe the time series of the error between the reference signal (d) and the input signal to smooth (v). Figures 1 and 2 show the results for the two scenarios. In both scenarios the error statistics are smallest for the ML-LMS, which confirms the results shown previously. In addition, one can see that the adaptive algorithms and the classical LMS differ significantly over the first 200 epochs (see the top graphs of Figures 1 and 2). Recalling Section II-B, a too-large eigenvalue spread fails to adapt the input signal to the reference when the signal follows an fbm model; however, a very tight eigenvalue spread (i.e. the L-LMS) is also not favoured. In all these simulations, the ML-LMS produced the best results, outperforming the LMS, the L-LMS and the previous ML-LMS_o. This justifies modifying the cost function of the L-LMS algorithm by introducing the H parameter.

B. Application to real data

In this part, we apply the adaptive filters to two real case studies. The first case study is a GNSS station in the middle of London, in a built-up environment.
The position was recorded during 1780 epochs using Real Time Kinematic, without (v) and with (d) a choke ring antenna.

[Fig. 3: East coordinate of a GNSS receiver with and without a choke ring antenna (top), and mean square error (MSE) when using either the L-LMS or the ML-LMS adaptive filter]

The position is initially triangulated in the WGS 84 reference frame and then translated into a local East, North, Up reference frame. The top panel of Figure 3 shows the time series of the East coordinate; the remaining panels display the MSE when adapting the time series v to d using the L-LMS and ML-LMS. Note that the results with ML-LMS_o are not shown, as the previous section established that the ML-LMS outperforms that algorithm. The statistics of the MSE when applying the adaptive filters to the East and North coordinates are shown in Table III. The error is generally very small
[Table III: mean square error (MSE, m^2) and standard deviation for the different adaptive filters applied to the East (H = 0.14) and North (H = 0.22) coordinates of the GNSS station in the first case study; rows: Original, Reference, LMS, L-LMS, ML-LMS, with the filter entries in units of 10^-3]

[Table IV: mean square error (MSE, m^2) and standard deviation for the different adaptive filters applied to the East (H = 0.32) and North (H = 0.45) coordinates of the GNSS station in the second case study; rows: Original, Reference, LMS, L-LMS, ML-LMS]

as the amplitude of the coordinate time series of the GNSS receiver is of the order of a centimetre. The overall results agree with the previous section; in particular, the statistics of the MSE corresponding to the ML-LMS are the lowest among all the adaptive filters tested here.

The second case study is based on GNSS data recorded at a station in Nottingham (UK) in an open-sky environment, using pseudoranges to triangulate the position of the GNSS receiver (v). We also use the carrier-phase smoothing technique to improve the accuracy of the receiver position estimates (d). The statistics of the position are calculated over 1800 epochs, and the statistics of the MSE when applying the adaptive filters to the East and North coordinates are shown in Table IV. In this case, the standard deviation of the time series of the receiver coordinates v and d is large, so the mean and standard deviation of the MSE are large compared with the first case study. Overall, the results are similar to those before: the ML-LMS outperforms the other algorithms.

IV. CONCLUSIONS

The modified Leaky LMS algorithm was developed to filter long time series of GNSS receiver coordinate estimates, using a cost function that depends on the H parameter. The results demonstrated that the proposed methods outperformed the LMS algorithm in terms of MSE in Scenario A of the simulations; more particularly, the ML-LMS outperformed all the other algorithms in both Scenario A and Scenario B. Finally, the results were illustrated with practical examples using GNSS coordinate time series recorded with Real Time Kinematic in a built-up environment or using pseudoranges. In both case studies, the ML-LMS outperforms the other adaptive filters.

V. ACKNOWLEDGEMENTS

This research is partially supported by the Australian Research Council (grant number DP ). We also thank Dr. Lukasz K. Bonenberg from the Nottingham Geospatial Institute (NGI) at the University of Nottingham for kindly providing some data.

REFERENCES

[1] S. Bregni and L. Jmoda, "Accurate estimation of the Hurst parameter of long-range dependent traffic using modified Allan and Hadamard variances," IEEE Transactions on Communications, vol. 56, no. 11, 2008.
[2] H. E. Hurst, R. P. Black and Y. M. Sinaika, Long Term-Storage: An Experimental Study, Constable, London.
[3] B. B. Mandelbrot and J. W. Van Ness, "Fractional Brownian motions, fractional noises and applications," SIAM Review, vol. 10, no. 4, 1968.
[4] J.-P. Montillet, P. Tregoning, S. McClusky and K. Yu, "Extracting white noise statistics in GPS coordinate time series," IEEE Geoscience and Remote Sensing Letters, vol. 10, no. 3.
[5] M. S. Taqqu, V. Teverovsky and W. Willinger, "Estimators for long-range dependence: an empirical study," Fractals, vol. 3, no. 4, 1995.
[6] S. Haykin, Adaptive Filter Theory, 4th ed., Prentice Hall, Upper Saddle River, New Jersey.
[7] M. Kamenetsky and B. Widrow, "A variable leaky LMS adaptive algorithm," Proc. of the Thirty-Eighth Asilomar Conference on Signals, Systems and Computers.
[8] J.-P. Montillet and K. Yu, "Leaky LMS algorithm and fractional Brownian motion model for GNSS receiver position estimation," in Proc. of the IEEE Vehicular Technology Conference (VTC'11 Fall).
[9] J. R. M. Hosking, "Fractional differencing," Biometrika, vol. 68, no. 1.
[10] A. Leick, GPS Satellite Surveying, 3rd ed., Wiley.
[11] J.-P. Montillet, L. K. Bonenberg, C. M. Hancock and G. W. Roberts, "On the improvements of the single point positioning accuracy with Locata technology," GPS Solutions, 2013.
[12] J. Barnes, C. Rizos, M. Kanli and M. Pahwa, "Locata: a new positioning technology for classically difficult GNSS environments," Proc. of the International Global Navigation Satellite Systems Society (IGNSS) Symposium.
[13] J.-P. Montillet, K. Yu and I. Oppermann, "Location performance enhancement with recursive processing of time-of-arrival measurements," Proc. of the IEEE International Symposium on Personal, Indoor and Mobile Radio Communications (PIMRC).
[14] U. Weinbach, N. Raziq and P. Collier, "Mitigation of periodic GPS multipath errors using a normalised least mean square adaptive filter," Journal of Spatial Science, vol. 54, no. 1, pp. 1-13.
[15] T. Moore, C. Hill, C. Hide, W. Ochieng, S. Feng, E. Aguado, R. Ioannides, P. Cross and L. Lau, "End-to-end testing of an integrated centimetric positioning test-bed," Proc. of the ION-GNSS, Fort Worth, Texas.
More informationExpressions for the covariance matrix of covariance data
Expressions for the covariance matrix of covariance data Torsten Söderström Division of Systems and Control, Department of Information Technology, Uppsala University, P O Box 337, SE-7505 Uppsala, Sweden
More informationConditions for Suboptimal Filter Stability in SLAM
Conditions for Suboptimal Filter Stability in SLAM Teresa Vidal-Calleja, Juan Andrade-Cetto and Alberto Sanfeliu Institut de Robòtica i Informàtica Industrial, UPC-CSIC Llorens Artigas -, Barcelona, Spain
More informationA METHOD OF ADAPTATION BETWEEN STEEPEST- DESCENT AND NEWTON S ALGORITHM FOR MULTI- CHANNEL ACTIVE CONTROL OF TONAL NOISE AND VIBRATION
A METHOD OF ADAPTATION BETWEEN STEEPEST- DESCENT AND NEWTON S ALGORITHM FOR MULTI- CHANNEL ACTIVE CONTROL OF TONAL NOISE AND VIBRATION Jordan Cheer and Stephen Daley Institute of Sound and Vibration Research,
More informationComparative Performance Analysis of Three Algorithms for Principal Component Analysis
84 R. LANDQVIST, A. MOHAMMED, COMPARATIVE PERFORMANCE ANALYSIS OF THR ALGORITHMS Comparative Performance Analysis of Three Algorithms for Principal Component Analysis Ronnie LANDQVIST, Abbas MOHAMMED Dept.
More informationOn the realistic stochastic model of GPS observables: Implementation and Performance
he International Archives of the Photogrammetry, Remote Sensing Spatial Information Sciences, Volume XL-/W5, 05 International Conference on Sensors & Models in Remote Sensing & Photogrammetry, 3 5 Nov
More informationADAPTIVE FILTER THEORY
ADAPTIVE FILTER THEORY Fifth Edition Simon Haykin Communications Research Laboratory McMaster University Hamilton, Ontario, Canada International Edition contributions by Telagarapu Prabhakar Department
More informationSimulation studies of the standard and new algorithms show that a signicant improvement in tracking
An Extended Kalman Filter for Demodulation of Polynomial Phase Signals Peter J. Kootsookos y and Joanna M. Spanjaard z Sept. 11, 1997 Abstract This letter presents a new formulation of the extended Kalman
More informationLeast Squares Estimation Namrata Vaswani,
Least Squares Estimation Namrata Vaswani, namrata@iastate.edu Least Squares Estimation 1 Recall: Geometric Intuition for Least Squares Minimize J(x) = y Hx 2 Solution satisfies: H T H ˆx = H T y, i.e.
More informationUnderstanding the Differences between LS Algorithms and Sequential Filters
Understanding the Differences between LS Algorithms and Sequential Filters In order to perform meaningful comparisons between outputs from a least squares (LS) orbit determination algorithm and orbit determination
More informationNOISE ROBUST RELATIVE TRANSFER FUNCTION ESTIMATION. M. Schwab, P. Noll, and T. Sikora. Technical University Berlin, Germany Communication System Group
NOISE ROBUST RELATIVE TRANSFER FUNCTION ESTIMATION M. Schwab, P. Noll, and T. Sikora Technical University Berlin, Germany Communication System Group Einsteinufer 17, 1557 Berlin (Germany) {schwab noll
More informationSparse Least Mean Square Algorithm for Estimation of Truncated Volterra Kernels
Sparse Least Mean Square Algorithm for Estimation of Truncated Volterra Kernels Bijit Kumar Das 1, Mrityunjoy Chakraborty 2 Department of Electronics and Electrical Communication Engineering Indian Institute
More informationEstimation, Detection, and Identification CMU 18752
Estimation, Detection, and Identification CMU 18752 Graduate Course on the CMU/Portugal ECE PhD Program Spring 2008/2009 Instructor: Prof. Paulo Jorge Oliveira pjcro @ isr.ist.utl.pt Phone: +351 21 8418053
More informationError reduction in GPS datum conversion using Kalman filter in diverse scenarios Swapna Raghunath 1, Malleswari B.L 2, Karnam Sridhar 3
INTERNATIONAL JOURNAL OF GEOMATICS AND GEOSCIENCES Volume 3, No 3, 2013 Copyright by the authors - Licensee IPA- Under Creative Commons license 3.0 Research article ISSN 0976 4380 Error reduction in GPS
More informationAssesment of the efficiency of the LMS algorithm based on spectral information
Assesment of the efficiency of the algorithm based on spectral information (Invited Paper) Aaron Flores and Bernard Widrow ISL, Department of Electrical Engineering, Stanford University, Stanford CA, USA
More informationAdaptive Filter Theory
0 Adaptive Filter heory Sung Ho Cho Hanyang University Seoul, Korea (Office) +8--0-0390 (Mobile) +8-10-541-5178 dragon@hanyang.ac.kr able of Contents 1 Wiener Filters Gradient Search by Steepest Descent
More informationTowards an Optimal Noise Versus Resolution Trade-off in Wind Scatterometry
Towards an Optimal Noise Versus Resolution Trade-off in Wind Scatterometry Brent Williams Jet Propulsion Lab, California Institute of Technology IOWVST Meeting Utrecht Netherlands June 12, 2012 Copyright
More informationNONUNIFORM SAMPLING FOR DETECTION OF ABRUPT CHANGES*
CIRCUITS SYSTEMS SIGNAL PROCESSING c Birkhäuser Boston (2003) VOL. 22, NO. 4,2003, PP. 395 404 NONUNIFORM SAMPLING FOR DETECTION OF ABRUPT CHANGES* Feza Kerestecioğlu 1,2 and Sezai Tokat 1,3 Abstract.
More informationDifferencing Revisited: I ARIMA(p,d,q) processes predicated on notion of dth order differencing of a time series {X t }: for d = 1 and 2, have X t
Differencing Revisited: I ARIMA(p,d,q) processes predicated on notion of dth order differencing of a time series {X t }: for d = 1 and 2, have X t 2 X t def in general = (1 B)X t = X t X t 1 def = ( X
More informationMotion Model Selection in Tracking Humans
ISSC 2006, Dublin Institute of Technology, June 2830 Motion Model Selection in Tracking Humans Damien Kellyt and Frank Boland* Department of Electronic and Electrical Engineering Trinity College Dublin
More informationThe Kalman Filter. Data Assimilation & Inverse Problems from Weather Forecasting to Neuroscience. Sarah Dance
The Kalman Filter Data Assimilation & Inverse Problems from Weather Forecasting to Neuroscience Sarah Dance School of Mathematical and Physical Sciences, University of Reading s.l.dance@reading.ac.uk July
More informationImproved Kalman Filter Initialisation using Neurofuzzy Estimation
Improved Kalman Filter Initialisation using Neurofuzzy Estimation J. M. Roberts, D. J. Mills, D. Charnley and C. J. Harris Introduction It is traditional to initialise Kalman filters and extended Kalman
More informationA NOVEL APPROACH TO THE ESTIMATION OF THE HURST PARAMETER IN SELF-SIMILAR TRAFFIC
Proceedings of IEEE Conference on Local Computer Networks, Tampa, Florida, November 2002 A NOVEL APPROACH TO THE ESTIMATION OF THE HURST PARAMETER IN SELF-SIMILAR TRAFFIC Houssain Kettani and John A. Gubner
More informationCO-OPERATION among multiple cognitive radio (CR)
586 IEEE SIGNAL PROCESSING LETTERS, VOL 21, NO 5, MAY 2014 Sparse Bayesian Hierarchical Prior Modeling Based Cooperative Spectrum Sensing in Wideb Cognitive Radio Networks Feng Li Zongben Xu Abstract This
More informationDESIGNING A KALMAN FILTER WHEN NO NOISE COVARIANCE INFORMATION IS AVAILABLE. Robert Bos,1 Xavier Bombois Paul M. J. Van den Hof
DESIGNING A KALMAN FILTER WHEN NO NOISE COVARIANCE INFORMATION IS AVAILABLE Robert Bos,1 Xavier Bombois Paul M. J. Van den Hof Delft Center for Systems and Control, Delft University of Technology, Mekelweg
More informationThe Effect of Stale Ranging Data on Indoor 2-D Passive Localization
The Effect of Stale Ranging Data on Indoor 2-D Passive Localization Chen Xia and Lance C. Pérez Department of Electrical Engineering University of Nebraska-Lincoln, USA chenxia@mariner.unl.edu lperez@unl.edu
More informationarxiv: v1 [math.st] 1 Dec 2014
HOW TO MONITOR AND MITIGATE STAIR-CASING IN L TREND FILTERING Cristian R. Rojas and Bo Wahlberg Department of Automatic Control and ACCESS Linnaeus Centre School of Electrical Engineering, KTH Royal Institute
More informationSIGMA-F: Variances of GPS Observations Determined by a Fuzzy System
SIGMA-F: Variances of GPS Observations Determined by a Fuzzy System A. Wieser and F.K. Brunner Engineering Surveying and Metrology, Graz University of Technology, Steyrergasse 3, A-8 Graz, Austria Keywords.
More informationA Strict Stability Limit for Adaptive Gradient Type Algorithms
c 009 IEEE. Personal use of this material is permitted. However, permission to reprint/republish this material for advertising or promotional A Strict Stability Limit for Adaptive Gradient Type Algorithms
More informationAn Assessment of the Accuracy of PPP in Remote Areas in Oman
An Assessment of the Accuracy of PPP in Remote Areas in Oman Rashid AL ALAWI, Sultanate of Oman and Audrey MARTIN, Ireland Keywords: GNSS, PPP, Oman Survey Infrastructure SUMMARY Traditionally, high accuracy
More informationAdaptive Unscented Kalman Filter with Multiple Fading Factors for Pico Satellite Attitude Estimation
Adaptive Unscented Kalman Filter with Multiple Fading Factors for Pico Satellite Attitude Estimation Halil Ersin Söken and Chingiz Hajiyev Aeronautics and Astronautics Faculty Istanbul Technical University
More informationEVALUATING SYMMETRIC INFORMATION GAP BETWEEN DYNAMICAL SYSTEMS USING PARTICLE FILTER
EVALUATING SYMMETRIC INFORMATION GAP BETWEEN DYNAMICAL SYSTEMS USING PARTICLE FILTER Zhen Zhen 1, Jun Young Lee 2, and Abdus Saboor 3 1 Mingde College, Guizhou University, China zhenz2000@21cn.com 2 Department
More informationDiscrete quantum random walks
Quantum Information and Computation: Report Edin Husić edin.husic@ens-lyon.fr Discrete quantum random walks Abstract In this report, we present the ideas behind the notion of quantum random walks. We further
More informationResearch of Satellite and Ground Time Synchronization Based on a New Navigation System
Research of Satellite and Ground Time Synchronization Based on a New Navigation System Yang Yang, Yufei Yang, Kun Zheng and Yongjun Jia Abstract The new navigation time synchronization method is a breakthrough
More informationComparison of Selected Fast Orthogonal Parametric Transforms in Data Encryption
JOURNAL OF APPLIED COMPUTER SCIENCE Vol. 23 No. 2 (2015), pp. 55-68 Comparison of Selected Fast Orthogonal Parametric Transforms in Data Encryption Dariusz Puchala Lodz University of Technology Institute
More informationBlind Source Separation with a Time-Varying Mixing Matrix
Blind Source Separation with a Time-Varying Mixing Matrix Marcus R DeYoung and Brian L Evans Department of Electrical and Computer Engineering The University of Texas at Austin 1 University Station, Austin,
More informationSEC POWER METHOD Power Method
SEC..2 POWER METHOD 599.2 Power Method We now describe the power method for computing the dominant eigenpair. Its extension to the inverse power method is practical for finding any eigenvalue provided
More informationSubmitted to Electronics Letters. Indexing terms: Signal Processing, Adaptive Filters. The Combined LMS/F Algorithm Shao-Jen Lim and John G. Harris Co
Submitted to Electronics Letters. Indexing terms: Signal Processing, Adaptive Filters. The Combined LMS/F Algorithm Shao-Jen Lim and John G. Harris Computational Neuro-Engineering Laboratory University
More informationEstimating Polynomial Structures from Radar Data
Estimating Polynomial Structures from Radar Data Christian Lundquist, Umut Orguner and Fredrik Gustafsson Department of Electrical Engineering Linköping University Linköping, Sweden {lundquist, umut, fredrik}@isy.liu.se
More informationCramér-Rao Bounds for Estimation of Linear System Noise Covariances
Journal of Mechanical Engineering and Automation (): 6- DOI: 593/jjmea Cramér-Rao Bounds for Estimation of Linear System oise Covariances Peter Matiso * Vladimír Havlena Czech echnical University in Prague
More informationModern Navigation. Thomas Herring
12.215 Modern Navigation Thomas Herring Basic Statistics Summary of last class Statistical description and parameters Probability distributions Descriptions: expectations, variances, moments Covariances
More informationFurther Results on Model Structure Validation for Closed Loop System Identification
Advances in Wireless Communications and etworks 7; 3(5: 57-66 http://www.sciencepublishinggroup.com/j/awcn doi:.648/j.awcn.735. Further esults on Model Structure Validation for Closed Loop System Identification
More informationIonosphere influence on success rate of GPS ambiguity resolution in a satellite formation flying
Journal of Physics: Conference Series PAPER OPEN ACCESS Ionosphere influence on success rate of GPS ambiguity resolution in a satellite formation flying To cite this article: Leandro Baroni 2015 J. Phys.:
More information3.4 Linear Least-Squares Filter
X(n) = [x(1), x(2),..., x(n)] T 1 3.4 Linear Least-Squares Filter Two characteristics of linear least-squares filter: 1. The filter is built around a single linear neuron. 2. The cost function is the sum
More informationVirtual Array Processing for Active Radar and Sonar Sensing
SCHARF AND PEZESHKI: VIRTUAL ARRAY PROCESSING FOR ACTIVE SENSING Virtual Array Processing for Active Radar and Sonar Sensing Louis L. Scharf and Ali Pezeshki Abstract In this paper, we describe how an
More informationV. Adaptive filtering Widrow-Hopf Learning Rule LMS and Adaline
V. Adaptive filtering Widrow-Hopf Learning Rule LMS and Adaline Goals Introduce Wiener-Hopf (WH) equations Introduce application of the steepest descent method to the WH problem Approximation to the Least
More informationCh5: Least Mean-Square Adaptive Filtering
Ch5: Least Mean-Square Adaptive Filtering Introduction - approximating steepest-descent algorithm Least-mean-square algorithm Stability and performance of the LMS algorithm Robustness of the LMS algorithm
More informationIII.C - Linear Transformations: Optimal Filtering
1 III.C - Linear Transformations: Optimal Filtering FIR Wiener Filter [p. 3] Mean square signal estimation principles [p. 4] Orthogonality principle [p. 7] FIR Wiener filtering concepts [p. 8] Filter coefficients
More informationUse of GNSS for autonomous navigation on medium Earth orbits
UDC 629.783(043.2) V. Konin, F.Shyshkov, O. Pogurelskiy (National Aviation University, Ukraine) Use of GNSS for autonomous navigation on medium Earth orbits Use of GNSS for space navigation is relatively
More informationCONSTRAINT KALMAN FILTER FOR INDOOR BLUETOOTH LOCALIZATION
CONSTRAINT KALMAN FILTER FOR INDOOR BLUETOOTH LOCALIZATION Liang Chen, Heidi Kuusniemi, Yuwei Chen, Jingbin Liu, Ling Pei, Laura Ruotsalainen, and Ruizhi Chen NLS Finnish Geospatial Research Institute
More informationGeog Lecture 29 Mapping and GIS Continued
Geog 1000 - Lecture 29 Mapping and GIS Continued http://scholar.ulethbridge.ca/chasmer/classes/ Today s Lecture (Pgs 13-25, 28-29) 1. Hand back Assignment 3 2. Review of Dr. Peddle s lecture last week
More informationUSING THE INTEGER DECORRELATION PROCEDURE TO INCREASE OF THE EFFICIENCY OF THE MAFA METHOD
ARIFICIAL SAELLIES, Vol. 46, No. 3 2 DOI:.2478/v8-2-2- USING HE INEGER DECORRELAION PROCEDURE O INCREASE OF HE EFFICIENCY OF HE MAFA MEHOD S. Cellmer Institute of Geodesy University of Warmia and Mazury
More informationDiscrete Simulation of Power Law Noise
Discrete Simulation of Power Law Noise Neil Ashby 1,2 1 University of Colorado, Boulder, CO 80309-0390 USA 2 National Institute of Standards and Technology, Boulder, CO 80305 USA ashby@boulder.nist.gov
More informationRELATIVE NAVIGATION FOR SATELLITES IN CLOSE PROXIMITY USING ANGLES-ONLY OBSERVATIONS
(Preprint) AAS 12-202 RELATIVE NAVIGATION FOR SATELLITES IN CLOSE PROXIMITY USING ANGLES-ONLY OBSERVATIONS Hemanshu Patel 1, T. Alan Lovell 2, Ryan Russell 3, Andrew Sinclair 4 "Relative navigation using
More informationLinear-Quadratic Optimal Control: Full-State Feedback
Chapter 4 Linear-Quadratic Optimal Control: Full-State Feedback 1 Linear quadratic optimization is a basic method for designing controllers for linear (and often nonlinear) dynamical systems and is actually
More informationOPTIMAL ESTIMATION of DYNAMIC SYSTEMS
CHAPMAN & HALL/CRC APPLIED MATHEMATICS -. AND NONLINEAR SCIENCE SERIES OPTIMAL ESTIMATION of DYNAMIC SYSTEMS John L Crassidis and John L. Junkins CHAPMAN & HALL/CRC A CRC Press Company Boca Raton London
More informationSAGE-based Estimation Algorithms for Time-varying Channels in Amplify-and-Forward Cooperative Networks
SAGE-based Estimation Algorithms for Time-varying Channels in Amplify-and-Forward Cooperative Networks Nico Aerts and Marc Moeneclaey Department of Telecommunications and Information Processing Ghent University
More informationCharacterization of Convex and Concave Resource Allocation Problems in Interference Coupled Wireless Systems
2382 IEEE TRANSACTIONS ON SIGNAL PROCESSING, VOL 59, NO 5, MAY 2011 Characterization of Convex and Concave Resource Allocation Problems in Interference Coupled Wireless Systems Holger Boche, Fellow, IEEE,
More informationA NUMERICAL METHOD TO SOLVE A QUADRATIC CONSTRAINED MAXIMIZATION
A NUMERICAL METHOD TO SOLVE A QUADRATIC CONSTRAINED MAXIMIZATION ALIREZA ESNA ASHARI, RAMINE NIKOUKHAH, AND STEPHEN L. CAMPBELL Abstract. The problem of maximizing a quadratic function subject to an ellipsoidal
More informationFundamentals of Statistical Signal Processing Volume II Detection Theory
Fundamentals of Statistical Signal Processing Volume II Detection Theory Steven M. Kay University of Rhode Island PH PTR Prentice Hall PTR Upper Saddle River, New Jersey 07458 http://www.phptr.com Contents
More informationRobotics 2 Target Tracking. Kai Arras, Cyrill Stachniss, Maren Bennewitz, Wolfram Burgard
Robotics 2 Target Tracking Kai Arras, Cyrill Stachniss, Maren Bennewitz, Wolfram Burgard Slides by Kai Arras, Gian Diego Tipaldi, v.1.1, Jan 2012 Chapter Contents Target Tracking Overview Applications
More informationOptimal PMU Placement for Power System State Estimation with Random Communication Packet Losses
2011 9th IEEE International Conference on Control and Automation (ICCA) Santiago, Chile, December 19-21, 2011 TueB1.1 Optimal PMU Placement for Power System State Estimation with Random Communication Packet
More information