Comparative Performance Analysis of Three Algorithms for Principal Component Analysis


Ronnie LANDQVIST, Abbas MOHAMMED
Dept. of Signal Processing, Blekinge Institute of Technology, 372 25 Ronneby, Sweden
ronnie.landqvist@gmail.com, abbas.mohammed@bth.se

Abstract. Principal Component Analysis (PCA) is an important concept in statistical signal processing. In this paper, we evaluate an on-line algorithm for PCA, which we denote the Exact Eigendecomposition (EE) algorithm. The EE algorithm is evaluated using Monte Carlo simulations and compared with the PAST and RP algorithms. In addition, we investigate a normalization procedure of the eigenvectors for PAST and RP. The results show that EE has the best performance and that normalization improves the performance of the PAST and RP algorithms, respectively.

Keywords: Principal component analysis, signal processing, exact eigen-decomposition.

1. Introduction

Principal Component Analysis (PCA) is an important concept in statistical signal processing. PCA is used in various applications such as feature extraction (compression), signal estimation and detection. In blind signal separation, PCA may be used to prewhiten (i.e. decorrelate) signals before Independent Component Analysis (ICA) is applied for finding the independent signal components. ICA algorithms are generally more efficient if the input data is white, since the number of possible solutions to the problem decreases significantly.

There are two different approaches to PCA. If the whole data set is available, analytical methods can be used to calculate the Principal Components (PCs). On the other hand, if PCA is to be used in real-time applications, then the PCs might have to be estimated on-line for each new sample. Algorithms that operate on the whole data set are generally denoted off-line algorithms, while the latter type is denoted on-line algorithms.

Two on-line algorithms for PCA are the Projection Approximation Subspace Tracking (PAST) algorithm and the Rao-Principe (RP) algorithm. The first was proposed by Yang in [1]. Essentially, PAST finds the PCs by minimizing the linear PCA criterion [2], [3], [4] using a gradient-descent technique or any recursive least squares variant. In this paper, the well-known Recursive Least Squares (RLS) algorithm [5] is used. The second algorithm, RP, was proposed by Rao and Principe in [6], [7]. RP estimates the PCs using update equations derived from the Rayleigh quotient, without any external parameter such as a step size or forgetting factor. PAST, on the other hand, relies on a forgetting factor that has to be tuned before the algorithm can be used.

This paper investigates the effect of incorporating normalization of the eigenvectors in the PAST and RP algorithms. In addition, an on-line algorithm for PCA, denoted as the Exact Eigendecomposition (EE) algorithm, is proposed. EE is based on direct estimation of the correlation matrix followed by calculation of the PCs using the EIG command in Matlab. The performance of these algorithms is assessed for different configurations using Monte Carlo computer simulations in Matlab.

The organization of the rest of the paper is as follows. In section 2, the investigated algorithms for PCA are reviewed. The estimation of coefficients and the normalization procedure of the eigenvectors are outlined in section 3. The simulation results and performance evaluations of the different algorithms are presented in section 4.
Finally, section 5 concludes the paper.

2. Principal Component Analysis

On-line PCA can be viewed as a functional block whose input is a complex m-by-1 data vector x(n) at the n-th time instant. At each time instant, the outputs are estimated eigenvectors and eigenvalues of the correlation matrix R of the data. These eigenvectors and eigenvalues are denoted the PCs of the data. Note that in this paper it is assumed that the correlation matrix R of the data is time-invariant.

2.1 The PAST Algorithm

The PAST algorithm [1] minimizes an approximation of the linear PCA cost criterion,

J(w(n)) = Σ_{i=1}^{n} β^(n-i) ‖x(i) - w(n) x̃(i)‖²   (1)

where x̃(n) = w^H(n-1) x(n), β is a scalar forgetting factor, w(n) is an m-by-1 coefficient vector and ‖·‖ denotes the vector norm. This approximate version of the cost criterion is quadratic and has the same form as the cost criterion of the RLS algorithm, with the exception that the error signal (inside the vector norm) is a vector instead of a scalar. Thus, the minimization may be performed by incorporating the new signal x̃(n) in the RLS algorithm: the RLS algorithm updates the coefficient vector w(n) using x̃(n) as the input signal and x(n) as the desired signal. When the algorithm has converged, w(n) contains the eigenvector corresponding to the largest eigenvalue of the correlation matrix R, i.e. the largest PC. The eigenvalue can be found directly by the RLS algorithm via a reformulation of the update equation for the inverse correlation matrix. The deflation technique is used for sequential estimation of the remaining PCs.

2.2 The RP Algorithm

The RP algorithm [6], [7] is derived from the Rayleigh quotient and uses the following rule for extraction of the first PC:

w(n) = (w(n-1) + R w(n-1)) / (1 + w^H(n-1) R w(n-1)).   (2)

In the implementation, the terms R w(n-1) and w^H(n-1) R w(n-1) are redefined as P(n-1) and Q(n-1) and updated independently before the new coefficient vector w(n) is calculated. This is convenient since Q(n-1) is then an estimate of the largest eigenvalue. The update rules for P(n-1) and Q(n-1) may be rewritten so that the need for an explicit estimate of the correlation matrix R vanishes. No forgetting factor is incorporated in the estimation of P(n-1) and Q(n-1), although one could be useful in a time-varying environment. The remaining PCs are estimated using the same deflation technique as in the PAST algorithm.

2.3 The EE Algorithm

In this paper the EE algorithm is used and evaluated for PCA. In the EE algorithm, the PCs are found by directly computing the eigenvalues and eigenvectors of an estimate of the correlation matrix. The following update rule is proposed for EE:

R(n) = β R(n-1) + x(n) x^H(n).   (3)

Here, β is a scalar forgetting factor. The correlation matrix is estimated by (1-β)R(n) and is used for calculating the eigenvectors and eigenvalues (i.e. the PCs) by calling the EIG function in Matlab. A key difference between EE and the other algorithms is that EE uses direct estimation of the correlation matrix as the basis for calculating the PCs, while PAST and RP estimate the PCs directly, without the detour via the correlation matrix.

3. Choice of Coefficients and Normalization

The PAST and RP algorithms estimate the different PCs in an iterative way, where the largest PC is estimated first, followed by a deflation step. After the deflation, the second largest PC is estimated, followed by a new deflation step, and so on. This procedure continues until all desired PCs are estimated; then the sample instant n is advanced by one and the procedure is repeated for the next data vector x(n). The deflation step is essentially a simple algebraic rule that removes the contribution of the latest estimated PC from the data vector. The most straightforward approach is to use the most current estimate of the eigenvector in this rule, but one could also choose to use the previous estimate. Which choice is best is not clear and is therefore investigated by Monte Carlo simulations in this paper. Both the PAST and RP algorithms are simulated in two configurations, using either the old or the new estimate.
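For concreteness, the per-sample estimation and deflation loop described above can be sketched as follows. This is an illustrative reconstruction rather than the authors' implementation: the single-component update follows the standard PASTd form of [1], the deflation is written as a plain projection removal, and all names (pastd_step, use_new_weights, etc.) are invented for the example.

```python
import numpy as np

def pastd_step(x, W, d, beta=1.0, use_new_weights=True, normalize=True):
    """One time instant of PAST with deflation for the p largest PCs (sketch).

    x : m-by-1 data vector for this time instant
    W : m-by-p matrix whose columns are the current eigenvector estimates
    d : length-p array of running power terms (the eigenvalue estimates)
    """
    x_def = np.array(x, copy=True)           # working copy that is deflated in place
    for i in range(W.shape[1]):
        w_old = W[:, i].copy()
        y = np.vdot(w_old, x_def)            # y(n) = w^H(n-1) x(n), a scalar
        d[i] = beta * d[i] + abs(y) ** 2     # running power / eigenvalue estimate
        e = x_def - w_old * y                # vector-valued error of the RLS-like update
        w_new = w_old + e * (np.conj(y) / d[i])
        if normalize:                        # optional renormalization (discussed next)
            w_new = w_new / np.linalg.norm(w_new)
        W[:, i] = w_new
        # Deflation: remove the contribution of the latest estimated PC from the
        # data vector, using either the new or the old eigenvector estimate.
        w_defl = w_new if use_new_weights else w_old
        x_def = x_def - w_defl * np.vdot(w_defl, x_def)
    return W, d
```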
Eigenvectors are by definition normalized so that w^H w equals 1. However, the formulation of the PAST and RP algorithms does not guarantee this condition. Depending on the application, the eigenvectors can be normalized after each iteration, every l-th iteration, or only when the coefficients have converged [7]. In this paper, it is proposed that both algorithms are adjusted so that the eigenvectors are normalized between iterations. Normalization is not required for EE, since the EIG function ensures that the eigenvectors are normalized at all times.

According to the previous discussion, there are now a total of four possible configurations of PAST and RP. First, the old or the new estimate may be used in the deflation rule. Second, normalization of the coefficient vectors is either on or off. The different configurations are denoted A, B, C or D: old weights and normalization off (A), old weights and normalization on (B), new weights and normalization off (C), and new weights and normalization on (D). Two more configurations, denoted E and F, are also used. In these configurations, normalization is performed every 10th sample (E) or only in the 10 last samples of a realization (F), respectively. In both E and F the new weights are used in the deflation step. The different configurations are summarized in Table 1. The configuration letter (A, B, C, D, E, F) is added to the algorithm name in order to differentiate between them; examples are PAST-A, PAST-B, RP-A, RP-B, etc. EE does not have these configurations and hence is simply denoted EE. The performance of PAST and RP depends on the configuration and is investigated in the next section.
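In contrast to PAST and RP, EE needs neither a deflation loop nor an explicit normalization step, because the eigendecomposition routine returns a full set of orthonormal eigenvectors at once. The following is a minimal sketch of the EE recursion (3), using numpy.linalg.eigh in place of Matlab's EIG; the function name is invented here, and the scaling used for β = 1 is an assumption added for the example (scaling affects only the eigenvalues, not the eigenvectors).

```python
import numpy as np

def ee_step(x, R, n, beta=1.0, num_pcs=5):
    """One time instant of the EE algorithm: update R(n) as in (3), then re-diagonalize."""
    R = beta * R + np.outer(x, np.conj(x))       # R(n) = beta * R(n-1) + x(n) x^H(n)
    # Scale the accumulated matrix to a correlation-matrix estimate: (1 - beta) * R(n)
    # as in the text, or R(n)/n when beta = 1 (an assumed convention for this sketch).
    R_hat = R / n if beta == 1.0 else (1.0 - beta) * R
    # eigh returns unit-norm, mutually orthogonal eigenvectors, so no normalization step.
    eigvals, eigvecs = np.linalg.eigh(R_hat)
    order = np.argsort(eigvals)[::-1][:num_pcs]  # keep the largest PCs first
    return R, eigvals[order], eigvecs[:, order]
```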

                                  New weights   Old weights
Normalization off                      C             A
Normalization every sample             D             B
Normalization every 10th sample        E             -
Normalization last 10 samples          F             -

Tab. 1. The six possible configurations for PAST and RP, respectively. The configuration letter (A, B, C, D, E, F) is added to the algorithm name in order to differentiate between them; examples are PAST-A, PAST-B, RP-A, RP-B, etc.

4. Performance Evaluation

In this section we evaluate the performance of the PAST-x, RP-x and EE algorithms by means of computer simulations that employ a Monte Carlo approach. Here, x denotes configuration A, B, C, D, E or F, respectively. In the simulations, a random White Gaussian Noise (WGN) signal with variance 1 is filtered by a first-order AR filter H(z) = 1/(1 - 0.9495 z^-1). The resulting filtered signal is then fed to a 15-tap delay line, producing input data vectors x(n) of size 15-by-1, see Fig. 1. The five largest true eigenvalues of the correlation matrix R = E{x(n)x^H(n)} are found to be 119, 17.8, 5.58, 2.65 and 1.55, respectively. The condition number, or eigenvalue spread, of R is 451. The first-order AR filter was chosen so that it results in approximately the same eigenvalues as those used in [6].

Fig. 1. The signal model used in the simulations (WGN filtered by H(z) and fed to a 15-tap delay line producing x(n)).

The performance evaluation is set up so that each algorithm (PAST-x, RP-x and EE) is simulated for 10000 Monte Carlo runs (i.e. realizations) consisting of 2000 iterations (i.e. data samples). After the last iteration in each Monte Carlo run, the five largest estimated eigenvalues, λ_i, i = 1, ..., 5, and the corresponding estimated eigenvectors, w_i,est, i = 1, ..., 5, are examined. An estimation error (the difference between the estimated and true eigenvalue, i.e. λ_i,est - λ_i) is calculated and saved. Also, the absolute values of the Directional Cosines (DC), d_i, between the true and estimated eigenvectors are calculated and saved. A value of +1 indicates perfect alignment of the estimated and true eigenvectors, while 0 indicates that the estimated and true eigenvectors are orthogonal. The forgetting factor β is equal to 1 for all simulations of PAST and EE.

Fig. 2. Cumulative distribution functions for the 5th principal component for PAST and EE. The vertical line corresponds to the true eigenvalue.

Fig. 3. Cumulative distribution functions for the 5th principal component for RP and EE. The vertical line corresponds to the true eigenvalue.

The saved eigenvalue estimates and directional cosines may be evaluated by plotting the Cumulative Distribution Function (CDF). The CDFs of the eigenvalue estimates for the 5th principal component are plotted in Fig. 2 and 3 for configurations A-D of PAST and RP, respectively. The CDF for EE and the true eigenvalue λ_5 are also plotted in these figures. The figures show that PAST and RP have similar performance, that EE has the best performance, and that the B and D modes perform better than the A and C modes of the algorithms.

A compilation of all results is shown in Figures 4 to 23. The figures show performance results for all 13 algorithms for the five largest PCs. Figures 4 to 8 show the Normalized Magnitude Bias (NMB) of the eigenvalue estimates.
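The signal model and the reference eigenvalues quoted above can be reproduced with a short script. The sketch below uses Python/NumPy/SciPy instead of the authors' Matlab; the sample count and random seed are arbitrary choices for illustration, not values from the paper.

```python
import numpy as np
from scipy.signal import lfilter

rng = np.random.default_rng(0)
m, n_samples = 15, 200_000                 # 15 taps as in the paper; sample count illustrative

# Unit-variance WGN filtered by the first-order AR filter H(z) = 1 / (1 - 0.9495 z^-1)
wgn = rng.standard_normal(n_samples)
s = lfilter([1.0], [1.0, -0.9495], wgn)

# 15-tap delay line: x(n) = [s(n), s(n-1), ..., s(n-14)]^T (one column per time instant)
X = np.stack([s[m - 1 - k : n_samples - k] for k in range(m)], axis=0)

# Sample correlation matrix, its five largest eigenvalues and the eigenvalue spread,
# to be compared with 119, 17.8, 5.58, 2.65, 1.55 and a spread of about 451.
R = X @ X.T / X.shape[1]
eigvals = np.sort(np.linalg.eigvalsh(R))[::-1]
print(eigvals[:5])
print(eigvals[0] / eigvals[-1])
```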

Figures 9 to 13 show the Coefficient of Variation (CV) of the eigenvalue estimates, Figures 14 to 18 the Average Magnitude Directional Cosine (AMDC) for the eigenvectors, and Figures 19 to 23 the Variance of the Magnitude Directional Cosines (VMDC). All these performance measures are defined in detail in Tab. 2.

Measure                              Definition
Normalized Magnitude Bias (NMB)      Mean{ |λ_i,est - λ_i| / λ_i }
Coefficient of Variation (CV)        Std{ λ_i,est } / Mean{ λ_i,est }
Average Magnitude DC (AMDC)          Mean{ |w_i^H w_i,est| / sqrt(w_i^H w_i · w_i,est^H w_i,est) }
Variance of Magnitude DC (VMDC)      Var{ |w_i^H w_i,est| / sqrt(w_i^H w_i · w_i,est^H w_i,est) }

Tab. 2. Performance measures used in the evaluation.

By analyzing the figures, the following observations can be made regarding the eigenvalue estimation (Fig. 4 to 13):
- The B and D configurations of PAST and RP are the best, since they have the lowest NMB and CV for most eigenvalues.
- The EE algorithm has the best performance of all algorithms, since it results in the lowest NMB and CV for most eigenvalues.
- PAST has slightly better performance than RP, see for example eigenvalues 4 and 5.

Similar observations can be made for the eigenvector estimation (Fig. 14 to 23):
- All algorithms perfectly estimate the first eigenvector, since the AMDC is equal to one and the VMDC is equal to zero for all algorithms.
- The EE algorithm perfectly estimates the first five eigenvectors, since the AMDC is equal to one and the VMDC is equal to zero.
- The B and D configurations of PAST and RP are the best, since the AMDC is closest to one and the VMDC is the lowest for all eigenvectors.
- PAST has slightly better performance than RP, see for example eigenvectors 4 and 5.

From these observations, the following can be concluded:
- The B and D configurations of PAST and RP have similarly good performance. The A, C, E and F configurations all have worse performance, and A is the worst.
- When normalization is not used, the new weights must be used in the deflation step in order for the algorithm to be robust. When normalization is used, the choice of the old or the new weights in the deflation step is not crucial; both configurations result in similar performance.
- Normalization is highly recommended, since the algorithms perform better than when normalization is not used.
- Finally, the EE algorithm has the best overall performance and may therefore be used as a benchmark for comparing different PCA algorithms.

5. Conclusions

In this paper we proposed the on-line Exact Eigendecomposition (EE) algorithm for PCA. In addition, we incorporated a normalization procedure of the eigenvectors in the PAST and RP algorithms, and investigated its effect for different configurations. The performance of the algorithms was compared using Monte Carlo simulations and several performance measures. The simulation results clearly demonstrate that the algorithms operate more reliably when normalization is adopted. It is also shown that the EE algorithm provides the best performance and can be used as a benchmark for comparing PCA algorithms.

References

[1] YANG, B. Projection approximation subspace tracking. IEEE Transactions on Signal Processing, 1995, vol. 32, p. 95-107.
[2] XU, L. Least mean square error reconstruction principle for self-organizing neural nets. Neural Networks, 1993, vol. 6, p. 627-648.
[3] PAJUNEN, P., KARHUNEN, J. Least-squares methods for blind source separation based on nonlinear PCA. International Journal of Neural Systems, 1997, vol. 8, p. 601-612.
[4] KARHUNEN, J., JOUTSENSALO, J. Representation and separation of signals using nonlinear PCA type learning. Neural Networks, 1994, vol. 7, p. 113-127.
[5] HAYKIN, S. Adaptive Filter Theory. Prentice Hall, 4th ed., 2002.
[6] RAO, Y. N., PRINCIPE, J. C. On-line principal component analysis based on a fixed-point approach. IEEE International Conference on Acoustics, Speech and Signal Processing, 2002, vol. 1, p. I-981-I-984.
[7] RAO, Y. N., PRINCIPE, J. C. A fast, on-line algorithm for PCA and its convergence characteristics. IEEE Signal Processing Society Workshop, 2000, vol. 1, p. 299-307.

Fig. 4. Normalized magnitude bias for eigenvalue 1 for the different algorithms.
Fig. 5. Normalized magnitude bias for eigenvalue 2 for the different algorithms.
Fig. 6. Normalized magnitude bias for eigenvalue 3 for the different algorithms.
Fig. 7. Normalized magnitude bias for eigenvalue 4 for the different algorithms.
Fig. 8. Normalized magnitude bias for eigenvalue 5 for the different algorithms.
Fig. 9. Coefficient of variation for eigenvalue 1 for the different algorithms.
Fig. 10. Coefficient of variation for eigenvalue 2 for the different algorithms.
Fig. 11. Coefficient of variation for eigenvalue 3 for the different algorithms.

Fig. 12. Coefficient of variation for eigenvalue 4 for the different algorithms.
Fig. 13. Coefficient of variation for eigenvalue 5 for the different algorithms.
Fig. 14. Average magnitude directional cosine for eigenvector 1 for the different algorithms.
Fig. 15. Average magnitude directional cosine for eigenvector 2 for the different algorithms.
Fig. 16. Average magnitude directional cosine for eigenvector 3 for the different algorithms.
Fig. 17. Average magnitude directional cosine for eigenvector 4 for the different algorithms.
Fig. 18. Average magnitude directional cosine for eigenvector 5 for the different algorithms.
Fig. 19. Variance of magnitude directional cosine for eigenvector 1 for the different algorithms.

Fig. 20. Variance of magnitude directional cosine for eigenvector 2 for the different algorithms.
Fig. 21. Variance of magnitude directional cosine for eigenvector 3 for the different algorithms.
Fig. 22. Variance of magnitude directional cosine for eigenvector 4 for the different algorithms.
Fig. 23. Variance of magnitude directional cosine for eigenvector 5 for the different algorithms.

About Authors...

Ronnie LANDQVIST received his PhD degree from Blekinge Institute of Technology, Sweden, in 2005. He is currently employed by Ericsson AB, where he works in the system simulation group on development for GSM, 3G and 4G.

Abbas MOHAMMED received his PhD degree from the University of Liverpool, UK, in 1992. He is currently an Associate Professor in Radio Communications and Navigation at Blekinge Institute of Technology, Sweden. Recently, he received the Blekinge Research Foundation Researcher of the Year Award for 2006. He is a Fellow of the IEE, a life member of the International Loran Association, and a member of the IEEE and IEICE. In 2006, he received a Fellowship of the UK's Royal Institute of Navigation in recognition of his significant contribution to navigation, and in particular to advanced signal processing techniques that have enhanced the capability of Loran-C. He is also a Board Member of the Radioengineering Journal and the IEEE Signal Processing Swedish Chapter. He has published over 100 papers on telecommunications and navigation systems. He has also developed techniques for measuring skywave delays in Loran-C receivers. He was the Editor of a special issue "Advances in Signal Processing for Mobile Communication Systems" of Wiley's International Journal of Adaptive Control and Signal Processing.