Application-Oriented Estimator Selection

IEEE SIGNAL PROCESSING LETTERS, VOL. 22, NO. 4, APRIL 2015

Application-Oriented Estimator Selection

Dimitrios Katselis, Member, IEEE, and Cristian R. Rojas, Member, IEEE

Abstract: Designing the optimal experiment for the recovery of an unknown system with respect to the end performance metric of interest is a recently established practice in the system identification literature. This practice leads to superior end performance compared to designing the experiment with respect to some generic metric quantifying the distance of the estimated model from the true one. This is usually done by choosing and fixing the estimation method to either a standard maximum likelihood (ML) or a Bayesian estimator. In this paper, we pose the intuitive question: can we design better estimators than the usual ones with respect to an end performance metric of interest? Based on a simple linear regression example, we affirmatively answer this question.

Index Terms: End performance metric, estimation, experiment design, training.

I. INTRODUCTION

A basic subproblem in the context of system identification is that of experiment design. Overviews of this topic over the last decade can be found in [4], [5], [6], [15]. Contributions include convexification [9], robust design [12], [16], least-costly design [3], and closed- versus open-loop experiments [1]. In the context of application-oriented experiment design, the training is designed to optimize a performance metric associated with the particular application where the estimated model will be used [10], [11]. A conceptual framework for application-oriented experiment design was outlined in [6]. The least-costly experiment in this framework is given as follows:

    minimize    Experimental effort
    subject to  θ̂ ∈ Θ_adm,    (1)

where θ̂ is the identified model and Θ_adm is the set of admissible models. Here, Θ_adm is defined through the end-performance metric of interest, which depends on the true model, and through a predefined accuracy parameter. For the experimental effort, commonly used measures are the input or output power and the experiment length. For the model estimate, standard maximum likelihood (ML) and Bayesian estimation methods are usually employed, e.g., minimum mean square error (MMSE) estimation. Fixing the system estimators to usual methods possessing some optimality attributes, application-oriented training designs have been shown to outperform classical training designs based on usual quadratic loss functions [10], [11]. Following this line of thinking, a dual question is: can we design better estimators than the usual ones with respect to an end performance metric of interest when considering finite-length training sequences? Note that under the small sample size assumption, the asymptotic efficiency of the ML estimator together with its invariance property are irrelevant.

Manuscript received August 09, 2014; revised October 07, 2014; accepted October 09, 2014. Date of publication October 16, 2014; date of current version October 22, 2014. The associate editor coordinating the review of this manuscript and approving it for publication was Prof. Phillip Ainsleigh. D. Katselis is with the Industrial and Enterprise Systems Engineering Department, University of Illinois at Urbana-Champaign, Urbana, IL, USA (e-mail: katselis@illinois.edu). C. R. Rojas is with the ACCESS Linnaeus Center, KTH Royal Institute of Technology, Stockholm, Sweden (e-mail: cristian.rojas@ee.kth.se).
Optimizing the training and optimally choosing the system estimator are two problems that should ultimately be tackled jointly. Nevertheless, both in the framework of classical and application-oriented training designs, a separation strategy is applied: initially, we select and fix the system estimator to a choice that is known to possess some optimality properties, e.g., the ML or MMSE estimators, and then we optimize the training. This implies that, in the case of the dual question, one should choose and fix the training to something that possesses some optimality attributes for any system estimator, or to something that is independent of the system estimator, so that the isolated optimal selection of the system estimator is meaningful. In this paper, we affirmatively answer the aforementioned dual question based on a simple linear Gaussian regression example that is used here as the simplest possible paradigm. Finally, we numerically demonstrate the validity of the claims, verifying the presented analysis.

It is worth mentioning that the approach taken in this paper is closely related to the so-called Stein phenomenon [8], [17], according to which there are biased estimators that outperform the standard least-squares (LS) and ML estimators for specific performance metrics. These biased estimators have been studied extensively in statistics, and have shown great potential in signal processing applications [2], [14].

Notation: Vectors are denoted by bold letters. The superscripts T, H and * stand for transposition, Hermitian transposition and conjugation, respectively. |·| is the complex modulus. For a vector x, x_i denotes its i-th entry. The expectation operator is denoted by E[·]. Finally, CN(μ, σ²) denotes the complex Gaussian distribution with mean μ and variance σ².

II. PROBLEM STATEMENT

Consider the scalar linear Gaussian model

    y_n = θ x_n + e_n,    (2)

where y_n is the observed signal at time instant n, θ is the unknown system parameter, assumed to be complex-valued, x_n is the input at the same time instant, and e_n is complex, circularly symmetric Gaussian noise with zero mean and variance σ². We further assume that E[x_n] = 0 and E[|x_n|²] = σ_x².

In addition, {x_n} and {e_n} are independent random sequences, while {e_n} is a white random sequence. Assume that the training length is limited to N time slots and that the maximum allowed training energy for estimation purposes is E. We can collect the received samples corresponding to the training in a vector:

    y = θ x + e,    (3)

where y is the vector of received samples corresponding to the training, x is the vector of training symbols and e is the vector of noise samples. Considering the class of linear parameter estimators, the system is estimated as follows:

    θ̂ = f^H y,    (4)

where f is an N×1 estimating filter.

To create a paradigm suitable for answering the aforementioned dual question, we consider as performance metric the mean square error (MSE) of a linear input estimator that uses the system knowledge and delivers an estimate of the input variable:

    x̂_n = g(θ) y_n.    (5)

Two standard input estimators are the minimum MSE (MMSE) estimator, which minimizes E[|x̂_n − x_n|²] when the input is treated as a random variable, and the zero-forcing (ZF) estimator x̂_n = y_n/θ, which coincides with the LS input estimator when the input is treated as deterministic. If the signal-to-noise ratio (SNR) is high, i.e., σ²/(σ_x²|θ|²) is small, both estimators lead to very similar results. In the sequel, we will focus on the ZF estimator, as it leads to simpler derivations and it does not depend on knowledge of the noise variance σ²; however, due to the connection between the ZF and MMSE input estimators, it is expected that the results in the paper can be extended to the MMSE estimator using perturbation analysis (starting from the design for the ZF estimator). Given the input estimator, we define the considered end performance metric based on the input estimator that only knows a system estimate θ̂ as follows:

    J(θ̂) = E[|y_n/θ̂ − x_n|²].    (6)

Our goal will be to determine the optimal parameter estimators for fixed experiments of finite length so that J(θ̂) is minimized. To this end, the following section presents some useful ideas.

III. RUDIMENTS

Consider the ML estimator. For the linear Gaussian regression this estimator coincides with the minimum variance unbiased (MVU) estimator. We therefore replace our references to the ML estimator by references to the MVU estimator from now on. Since the MVU is an unbiased estimator, it satisfies E[θ̂_MVU] = θ. By [13], θ̂_MVU = x^H y/(x^H x). If we assume that θ is a random variable and that its prior distribution is known, then instead of the MVU one could use the MMSE parameter estimator. With our assumptions and the extra assumption that θ ~ CN(0, σ_θ²), one obtains θ̂_MMSE = σ_θ² x^H y/(σ_θ² x^H x + σ²). Assuming that θ is a deterministic but unknown variable, it turns out that (cf. (6))

    J^d(θ̂) = E[(σ_x² |θ − θ̂|² + σ²)/|θ̂|²].    (7)

Here, the superscript d stands for deterministic. If θ is assumed to be a random variable, then the corresponding end-performance metric J^r is obtained by averaging the last expression over θ. Depending on the probability distributions of θ̂ and θ, J^d and J^r may fail to exist due to infinite moment problems. In order to obtain well-behaved estimators that will be used in conjunction with the actual performance metrics, some sort of regularization is needed. Some ideas for appropriate regularization techniques may be obtained by modifying robust estimators (against heavy-tailed distributions), e.g., by trimming a standard estimator if it gives a value very close to zero [7]. An example of such a trimmed estimator is given as follows:

    θ̂_reg = θ̂            if |θ̂| ≥ ε,
    θ̂_reg = ε θ̂ / |θ̂|    if 0 < |θ̂| < ε,    (8)

where θ̂ can be any estimator and ε > 0 a regularization parameter¹. Clearly, the reader may observe that the definition of the trimmed estimator preserves the continuity at |θ̂| = ε. Additionally, the event θ̂ = 0 has zero probability since the distribution of θ̂ is continuous.
Therefore, in this case (θ̂ = 0) the trimmed estimator can be arbitrarily defined, e.g., θ̂_reg = ε.

We focus now on J^d. Assume a fixed ε. In the appendix, we show that for a sufficiently small ε and a sufficiently high SNR during training, minimizing J^d is equivalent to minimizing the following approximation:

    J_0^d = (σ_x² E[|θ − θ̂|²] + σ²) / E[|θ̂|²].    (9)

Following similar steps and using some minor additional technicalities, we can work with a corresponding approximation J_0^r, given in (10), instead of J^r. We will call the last approximations zeroth order metrics. The following analysis and results will be based on the zeroth order metrics, and they will reveal the dependence of the parameter estimator selection on the considered (any) end performance metric.

Remark: In (9), one can observe that, after approximating the mean value of the ratio by the ratio of the mean values, the infinite moment problem is eliminated. In the following, all zeroth order metrics will be defined based on the non-trimmed θ̂ to ease the derivations. This treatment is approximately valid when ε is sufficiently small, as is actually shown in eq. (19) of the appendix.

¹ This parameter can be tuned via cross-validation or any other technique, although in the simulation section we empirically select it for simplicity purposes.
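To make the trimming rule (8) and the zeroth-order approximation (9) concrete, here is a minimal Monte Carlo sketch in Python. It uses the notation reconstructed above (model y_n = θ x_n + e_n, MVU estimate x^H y / x^H x, ZF end metric); all numerical values, and the explicit forms of J^d and J_0^d, are illustrative assumptions consistent with that reconstruction rather than expressions quoted verbatim from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameter choices (not taken from the paper).
theta = 0.8 + 0.6j           # true system parameter
sigma2 = 0.5                 # noise variance
sigma_x2 = 1.0               # input variance during data transmission
N, E_budget = 8, 8.0         # training length and training energy
eps = 0.05                   # trimming threshold of (8)
x = np.sqrt(E_budget / N) * np.ones(N, dtype=complex)   # any training x with ||x||^2 = E

# Monte Carlo over the training noise: MVU estimates and their trimmed versions.
M = 50_000
e_tr = np.sqrt(sigma2 / 2) * (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N)))
y_tr = theta * x + e_tr                                  # received training samples, cf. (3)
th_mvu = y_tr @ x.conj() / np.vdot(x, x).real            # theta_hat = x^H y / (x^H x)
mag = np.abs(th_mvu)
th_reg = np.where(mag >= eps, th_mvu, eps * th_mvu / np.maximum(mag, 1e-300))  # trimming (8)

# ZF end metric J^d = E[(sigma_x^2 |theta - th|^2 + sigma^2)/|th|^2] versus its
# zeroth-order approximation (ratio of means instead of mean of the ratio).
num = sigma_x2 * np.abs(theta - th_reg) ** 2 + sigma2
print("J^d   (trimmed MVU) :", np.mean(num / np.abs(th_reg) ** 2))
print("J_0^d (zeroth order):", np.mean(num) / np.mean(np.abs(th_reg) ** 2))
```

Running the sketch with a small ε and a reasonably high training SNR shows the two printed values staying close, which is the regime in which the zeroth order metric is used as a surrogate for the true one.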

IV. MINIMIZING THE ZEROTH ORDER METRICS

In this section, we assume that the training sequence is fixed and we examine the estimator selection problem for J_0^d and J_0^r.

A. Deterministic θ

The expectation operators in Eq. (9) are with respect to the input, the operation noise and the training noise (the latter entering through θ̂). Writing θ̂ = f^H y = θ f^H x + f^H e, we have:

    J_0^d(f) = (σ_x² (|θ|² |f^H x − 1|² + σ² ‖f‖²) + σ²) / (|θ|² |f^H x|² + σ² ‖f‖²).    (11)

Taking the gradient in (11) with respect to f, discarding the (positive) denominator and setting the result to zero, we obtain an equation whose numerator is given in (12). Note that the MVU filter does not zero this expression for any θ. Therefore, the MVU is not an optimal system estimator in this case. We can state this result more formally:

Proposition 1: The MVU estimator is not an optimal system estimator for the task of minimizing J_0^d, when θ is considered deterministic but otherwise an unknown quantity.

The question that arises in this case is how to find the optimal system estimator in this setup or, more generally, how to determine a uniformly better estimator for minimizing J_0^d. Equating the numerator of the gradient of J_0^d w.r.t. f to 0 and taking the inner product of both sides with x, we obtain the following optimal estimating filter:

    f_o = (1 + σ²/(σ_x² |θ|²)) x / (x^H x).    (13)

The problem with (13) is that the optimal solution depends on the unknown system parameter θ. To deal with this dependence we may assume a noninformative prior distribution for θ. If the real and imaginary parts of θ are considered bounded in the intervals [a, b] and [c, d]², then one can treat them as independent random variables uniformly distributed on [a, b] and [c, d], respectively. J_0^d is now replaced by E_θ[J_0^d], where E_θ denotes the expectation with respect to the joint (uniform) distribution of the real and imaginary parts of θ. Furthermore, this approach leads to the substitution of |θ|² by E_θ[|θ|²] in (13).

B. Random θ

In this case, the actual prior statistics of θ are known. Easily, the optimal filter, say f_o^r, is given by (13) but with |θ|² replaced by E[|θ|²], while it can be easily shown that the MVU and MMSE filters do not zero the corresponding gradient. This implies the following result:

Proposition 2: The MVU and MMSE estimators are not optimal system estimators for the task of minimizing J_0^r, when the prior distribution of θ is known.

Remarks: 1) The claimed optimality of the derived estimators in this section is with respect to the zeroth order performance metrics. These estimators also turn out to be uniformly better than the MVU and MMSE estimators with respect to the true end performance metric, as we demonstrate in the simulation section. 2) An alternative way to express eq. (13) is θ̂_o = (1 + ρ) θ̂_MVU, where ρ = σ²/(σ_x² |θ|²) is the inverse SNR. Depending on how we implement the last estimator in practice, ρ turns into a tuning parameter controlling the introduction of bias in the MVU estimator.

C. Discussion on the Optimal Training

To optimize the training vector, one should first fix the parameter estimator. This is a complementary problem with respect to the approach that we have followed so far. Suppose that we use either the MVU or the MMSE estimator. One can observe that, by fixing for example the MVU estimator, the problem of selecting the training vector optimally is meaningless. To see this, consider the case of the MVU estimator. We then have

    J_0^d(f_MVU) = (σ_x² σ²/(x^H x) + σ²) / (|θ|² + σ²/(x^H x)),

which only depends on the training energy x^H x. Furthermore, it follows that at sufficiently high SNR, i.e., when σ_x² |θ|² > σ², J_0^d is minimized when x^H x = E, which is intuitively appealing. Therefore, any x with energy equal to E is an equally good training vector for the MVU estimator. Thus, for the same x, the estimator based on (13) will be better than the MVU.

Fig. 1. True end performance metric with SNR during training equal to 0 dB. The error bars correspond to one standard deviation confidence bounds around the mean values.

² This assumption is usually reasonable in practice.
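As a concrete illustration of the estimating filter in (13) and of the noninformative-prior substitution of Section IV-A, consider the following sketch. The function names are made up for illustration, and the filter form (an MVU matched filter inflated by one plus the inverse SNR, cf. Remark 2) follows the reconstruction used in this transcription rather than a verbatim formula from the paper.

```python
import numpy as np

def app_oriented_filter(x, sigma2, sigma_x2, abs_theta_sq):
    """Estimating filter in the spirit of (13): f = (1 + rho) x / (x^H x), where
    rho = sigma^2 / (sigma_x2 * |theta|^2) is the inverse SNR. Applying f^H y gives
    (1 + rho) times the MVU estimate, i.e., a deliberately biased MVU estimate."""
    rho = sigma2 / (sigma_x2 * abs_theta_sq)
    return (1.0 + rho) * x / np.vdot(x, x).real

def mean_abs_theta_sq_uniform(re_interval, im_interval):
    """E[|theta|^2] under independent uniform priors on Re(theta) and Im(theta),
    used to replace the unknown |theta|^2 in (13) (noninformative-prior approach)."""
    a, b = re_interval
    c, d = im_interval
    second_moment = lambda lo, hi: (lo * lo + lo * hi + hi * hi) / 3.0  # E[u^2], u ~ U(lo, hi)
    return second_moment(a, b) + second_moment(c, d)

# Example usage with illustrative numbers: theta assumed to lie in [-1, 1] x [-1, 1].
x = np.ones(8, dtype=complex)
f = app_oriented_filter(x, sigma2=0.5, sigma_x2=1.0,
                        abs_theta_sq=mean_abs_theta_sq_uniform((-1.0, 1.0), (-1.0, 1.0)))
print(f[:3])   # each entry exceeds the MVU filter entry 1/8 by the factor (1 + rho)
```

The design choice illustrated here is the one highlighted in Remark 2: the inverse SNR acts as a bias knob on top of the MVU estimate, and when |θ|² is unknown it is replaced by its mean under a (noninformative or actual) prior.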
Similar conclusions can be reached for the MMSE estimator as well.

V. SIMULATIONS

In this section we present numerical results to verify our analysis. In the figure, the SNR during training indicates how accurate the system estimate is. Due to space limitations, we only examine the performance of the derived estimators with respect to the true performance metric. The system estimator is implemented based on (8) to combat the infinite moment problems. In Fig. 1, the true end performance metric is presented for an SNR during training equal to 0 dB.
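Along the lines of this comparison, a small Monte Carlo sketch is given below. The training-SNR convention, all parameter values, and the estimator expressions (MVU, MMSE, and the biased estimator of Remark 2 with |θ|² replaced by its prior mean) follow the reconstruction used in this transcription; they are assumptions for illustration, not a reproduction of the exact setup behind Fig. 1.

```python
import numpy as np

rng = np.random.default_rng(1)

def zf_end_metric(th_hat, theta, sigma2, sigma_x2):
    """Monte Carlo estimate of E[(sigma_x^2 |theta - th_hat|^2 + sigma^2) / |th_hat|^2]."""
    return np.mean((sigma_x2 * np.abs(theta - th_hat) ** 2 + sigma2) / np.abs(th_hat) ** 2)

def trim(th, eps):
    """Regularized estimator of (8): push estimates with magnitude below eps out to eps."""
    mag = np.abs(th)
    return np.where(mag >= eps, th, eps * th / np.maximum(mag, 1e-300))

sigma_x2, sigma_th2, N, M, eps = 1.0, 1.0, 8, 50_000, 0.05
x = np.ones(N, dtype=complex)                              # fixed training vector, ||x||^2 = N
for snr_db in (0, 5, 10):
    # One possible definition of the training SNR: E[|theta|^2] ||x||^2 / (N sigma^2).
    sigma2 = sigma_th2 * np.vdot(x, x).real / (N * 10 ** (snr_db / 10))
    theta = np.sqrt(sigma_th2 / 2) * (rng.standard_normal(M) + 1j * rng.standard_normal(M))
    e = np.sqrt(sigma2 / 2) * (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N)))
    y = theta[:, None] * x + e
    xy = y @ x.conj()                                      # x^H y for each realization
    th_mvu = xy / np.vdot(x, x).real
    th_mmse = sigma_th2 * xy / (sigma_th2 * np.vdot(x, x).real + sigma2)
    rho = sigma2 / (sigma_x2 * sigma_th2)                  # inverse SNR with |theta|^2 -> E|theta|^2
    th_app = (1.0 + rho) * th_mvu                          # biased, application-oriented estimator
    results = {name: round(zf_end_metric(trim(t, eps), theta, sigma2, sigma_x2), 3)
               for name, t in (("MVU", th_mvu), ("MMSE", th_mmse), ("APP", th_app))}
    print(f"training SNR {snr_db} dB:", results)
```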

The derived optimal³ estimators in this paper are better than the MVU and MMSE estimators. We can see that the zeroth order approximation in this case is satisfactory even for low SNR during training, in the sense that the corresponding optimal estimators outperform the MVU and MMSE estimators for the true performance metric. Additionally, the MVU estimator appears to be better than the MMSE estimator for this performance metric. This is a snapshot contradicting what one would naturally expect and verifying the motivation of this paper. Fig. 1 also shows confidence bounds of one standard deviation for the MSE plots. These bounds confirm that the gains of the optimal estimators do not hold only in a mean sense, but also with high probability, in particular for low SNR.

VI. CONCLUSIONS

In this paper, application-oriented estimator selection has been compared with common estimators such as the MVU and MMSE estimators. We have shown that the application-oriented selection is the right way to choose estimators in practice. We have verified this observation based on a simple linear Gaussian regression example.

APPENDIX I

This section proposes a simplification of the metric J^d for the estimator given in (8) with a fixed ε. Due to the Gaussianity of θ̂, E[1/|θ̂|²] = ∞ for any training sequence (infinite moment problem). Using (8), the corresponding regularized metric can be decomposed, by conditioning on the trimming event, as in (14), where | denotes conditioning and "reg" signifies the use of the regularized system estimator in (8). To simplify this expression, we observe that the probability of the trimming event {|θ̂| < ε} is of order ε², since by the mean value theorem this probability is equal to the area of the region {|θ̂| < ε}, which is of order ε², multiplied by some value of the probability density function of θ̂ in that region.

Furthermore, if the SNR during training is sufficiently high and the probability mass of θ̂ is concentrated around θ, then it can be shown that (15) holds. The same holds even if θ̂ is a biased estimator of θ that, at high training SNR, tends to concentrate around a value bounded away from θ (and of course from 0). To show the last claim, it can be seen that (16) holds; at high training SNR the relevant quantities converge in the mean square sense, and therefore it can be easily shown that the right-hand side of (16) converges to 0. To see this, notice that the Cauchy-Schwarz inequality yields (17). For the last term, the bound follows again from the Cauchy-Schwarz inequality, and by the mean square convergence the right-hand side of the last inequality tends to 0. Therefore, the right-hand side of (17) tends to 0. Moreover, under the high SNR assumption the conditional expectations can be approximated by their unconditional ones, since for a sufficiently small ε their difference is due to an event of probability of order ε². Therefore, (18) holds, and combining all the above results yields (19). The remaining term in (19) is not negligible, but for a sufficiently small ε its dependence on the estimator is insignificant. Hence, for a sufficiently small ε and a sufficiently high SNR during training, minimizing J^d is equivalent to minimizing (9).

³ The term "optimal" is used in this case to refer to uniformly better estimators than the MVU/MMSE estimators and not to actually optimal estimators in the strict sense. The estimators are optimal only with respect to the zeroth order metrics.

REFERENCES

[1] J. C. Agüero and G. C. Goodwin, "Choosing between open- and closed-loop experiments in linear system identification," IEEE Trans. Autom. Control, vol. 52, no. 8, Aug. 2007.
[2] Z. Ben-Haim and Y. C. Eldar, "Blind minimax estimation," IEEE Trans. Inf. Theory, vol. 53, no. 9, 2007.
[3] X. Bombois, G. Scorletti, M. Gevers, P. Van den Hof, and R. Hildebrand, "Least costly identification experiment for control," Automatica, vol. 42, no. 10, Oct. 2006.
[4] M. Gevers, "Identification for control: From the early achievements to the revival of experiment design," Eur. J. Control, vol. 11, pp. 1-18, 2005.
[5] H. Hjalmarsson, "From experiment design to closed-loop control," Automatica, vol. 41, no. 3, Mar. 2005.
[6] H. Hjalmarsson, "System identification of complex and structured systems," Eur. J. Control, vol. 15, no. 4, 2009 (plenary address, European Control Conference).
[7] P. J. Huber, Robust Statistics. John Wiley & Sons, 1981.
[8] W. James and C. Stein, "Estimation with quadratic loss," in Proc. Fourth Berkeley Symp. Mathematical Statistics and Probability, 1961, vol. 1, pp. 361-379.
[9] H. Jansson and H. Hjalmarsson, "Input design via LMIs admitting frequency-wise model specifications in confidence regions," IEEE Trans. Autom. Control, vol. 50, no. 10, Oct. 2005.
[10] D. Katselis, C. R. Rojas, H. Hjalmarsson, and M. Bengtsson, "Application-oriented finite sample experiment design: A semidefinite relaxation approach," in Proc. SYSID 2012, Brussels, Belgium, Jul. 2012.
[11] D. Katselis, C. R. Rojas, M. Bengtsson, E. Björnson, X. Bombois, N. Shariati, M. Jansson, and H. Hjalmarsson, "Training sequence design for MIMO channels: An application-oriented approach," EURASIP J. Wireless Commun. Netw., no. 245, 2013.
[12] D. Katselis, C. R. Rojas, J. S. Welsh, and H. Hjalmarsson, "Robust experiment design for system identification via semi-infinite programming techniques," in Proc. SYSID 2012, Brussels, Belgium, Jul. 2012.
[13] S. M. Kay, Fundamentals of Statistical Signal Processing, Volume I: Estimation Theory. Englewood Cliffs, NJ, USA: Prentice-Hall, 1993.
[14] S. Kay and Y. C. Eldar, "Rethinking biased estimation," IEEE Signal Process. Mag., vol. 25, no. 3, 2008.
[15] L. Pronzato, "Optimal experimental design and some related control problems," Automatica, vol. 44, no. 2, Feb. 2008.
[16] C. R. Rojas, J. S. Welsh, G. C. Goodwin, and A. Feuer, "Robust optimal experiment design for system identification," Automatica, vol. 43, no. 6, 2007.
[17] C. Stein, "Inadmissibility of the usual estimator for the mean of a multivariate normal distribution," in Proc. Third Berkeley Symp. Mathematical Statistics and Probability, 1956, vol. 1, pp. 197-206.
