ICIC Express Letters
ICIC International (c) 2009, ISSN 1881-803X
Volume 3, Number 3, September 2009, pp. 1-6

STOCHASTIC INFORMATION GRADIENT ALGORITHM BASED ON MAXIMUM ENTROPY DENSITY ESTIMATION

Badong Chen, Yu Zhu, Jinchun Hu and Ming Zhang
Department of Precision Instruments and Mechanology, Tsinghua University
Beijing 100084, P. R. China
chenbd04@mails.tsinghua.edu.cn

Received December 2009; accepted February 2010

Abstract. We propose a new stochastic information gradient (SIG) algorithm based on an online maximum entropy density estimation of the error distribution. The proposed algorithm is simple yet efficient in implementation, as it involves no choice of kernel bandwidth and does not resort to Newton's method of optimization. Simulation results demonstrate the favorable performance of the new algorithm.

1. Introduction. The minimum mean square error (MMSE) criterion and the corresponding least mean-squares (LMS) algorithm are rather efficient for adaptive filtering when the error is normally distributed [1]. When the error distribution is non-Gaussian, the MMSE criterion (and hence the LMS algorithm) usually suffers significant performance degradation [2, 3, 4]. Recently, in order to deal with non-Gaussian error distributions, the minimum error entropy (MEE) criterion has been proposed as an information-theoretic alternative to the MMSE criterion in supervised adaptation [5, 6, 7, 8]. Under the MEE criterion, the stochastic-gradient-based filtering algorithm, i.e., the so-called stochastic information gradient (SIG) algorithm, takes the form [6]

    W_{k+1} = W_k + \eta \frac{\partial}{\partial W} \left\{ \log \hat{p}(e_k) \right\}    (1)

where W_k = [w_1, w_2, ..., w_M]^T denotes the filter weight vector at iteration k, \eta is the step-size, and \hat{p}(\cdot) denotes the estimated probability density function (PDF) of the error e_k.

The SIG algorithm (1) is somewhat similar to adaptive estimation (AE) [9]. Adaptive estimation first estimates the error distribution from the consistent ordinary least squares (OLS) residuals and then uses the estimated likelihood function to estimate the parameters. Different from AE, however, the SIG algorithm is an online adaptive filtering (learning) algorithm, which estimates the error distribution and updates the weight vector simultaneously. The existing online PDF estimate is usually obtained by a nonparametric kernel approach over a sliding window of error samples [1], that is

    \hat{p}(e_k) = \frac{1}{L} \sum_{i=k-L+1}^{k} K_\sigma(e_k - e_i)    (2)

where L is the length of the sliding window of error data (e_{k-L+1}, ..., e_{k-1}, e_k) and K_\sigma(\cdot) is the kernel function with bandwidth \sigma. However, as stressed in [9], a parametric estimate of the error distribution, which involves no bandwidth selection, may outperform the nonparametric one.
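To fix ideas, the update obtained by plugging the kernel estimate (2) into (1) — the SIG-Kernel baseline compared against in Section 3 — can be sketched in a few lines. This is a minimal sketch assuming an FIR error e_i = d_i - W^T X_i with a Gaussian kernel, and treating the windowed errors as functions of the current weights so that \partial e_i / \partial W = -X_i; the function names and that convention are our own choices, not taken from the letter.

import numpy as np

def gaussian_kernel(x, sigma):
    """Gaussian kernel K_sigma(x) used in the Parzen estimate of Eq. (2)."""
    return np.exp(-0.5 * (x / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))

def sig_kernel_step(W, X_win, e_win, eta, sigma):
    """One SIG-Kernel update, i.e. Eq. (1) with the kernel estimate of Eq. (2).

    W     : (M,)  current weight vector
    X_win : (L, M) last L input vectors X_{k-L+1}, ..., X_k (last row is X_k)
    e_win : (L,)  corresponding errors e_{k-L+1}, ..., e_k
    Assumes an FIR error e_i = d_i - W^T X_i, so that de_i/dW = -X_i.
    """
    e_k, X_k = e_win[-1], X_win[-1]
    diff = e_k - e_win                       # e_k - e_i over the window
    K = gaussian_kernel(diff, sigma)         # K_sigma(e_k - e_i)
    dK = -(diff / sigma ** 2) * K            # derivative K_sigma'(e_k - e_i)
    # d/dW log p_hat(e_k) = sum_i K'(e_k - e_i) (de_k/dW - de_i/dW) / sum_i K(e_k - e_i)
    grad = (dK[:, None] * (X_win - X_k)).sum(axis=0) / (K.sum() + 1e-12)
    return W + eta * grad                    # gradient ascent on log p_hat(e_k), Eq. (1)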

In this letter we therefore propose a new SIG algorithm in which the error distribution is modeled as a generalized exponential density and is estimated through the Maximum Entropy Principle (MEP) [9, 10]. The reason for using the maximum entropy (Maxent) density estimation is twofold. First, with this method the selection of a kernel bandwidth is bypassed (an inappropriate choice of bandwidth will significantly deteriorate the behavior of the SIG algorithm); second, Maxent densities nest a wide range of statistical distributions as special cases, yet fit the known data without committing extensively to unseen data.

2. Algorithm. The Maxent density belongs to the generalized exponential family [9], that is

    p(e) = \exp\left( -\lambda_0 - \sum_{i=1}^{m} \lambda_i f_i(e) \right)    (3)

where the functions {f_i(e)} establish the characterizing moments {E[f_i(e)]} and {\lambda_i} are the Lagrange multipliers. Let E_p[f_i(e)] = \mu_i; the Lagrange multipliers {\lambda_i} can then be obtained by solving the optimization [9]

    \max_{\lambda} \left\{ \Gamma = \lambda_0 + \sum_{i=1}^{m} \lambda_i \mu_i \right\}, \qquad \lambda_0 = \log \int \exp\left( -\sum_{i=1}^{m} \lambda_i f_i(e) \right) de    (4)

where \lambda = [\lambda_1, ..., \lambda_m]^T. This optimization can be solved using Newton's method [9]. However, the computational burden of Newton's method is very heavy because of the many numerical integrals involved, so it is not suitable for online density estimation. In order to reduce the computational burden, we use the method of [10] to calculate the Lagrange multipliers. Specifically, setting \partial \Gamma / \partial \lambda_i = 0, we get

    \mu_i = \int f_i(e) \exp\left( -\lambda_0 - \sum_{j=1}^{m} \lambda_j f_j(e) \right) de    (5)

Applying integration by parts and assuming that the function F_i(e) = \int f_i(e)\, de satisfies F_i(e) p(e) \to 0 as e \to \pm\infty, we have

    \mu_i = \sum_{j=1}^{m} \lambda_j E_p\left[ F_i(e) f_j'(e) \right] = \sum_{j=1}^{m} \lambda_j \beta_{ij}    (6)

where \beta_{ij} = E_p[ F_i(e) f_j'(e) ]. Hence, the Lagrange multipliers are given by the solution of the linear system of equations (6), that is

    \lambda = \beta^{-1} \mu    (7)

where \mu = [\mu_1, ..., \mu_m]^T and \beta = [\beta_{ij}]. The moment vector \mu and the matrix \beta can be approximated by sample means, so we propose the following online Maxent density estimate of the error distribution:

    \hat{p}(e_k) = C(\hat{\lambda}(k)) \exp\left( -\sum_{i=1}^{m} \hat{\lambda}_i(k) f_i(e_k) \right)
    \hat{\lambda}(k) = \hat{\beta}(k)^{-1} \hat{\mu}(k)
    \hat{\mu}_i(k) = \gamma_\mu \hat{\mu}_i(k-1) + (1 - \gamma_\mu) f_i(e_k)
    \hat{\beta}_{ij}(k) = \gamma_\beta \hat{\beta}_{ij}(k-1) + (1 - \gamma_\beta) F_i(e_k) f_j'(e_k)    (8)

where C(\hat{\lambda}(k)) stands for the normalization factor, and \gamma_\mu and \gamma_\beta are exponential weighting (forgetting) factors (0 < \gamma_\mu < 1, 0 < \gamma_\beta < 1). Based on the estimated PDF \hat{p}(e_k) in (8), the SIG algorithm (1) becomes

    W_{k+1} = W_k - \eta \frac{\partial}{\partial W} \left( \sum_{i=1}^{m} \hat{\lambda}_i(k) f_i(e_k) \right) = W_k - \eta \left( \sum_{i=1}^{m} \hat{\lambda}_i(k) f_i'(e_k) \right) \frac{\partial e_k}{\partial W}    (9)
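Since the multipliers in (8) are obtained from exponentially weighted sample averages followed by one small m x m linear solve per sample, the whole estimator is only a few lines of code. The following is a minimal sketch of that recursion under stated assumptions: the caller supplies the functions f_i, their derivatives f_i' and antiderivatives F_i; the class name, the identity initialization of \hat{\beta}(0) and the small ridge added before the solve are our own choices, not specified in the letter.

import numpy as np

class OnlineMaxentEstimator:
    """Online Maxent multiplier estimate, a sketch of the recursions in Eq. (8)."""

    def __init__(self, f_list, df_list, F_list, gamma_mu=0.95, gamma_beta=0.95):
        self.f, self.df, self.F = f_list, df_list, F_list   # f_i, f_i', and F_i = int f_i de
        self.gm, self.gb = gamma_mu, gamma_beta              # forgetting factors gamma_mu, gamma_beta
        m = len(f_list)
        self.mu = np.zeros(m)
        self.beta = np.eye(m)                                # identity start keeps the solve well posed

    def update(self, e_k):
        """Absorb one error sample and return lambda_hat(k) = beta_hat(k)^{-1} mu_hat(k)."""
        fv = np.array([fi(e_k) for fi in self.f])
        dfv = np.array([dfi(e_k) for dfi in self.df])
        Fv = np.array([Fi(e_k) for Fi in self.F])
        self.mu = self.gm * self.mu + (1.0 - self.gm) * fv
        self.beta = self.gb * self.beta + (1.0 - self.gb) * np.outer(Fv, dfv)
        m = len(self.mu)
        return np.linalg.solve(self.beta + 1e-8 * np.eye(m), self.mu)

Because m is small (three or five terms in the densities (11)-(12) below), the per-sample cost of this solve is negligible compared with Newton iterations over numerical integrals.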

If the adaptive filter has a finite impulse response (FIR) structure, we have

    W_{k+1} = W_k + \eta \left( \sum_{i=1}^{m} \hat{\lambda}_i(k) f_i'(e_k) \right) X_k    (10)

where X_k = [x_k, x_{k-1}, ..., x_{k-M+1}]^T is the input vector (M is the number of taps). The above algorithm depends on the choice of the functions {f_i(e)}. In practice, as contended in [9], the Maxent density can be chosen as

    p(e) = \exp\left( -\lambda_0 - \lambda_1 e - \lambda_2 e^2 - \lambda_3 \log(1 + e^2) - \lambda_4 \sin(e) - \lambda_5 \cos(e) \right)    (11)

or simply

    p(e) = \exp\left( -\lambda_0 - \lambda_1 e - \lambda_2 e^2 - \lambda_3 \log(1 + e^2) \right)    (12)

To make a distinction, we denote by SIG-Kernel and SIG-Maxent the SIG algorithms based on the kernel approach and the maximum entropy approach, respectively.

3. Simulation Results. We now present a simulation experiment to demonstrate the performance of the SIG-Maxent algorithm, in comparison with the SIG-Kernel and the well-known LMS algorithms. Consider the FIR channel identification scenario, in which the error signal is given by

    e_k = \left( W^{*T} X_k + n_k \right) - W_k^T X_k = V_k^T X_k + n_k    (13)

where W^* denotes the weight vector of the unknown FIR channel, n_k is the interference noise, and V_k = W^* - W_k is the weight error vector. In the simulation we set

    W^* = [0.1, 0.2, 0.3, 0.4, 0.1, 0.3, 0.4, 0.3, 0.2, 0.1, 0.1, 0.2, 0.3, 0.4, 0.5]^T    (14)

Further, the input {x_k} is a unit-power white Gaussian process, and {n_k} is a unit-dispersion symmetric alpha-stable (SαS) noise (1 < α ≤ 2) [2]. For the SIG-Maxent algorithm we adopt (12) as the Maxent density and set γ_μ = γ_β (the same forgetting factor is used for both recursions in (8)). For the SIG-Kernel algorithm we choose the Gaussian kernel and determine the bandwidth according to Silverman's rule of thumb [11]. Figures 1 and 2 show the convergence curves of the mean square deviation (MSD) E[V_k^T V_k], averaged over 10 independent Monte Carlo runs, for α = 2 and α = 1.5, respectively. Evidently, the two SIG algorithms are both robust to impulsive noise (α = 1.5); in particular, the SIG-Maxent algorithm achieves the smallest misadjustment in both the Gaussian (α = 2) and the impulsive-noise cases.

Remark: In a previous paper [12] we showed that the SIG algorithm with a Gaussian kernel (SIG-G) is not robust to impulsive noise, and we proposed a Laplacian-kernel-based SIG (SIG-L) algorithm to improve the robustness. Our new study shows, however, that if the error PDF is estimated as in (2), the SIG-G algorithm is actually robust to impulsive noise. Notice that in [12] the estimated PDF is given by

    \hat{p}(e_k) = \frac{1}{L} \sum_{i=k-L}^{k-1} K_\sigma(e_k - e_i)    (15)

The only difference from (2) is that (15) excludes the current error e_k from the window; including e_k, as in (2), contributes a K_\sigma(0) term that keeps the denominator of the gradient of \log \hat{p}(e_k) bounded away from zero when e_k is an outlier, which accounts for the different robustness behavior.

4. Conclusion. Based on an online Maxent density estimation of the error distribution, we have developed the SIG-Maxent algorithm. Simulation results confirm the favorable performance of the new algorithm.

Acknowledgment. This work was supported by the National Natural Science Foundation of China, the National Key Basic Research and Development Program (973) of China (2009CB724205), a China Postdoctoral Science Foundation funded project, and grants from the State Key Laboratory of Tribology (SKLT) at Tsinghua University (SKLT08B04, SKLT08B06).
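To make the channel-identification experiment of Section 3 concrete, the following self-contained sketch combines Eqs. (8), (10) and (12), with SαS interference generated by the standard Chambers-Mallows-Stuck method. The antiderivative F_3, the zero/identity initializations, the small ridge added before the solve, and the numerical settings (step-size, forgetting factor, data length) are our own illustrative assumptions; the letter does not specify them.

import numpy as np

rng = np.random.default_rng(0)

def sas_noise(alpha, size):
    """Standard symmetric alpha-stable samples (Chambers-Mallows-Stuck, beta = 0, 1 < alpha <= 2)."""
    U = rng.uniform(-np.pi / 2, np.pi / 2, size)
    E = rng.exponential(1.0, size)
    return (np.sin(alpha * U) / np.cos(U) ** (1.0 / alpha)
            * (np.cos((1.0 - alpha) * U) / E) ** ((1.0 - alpha) / alpha))

# Maxent functions of Eq. (12), their derivatives f_i', and antiderivatives F_i(e) = int f_i(e) de
# (the closed form of F_3 is our own calculation).
f  = [lambda e: e,            lambda e: e ** 2,       lambda e: np.log1p(e ** 2)]
df = [lambda e: 1.0,          lambda e: 2.0 * e,      lambda e: 2.0 * e / (1.0 + e ** 2)]
F  = [lambda e: 0.5 * e ** 2, lambda e: e ** 3 / 3.0,
      lambda e: e * np.log1p(e ** 2) - 2.0 * e + 2.0 * np.arctan(e)]

def sig_maxent_fir(x, d, M, eta=0.005, gamma=0.95):
    """SIG-Maxent adaptation of an M-tap FIR filter: Eq. (8) for lambda_hat, Eq. (10) for W."""
    m = len(f)
    W = np.zeros(M)
    mu, beta = np.zeros(m), np.eye(m)
    for k in range(M - 1, len(x)):
        X_k = x[k - M + 1:k + 1][::-1]          # X_k = [x_k, x_{k-1}, ..., x_{k-M+1}]^T
        e_k = d[k] - W @ X_k                     # filtering error
        fv  = np.array([fi(e_k) for fi in f])
        dfv = np.array([dfi(e_k) for dfi in df])
        Fv  = np.array([Fi(e_k) for Fi in F])
        mu   = gamma * mu + (1.0 - gamma) * fv                     # Eq. (8)
        beta = gamma * beta + (1.0 - gamma) * np.outer(Fv, dfv)    # Eq. (8)
        lam  = np.linalg.solve(beta + 1e-8 * np.eye(m), mu)        # lambda_hat(k)
        W = W + eta * (lam @ dfv) * X_k                            # Eq. (10)
    return W

# Channel identification as in Eq. (13): d_k = W*^T X_k + n_k, with the weights of Eq. (14).
W_star = np.array([0.1, 0.2, 0.3, 0.4, 0.1, 0.3, 0.4, 0.3, 0.2, 0.1,
                   0.1, 0.2, 0.3, 0.4, 0.5])
N, M = 5000, len(W_star)
x = rng.standard_normal(N)                       # unit-power white Gaussian input
d = np.zeros(N)
for k in range(M - 1, N):
    d[k] = W_star @ x[k - M + 1:k + 1][::-1]
d += sas_noise(1.5, N)                           # SaS interference, alpha = 1.5
W_hat = sig_maxent_fir(x, d, M)                  # estimate of W*

Monitoring the mean square deviation E[V_k^T V_k], with V_k = W* - W_k, over iterations and Monte Carlo runs yields convergence curves of the kind shown in Figures 1 and 2.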

Figure 1. Average convergence curves for different algorithms (α = 2). (MSD in dB versus iteration; curves for LMS, SIG-Kernel and SIG-Maxent.)

Figure 2. Average convergence curves for different algorithms (α = 1.50). (MSD in dB versus iteration; curves for LMS, SIG-Kernel and SIG-Maxent.)

REFERENCES
[1] S. Haykin, Adaptive Filter Theory, 3rd ed., Prentice Hall, 1996.
[2] M. Shao and C. L. Nikias, "Signal processing with fractional lower order moments: stable processes and their applications," Proceedings of the IEEE, vol. 81, no. 7, 1993.
[3] S. C. Pei and C. C. Tseng, "Least mean p-power error criterion for adaptive FIR filter," IEEE Journal on Selected Areas in Communications, vol. 12, no. 9, 1994.
[4] S. C. Douglas and H. Y. Meng, "Stochastic gradient adaptation under general error criteria," IEEE Transactions on Signal Processing, vol. 42, 1994.
[5] J. C. Principe, D. Xu, Q. Zhao and J. W. Fisher, "Learning from examples with information theoretic criteria," Journal of VLSI Signal Processing Systems, vol. 26, 2000.
[6] D. Erdogmus, K. E. Hild II and J. C. Principe, "Online entropy manipulation: stochastic information gradient," IEEE Signal Processing Letters, vol. 10, no. 8, 2003.

[7] D. Erdogmus and J. C. Principe, "Convergence properties and data efficiency of the minimum error entropy criterion in Adaline training," IEEE Transactions on Signal Processing, vol. 51, no. 7, 2003.
[8] B. D. Chen, J. C. Hu, L. Pu and Z. Q. Sun, "Stochastic gradient algorithm under (h, φ)-entropy criterion," Circuits, Systems and Signal Processing, vol. 26, 2007.
[9] X. Wu and T. Stengos, "Partially adaptive estimation via the maximum entropy densities," Econometrics Journal, vol. 8, 2005.
[10] D. Erdogmus, K. E. Hild II, Y. N. Rao and J. C. Principe, "Minimax mutual information approach for independent component analysis," Neural Computation, vol. 16, 2004.
[11] B. W. Silverman, Density Estimation for Statistics and Data Analysis, Chapman & Hall, 1986.
[12] M. Zhang, B. D. Chen, Y. Zhu and J. C. Hu, "Laplacian kernel based SIG algorithm for FIR filtering in the presence of alpha-stable noise," ICIC Express Letters, vol. 4, no. 1, 2010.
