Proc. 27th Asilomar Conf. Signals, Syst. Comput., Pacific Grove, CA, Nov. 1-3, 1993

Blind Deconvolution of Discrete-Valued Signals

Ta-Hsin Li
Department of Statistics, Texas A&M University, College Station, Texas 77843

Abstract

This paper shows that when the input signal to a linear system is discrete-valued, the blind deconvolution problem of simultaneously estimating the system and recovering the input can be solved more efficiently by taking the discreteness of the input signal into account. Two situations are considered. The first deals with noiseless data by an inverse-filtering procedure that minimizes a cost function measuring the discreteness of the output of an inverse filter. For noisy data observed from FIR systems, the Gibbs sampling approach is employed to simulate the posteriors of the unknowns under the assumption that the input signal is a Markov chain. It is shown that in the noiseless case the method leads to a highly efficient estimator for parametric systems, whose estimation error decays exponentially as the sample size grows. The Gibbs sampling approach also provides rather precise results for noisy data, even if the initial and transition probabilities of the input signal and the variance of the noise are completely unknown.

1. Introduction

Blind deconvolution in general deals with the simultaneous estimation of a linear system {s_j} and the reconstruction of its random input {x_t} on the basis of the data {y_t} obtained from the convolution

    y_t = \sum_{j=-\infty}^{\infty} s_j x_{t-j}.    (1)

Partial information about the statistical properties of {x_t} is usually required in order to obtain a sensible solution, so how well the knowledge of {x_t} can be incorporated into the solution plays an important role in this problem. The current paper is concerned with a special case of blind deconvolution in which the input signal takes discrete values from a known alphabet, a typical situation encountered frequently in digital communications [8].
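To make the setting concrete, a minimal simulation of the convolution model (1) with a discrete-valued input might look as follows; the channel taps and the alphabet are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.choice([-1.0, 1.0], size=500)   # discrete-valued input from a known alphabet
s = np.array([1.0, -0.6, 0.2])          # hypothetical FIR channel {s_j}
y = np.convolve(x, s)[: len(x)]         # observed data y_t = sum_j s_j x_{t-j}
```

Blind deconvolution asks for both `s` and `x` given only `y`, which is why additional structure, here the discreteness of `x`, matters so much.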
Based on the inverse filtering approach, a cost function is employed to measure the closeness of the filtered data to a discrete-valued sequence and is minimized to obtain an estimate of the unknown system. In the parametric case, where the system is characterized by a finite-dimensional parameter (e.g., ARMA models), the method is proved to yield highly efficient estimates, with an estimation error that may decay exponentially as the sample size grows. When the data {y_t} are contaminated by Gaussian white noise and the system {s_j} has finite length (FIR), the current paper shows that the Gibbs sampling procedure can be used to deal with the estimation of {s_j} and {x_t} under the assumption that {x_t} is a Markov chain [2]. This method presents an avenue for incorporating colored input signals into the blind deconvolution problem. All these results provide yet another piece of evidence that a digital signal is capable of resisting distortion and contamination if its discreteness can be judiciously utilized in the restoration procedure.

2. An Inverse Filtering Procedure

When {x_t} is an i.i.d. sequence of zero-mean random variables and {s_j} is a minimum-phase ARMA(p, q) system so that

    \sum_{j=0}^{p} a_j^* y_{t-j} = \sum_{j=0}^{q} b_j^* x_{t-j}    (2)

with a_0^* = b_0^* = 1, the classical least-squares method [1] provides a solution to the problem by seeking the coefficients {a_1, ..., a_p, b_1, ..., b_q} that minimize the sample variance of the linear prediction error {u_t} given by

    u_t = \sum_{j=0}^{p} a_j y_{t-j} - \sum_{j=1}^{q} b_j u_{t-j}.    (3)

Since the method approximates maximum likelihood estimation by ignoring the end-point effect, it is not surprising that minimizing the sample variance of {u_t} leads to asymptotically efficient estimates [1] for the ARMA system (2). An alternative to least squares is the method of moments. Although computationally
appealing, it does not, however, provide efficient estimates except for pure AR systems. The variance of the estimates in both methods is usually proportional to the reciprocal of the sample size, i.e., O(1/n). To generalize the idea of least squares, it is crucial to observe that {u_t} in (3) is the output of an inverse filter corresponding to the ARMA system in (2). Therefore, the least-squares method calls for minimizing the variance of the output sequence obtained by filtering the data {y_t} with an inverse filter. For an arbitrary parametric system with s_j = s_j(\theta^*) in (1), one may consider the output sequence

    u_t(\theta) = \sum_{j=-\infty}^{\infty} s_j^{-1}(\theta) y_{t-j},    (4)

where {s_j^{-1}(\theta)} is the inverse of {s_j(\theta)}. Since the variance alone is no longer sufficient for the discrimination of nonminimum-phase systems, higher-order moments of {u_t} have to be involved in the selection of optimal filters [3], [4], [5], [9], [10]. Minimization of E(|u_t|^k - r_k)^2 with r_k = E(|x_t|^{2k})/E(|x_t|^k) and k > 1, for example, was suggested in [5], whereas maximization of |c_k(u_t)|/(c_2(u_t))^{k/2}, with c_k(u_t) being the k-th order cumulant of u_t, for k > 2 was discussed in [3], [9]. The stationarity of {x_t} is a crucial requirement in all these procedures, and many of them further require some moments of {x_t} to be available. The estimation accuracy of these procedures is usually O(1/n). This accuracy limit, however, can be significantly improved when the discreteness of the input signal is taken into account. In fact, for an m-ary signal whose alphabet is A = {a_i, i = 1, ..., m}, a highly efficient estimator can be obtained by minimizing

    \hat{J}_n(\theta) = \frac{1}{2n+1} \sum_{t=-n}^{n} \prod_{i=1}^{m} |\hat{u}_t(\theta) - a_i|^2,    (5)

where {\hat{u}_t(\theta)} results from the inverse filtering

    \hat{u}_t(\theta) = \sum_{j=-n}^{n} s_{t-j}^{-1}(\theta) y_j    (6)

using only the observed data {y_t, t = -n, ..., n} (assuming y_t = 0 for all |t| > n). This criterion measures the closeness of {\hat{u}_t(\theta)} to being an A-valued discrete sequence.
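As a hedged illustration of this criterion (a sketch, not code from the paper), the following inverse-filters data from a hypothetical MA(1) channel and evaluates the discreteness cost, which vanishes only at the true parameter; the channel coefficient and the binary alphabet are made-up examples.

```python
import numpy as np

def inverse_filter_ma1(y, b):
    """Inverse filter for the MA(1) system y_t = x_t + b*x_{t-1}:
    the recursion u_t = y_t - b*u_{t-1} with zero initial condition."""
    u = np.zeros(len(y))
    for t in range(len(y)):
        u[t] = y[t] - (b * u[t - 1] if t > 0 else 0.0)
    return u

def J_hat(u, alphabet):
    """Discreteness cost: average over t of prod_i |u_t - a_i|^2.
    It is zero exactly when every u_t lies in the alphabet."""
    cost = np.ones(len(u))
    for a in alphabet:
        cost = cost * np.abs(u - a) ** 2
    return cost.mean()

rng = np.random.default_rng(0)
x = rng.choice([0.0, 1.0], size=300)              # binary input, alphabet {0, 1}
b_true = 0.5                                      # hypothetical MA(1) coefficient
y = x + b_true * np.concatenate(([0.0], x[:-1]))  # y_t = x_t + b*x_{t-1}

print(J_hat(inverse_filter_ma1(y, b_true), [0.0, 1.0]))   # 0.0: every u_t hits the alphabet
print(J_hat(inverse_filter_ma1(y, 0.9), [0.0, 1.0]) > 0)  # True: a mismatched filter leaves residue
```

In practice the criterion is minimized over the filter parameter; the sharp valley at the truth is what underlies the fast error decay discussed next.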
It can be shown [6], [7] that the minimizer of \hat{J}_n(\theta), denoted by \hat{\theta}_n, is a consistent estimator of the true parameter \theta^* and, more importantly, that the estimation error \|\hat{\theta}_n - \theta^*\| is bounded by the tail behavior of the true inverse system, so that

    \|\hat{\theta}_n - \theta^*\| \le c \sum_{|j| \ge n} |s_j^{-1}(\theta^*)|,    (7)

where c > 0 is a constant. For ARMA systems, this implies that the error of \hat{\theta}_n decays as an exponential function of the sample size n rather than as its square-root reciprocal. In other words, minimization of \hat{J}_n(\theta) produces "super-efficient" estimates for the blind deconvolution problem. If the system is autoregressive with finite order, the super-efficiency yields

    \lim_{n \to \infty} \Pr(\hat{\theta}_n = \theta^*) = 1.

In other words, the minimizer of \hat{J}_n(\theta) equals the true value of the parameter with probability tending to unity as n increases. It is also important to point out that all these results can be obtained without requiring the x_t to have the same distribution, as long as they are independent [6], [7]. Therefore the super-efficiency applies even to nonstationary signals. To demonstrate these results, let us consider a simple nonminimum-phase MA(2) system [7],

    y_t = -1.5 x_t + 3.5 x_{t-1} - x_{t-2},

where {x_t} is a binary sequence with \Pr(x_t = 0) = p_t and \Pr(x_t = 1) = 1 - p_t. For the general MA(2) model y_t = b_0 x_t + b_1 x_{t-1} + b_2 x_{t-2}, we assume b_0 + b_1 + b_2 = 1 and reparametrize the resulting two-parameter system by the zeros of the polynomial b_0 z^2 + b_1 z + b_2, denoted by \theta = (z_1, z_2). Therefore, in this example, \theta^* = (z_1^*, z_2^*) = (1/3, 2). To compare with other methods that make no use of the discreteness of the input signal, we consider the well-known procedure of maximizing the standardized skewness [3], [9],

    \hat{S}_n(\theta) = |\hat{c}_3(\hat{u}_t)| / (\hat{c}_2(\hat{u}_t))^{3/2},

where \hat{u}_t = \hat{u}_t(\theta) is the output of the inverse filter in (6) and \hat{c}_k(\hat{u}_t) is the k-th order sample cumulant of \hat{u}_t.
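For reference, the standardized-skewness statistic can be computed directly from sample cumulants; this is a generic sketch of the formula above, not the paper's implementation.

```python
import numpy as np

def S_hat(u):
    """Standardized skewness |c3(u)| / c2(u)**1.5 from sample cumulants."""
    u = np.asarray(u, dtype=float)
    m = u - u.mean()
    c2 = np.mean(m ** 2)   # second-order sample cumulant (variance)
    c3 = np.mean(m ** 3)   # third-order sample cumulant
    return np.abs(c3) / c2 ** 1.5
```

Note that a symmetric distribution has zero third cumulant, which is one reason such criteria are sensitive to the input's distributional properties in a way that the discreteness criterion is not.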
Two cases are considered. In Case 1 the input signal {x_t} is stationary, with p_t held constant for all t, while in Case 2 it is nonstationary, with p_t = \Phi(\sin(t\pi/18)), where \Phi(\cdot) is the distribution function of the standard normal random variable. In both cases a random sample of size n = 1000 is used in the computation of \hat{J}_n(\theta) and \hat{S}_n(\theta), and the contour plots of these criteria are presented in Figures 1-4. As we can see from Figs. 1 and 3, the "binariness" criterion \hat{J}_n(\theta) has a very sharp valley near the true value \theta^* (indicated by +) in both the stationary and nonstationary cases. This implies that minimizing \hat{J}_n(\theta) will produce very precise estimates for both stationary and nonstationary input signals. On the other hand,
Fig. 1. Contour of \hat{J}_n(\theta): Stationary case.
Fig. 2. Contour of \hat{S}_n(\theta): Stationary case.
Fig. 3. Contour of \hat{J}_n(\theta): Nonstationary case.
Fig. 4. Contour of \hat{S}_n(\theta): Nonstationary case.

the standardized skewness \hat{S}_n(\theta) has a rather broad peak near \theta^* in the stationary case (Fig. 2). Although a solution to the deconvolution problem is provided by maximizing \hat{S}_n(\theta), the broad peak shown in Fig. 2 may yield inaccurate estimates of \theta^*. To make things even worse, the peak in \hat{S}_n(\theta) disappears completely for the nonstationary signal (Fig. 4). This reveals how crucial stationarity can be to the successful implementation of procedures such as maximization of the standardized skewness. It is evident that the advantage of \hat{J}_n(\theta) comes primarily from its utilization of the discreteness of the input signal.

3. A Gibbs Sampling Procedure

Suppose {s_j} in (1) is an FIR system operated in a noisy environment, so that {y_t} is obtained from

    y_t = \sum_{j=0}^{q} \theta_j x_{t-j} + \epsilon_t,    (8)

where {\epsilon_t} is Gaussian white noise with unknown variance \sigma^2. For the input signal, we assume that {x_t} is a first-order Markov chain with state space A, unknown initial probabilities \pi_i = \Pr(x_{1-q} = a_i), and unknown transition probabilities \pi_{ij} = \Pr(x_t = a_j | x_{t-1} = a_i). The blind deconvolution (or restoration) problem becomes the joint estimation of all the unknown parameters \theta = [\theta_0, ..., \theta_q]^T, \pi = {\pi_i, \pi_{ij}}, and \sigma^2, and the recovery of the unknown input x = {x_{1-q}, ..., x_n}, solely from a finite data set y = {y_1, ..., y_n}. It should
be pointed out that most of the previously mentioned methods of blind deconvolution do not directly apply to this situation, since the input signal {x_t} is colored and its moments are unknown. To deal with this problem, Chen and this author have recently combined the Bayesian approach with a Gibbs sampling procedure [2]. The gist of the method can be summarized as follows. Regarding all the unknowns as independent random variables (or vectors), a multivariate Gaussian distribution and an inverse chi-square distribution are used as priors for \theta and \sigma^2, respectively, so that \theta ~ N(\theta_0, \Sigma_0) and \sigma^2 ~ \chi^{-2}(\nu, \lambda) (i.e., \nu\lambda/\sigma^2 ~ \chi^2(\nu)). Dirichlet distributions are employed as priors for the \pi's, namely

    (\pi_1, ..., \pi_m) ~ D(\alpha_1, ..., \alpha_m) and (\pi_{i1}, ..., \pi_{im}) ~ D(\alpha_{i1}, ..., \alpha_{im}),

so that p(\pi_1, ..., \pi_m) \propto \prod_i \pi_i^{\alpha_i} with \sum_i \pi_i = 1 and p(\pi_{i1}, ..., \pi_{im}) \propto \prod_j \pi_{ij}^{\alpha_{ij}} with \sum_j \pi_{ij} = 1. The selection of the parameters in these priors reflects the a priori information about the unknowns. For instance, small values of \nu and \lambda, or large variances in \Sigma_0, correspond to less informative priors suitable for situations where information about \sigma^2 and \theta is limited. Jeffreys' non-informative Dirichlet prior for (\pi_1, ..., \pi_m) corresponds to \alpha_i = -1/2, while in general \alpha_i > -1. According to the Bayesian approach, one seeks, for instance, the conditional expectation E(x_t | y) or the mode of the conditional probability p(x_t | y) as an estimate of x_t. The difficulty is that any direct computation of these estimates seems impossible because of the complexity of the problem (more unknowns than observations). Alternatively, one may employ the Monte Carlo method with a Gibbs sampler. The idea of Gibbs sampling is to construct a Markov chain by recursively generating random samples from the conditional posterior distribution of an individual unknown, or a subset of the unknowns, given the data y and the remaining unknowns. This procedure continues until the sampling Markov chain converges in distribution.
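The recursive conditional sampling described above can be sketched on a toy conjugate model: ordinary linear regression with the same Gaussian and inverse chi-square priors, rather than the paper's Markov-input FIR model. All function and parameter names here are illustrative assumptions.

```python
import numpy as np

def gibbs_linear_model(X, y, theta0, Sigma0, nu, lam, n_iter=2000, seed=0):
    """Toy Gibbs sampler for y = X @ theta + eps, eps ~ N(0, sigma2 I), with
    priors theta ~ N(theta0, Sigma0) and nu*lam/sigma2 ~ chi2(nu).
    Alternates exact draws from p(theta | sigma2, y) and p(sigma2 | theta, y)."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    Sigma0_inv = np.linalg.inv(Sigma0)
    sigma2 = 1.0
    draws_theta = np.empty((n_iter, d))
    draws_sigma2 = np.empty(n_iter)
    for k in range(n_iter):
        # theta | sigma2, y ~ N(theta1, Sigma1), the conjugate Gaussian update
        Sigma1 = np.linalg.inv(X.T @ X / sigma2 + Sigma0_inv)
        theta1 = Sigma1 @ (X.T @ y / sigma2 + Sigma0_inv @ theta0)
        theta = rng.multivariate_normal(theta1, Sigma1)
        # sigma2 | theta, y: (nu*lam + s2)/sigma2 ~ chi2(nu + n)
        s2 = np.sum((y - X @ theta) ** 2)
        sigma2 = (nu * lam + s2) / rng.chisquare(nu + n)
        draws_theta[k] = theta
        draws_sigma2[k] = sigma2
    return draws_theta, draws_sigma2
```

After discarding an initial burn-in, averaging the retained draws approximates the posterior means, exactly the way the paper's sampler is used to approximate E(x_t | y) and mode{p(x_t | y)}.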
In this case, the random samples generated by the Gibbs sampler can be regarded as ergodic samples from the joint posterior distribution p(x, \theta, \sigma^2, \pi | y), so the simple average of the x_t components and the maximum relative frequency of x_t = a_i obtained from these samples, for example, will approximate the conditional expectation (MMSE estimator) E(x_t | y) and the MAP estimator mode{p(x_t | y)}, respectively. It is not too difficult to derive the conditional posterior distributions of the unknowns needed by the Gibbs sampler in our problem. As a matter of fact, it can be shown [2] that the conditional posterior distribution of \theta given y and the remaining unknowns is Gaussian with mean vector \theta_1 and covariance matrix \Sigma_1, i.e.,

    p(\theta | rest, y) ~ N(\theta_1, \Sigma_1),

where

    \Sigma_1^{-1} = \sum_{t=1}^{n} x_t x_t^T / \sigma^2 + \Sigma_0^{-1} and \theta_1 = \Sigma_1 (\sum_{t=1}^{n} x_t y_t / \sigma^2 + \Sigma_0^{-1} \theta_0),

with x_t = [x_t, ..., x_{t-q}]^T. Similarly, it can be shown [2] that

    p(\sigma^2 | rest, y) ~ \chi^{-2}(\nu + n, (\nu\lambda + s^2)/(\nu + n)),
    (\pi_1, ..., \pi_m | rest, y) ~ D(\alpha_1 + \delta_1, ..., \alpha_m + \delta_m),
    (\pi_{i1}, ..., \pi_{im} | rest, y) ~ D(\alpha_{i1} + n_{i1}, ..., \alpha_{im} + n_{im}),

where s^2 = \sum_{t=1}^{n} (y_t - \sum_{j=0}^{q} \theta_j x_{t-j})^2, n_{ij} = #{t : (x_{t-1}, x_t) = (a_i, a_j)}, and \delta_i = 1 if x_{1-q} = a_i and \delta_i = 0 if x_{1-q} \ne a_i. For any fixed t_0 \in {1-q, ..., n}, the conditional posterior distribution of x_{t_0} can be expressed as

    \Pr(x_{t_0} = a_i | rest, y) \propto p(x' | \pi) \exp(-s'^2/(2\sigma^2)),

where x' = {x'_{1-q}, ..., x'_n} with x'_{t_0} = a_i and x'_t = x_t for t \ne t_0, and s'^2 = \sum_{t=1}^{n} (y_t - \sum_{j=0}^{q} \theta_j x'_{t-j})^2. Note that under the Markovian assumption on {x_t} we have p(x | \pi) = (\prod_i \pi_i^{\delta_i})(\prod_{i,j} \pi_{ij}^{n_{ij}}). As an example of the Gibbs sampling procedure, let us consider the MA(3) system

    y_t = -0.18 x_t + 0.91 x_{t-1} + 0.81 x_{t-2} - 0.198 x_{t-3} + \epsilon_t,

where {x_t} is a four-level Markov chain with A = {-2, -1, 1, 2}, initial probabilities \pi_i = 1/4, and a specified 4 x 4 transition matrix [\pi_{ij}]. A realization of {x_t} with n = 100 is shown in Fig. 5(a) and the corresponding {y_t} in Fig. 5(b). The
sample variance of {\epsilon_t} is adjusted so that the signal-to-noise ratio in {y_t} equals 1 dB. The parameters in the prior distributions are chosen as follows: \theta_0 = 0, \Sigma_0 = 1000 I, small values of \nu and \lambda, and \alpha_i = \alpha_{ij} = 1. Fig. 5(c) shows the i.i.d. uniform initial guess for {x_t} in the Gibbs sampler, and Figs. 5(d) and 5(e) present the conditional mean and mode of x_t given y, i.e., E(x_t | y) and mode{p(x_t | y)}, respectively, calculated from the last few hundred samples of the total 1000 iterations of Gibbs sampling. Constraints requiring \theta_1 to be positive and to dominate the other coefficients in magnitude are used to remove the sign and shift ambiguities in the solution. The estimates, given in the form E(. | y) +/- V^{1/2}(. | y), are

    (-0.19, 0.89, 0.9, -0.181) +/- (0.0191, 0.019, 0.00, 0.011)

for \theta, with the estimated transition probabilities \pi_{ij} similarly close to their true values. It is evident by comparing Figs. 5(d) and 5(e) with Fig. 5(a) that the MAP estimator mode{p(x_t | y)} completely recovers the input signal {x_t} from the noisy data, while the recovery by the MMSE estimator E(x_t | y) is almost complete except for the last point, even though the sample size is relatively small. The estimates of the system parameters and the transition probabilities are reasonably accurate given that n is merely 100. This demonstrates again the impact of the discreteness of input signals on the improvement of blind deconvolution solutions.

Fig. 5. Deconvolution by Gibbs sampling: (a) x_t (Markov input); (b) y_t (noisy data); (c) initial i.i.d. uniform guess for x_t; (d) E(x_t | y); (e) mode of p(x_t | y).

References

[1] P.J. Brockwell and R.A. Davis, Time Series: Theory and Methods, 2nd Ed., New York: Springer, 1991.
[2] R. Chen and T.H. Li, "Blind restoration of linearly degraded discrete signals by Gibbs sampler," Tech. Rep. 19, Dept. of Statistics, Texas A&M University, College Station.
[3] Q. Cheng, "Maximum standardized cumulant deconvolution of non-Gaussian processes," Ann. Statist., vol. 18, 1990.
[4] D. Donoho, "On minimum entropy deconvolution," in Applied Time Series Analysis II, D. Findley, Ed., New York: Academic, 1981.
[5] D.N. Godard, "Self-recovering equalization and carrier tracking in two-dimensional data communication systems," IEEE Trans. Commun., vol. COM-28, pp. 1867-1875, Nov. 1980.
[6] T.H. Li, "Blind identification and deconvolution of linear systems driven by binary random sequences," IEEE Trans. Inform. Theory, vol. IT-38, pp. 26-38, Jan. 1992.
[7] T.H. Li, "Blind deconvolution of linear systems with nonstationary multilevel inputs," Proc. IEEE Signal Processing Workshop on Higher-Order Statistics, S. Lake Tahoe, CA, June 1993.
[8] J.G. Proakis, Digital Communications, 2nd Ed., New York: McGraw-Hill, 1989.
[9] O. Shalvi and E. Weinstein, "New criteria for blind deconvolution of nonminimum phase systems (channels)," IEEE Trans. Inform. Theory, vol. IT-36, pp. 312-321, Mar. 1990.
[10] J.K. Tugnait, "Inverse filter criteria for estimation of linear parametric models using higher order statistics," Proc. ICASSP-91.
More information10-704: Information Processing and Learning Fall Lecture 9: Sept 28
10-704: Information Processing and Learning Fall 2016 Lecturer: Siheng Chen Lecture 9: Sept 28 Note: These notes are based on scribed notes from Spring15 offering of this course. LaTeX template courtesy
More informationSAMPLING JITTER CORRECTION USING FACTOR GRAPHS
19th European Signal Processing Conference EUSIPCO 11) Barcelona, Spain, August 9 - September, 11 SAMPLING JITTER CORRECTION USING FACTOR GRAPHS Lukas Bolliger and Hans-Andrea Loeliger ETH Zurich Dept.
More informationStatistics of stochastic processes
Introduction Statistics of stochastic processes Generally statistics is performed on observations y 1,..., y n assumed to be realizations of independent random variables Y 1,..., Y n. 14 settembre 2014
More informationCCNY. BME I5100: Biomedical Signal Processing. Stochastic Processes. Lucas C. Parra Biomedical Engineering Department City College of New York
BME I5100: Biomedical Signal Processing Stochastic Processes Lucas C. Parra Biomedical Engineering Department CCNY 1 Schedule Week 1: Introduction Linear, stationary, normal - the stuff biology is not
More informationNUMERICAL COMPUTATION OF THE CAPACITY OF CONTINUOUS MEMORYLESS CHANNELS
NUMERICAL COMPUTATION OF THE CAPACITY OF CONTINUOUS MEMORYLESS CHANNELS Justin Dauwels Dept. of Information Technology and Electrical Engineering ETH, CH-8092 Zürich, Switzerland dauwels@isi.ee.ethz.ch
More informationBrief introduction to Markov Chain Monte Carlo
Brief introduction to Department of Probability and Mathematical Statistics seminar Stochastic modeling in economics and finance November 7, 2011 Brief introduction to Content 1 and motivation Classical
More informationSTA414/2104 Statistical Methods for Machine Learning II
STA414/2104 Statistical Methods for Machine Learning II Murat A. Erdogdu & David Duvenaud Department of Computer Science Department of Statistical Sciences Lecture 3 Slide credits: Russ Salakhutdinov Announcements
More informationGaussian Processes. Le Song. Machine Learning II: Advanced Topics CSE 8803ML, Spring 2012
Gaussian Processes Le Song Machine Learning II: Advanced Topics CSE 8803ML, Spring 01 Pictorial view of embedding distribution Transform the entire distribution to expected features Feature space Feature
More informationStatistical Inference and Methods
Department of Mathematics Imperial College London d.stephens@imperial.ac.uk http://stats.ma.ic.ac.uk/ das01/ 31st January 2006 Part VI Session 6: Filtering and Time to Event Data Session 6: Filtering and
More informationCS242: Probabilistic Graphical Models Lecture 7B: Markov Chain Monte Carlo & Gibbs Sampling
CS242: Probabilistic Graphical Models Lecture 7B: Markov Chain Monte Carlo & Gibbs Sampling Professor Erik Sudderth Brown University Computer Science October 27, 2016 Some figures and materials courtesy
More informationIndependent Component Analysis. Contents
Contents Preface xvii 1 Introduction 1 1.1 Linear representation of multivariate data 1 1.1.1 The general statistical setting 1 1.1.2 Dimension reduction methods 2 1.1.3 Independence as a guiding principle
More informationMMSE DECODING FOR ANALOG JOINT SOURCE CHANNEL CODING USING MONTE CARLO IMPORTANCE SAMPLING
MMSE DECODING FOR ANALOG JOINT SOURCE CHANNEL CODING USING MONTE CARLO IMPORTANCE SAMPLING Yichuan Hu (), Javier Garcia-Frias () () Dept. of Elec. and Comp. Engineering University of Delaware Newark, DE
More informationproblem of detection naturally arises in technical diagnostics, where one is interested in detecting cracks, corrosion, or any other defect in a sampl
In: Structural and Multidisciplinary Optimization, N. Olhoæ and G. I. N. Rozvany eds, Pergamon, 1995, 543í548. BOUNDS FOR DETECTABILITY OF MATERIAL'S DAMAGE BY NOISY ELECTRICAL MEASUREMENTS Elena CHERKAEVA
More informationCS281A/Stat241A Lecture 22
CS281A/Stat241A Lecture 22 p. 1/4 CS281A/Stat241A Lecture 22 Monte Carlo Methods Peter Bartlett CS281A/Stat241A Lecture 22 p. 2/4 Key ideas of this lecture Sampling in Bayesian methods: Predictive distribution
More informationTime Series Models and Inference. James L. Powell Department of Economics University of California, Berkeley
Time Series Models and Inference James L. Powell Department of Economics University of California, Berkeley Overview In contrast to the classical linear regression model, in which the components of the
More informationMANY papers and books are devoted to modeling fading
IEEE JOURNAL ON SELECTED AREAS IN COMMUNICATIONS, VOL. 16, NO. 9, DECEMBER 1998 1809 Hidden Markov Modeling of Flat Fading Channels William Turin, Senior Member, IEEE, Robert van Nobelen Abstract Hidden
More informationTime Series Analysis -- An Introduction -- AMS 586
Time Series Analysis -- An Introduction -- AMS 586 1 Objectives of time series analysis Data description Data interpretation Modeling Control Prediction & Forecasting 2 Time-Series Data Numerical data
More informationCompressive Sensing under Matrix Uncertainties: An Approximate Message Passing Approach
Compressive Sensing under Matrix Uncertainties: An Approximate Message Passing Approach Asilomar 2011 Jason T. Parker (AFRL/RYAP) Philip Schniter (OSU) Volkan Cevher (EPFL) Problem Statement Traditional
More informationIntroduction to Spatial Data and Models
Introduction to Spatial Data and Models Sudipto Banerjee 1 and Andrew O. Finley 2 1 Department of Forestry & Department of Geography, Michigan State University, Lansing Michigan, U.S.A. 2 Biostatistics,
More informationRESEARCH ARTICLE. Online quantization in nonlinear filtering
Journal of Statistical Computation & Simulation Vol. 00, No. 00, Month 200x, 3 RESEARCH ARTICLE Online quantization in nonlinear filtering A. Feuer and G. C. Goodwin Received 00 Month 200x; in final form
More informationContinuous State MRF s
EE64 Digital Image Processing II: Purdue University VISE - December 4, Continuous State MRF s Topics to be covered: Quadratic functions Non-Convex functions Continuous MAP estimation Convex functions EE64
More informationBasic math for biology
Basic math for biology Lei Li Florida State University, Feb 6, 2002 The EM algorithm: setup Parametric models: {P θ }. Data: full data (Y, X); partial data Y. Missing data: X. Likelihood and maximum likelihood
More informationBayesian Inference for DSGE Models. Lawrence J. Christiano
Bayesian Inference for DSGE Models Lawrence J. Christiano Outline State space-observer form. convenient for model estimation and many other things. Bayesian inference Bayes rule. Monte Carlo integation.
More information2 Particle ælters 2. The deænition of particle ælters Particle ælters are the class of simulation ælters which recursively approximate the æltering ra
Auxiliary variable based particle ælters Michael K. Pitt & Neil Shephard Prepared for a book on ësequential Monte Carlo Methods in Practice", edited by Nando de Freitas, Arnaud Doucet and Neil Gordon,
More informationComputer Vision Group Prof. Daniel Cremers. 14. Sampling Methods
Prof. Daniel Cremers 14. Sampling Methods Sampling Methods Sampling Methods are widely used in Computer Science as an approximation of a deterministic algorithm to represent uncertainty without a parametric
More informationTime Series Analysis. James D. Hamilton PRINCETON UNIVERSITY PRESS PRINCETON, NEW JERSEY
Time Series Analysis James D. Hamilton PRINCETON UNIVERSITY PRESS PRINCETON, NEW JERSEY & Contents PREFACE xiii 1 1.1. 1.2. Difference Equations First-Order Difference Equations 1 /?th-order Difference
More informationSystem Identification, Lecture 4
System Identification, Lecture 4 Kristiaan Pelckmans (IT/UU, 2338) Course code: 1RT880, Report code: 61800 - Spring 2016 F, FRI Uppsala University, Information Technology 13 April 2016 SI-2016 K. Pelckmans
More informationTIME SERIES AND FORECASTING. Luca Gambetti UAB, Barcelona GSE Master in Macroeconomic Policy and Financial Markets
TIME SERIES AND FORECASTING Luca Gambetti UAB, Barcelona GSE 2014-2015 Master in Macroeconomic Policy and Financial Markets 1 Contacts Prof.: Luca Gambetti Office: B3-1130 Edifici B Office hours: email:
More informationIntroduction to Spectral and Time-Spectral Analysis with some Applications
Introduction to Spectral and Time-Spectral Analysis with some Applications A.INTRODUCTION Consider the following process X t = U cos[(=3)t] + V sin[(=3)t] Where *E[U] = E[V ] = 0 *E[UV ] = 0 *V ar(u) =
More informationStat 248 Lab 2: Stationarity, More EDA, Basic TS Models
Stat 248 Lab 2: Stationarity, More EDA, Basic TS Models Tessa L. Childers-Day February 8, 2013 1 Introduction Today s section will deal with topics such as: the mean function, the auto- and cross-covariance
More informationName of the Student: Problems on Discrete & Continuous R.Vs
Engineering Mathematics 05 SUBJECT NAME : Probability & Random Process SUBJECT CODE : MA6 MATERIAL NAME : University Questions MATERIAL CODE : JM08AM004 REGULATION : R008 UPDATED ON : Nov-Dec 04 (Scan
More informationNew Lagrange Multipliers for the Blind Adaptive Deconvolution Problem Applicable for the Noisy Case
entropy Article New Lagrange Multipliers for the Blind Adaptive Deconvolution Problem Applicable for the Noisy Case Monika Pinchas Department of Electrical and Electronic Engineering, Ariel University,
More informationControl Variates for Markov Chain Monte Carlo
Control Variates for Markov Chain Monte Carlo Dellaportas, P., Kontoyiannis, I., and Tsourti, Z. Dept of Statistics, AUEB Dept of Informatics, AUEB 1st Greek Stochastics Meeting Monte Carlo: Probability
More informationStatistics & Data Sciences: First Year Prelim Exam May 2018
Statistics & Data Sciences: First Year Prelim Exam May 2018 Instructions: 1. Do not turn this page until instructed to do so. 2. Start each new question on a new sheet of paper. 3. This is a closed book
More informationLinear Dynamical Systems
Linear Dynamical Systems Sargur N. srihari@cedar.buffalo.edu Machine Learning Course: http://www.cedar.buffalo.edu/~srihari/cse574/index.html Two Models Described by Same Graph Latent variables Observations
More informationDensity Estimation: ML, MAP, Bayesian estimation
Density Estimation: ML, MAP, Bayesian estimation CE-725: Statistical Pattern Recognition Sharif University of Technology Spring 2013 Soleymani Outline Introduction Maximum-Likelihood Estimation Maximum
More informationECON 616: Lecture 1: Time Series Basics
ECON 616: Lecture 1: Time Series Basics ED HERBST August 30, 2017 References Overview: Chapters 1-3 from Hamilton (1994). Technical Details: Chapters 2-3 from Brockwell and Davis (1987). Intuition: Chapters
More informationExpressions for the covariance matrix of covariance data
Expressions for the covariance matrix of covariance data Torsten Söderström Division of Systems and Control, Department of Information Technology, Uppsala University, P O Box 337, SE-7505 Uppsala, Sweden
More informationOptimal Mean-Square Noise Benefits in Quantizer-Array Linear Estimation Ashok Patel and Bart Kosko
IEEE SIGNAL PROCESSING LETTERS, VOL. 17, NO. 12, DECEMBER 2010 1005 Optimal Mean-Square Noise Benefits in Quantizer-Array Linear Estimation Ashok Patel and Bart Kosko Abstract A new theorem shows that
More informationSTA 4273H: Statistical Machine Learning
STA 4273H: Statistical Machine Learning Russ Salakhutdinov Department of Statistics! rsalakhu@utstat.toronto.edu! http://www.utstat.utoronto.ca/~rsalakhu/ Sidney Smith Hall, Room 6002 Lecture 7 Approximate
More information