SAMPLING JITTER CORRECTION USING FACTOR GRAPHS


19th European Signal Processing Conference (EUSIPCO 2011), Barcelona, Spain, August 29 - September 2, 2011

SAMPLING JITTER CORRECTION USING FACTOR GRAPHS

Lukas Bolliger and Hans-Andrea Loeliger
ETH Zurich, Dept. of Information Technology & Electr. Eng.
8092 Zürich, Switzerland
Email: {bolliger, loeliger}@isi.ee.ethz.ch

ABSTRACT

Analog-to-digital converters are impaired by sampling clock jitter. The error induced by clock jitter depends on the slope of the analog signal at the sampling instant. In this paper, a continuous-time state space model is used to estimate the slope of the continuous-time signal, which is then used in an iterative algorithm for jitter correction.

1. INTRODUCTION

Sampling clock jitter in analog-to-digital (A/D) conversion is a well-known problem; it occurs when, due to hardware imperfections, the exact sampling instants are subject to random fluctuations. For bandlimited signals, approaches to clock jitter correction based on Linear Minimum Mean Square Error (LMMSE) estimation are proposed in [1] and [2]. In [1], a filter bank that converts a non-uniformly sampled signal to a uniformly sampled signal is extended to handle unknown delays in the sampling process. In [2], a Fourier series of the discrete-time signal is used to construct an LMMSE filter for clock jitter correction.

In this paper, we propose a different approach to clock jitter correction. Unlike [1] and [2], we do not assume the continuous-time analog signal to be strictly bandlimited. Instead, following [3], we assume that this signal is the output of a continuous-time linear system (or filter) that is driven by white noise and that the state space of this filter is finite dimensional. A factor graph approach to Maximum a Posteriori (MAP) estimation (= LMMSE estimation) of such signals from noisy discrete-time observations (without clock jitter) was presented in [3].

In the present paper, we will exploit the fact that, with such a state space model, the slope of the continuous-time signal is computationally accessible. Clearly, if the slope of the continuous-time signal is large at the sampling instant, even a small deviation of the sampling instant has a large impact on the observation. Thus, the noise variance under which the discrete-time signal is observed depends on the signal slopes at the sampling instants. We show how the algorithm proposed in [3] can be extended to estimate the slope of the continuous-time signal as well, and we propose an iterative algorithm where the estimates of the slopes of the continuous-time signal are used to improve the estimate of the discrete-time signal. Each iteration by itself calculates a regularized least squares estimate, and the first iteration turns out to be similar to the algorithm presented in [2].

We will use factor graphs and message passing algorithms as introduced in [4]. This paper is structured as follows: Sections 2 and 3 describe the system model and its factor graph representation, which are based on [3]. Section 4 addresses some technicalities regarding the notion of signal-to-noise ratio (SNR). The proposed algorithm is presented in Section 5 and some simulation results are presented in Section 6.

2. SYSTEM MODEL

For k = 0, ..., K-1, let Ỹ_k be noisy samples of some continuous-time signal Y(t) at the sampling instants T_k, which are random according to

    T_k = t_k + D_k,                                        (1)

with known t_0 < t_1 < t_2 < ... and a random delay D_k with E[D_k] = 0 and E[D_k²] = σ_D².
We define the observed discrete-time (digital) signal as

    Ỹ_k = Y(T_k) + Z_k,                                     (2)

where Z_k, which models quantization and general hardware imperfections, is additive white Gaussian noise with E[Z_k²] = σ_Z².

A delay D_k of the sampling instant causes a larger error in Ỹ_k if the slope of Y(t) at t = t_k is large. This is expressed by a first-order Taylor approximation of Y(t):

    Ỹ_k = Y(t_k + D_k) + Z_k                                (3)
        ≈ Y(t_k) + D_k Ẏ(t_k) + Z_k,                        (4)

where Ẏ denotes the derivative of Y with respect to time. We define the approximate jitter error as

    J_k = D_k Ẏ(t_k),                                       (5)

with

    E[J_k] = 0                                              (6)
    E[J_k²] = σ_D² E[Ẏ(t_k)²].                              (7)
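To see how well the first-order model (3)-(7) captures the actual jitter error, the following minimal Python sketch compares the exact error Y(t_k + D_k) - Y(t_k) with J_k = D_k Ẏ(t_k); the sinusoidal test signal and all parameter values are illustrative assumptions, not taken from the paper:

# Numerical check of the first-order jitter model (3)-(7) on a hypothetical
# smooth test signal; parameters are illustrative only.
import numpy as np

rng = np.random.default_rng(0)
f_s, f_0, K = 1.0, 0.05, 100_000     # sampling rate, test frequency, number of samples
sigma_D = 0.1 / f_s                  # jitter standard deviation

t_k = np.arange(K) / f_s             # nominal sampling instants
D_k = rng.normal(0.0, sigma_D, K)    # random delays, E[D_k] = 0, E[D_k^2] = sigma_D^2

y    = lambda t: np.sin(2 * np.pi * f_0 * t)                    # test signal Y(t)
ydot = lambda t: 2 * np.pi * f_0 * np.cos(2 * np.pi * f_0 * t)  # its derivative

exact_err = y(t_k + D_k) - y(t_k)    # exact error caused by the delay D_k
J_k = D_k * ydot(t_k)                # first-order approximation (5)

print("mean of J_k            :", J_k.mean())                          # ~ 0, cf. (6)
print("E[(Y(T_k) - Y(t_k))^2] :", np.mean(exact_err**2))
print("sigma_D^2 E[Ydot^2]    :", sigma_D**2 * np.mean(ydot(t_k)**2))  # cf. (7)

For jitter that is small relative to the signal's rate of change, the empirical error power closely matches σ_D² E[Ẏ²], in line with (7).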

Following [3], we model Ỹ_k as the discrete-time observation of continuous-time filtered noise. This model is illustrated in Figure 1: a continuous-time filter is driven by continuous-time white Gaussian noise; its output is sampled and then corrupted by discrete-time additive white Gaussian noise. The continuous-time system is known and given by its state-space representation. Hence, X(t) ∈ R^n is the state of the system, which evolves in time according to

    Ẋ(t) = A X(t) + b U(t),                                 (8)

with A ∈ R^(n×n) and b ∈ R^n.

Figure 1: System model as in [3]: a white Gaussian noise source U(t) drives a linear system/filter with output Y(t), which is sampled and corrupted by additive white Gaussian noise Z_k to give Ỹ_k.

The input U(t) ∈ R is assumed to have mean zero and the auto-correlation function

    E[U(t + τ) U(t)] = σ_U² δ(τ),                           (9)

where δ(·) denotes the Dirac delta. The output Y(t) of the filter is a linear combination of the state variables:

    Y(t) = c^T X(t)      (Y(t) ∈ R, c ∈ R^n).               (10)

Modeling U(t) as continuous-time white Gaussian noise amounts to a power constraint on the input signal U(t), which causes the spectrum of the estimate of Y(t) to be shaped by the filter defined by A, b and c (cf. Section 3 and [3]). Ẏ(t) can be expressed as

    Ẏ(t) = c^T Ẋ(t)                                         (11)
         = c^T A X(t) + c^T b U(t).                         (12)

In the present paper, we will assume c^T b = 0, which amounts to insisting that the transfer functions from U(t) to Y(t) and from U(t) to Ẏ(t) have low-pass characteristics. We thus have

    Ẏ(t) = c^T A X(t).                                      (13)

3. FACTOR GRAPH REPRESENTATION

We use Forney factor graphs (normal factor graphs) as in [4], where nodes represent factors and edges represent variables. For messages we use the same notation as in [4]: all edges are directed, the forward message along some edge X is denoted by →µ_X(x), and the backward message by ←µ_X(x). The marginal density f_X(x) of X equals →µ_X(x) ←µ_X(x), up to a scale factor. In the present paper all messages are Gaussian and thus fully defined by a mean vector m and a covariance matrix V. We use N(m, V) to denote a Gaussian density function, thus

    →µ_X = N(→m_X, →V_X)                                    (14)
    ←µ_X = N(←m_X, ←V_X).                                   (15)

Gaussian message passing in factor graphs is discussed in detail in [4]. The factor graph described in [3] is easily extended for clock jitter correction, as shown in Figure 2. To reduce the computational complexity, J_k is modeled as Gaussian noise; note that σ_J²(t_k) = E[J_k²] is not constant over k. The factor graph is cycle-free, thus all messages →µ_X(t_k) and ←µ_X(t_k) can be calculated by performing one forward and one backward pass.

Figure 2: A Forney factor graph of the system in Figure 1. The states X(t_k) are connected by the factors f(x(t_{k+1}) | x(t_k)); each output Y(t_k) = c^T X(t_k) is corrupted by the Gaussian jitter noise J_k ~ N(0, σ_J²(t_k)) and the observation noise Z_k ~ N(0, σ_Z²) to give the observation Ỹ_k = ỹ_k.

Table 1: Computation rules as in [3] for Gaussian messages through the node/factor f(x(t_1) | x(t_0)) with t_1 > t_0.

    Forward:
    →m_X(t_1) = e^{A(t_1−t_0)} →m_X(t_0)                                                                    (i.1)
    →V_X(t_1) = e^{A(t_1−t_0)} →V_X(t_0) e^{A^T(t_1−t_0)} + σ_U² ∫_0^{t_1−t_0} e^{Aτ} b b^T e^{A^Tτ} dτ      (i.2)
                (the integral equals Q Θ(t_1−t_0) Q^H, see (17))

    Backward:
    ←m_X(t_0) = e^{−A(t_1−t_0)} ←m_X(t_1)                                                                   (i.3)
    ←V_X(t_0) = e^{−A(t_1−t_0)} ←V_X(t_1) e^{−A^T(t_1−t_0)} + σ_U² ∫_0^{t_1−t_0} e^{−Aτ} b b^T e^{−A^Tτ} dτ  (i.4)
                (the integral equals Q Θ̃(t_1−t_0) Q^H, see (19))
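As a concrete illustration of the forward rule (i.1)-(i.2), the following Python sketch propagates a Gaussian message through one state-transition factor, evaluating the noise integral by simple numerical quadrature (the closed form via diagonalization is given in (16)-(20) below); the 2-dimensional system used here is purely illustrative and not taken from the paper:

# Forward Gaussian message through f(x(t1) | x(t0)) per rules (i.1)-(i.2);
# the 2x2 system below is illustrative only.
import numpy as np
from scipy.linalg import expm

def forward_message(m0, V0, A, b, sigma_U2, dt, steps=2000):
    """Propagate the forward mean and covariance over a time step dt."""
    F = expm(A * dt)                             # e^{A dt}
    # integral_0^dt e^{A tau} b b^T e^{A^T tau} d tau,
    # approximated by a midpoint Riemann sum.
    taus = (np.arange(steps) + 0.5) * dt / steps
    S = np.zeros((len(m0), len(m0)))
    for tau in taus:
        v = expm(A * tau) @ b
        S += np.outer(v, v)
    S *= dt / steps
    m1 = F @ m0                                  # rule (i.1)
    V1 = F @ V0 @ F.T + sigma_U2 * S             # rule (i.2)
    return m1, V1

A = np.array([[0.0, 1.0],
              [-1.0, -0.8]])                     # illustrative stable 2nd-order system
b = np.array([0.0, 1.0])
m1, V1 = forward_message(np.array([1.0, 0.0]), 0.1 * np.eye(2), A, b, sigma_U2=1.0, dt=0.5)
print("forward mean:", m1)
print("forward covariance:\n", V1)

The backward rule (i.3)-(i.4) is obtained in the same way with A replaced by −A in the propagation.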

The computation of the mean vectors and covariance matrices of Gaussian messages through the factors f(x(t_{k+1}) | x(t_k)) is given in [3] and is repeated in Table 1. If the matrix A is diagonalizable, then the integrals in (i.2) and (i.4) can easily be expressed in closed form. If

    A = Q diag(λ_1, ..., λ_n) Q^{−1}                        (16)

for some complex square matrix Q, then

    ∫_0^t e^{Aτ} b b^T e^{A^Tτ} dτ = Q Θ(t) Q^H,            (17)

where the square matrix Θ(t) is given by

    Θ(t)_{k,l} = (Q^{−1}b)_k ((Q^{−1}b)_l)* (e^{(λ_k + λ_l*) t} − 1) / (λ_k + λ_l*),      (18)

and

    ∫_0^t e^{−Aτ} b b^T e^{−A^Tτ} dτ = Q Θ̃(t) Q^H           (19)

with

    Θ̃(t)_{k,l} = (Q^{−1}b)_k ((Q^{−1}b)_l)* (1 − e^{−(λ_k + λ_l*) t}) / (λ_k + λ_l*).     (20)

Note that, in (18) and (20), (Q^{−1}b)_k denotes the k-th component of the vector Q^{−1}b and (·)* denotes complex conjugation. For the message computation rules through all other nodes/factors, the reader is referred to [4].

Calculating all the messages and maximizing the marginal of X(t_k) yields the Maximum a Posteriori (MAP) estimate x̂(t_k) of X(t_k):

    x̂(t_k) = argmax_x f_{X(t_k)}(x | ỹ_0, ..., ỹ_{K−1}, σ_J²(t_0), ..., σ_J²(t_{K−1}))     (21)
            = argmax_x →µ_{X(t_k)}(x) ←µ_{X(t_k)}(x).                                      (22)

We define û(t) as the estimate of the input signal U(t) of the filter. In this paper we are not interested in û(t); cf. [3] for the calculation and the properties of û(t). Since all messages are Gaussian, the maximization in (21) is equivalent to a least squares estimation (cf. [4]). Thus, x̂(t_k) is a regularized least squares estimate of X(t_k) for given σ_J²(t_0), ..., σ_J²(t_{K−1}) and ỹ_0, ..., ỹ_{K−1}, which minimizes the cost function

    (1/σ_U²) ∫_{t_0}^{t_{K−1}} û(t)² dt + Σ_{k=0}^{K−1} (ỹ_k − c^T x̂(t_k))² / (σ_Z² + σ_J²(t_k)).    (23)

This cost function illustrates how the parameter σ_U² serves as a power constraint on û(t). The ratio between the input power σ_U² and the average output noise power of J_k + Z_k is a regularization parameter used to force the spectrum of û(t) and ŷ(t) = c^T x̂(t) to the passband of the continuous-time filter (cf. [3]).

4. DEFINITIONS OF SNR

The following definitions of signal-to-noise ratios (SNR) are used in Sections 5 and 6. We measure SNR in dB (i.e., 10 log_10(SNR)). For the signal Ỹ_k we define

    SNR_tot = E[Y(t)²] / (σ_Z² + E[J_k²])                   (24)
    SNR_Z   = E[Y(t)²] / σ_Z²                               (25)
    SNR_J   = E[Y(t)²] / E[J_k²],                           (26)

with E[J_k²] = σ_D² E[Ẏ(t)²] as in (7). For the SNR of the estimate ŷ(t_k) = c^T x̂(t_k) of Y(t_k) we define

    SNR_out = E[Y(t_k)²] / E[(ŷ(t_k) − Y(t_k))²].           (27)

Assuming Re(λ_i) < 0 for all λ_i in (16) (i.e., a stable system), the mean vector and the covariance matrix of the message

    →µ_X(∞) = lim_{t→∞} →µ_X(t)                             (28)

converge to

    →m_X(∞) = lim_{t→∞} e^{At} →m_X(t_0) = 0                (29)
    →V_X(∞) = lim_{t→∞} σ_U² Q Θ(t) Q^H.                    (30)

→µ_X(∞) represents the probability distribution of X(t) in the absence of observations; therefore

    E[X(t)] = →m_X(∞) = 0                                   (31)
    E[X(t) X(t)^T] = →V_X(∞),                               (32)

and thus

    E[Y(t)²] = c^T E[X(t) X(t)^T] c                         (33)
             = c^T →V_X(∞) c                                (34)
    E[Ẏ(t)²] = c^T A E[X(t) X(t)^T] (c^T A)^T               (35)
             = c^T A →V_X(∞) (c^T A)^T.                     (36)
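The quantities (30) and (33)-(36) are straightforward to compute numerically. The following Python sketch evaluates →V_X(∞) both as the limit of the closed form (18) and, equivalently, via the Lyapunov equation A V + V A^T = −σ_U² b b^T satisfied by the stationary covariance of (8); the 2-dimensional system and all parameter values are illustrative assumptions, not taken from the paper:

# Stationary message covariance (30) and the signal powers (33)-(36),
# computed two ways for an illustrative 2nd-order system.
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

A = np.array([[0.0, 1.0],
              [-2.0, -1.5]])                     # stable: Re(lambda_i) < 0
b = np.array([0.0, 1.0])
c = np.array([1.0, 0.0])
sigma_U2 = 1.0

# (a) Limit of the closed form: V_inf = sigma_U^2 * lim_{t->inf} Q Theta(t) Q^H, cf. (30).
lam, Q = np.linalg.eig(A)
g = np.linalg.solve(Q, b)                        # Q^{-1} b
L = lam[:, None] + lam[None, :].conj()           # lambda_k + conj(lambda_l)
Theta_inf = -np.outer(g, g.conj()) / L           # limit of (18) as t -> infinity
V_inf = sigma_U2 * (Q @ Theta_inf @ Q.conj().T).real

# (b) Equivalently, V_inf solves the Lyapunov equation A V + V A^T = -sigma_U^2 b b^T.
V_lyap = solve_continuous_lyapunov(A, -sigma_U2 * np.outer(b, b))
print("closed form and Lyapunov solution agree:", np.allclose(V_inf, V_lyap))

# Signal powers (33)-(36), used for the SNR definitions above.
E_Y2 = c @ V_inf @ c                             # E[Y(t)^2]
E_Ydot2 = (c @ A) @ V_inf @ (c @ A)              # E[Ydot(t)^2]
print("E[Y^2] =", E_Y2, " E[Ydot^2] =", E_Ydot2)

Together with σ_D², the last quantity directly gives the initialization (39) used by the algorithm in the next section.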

5. THE ALGORITHM

We now describe an iterative algorithm to estimate the signal Y(t_k) for all k. We assume the special case of uniform sampling with rate f_s, i.e., t_k = k/f_s, although the algorithm can easily be adjusted to non-uniform sampling. Some remarks on notation: x^(l) denotes the value of some variable x at the l-th iteration and x̂ denotes the estimate of some random variable X.

At each iteration l, x̂(t_k)^(l) is calculated as described in Section 3. The estimates of Y(t_k) and its derivative are

    ŷ(t_k)^(l) = c^T x̂(t_k)^(l)                             (37)
    ˆẏ(t_k)^(l) = c^T A x̂(t_k)^(l).                          (38)

At the first iteration, no estimates of the derivatives are available. Thus, σ_J²(t_k)^(0) is initialized using (36) to the expected average power of J_k, which is the same for all k:

    σ_J²(t_k)^(0) = σ_D² c^T A →V_X(∞) (c^T A)^T.           (39)

This first iteration is similar to the algorithm described in [2]; the main difference is that we are not assuming strictly bandlimited signals. In the iterations l = 1, 2, ..., the value of σ_J²(t_k)^(l) is set based on the estimate of Ẏ(t_k) from iteration l−1 according to (7):

    σ_J²(t_k)^(l) = σ_D² (ˆẏ(t_k)^(l−1))²                    (40)
                  = σ_D² (c^T A x̂(t_k)^(l−1))²,   l = 1, 2, ....    (41)

The ratio between the input power and the average output noise power,

    σ_U²^(l) / ( σ_Z² + σ_D² (1/N) Σ_{k=1}^{N} (c^T A x̂(t_k)^(l−1))² ),    (42)

influences the spectrum of x̂(t), as mentioned at the end of Section 3. Thus, σ_U²^(l) is chosen such that (42) is kept constant over all iterations. For the numerical results given in Section 6, the values of σ_D², σ_Z², and σ_U² are set equal to the values used for signal generation.

6. NUMERICAL RESULTS

To test the algorithm, sample signals ỹ_k were generated according to the model in Figure 1 (cf. [3] on how to generate sample signals). For the continuous-time system, a lowpass Butterworth filter of order 8 was used. Recall that Y(t) is not strictly bandlimited, but its spectrum is shaped by the spectrum of the continuous-time filter. We characterize the bandwidth of Y(t) by the 3 dB frequency f_c of the Butterworth filter. For signal generation, we chose the random delay D_k of the sampling instants to be uniformly distributed in the interval (−0.5/f_s, 0.5/f_s). Note that the estimation algorithm uses a Gaussian distribution for the time delay with

    E[D_k] = 0                                              (43)
    E[D_k²] = (0.5/f_s)² / 3.                               (44)

The algorithm stops after the l-th iteration if

    (SNR_out^(l) [dB] − SNR_out^(l−1) [dB]) / (SNR_out^(l) [dB] − SNR_out^(0) [dB]) < 0.1.    (45)

For the data shown in Figure 3, each point in the plot was obtained by averaging over multiple generated signals of K samples each. The algorithm always stopped after three or four iterations. Simulations were performed for three different values of the 3 dB frequency f_c (and thus for different bandwidths of Y(t)). The results are shown in Figure 3, where SNR_out of the estimates after the first and the last iteration is plotted for different values of SNR_Z. To show the overall improvement of the SNR, SNR_tot of the observed noisy samples ỹ_k is plotted as well. Not surprisingly, the numerical results of the first iteration resemble the results in [2]. As mentioned in the introduction, the first iteration of our proposed algorithm is similar to the algorithm proposed in [2]. Since [2] assumes strictly bandlimited signals, an exact comparison does not make sense.

Figure 3: Empirical SNR_out (after the first and after the last iteration) and SNR_tot, plotted against SNR_Z [dB], for three different bandwidths f_c of Y(t).
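To make the overall procedure concrete, the following self-contained Python sketch reproduces the iteration of Section 5 in a simplified form. It is an illustration, not the authors' implementation: a small second-order filter stands in for the order-8 Butterworth filter, observations are generated with the first-order jitter model (4), the Gaussian message passing of Section 3 is realized as a standard Kalman/RTS smoother (which computes the same Gaussian marginals on this cycle-free chain), the rescaling of σ_U² via (42) is omitted, and a fixed number of iterations replaces the stopping rule (45). All parameter values are illustrative.

# Simplified end-to-end sketch of the iterative jitter correction of Section 5.
# All matrices and parameters below are illustrative, not from the paper.
import numpy as np
from scipy.linalg import expm, solve_continuous_lyapunov

rng = np.random.default_rng(1)

# Continuous-time model (8)-(10): a small stable low-pass system.
A = np.array([[0.0, 1.0], [-2.0, -1.5]])
b = np.array([0.0, 1.0])
c = np.array([1.0, 0.0])
sigma_U2, sigma_Z2, sigma_D2 = 1.0, 1e-3, 2e-3
f_s, K = 1.0, 2000
dt = 1.0 / f_s

# Discretization over one sampling interval: x_{k+1} = F x_k + w_k, w_k ~ N(0, S).
F = expm(A * dt)
steps = 500
taus = (np.arange(steps) + 0.5) * dt / steps
S = sigma_U2 * sum(np.outer(expm(A * s) @ b, expm(A * s) @ b) for s in taus) * (dt / steps)
V_inf = solve_continuous_lyapunov(A, -sigma_U2 * np.outer(b, b))  # stationary covariance, cf. (30)
V_inf = 0.5 * (V_inf + V_inf.T)                                   # symmetrize numerically

# Generate a test signal and jittered, noisy observations via (1), (2), (4).
w = rng.multivariate_normal(np.zeros(2), S, size=K)
x = np.zeros((K, 2))
x[0] = rng.multivariate_normal(np.zeros(2), V_inf)
for k in range(K - 1):
    x[k + 1] = F @ x[k] + w[k]
y = x @ c                                    # Y(t_k)
ydot = x @ (A.T @ c)                         # Ydot(t_k) = c^T A x(t_k), cf. (13)
D = rng.normal(0.0, np.sqrt(sigma_D2), K)    # sampling-clock jitter D_k
y_obs = y + D * ydot + rng.normal(0.0, np.sqrt(sigma_Z2), K)

def rts_smoother(y_obs, r):
    """Kalman filter + RTS smoother with per-sample observation variance r[k]."""
    n = len(y_obs)
    mp, Vp = np.zeros((n, 2)), np.zeros((n, 2, 2))   # predicted means/covariances
    mf, Vf = np.zeros((n, 2)), np.zeros((n, 2, 2))   # filtered means/covariances
    m_pred, V_pred = np.zeros(2), V_inf.copy()
    for k in range(n):
        mp[k], Vp[k] = m_pred, V_pred
        s = c @ V_pred @ c + r[k]                    # innovation variance
        g = V_pred @ c / s                           # Kalman gain
        mf[k] = m_pred + g * (y_obs[k] - c @ m_pred)
        Vf[k] = V_pred - np.outer(g, c @ V_pred)
        m_pred, V_pred = F @ mf[k], F @ Vf[k] @ F.T + S
    ms = mf.copy()
    for k in range(n - 2, -1, -1):                   # backward (smoothing) pass
        G = Vf[k] @ F.T @ np.linalg.inv(Vp[k + 1])
        ms[k] = mf[k] + G @ (ms[k + 1] - mp[k + 1])
    return ms

# Iteration: initialize sigma_J^2 with (39), re-estimate, update via (40)-(41).
sigma_J2 = np.full(K, sigma_D2 * (c @ A) @ V_inf @ (c @ A))
for it in range(4):
    x_hat = rts_smoother(y_obs, sigma_Z2 + sigma_J2)
    y_hat = x_hat @ c
    snr_out = 10 * np.log10(np.mean(y**2) / np.mean((y_hat - y)**2))
    print(f"iteration {it}: empirical SNR_out = {snr_out:.2f} dB")
    sigma_J2 = sigma_D2 * (x_hat @ (A.T @ c))**2     # cf. (40)-(41)

In this simplified setting, later iterations assign a smaller observation variance σ_Z² + σ_J²(t_k) to samples where the estimated slope is small, so those samples are weighted more strongly in the smoothing pass; this is the mechanism exploited by the algorithm.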
7. CONCLUSION

We have proposed an iterative algorithm to estimate a continuous-time signal from noisy discrete-time observations subject to clock jitter. The noise induced by the clock jitter was modeled as dependent on the slope of the continuous-time signal at the sampling instant. We showed how to improve the estimate by iterative processing. The algorithm is easily extended to non-uniform sampling. Also, the estimation of the input signal U(t) of the continuous-time filter is straightforward (cf. [3]).

ACKNOWLEDGEMENT

The authors would like to thank Mr. Daniel Baumann for the initial implementation of the algorithms of this paper and for providing first simulation results.

REFERENCES

[1] R. S. Prendergast and T. Q. Nguyen, "Optimal reconstruction of periodically sampled signals with probabilistic timing delays," Thirty-Eighth Asilomar Conference on Signals, Systems and Computers, 2004, Vol. 2, pp. 141-145.
[2] A. Nordio, C.-F. Chiasserini, and E. Viterbo, "Signal reconstruction errors in jittered sampling," IEEE Trans. Signal Processing, Vol. 57, No. 12, pp. 4711-4718, Dec. 2009.
[3] L. Bolliger, H.-A. Loeliger, and C. Vogel, "Simulation, MMSE estimation, and interpolation of sampled continuous-time signals using factor graphs," Information Theory & Applications Workshop, UCSD, La Jolla, CA, USA, 2010.
[4] H.-A. Loeliger, J. Dauwels, Junli Hu, S. Korl, Li Ping, and F. R. Kschischang, "The factor graph approach to model-based signal processing," Proceedings of the IEEE, Vol. 95, No. 6, pp. 1295-1309, June 2007.