Bayesian and empirical Bayesian approach to weighted averaging of ECG signal

BULLETIN OF THE POLISH ACADEMY OF SCIENCES: TECHNICAL SCIENCES, Vol. 55, No. 4, 2007

A. MOMOT1, M. MOMOT2, and J. ŁĘSKI2,3

1 Silesian University of Technology, Institute of Computer Science, 16 Akademicka St., 44-100 Gliwice, Poland
2 Institute of Medical Technology and Equipment, 118 Roosevelt St., 41-800 Zabrze, Poland
3 Silesian University of Technology, Institute of Electronics, 16 Akademicka St., 44-100 Gliwice, Poland

e-mail: jacek.leski@polsl.pl

Abstract. One of the prime tools in non-invasive cardiac electrophysiology is the recording of the electrocardiographic (ECG) signal, whose analysis is greatly useful in the screening and diagnosis of cardiovascular diseases. However, one of the greatest problems is that the electrical activity of the heart is usually recorded in the presence of noise. The paper presents a Bayesian and an empirical Bayesian approach to the problem of weighted signal averaging in the time domain, which is commonly used to extract a useful signal distorted by noise. Averaging is especially useful for biomedical signals such as the ECG signal, where the spectra of the signal and the noise significantly overlap. The use of weighted averaging is motivated by the variability of noise power from cycle to cycle, often observed in reality. It is demonstrated that exploiting a probabilistic Bayesian learning framework leads to accurate prediction models. Additionally, even in the presence of nuisance parameters, the empirical Bayesian approach offers a method for their automatic estimation, which reduces the number of preset parameters. Performance of the new method is experimentally compared with traditional averaging by the arithmetic mean and with a weighted averaging method based on criterion function minimization.

Key words: ECG signal, weighted averaging, Bayesian inference.

1. Introduction

In most biomedical signal processing systems (for example, for the electrocardiographic signal, which is the main case of interest in this work) noise reduction plays a very important role. The accuracy of all later operations performed on the signal, such as detection, classification or measurement, depends on the quality of the noise-reduction algorithms. Using the fact that certain biological systems produce repetitive patterns, averaging in the time domain may be used for noise attenuation. The traditional averaging technique assumes that the noise power is constant from cycle to cycle; however, most types of noise are not stationary. In these cases a need for weighted averaging arises, which reduces the influence of heavily distorted cycles on the resulting averaged signal (or even eliminates them).

The paper presents a new method of signal averaging which incorporates Bayesian and empirical Bayesian inference. By exploiting a probabilistic Bayesian framework [1], [2] and an expectation-maximization technique [3], an algorithm of weighted averaging can be derived whose application to electrocardiographic (ECG) signal averaging is competitive with alternative methods, as will be shown in the later part of the paper.

2. Signal averaging methods

Let us assume that in each signal cycle $y_i(j)$ is the sum of a deterministic (useful) signal $x(j)$, which is the same in all cycles, and a random noise $n_i(j)$ with zero mean and variance equal to $\sigma_i^2$ for the $i$th cycle. Thus, $y_i(j) = x(j) + n_i(j)$, where $i$ is the cycle index, $i \in \{1, 2, \ldots, M\}$, and $j$ is the sample index in a single cycle, $j \in \{1, 2, \ldots, N\}$ (all cycles have the same length $N$).
The weighted average is given by

$$ v(j) = \sum_{i=1}^{M} w_i\, y_i(j), \qquad (1) $$

where $w_i$ is the weight for the $i$th signal cycle and $v(j)$ is the averaged signal.

2.1. Traditional arithmetic averaging. The traditional ensemble averaging with the arithmetic mean as the aggregation operation gives all the weights $w_i$ equal to $1/M$. If the noise variance is constant for all cycles, then these weights are optimal in the sense of minimizing the mean square error between $v$ and $x$, assuming a Gaussian distribution of the noise. When the noise has a non-Gaussian distribution, the estimate (1) is not optimal, but it is still the best of all linear estimators of $x$ [4].

2.2. Weighted averaging method based on criterion function minimization. As shown in [8], for $y_i = [y_i(1), y_i(2), \ldots, y_i(N)]^T$, $w = [w_1, w_2, \ldots, w_M]^T$ and $v = [v(1), v(2), \ldots, v(N)]^T$, minimization of the scalar criterion function

$$ I_m(w, v) = \sum_{i=1}^{M} (w_i)^m\, \rho(y_i - v), \qquad (2) $$

with respect to the weights vector $w$ yields

$$ w_i = \frac{\big[\rho(y_i - v)\big]^{1/(1-m)}}{\sum_{k=1}^{M} \big[\rho(y_k - v)\big]^{1/(1-m)}}, \qquad (3) $$

for $i \in \{1, 2, \ldots, M\}$, where $\rho(\cdot)$ is a measure of dissimilarity for a vector argument and $m \in (1, \infty)$ is a weighting exponent parameter. When the most frequently used quadratic function $\rho(\cdot) = \|\cdot\|^2$ is used, the averaged signal can be obtained as

$$ v = \frac{\sum_{i=1}^{M} (w_i)^m\, y_i}{\sum_{i=1}^{M} (w_i)^m}, \qquad (4) $$

for the weights vector given by (3) with the quadratic function. The optimal solution of the minimization of (2) with respect to $w$ and $v$ is a fixed point of (3) and (4) and is obtained by Picard iteration. If $m$ tends to one, the trivial solution is obtained in which only one weight, corresponding to the signal cycle with the smallest dissimilarity to the averaged signal, is equal to one. If $m$ tends to infinity, all weights tend to $1/M$. Generally, a larger $m$ results in a smaller influence of the dissimilarity measures. The most common value is $m = 2$, which results in a greater decrease of the medium weights [5].
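For illustration, a minimal sketch of the WACFM fixed-point iteration (3)-(4) with the quadratic dissimilarity is given below; the function name, the guard against zero dissimilarity and the iteration cap are implementation assumptions rather than part of the original method.

```python
import numpy as np

def wacfm_average(Y, m=2.0, eps=1e-6, max_iter=100):
    """Weighted averaging by criterion function minimization, Eqs. (2)-(4).

    Y : (M, N) array, one ECG cycle per row.  Returns (v, w).
    """
    M, N = Y.shape
    w = np.full(M, 1.0 / M)                    # start from arithmetic-mean weights
    v = Y.mean(axis=0)
    for _ in range(max_iter):
        rho = np.sum((Y - v) ** 2, axis=1)     # quadratic dissimilarity rho(y_i - v)
        rho = np.maximum(rho, 1e-30)           # guard against division by zero
        w_new = rho ** (1.0 / (1.0 - m))       # Eq. (3), numerator
        w_new /= w_new.sum()                   # Eq. (3), normalization
        v = (w_new ** m) @ Y / np.sum(w_new ** m)   # Eq. (4)
        if np.linalg.norm(w_new - w) < eps:    # stop on successive weight vectors
            w = w_new
            break
        w = w_new
    return v, w
```

With $m = 2$ the weights become inversely proportional to the squared distance of each cycle from the current average, which matches the behaviour described above.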

2.3. Bayesian and empirical Bayesian weighted averaging methods. Given a data set $y = \{y_i(j)\}$, where $i$ is the cycle index, $i \in \{1, 2, \ldots, M\}$, and $j$ is the sample index in a single cycle, $j \in \{1, 2, \ldots, N\}$, it is assumed that $y_i(j) = x(j) + n_i(j)$, where the random noise $n_i(j)$ is zero-mean Gaussian with variance $\sigma_i^2$ for the $i$th cycle, and the signal $x = \{x(j)\}$ also has a Gaussian distribution with zero mean and covariance matrix $B = \mathrm{diag}(\eta_1, \eta_2, \ldots, \eta_N)$. The zero-mean assumption for the signal expresses the fact that there is no prior knowledge about the real distance from the signal to the isoelectric line. Thus, from the Bayes rule, the posterior distribution over $x$ and the noise precisions is

$$ p(x, \alpha \mid y, \beta) \propto \frac{p(y \mid x, \alpha)\, p(x \mid \beta)\, p(\alpha)}{p(y)} \propto \left( \prod_{i=1}^{M} \alpha_i^{N/2} \right) \exp\!\left( -\frac{1}{2} \sum_{i=1}^{M} \alpha_i \sum_{j=1}^{N} \big(y_i(j) - x(j)\big)^2 \right) \left( \prod_{j=1}^{N} \beta_j^{1/2} \right) \exp\!\left( -\frac{1}{2} \sum_{j=1}^{N} \beta_j\, x(j)^2 \right), \qquad (5) $$

where $\alpha_i = \sigma_i^{-2}$ and $\beta_j = \eta_j^{-1}$, because the prior $p(\alpha)$ is assumed to be approximately constant (for large $M$ the influence of this prior is very small). The values of $x$ and $\alpha$ which maximize (5) are given by

$$ \alpha_i = \frac{N}{\sum_{j=1}^{N} \big(y_i(j) - x(j)\big)^2}, \qquad (6) $$

$$ x(j) = \frac{\sum_{i=1}^{M} \alpha_i\, y_i(j)}{\beta_j + \sum_{i=1}^{M} \alpha_i}, \qquad (7) $$

for $i \in \{1, 2, \ldots, M\}$ and $j \in \{1, 2, \ldots, N\}$. The conditions (6) and (7) are obtained by differentiating the logarithm of (5) with respect to $\alpha$ and $x$, respectively, and setting the results equal to zero. Differentiating the logarithm of (5) with respect to $\alpha_k$ gives

$$ \frac{\partial}{\partial \alpha_k} \log p(x, \alpha \mid y, \beta) = \frac{N}{2\alpha_k} - \frac{1}{2} \sum_{j=1}^{N} \big(y_k(j) - x(j)\big)^2, \qquad (8) $$

and differentiating with respect to $x(k)$ gives

$$ \frac{\partial}{\partial x(k)} \log p(x, \alpha \mid y, \beta) = \sum_{i=1}^{M} \alpha_i \big(y_i(k) - x(k)\big) - x(k)\, \beta_k, \qquad (9) $$

so that setting (9) to zero yields $x(k) = \sum_{i} y_i(k)\,\alpha_i \big/ \big(\beta_k + \sum_{i} \alpha_i\big)$, i.e. the form (7).

Since $\beta_j$ cannot be observed, an iterative EM algorithm is used, as in [6]. Assume the gamma prior, which is the conjugate prior distribution for the inverse of the normal variance [1], [2], with scale parameter $\lambda$ and shape parameter $p$:

$$ p(\beta_j) = \frac{\lambda^{p}}{\Gamma(p)}\, \beta_j^{\,p-1} \exp(-\lambda \beta_j), \qquad \lambda > 0,\; p > 0,\; \beta_j > 0, \qquad (10) $$

for all $j$. As the value of $\beta_j$ the posterior expectation is taken:

$$ E\big(\beta_j \mid x(j)\big) = \int \beta_j\, p\big(\beta_j \mid x(j)\big)\, d\beta_j = \frac{\int \beta_j\, p\big(x(j) \mid \beta_j\big)\, p(\beta_j)\, d\beta_j}{\int p\big(x(j) \mid \beta_j\big)\, p(\beta_j)\, d\beta_j} = \frac{2p + 1}{x(j)^2 + 2\lambda}. \qquad (11) $$

The estimate $\hat\lambda$ of the hyperparameter $\lambda$ can be calculated by an empirical method [7]. The probability distribution function $p(x(j) \mid \lambda)$ can be written in the form

$$ p\big(x(j) \mid \lambda\big) = \int \sqrt{\frac{\beta_j}{2\pi}} \exp\!\left( -\frac{1}{2}\, x(j)^2 \beta_j \right) \frac{\lambda^{p}}{\Gamma(p)}\, \beta_j^{\,p-1} \exp(-\lambda \beta_j)\, d\beta_j = \frac{\lambda^{p}\,(2p-1)!!}{\Gamma(p)\,\big(x(j)^2 + 2\lambda\big)^{p + 1/2}}, \qquad (12) $$

where the last equality is a consequence of the fact that, after a change of variables, the remaining integral is proportional to the $2p$-th moment of the standard normal distribution, assuming that $p$ is a positive integer. The double factorial is defined as

$$ (2p-1)!! = 1 \cdot 3 \cdot \ldots \cdot (2p-1). \qquad (13) $$

Since $p$ is a positive integer,

$$ E\big(|x(j)| \mid \lambda\big) = \int |x(j)|\, p\big(x(j) \mid \lambda\big)\, dx(j) = \frac{\lambda^{p}\,(2p-1)!!}{\Gamma(p)} \cdot \frac{2\,(2\lambda)^{1/2 - p}}{2p - 1} = \frac{(2p-3)!!}{2^{\,p - 3/2}\, \Gamma(p)}\, \sqrt{\lambda}. \qquad (14) $$

Therefore the estimate $\hat\lambda$ of the hyperparameter $\lambda$ can be calculated from the first absolute sample moment as

$$ \hat\lambda = \left( \frac{2^{\,p - 3/2}\, \Gamma(p)}{(2p-3)!!}\, \frac{1}{N} \sum_{j=1}^{N} |x(j)| \right)^{\!2}. \qquad (15) $$

The results above are a generalisation of those presented in [8], where the prior distribution of $\beta_j$ was exponential, which is the special case of the gamma distribution with the shape parameter $p$ equal to 1. The proposed Empirical Bayesian Weighted Averaging (EBWA.1) algorithm can therefore be described as follows, where $\varepsilon$ and $p$ (a positive integer) are preset parameters:

1. Initialize $v^{(1)} \in \mathbb{R}^N$. Set the iteration index $k = 2$.
2. Calculate the hyperparameter $\lambda^{(k)}$ using (15), next $\beta_j^{(k)}$ using (11) for $j = 1, 2, \ldots, N$ and $\alpha_i^{(k)}$ using (6) for $i = 1, 2, \ldots, M$, assuming $x = v^{(k-1)}$.
3. Update the averaged signal for the $k$th iteration, $v^{(k)}$, using (7) with $\beta_j^{(k)}$ and $\alpha_i^{(k)}$, assuming $v^{(k)} = x$.
4. If $\|v^{(k)} - v^{(k-1)}\| / \|v^{(k)}\| > \varepsilon$, then set $k \leftarrow k + 1$ and go to 2; else stop.
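The empirical step of EBWA.1 amounts to evaluating Eqs. (15) and (11) on the current signal estimate. A minimal sketch is given below, assuming NumPy vectors; the helper names and the explicit double-factorial routine are illustrative choices, not taken from the paper.

```python
import numpy as np
from math import gamma

def double_factorial(n):
    """n!! with the convention (-1)!! = 1, as needed for p = 1 in Eq. (15)."""
    result = 1
    while n > 1:
        result *= n
        n -= 2
    return result

def lambda_hat_first_moment(x, p):
    """Empirical estimate of lambda from the first absolute sample moment, Eq. (15)."""
    m1 = np.mean(np.abs(x))
    c = 2.0 ** (p - 1.5) * gamma(p) / double_factorial(2 * p - 3)
    return (c * m1) ** 2

def beta_update(x, lam, p):
    """Posterior expectation of beta_j under the gamma prior, Eq. (11)."""
    return (2.0 * p + 1.0) / (x ** 2 + 2.0 * lam)
```

For EBWA.3 one would replace the first-moment estimator by the third-absolute-moment estimate of Eq. (17) below.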

When the parameter $p$ is a positive integer greater than 1, the estimate $\hat\lambda$ of the hyperparameter $\lambda$ can be calculated from the third absolute sample moment. Since $p$ is a positive integer,

$$ E\big(|x(j)|^3 \mid \lambda\big) = \int |x(j)|^3\, p\big(x(j) \mid \lambda\big)\, dx(j) = \frac{\lambda^{p}\,(2p-1)!!}{\Gamma(p)} \int \frac{|x(j)|^3}{\big(x(j)^2 + 2\lambda\big)^{p+1/2}}\, dx(j) = \frac{(2p-3)!!\; 2^{\,7/2 - p}}{(2p-3)\, \Gamma(p)}\, \lambda^{3/2}. \qquad (16) $$

Therefore the hyperparameter $\lambda$ can be estimated from the third absolute sample moment as

$$ \hat\lambda = \left( \frac{(2p-3)\, \Gamma(p)}{2^{\,7/2 - p}\, (2p-3)!!}\, \frac{1}{N} \sum_{j=1}^{N} |x(j)|^3 \right)^{\!2/3}. \qquad (17) $$

In this case the proposed Empirical Bayesian Weighted Averaging (EBWA.3) algorithm can be described as follows, where $\varepsilon$ and $p$ (a positive integer greater than 1) are preset parameters:

1. Initialize $v^{(1)} \in \mathbb{R}^N$. Set the iteration index $k = 2$.
2. Calculate the hyperparameter $\lambda^{(k)}$ using (17), next $\beta_j^{(k)}$ using (11) for $j = 1, 2, \ldots, N$ and $\alpha_i^{(k)}$ using (6) for $i = 1, 2, \ldots, M$, assuming $x = v^{(k-1)}$.
3. Update the averaged signal for the $k$th iteration, $v^{(k)}$, using (7) with $\beta_j^{(k)}$ and $\alpha_i^{(k)}$, assuming $v^{(k)} = x$.
4. If $\|v^{(k)} - v^{(k-1)}\| / \|v^{(k)}\| > \varepsilon$, then set $k \leftarrow k + 1$ and go to 2; else stop.

However, the assumption of a gamma prior for $\beta_j$ is not necessarily adequate in all cases. When no reliable prior information concerning $\beta_j$ exists, it is possible to use the noninformative Jeffreys prior [2] given by

$$ p(\beta_j) \propto \frac{1}{\beta_j}, \qquad \beta_j > 0. \qquad (18) $$

In this situation the proposed algorithm does not require setting any additional parameters such as $p$ or $\lambda$, because

$$ E\big(\beta_j \mid x(j)\big) = \frac{\int \beta_j\, p\big(x(j) \mid \beta_j\big)\, p(\beta_j)\, d\beta_j}{\int p\big(x(j) \mid \beta_j\big)\, p(\beta_j)\, d\beta_j} = \frac{1}{x(j)^2}. \qquad (19) $$

As can be seen, this result coincides with (11) for $\lambda = 0$ and $p = 0$. In this case the proposed Bayesian Weighted Averaging (BWA) algorithm can be described as follows, where $\varepsilon$ is a preset parameter:

1. Initialize $v^{(1)} \in \mathbb{R}^N$. Set the iteration index $k = 2$.
2. Calculate $\beta_j^{(k)}$ using (19) for $j = 1, 2, \ldots, N$ and $\alpha_i^{(k)}$ using (6) for $i = 1, 2, \ldots, M$, assuming $x = v^{(k-1)}$.
3. Update the averaged signal for the $k$th iteration, $v^{(k)}$, using (7) with $\beta_j^{(k)}$ and $\alpha_i^{(k)}$, assuming $v^{(k)} = x$.
4. If $\|v^{(k)} - v^{(k-1)}\| / \|v^{(k)}\| > \varepsilon$, then set $k \leftarrow k + 1$ and go to 2; else stop.
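To make the iteration concrete, the following is a compact sketch of the BWA loop built from Eqs. (6), (7) and (19); the small constant guarding against division by zero and the iteration cap are implementation assumptions. The EBWA variants are obtained by replacing the beta update with Eq. (11), with lambda estimated from Eq. (15) or Eq. (17).

```python
import numpy as np

def bwa_average(Y, eps=1e-6, max_iter=100):
    """Bayesian weighted averaging (BWA) of ECG cycles.

    Y : (M, N) array, one cycle per row.  Returns the averaged signal v.
    """
    M, N = Y.shape
    v = Y.mean(axis=0)                               # initialize with the arithmetic mean
    for _ in range(max_iter):
        beta = 1.0 / np.maximum(v ** 2, 1e-12)       # Eq. (19), Jeffreys prior
        alpha = N / np.sum((Y - v) ** 2, axis=1)     # Eq. (6), per-cycle noise precision
        v_new = (alpha @ Y) / (beta + alpha.sum())   # Eq. (7), posterior signal estimate
        if np.linalg.norm(v_new - v) <= eps * np.linalg.norm(v_new):
            return v_new                             # relative change below epsilon
        v = v_new
    return v
```

The zero-mean prior on $x$ enters only through $\beta_j$ in the denominator of Eq. (7), which shrinks low-amplitude samples towards the isoelectric line.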

3. Numerical experiments

In all experiments using the Weighted Averaging method based on Criterion Function Minimization (WACFM) as well as the Bayesian and Empirical Bayesian Weighted Averaging methods (BWA, EBWA.1 and EBWA.3), the calculations were initialized with the means of the disturbed signal cycles. Iterations were stopped as soon as the $L_2$ norm for a successive pair of vectors was less than $10^{-6}$ (the $w$ vectors for WACFM and the $v$ vectors for BWA, EBWA.1 and EBWA.3). For a computed averaged signal, the performance of the tested methods was evaluated by the maximal absolute difference (MAX) between the deterministic component and the averaged signal. The root mean-square error (RMSE) between the deterministic component and the averaged signal was also computed. All experiments were run in the R environment1 (R version 2.4).

The simulated ECG signal cycles were obtained as the same deterministic component with added realizations of random noise. The deterministic component presented in Fig. 1 was obtained by averaging real ECG signal cycles (recorded with 16-bit resolution) with a high signal-to-noise ratio; before averaging, these cycles were time-aligned using the cross-correlation method. A series of ECG cycles was generated with the same deterministic component and zero-mean white Gaussian noise with four different standard deviations, a different one for each of four consecutive groups of cycles. These signal cycles were averaged using the following methods: Traditional Arithmetic Averaging (TAA), WACFM with $m = 2$, BWA, as well as EBWA.1 and EBWA.3 with various values of the parameter $p$. Subtracting the deterministic component from the averaged signals gives the residual noise. The RMSE and the maximal value (MAX) of the residual noise for all tested methods are presented in Table 1.

Fig. 1. The simulated ECG signal and this signal with additive zero-mean Gaussian noise

Table 1
RMSE and maximum absolute error for averaged ECG signals with Gaussian noise

Averaging method      MAX [µV]      RMSE [µV]
TAA                   39.785        .9635
WACFM                 5.984498      .9385
BWA                   8.499         .345869
EBWA.1, p = 1         5.5853        .953
EBWA.1, p = 2         5.585997      .9564
EBWA.1, p = 3         5.5868        .9579
EBWA.1, p = 4         5.586         .95787
EBWA.1, p = 5         5.5863        .958
EBWA.3, p = 2         5.585873      .9547
EBWA.3, p = 3         5.5863        .95777
EBWA.3, p = 4         5.58678       .9586
EBWA.3, p = 5         5.586         .959

1 R is a free software environment for statistical computing and graphics (http://www.r-project.org)
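As an illustration of the evaluation protocol, the sketch below builds a synthetic series of noisy cycles from a given deterministic component and computes the two figures of merit used in the tables; the group sizes and noise levels in the commented usage are placeholders, since the exact values used in the paper are not reproduced here.

```python
import numpy as np

def make_gaussian_series(x, stds, cycles_per_group, rng=None):
    """Stack noisy copies of the deterministic component x, one noise std per group."""
    rng = np.random.default_rng(rng)
    groups = [x + rng.normal(0.0, s, size=(cycles_per_group, x.size)) for s in stds]
    return np.vstack(groups)

def max_and_rmse(x, v):
    """Figures of merit used in Tables 1-3: maximal absolute error and RMSE."""
    residual = v - x
    return np.max(np.abs(residual)), np.sqrt(np.mean(residual ** 2))

# Hypothetical usage with placeholder noise levels (in microvolts):
# Y = make_gaussian_series(x, stds=[10, 25, 50, 100], cycles_per_group=25, rng=0)
# print(max_and_rmse(x, bwa_average(Y)))
```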

However, in practice, besides Gaussian types of noise, random noise with a heavy-tailed distribution can also be observed [9]. An example of such a distribution is the Cauchy distribution with the probability density function given by

$$ f(x) = \frac{1}{\pi s \left[ 1 + \left( \dfrac{x - l}{s} \right)^{2} \right]}, \qquad (20) $$

where $l$ is the location parameter and $s$ is the scale parameter. These parameters are commonly used instead of the expected value and the standard deviation because the first two moments of this distribution do not exist. In the next experiment, Cauchy-distributed random noise with location parameter $l = 0$ and a fixed scale parameter $s$ was added to the ECG signal. In Fig. 2 the deterministic component is presented along with an example of the Cauchy noise.

Fig. 2. The simulated ECG signal and this signal with Cauchy noise

Table 2
RMSE and maximum absolute error for averaged ECG signals with Cauchy noise

Averaging method      MAX [µV]      RMSE [µV]
TAA                   697.43        44.3746
WACFM                 69.5          7.4984
BWA                   56.7664       4.739
EBWA.1, p = 1         6.5436        6.787
EBWA.1, p = 2         6.478         6.834
EBWA.1, p = 3         6.4779        6.38
EBWA.1, p = 4         6.549         6.49
EBWA.1, p = 5         6.5657        6.459
EBWA.3, p = 2         6.3484        6.
EBWA.3, p = 3         6.49377       6.3974
EBWA.3, p = 4         6.537         6.459
EBWA.3, p = 5         6.54688       6.4789

The RMSE and the maximal value (MAX) of the residual noise for all tested methods are presented in Table 2. In this experiment the signal cycles were averaged using WACFM with $m = 3$, because for the most common value $m = 2$ the method did not reach the stop condition even after many times more iterations than were needed in the previous cases. The smallest RMSE was obtained by the BWA method, and only slightly worse results were obtained by EBWA and WACFM. It can be seen that despite the fact that the assumption of Gaussian-distributed random noise is not satisfied, the results of the experiment are much better compared with the ones obtained by traditional arithmetic averaging.
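The heavy-tailed test can be reproduced along the following lines: samples from the standard Cauchy generator, scaled by $s$ and shifted by $l$, follow the density (20). The concrete scale value below is a placeholder, not the one used in the paper.

```python
import numpy as np

def add_cauchy_noise(x, n_cycles, l=0.0, s=50.0, rng=None):
    """Return n_cycles copies of x corrupted by Cauchy(l, s) noise, cf. Eq. (20).

    The default scale s is a placeholder value for illustration only.
    """
    rng = np.random.default_rng(rng)
    noise = l + s * rng.standard_cauchy(size=(n_cycles, x.size))
    return x + noise
```

Because the Cauchy distribution has no finite variance, cycles hit by large impulses receive very small precision estimates in Eq. (6), so their weights in Eq. (7) are driven towards zero.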

Usually, in the case of the electrocardiographic (ECG) signal, two principal sources of noise can be distinguished: the technical one, caused by the physical parameters of the recording equipment, and the physiological one, representing the bioelectrical activity of living cells not belonging to the area of diagnostic interest (also called background activity). Both sources produce noise of random occurrence, overlapping the ECG signal in both the time and frequency domains [10]. This was the motivation to perform the next experiment with an ECG signal distorted by muscle noise, which can be treated as a composition of Gaussian and impulsive distortion (the latter well modeled by a heavy-tailed distribution). In practical applications the signal-to-noise ratio is often very poor, even below one (which means that the noise level exceeds the signal level). In Fig. 3 the deterministic component is presented along with an example of such muscle noise, where the signal-to-noise ratio is equal to one (0 dB).

Fig. 3. The simulated ECG signal and this signal with muscle noise

Table 3
RMSE and maximum absolute error for averaged ECG signals with muscle noise

Averaging method      MAX [µV]      RMSE [µV]
TAA                   6.476         35.9589
WACFM                 87.64957      33.8
BWA                   79.879        6.9533
EBWA.1, p = 1         79.438        3.9347
EBWA.1, p = 2         77.9849       3.637
EBWA.1, p = 3         76.844        3.8546
EBWA.1, p = 4         76.6369       3.937
EBWA.1, p = 5         76.548        3.9784
EBWA.3, p = 2         78.569        3.546
EBWA.3, p = 3         76.64         3.853
EBWA.3, p = 4         76.3566       3.945
EBWA.3, p = 5         76.46         3.98

The RMSE and the maximal value (MAX) of the residual noise for all tested methods are presented in Table 3. The smallest RMSE was obtained by the BWA method, and only slightly worse results were obtained by EBWA and WACFM. It can be seen that despite the fact that the assumption of Gaussian-distributed random noise is not satisfied, the results of the experiment are still acceptable (see Figs. 4-8) and slightly better compared with the ones obtained by traditional arithmetic averaging.
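Since the muscle-noise experiment fixes the signal-to-noise ratio at 0 dB, a small helper such as the one sketched below can scale a recorded noise segment to the desired SNR before it is added to the deterministic component; the function name and interface are illustrative.

```python
import numpy as np

def scale_noise_to_snr(x, noise, snr_db=0.0):
    """Scale `noise` so that x + noise has the requested signal-to-noise ratio in dB.

    For impulsive noise the empirical power of the given segment is used.
    """
    p_signal = np.mean(x ** 2)
    p_noise = np.mean(noise ** 2)
    target_noise_power = p_signal / (10.0 ** (snr_db / 10.0))
    return noise * np.sqrt(target_noise_power / p_noise)
```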

Fig. 4. The simulated ECG signal and the averaged signal using the TAA method

Fig. 5. The simulated ECG signal and the averaged signal using the WACFM method

Fig. 6. The simulated ECG signal and the averaged signal using the BWA method

Fig. 7. The simulated ECG signal and the averaged signal using the EBWA.1 method

Fig. 8. The simulated ECG signal and the averaged signal using the EBWA.3 method with parameter p = 5

4. Conclusions

In this work a new approach to weighted averaging of biomedical signals was presented, along with its application to averaging ECG signals. The presented methods use the results of Bayesian and empirical Bayesian methodology, which leads to improved noise reduction in comparison with alternative methods. The new methods are introduced as Bayesian inference combined with an expectation-maximization procedure.

It is worth noting that the BWA algorithm does not require setting any additional parameters, in contrast to, for example, WACFM, which needs the value of the weighting exponent parameter m. In the EBWA algorithms the parameter λ, which influences the performance of the procedure, is estimated during the iterations from the input values by an empirical method, and the only parameter which must be set manually is the parameter p; in all performed experiments the best results appeared for small values of p. Another advantage of the presented method is fast convergence to the optimal result: in all performed experiments both the EBWA and the BWA algorithms stopped after a small number of iterations.

The results of the numerical experiments show the usefulness of the presented method for noise reduction in the ECG signal, competitively with existing algorithms. The short computational study presented in this paper confirms that applying Bayesian inference has a practical and beneficial use in weighted averaging of biomedical signals, in particular the electrocardiographic signal.

REFERENCES

[1] B. Carlin and T. Louis, Bayes and Empirical Bayes Methods for Data Analysis, Chapman & Hall, New York, 1996.
[2] A. Gelman, J. Carlin, H. Stern, and D. Rubin, Bayesian Data Analysis, Chapman & Hall, New York, 2004.
[3] R. Duda, P. Hart, and D. Stork, Pattern Classification, John Wiley & Sons, New York, 2001.
[4] J. Łęski, Application of time domain signal averaging and Kalman filtering for ECG noise reduction, Ph.D. Thesis, Silesian University of Technology, Gliwice, 1989.
[5] J. Łęski, Robust weighted averaging, IEEE Transactions on Biomedical Engineering 49 (8), 796-804 (2002).
[6] M. Figueiredo, Adaptive sparseness for supervised learning, IEEE Transactions on Pattern Analysis and Machine Intelligence 25 (9), 1150-1159 (2003).
[7] H. White, Estimation, Inference and Specification Analysis, Cambridge University Press, Cambridge, 1996.
[8] A. Momot, M. Momot, and J. Łęski, Empirical Bayesian averaging of biomedical signals, Proc. XI International Conference MIT 2006 (2006).
[9] R. Adler, R. Feldman, and M. Taqqu, A Practical Guide to Heavy Tails, Birkhauser, Boston, 1998.
[10] P. Augustyniak, Time-frequency modelling and discrimination of noise in the electrocardiogram, Physiological Measurement 24 (2003).