Anomaly Detection and Removal Using Non-Stationary Gaussian Processes

Steven Reece, Roman Garnett, Michael Osborne and Stephen Roberts
Robotics Research Group, Dept. Engineering Science, Oxford University, UK

ABSTRACT

This paper proposes a novel Gaussian process approach to fault removal in time-series data. In order to facilitate fault recovery we introduce the Markov Region Link kernel for handling non-stationary Gaussian processes. This kernel is piece-wise stationary but guarantees that functions generated by it and their derivatives (when required) are everywhere continuous. We apply this kernel to the removal of drift and bias errors in faulty sensor data and also to the recovery of EOG artifact corrupted EEG signals.

I. INTRODUCTION

Gaussian processes (GPs) are experiencing a resurgence of interest. Current applications are in diverse fields such as geophysics, medical imaging, multi-sensor fusion [5] and sensor placement [2]. A GP is often thought of as a Gaussian over functions [7]. It can be used to construct a distribution over functions via a prior on the function's values. The prior is specified through a positive-definite kernel, which determines the covariance between two outputs as a function of their corresponding inputs. A GP is fully described by its mean and covariance functions.

Suppose we have a set of training data $D = \{(x_1, y_1), \ldots, (x_n, y_n)\}$ drawn from a noisy process:

$$y_i = f(x_i) + \epsilon_i \qquad (1)$$

where $f(x_i)$ is the real process and $\epsilon_i$ is zero-mean Gaussian with variance $\sigma_n^2$. For convenience both inputs and outputs are aggregated into $X = \{x_1, \ldots, x_n\}$ and $Y = \{y_1, \ldots, y_n\}$ respectively. The GP estimates the value of the function $f$ at sample locations $X_* = \{x_{*1}, \ldots, x_{*m}\}$. The basic GP regression equations are given in [7]:

$$\hat{f}_* = K(X_*, X)\,[K(X, X) + \sigma_n^2 I]^{-1}\, Y, \qquad (2)$$

$$\mathrm{Cov}(f_*) = K(X_*, X_*) - K(X_*, X)\,[K(X, X) + \sigma_n^2 I]^{-1}\, K(X_*, X)^T \qquad (3)$$

where $\hat{f}_*$ is the marginal posterior mean at $X_*$ and $\mathrm{Cov}(f_*)$ is the corresponding covariance. The prior covariance at $X_*$ is $K(X_*, X_*)$, where the GP kernel matrix $K(X, X)$ has elements $K_{ij} = K(x_i, x_j)$. The prior mean is traditionally set to zero and we follow this convention; however, the results in this paper can be readily generalised to non-zero prior means. The function $K$ is called the kernel function. The term $\sigma_n^2 I$ captures the noise in Eqn (3). The GP parameters $\theta$ (which comprise $\sigma_n^2$ and the parameters associated with the kernel function) are the hyperparameters of the GP. These can be learned using, for example, Bayesian learning techniques:

$$p(\theta \mid Y, X) = \frac{p(Y \mid X, \theta)\, p(\theta)}{p(Y \mid X)}. \qquad (4)$$

The hyperparameters are usually given a vague prior distribution, $p(\theta)$.
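Eqns (2)-(4) translate directly into code. The following minimal Python sketch is ours, not code from the paper; it assumes a one-dimensional squared-exponential kernel and illustrative hyperparameter values, with the prior mean fixed to zero as above:

    import numpy as np

    def sq_exp_kernel(A, B, length_scale=1.0, amplitude=1.0):
        # K_ij = amplitude * exp(-(A_i - B_j)^2 / (2 * length_scale^2))
        d = A[:, None] - B[None, :]
        return amplitude * np.exp(-0.5 * (d / length_scale) ** 2)

    def gp_posterior(X, Y, X_star, kernel=sq_exp_kernel, noise_var=0.1):
        # Eqns (2)-(3): marginal posterior mean and covariance at X_star.
        K_xx = kernel(X, X) + noise_var * np.eye(len(X))
        K_sx = kernel(X_star, X)
        f_mean = K_sx @ np.linalg.solve(K_xx, Y)
        f_cov = kernel(X_star, X_star) - K_sx @ np.linalg.solve(K_xx, K_sx.T)
        return f_mean, f_cov

In practice the length-scale, amplitude and noise variance would be given priors and inferred via Eqn (4) rather than fixed as in this sketch.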
Many applications use stationary covariance functions, for which the kernel is a function of the distance between the input points. Stationary covariance functions are appealing due to their intuitive interpretation and their relative ease of construction. Unfortunately, stationary GP kernels are not applicable in applications where there are input-dependent variations in the model hyperparameters (e.g. length-scale, amplitude) or kernel families. Consequently, non-stationary GP kernels have been proposed, such as the neural network kernel [11] and the Gibbs kernel [1].

Methods for deriving non-stationary kernels from stationary kernels have also been proposed. Perhaps the earliest approach was to assume a stationary random field within a moving window [3]. This approach works well when the non-stationarity is smooth and gradual; it fails when sharp changes in the kernel structure occur. An alternative solution is to introduce an arbitrary non-linear mapping (or warping) $u(x)$ of the input and then apply a stationary covariance function in the $u$-space [9]. Unfortunately, this approach does not handle sharp changes in the covariance structure very well [4]. The mixture of GPs approach [10] uses the EM algorithm to simultaneously assign GP mixtures to locations and optimise their hyperparameters. Although the mixture approach can use arbitrary local GP kernels, it does not guarantee function continuity over GP kernel transitions. Paciorek [6] proposes a non-stationary GP kernel which guarantees continuity over region boundaries; unfortunately, this approach requires that the local, stationary kernels belong to the same family.

A. Example: The Gibbs Non-Stationary Kernel

Gibbs [1], [7] derived the covariance function:

$$K(x, x') = \prod_{d=1}^{D} \left( \frac{2\, l_d(x)\, l_d(x')}{l_d^2(x) + l_d^2(x')} \right)^{1/2} \exp\left( - \sum_{d=1}^{D} \frac{(x_d - x'_d)^2}{l_d^2(x) + l_d^2(x')} \right) \qquad (5)$$

where each length-scale $l_d(x)$ is an arbitrary positive function of $x$ and $D$ is the dimensionality of $x$.
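For one-dimensional inputs, Eqn (5) can be transcribed as below. This is our own sketch, not code from the paper; the length-scale function is supplied by the caller, illustrated here with an abrupt step of the kind discussed next:

    import numpy as np

    def gibbs_kernel(x1, x2, length_scale_fn):
        # Eqn (5) for D = 1; x1 and x2 are 1-D arrays of input locations.
        l1 = length_scale_fn(x1)[:, None]
        l2 = length_scale_fn(x2)[None, :]
        denom = l1 ** 2 + l2 ** 2
        prefactor = np.sqrt(2.0 * l1 * l2 / denom)
        d = x1[:, None] - x2[None, :]
        return prefactor * np.exp(-(d ** 2) / denom)

    def step_length_scale(x):
        # Illustrative step: length-scale 3 before a changepoint at 30, 1 after it.
        return np.where(x <= 30.0, 3.0, 1.0)

The prefactor is what makes the covariance collapse wherever the length-scale changes quickly, which is the behaviour examined next.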

If the length-scale varies rapidly then the covariance drops off quite sharply due to the pre-factor in Eqn (5). As a consequence the inferred function estimate can be quite uncertain at length-scale boundaries. This is demonstrated in Figure 1(a), for which the length-scale changes from $l(x) = 3$ for $x \le 30$ to $l(x) = 1$ for $x > 30$. Further, the Gibbs kernel does not guarantee that functions generated by the kernel are continuous: Figure 1(b) shows a typical sample drawn from the posterior Gaussian distribution represented in Figure 1(a).

Fig. 1. Part (a) shows a function and its estimate obtained using the Gibbs non-stationary Gaussian process kernel. Part (b) shows a random sample drawn from the Gibbs non-stationary kernel, typically showing a discontinuity where the length-scale changes.

Fig. 2. Low SNR: The left panel shows the ground truth (red line), observations (red dots) and GP estimate with mean (black line) and first standard deviation (grey region). The other panels show the distributions over the length scales inferred for the piece-wise fits on each side of the changepoint.

B. Example: Warping of the Input Space

This example demonstrates the limitation of modelling piece-wise stationary functions by warping the input space as proposed by [9]. Figures 2 and 3 show a continuous wedge function with low and high signal-to-noise ratio (SNR) respectively. The figures also show the mean and first standard deviation of two models: a warped squared exponential, and two squared exponential functions joined, using the new kernel, at the apex of the wedge. For low SNR the warping approach smooths over features of high curvature such as the wedge apex. For high SNR the warping kernel produces a bell-shaped estimate as it is forced to fit a smooth kernel at the apex.

Fig. 3. High SNR: The left panel shows the ground truth (red line), observations (red dots) and GP estimate with mean (black line) and first standard deviation (grey region). The other panels show the distributions over the length scales inferred for the piece-wise fits on each side of the changepoint.

In many applications a completely different GP model may be required to model different regions within the space of interest, and the family of non-stationary covariance functions in the literature may be too restrictive to model these problems, especially when there are function continuity conditions at region boundaries. We will show how arbitrary stationary GP kernels can be combined to form non-stationary GP covariance priors which preserve function continuity. We shall call the new kernel the Markov Region Link (MRL).

The paper is organised as follows. Section II presents the problem description as a piece-wise stationary problem with boundary constraints. Section III then presents the MRL kernel for functions which are continuous at region boundaries, and this is extended to cases where function derivatives are continuous at region boundaries in Section IV. In Sections V and VI we demonstrate the efficacy of our approach on simulated data from a faulty sensor target estimation problem as well as a dataset involving EOG artifact corrupted EEG signals. Finally, we conclude in Section VII.

II. PROBLEM DESCRIPTION

We will assume that a domain can be partitioned such that within regions (tiles) the process is stationary [4]. However, each region may be modelled by kernels from different families. For example, one region may be modelled by a Matérn kernel whereas a neighbouring region may be modelled by a mixture of squared exponential and periodic kernels. We do not assume that the functions generated by these kernels are independent between regions and, although we desire sharply changing GP kernels or hyperparameters at region boundaries, we would also like to preserve function continuity at the boundaries.

Two regions are labelled $R_1$ and $R_2$ and collectively they form the global region $R$. A function over $R$ is inferred at sample locations $X_* = \{x_{*1}, \ldots, x_{*m}\}$ given training data at locations $X = \{x_1, \ldots, x_n\}$. However, the training data locations are partitioned between the regions and the region boundary. Let $X_r$ be the locations internal to region $R_r$ and let $X_B$ be the locations on the boundary. Then:

$$X = X_1 \cup X_B \cup X_2.$$

We will assume that the function can be modelled using a stationary GP in each region and endeavour to design a global GP covariance prior which preserves the individual region kernels. We will also endeavour to preserve function continuity where desired across region boundaries including, for example, function continuity or continuity of function derivatives. Thus, the necessary conditions are:

1) The global kernel $K$ preserves the individual region kernels, $K_r$. That is, $K(x, x') = K_r(x, x')$ for all $x, x' \in X_r \cup X_B$ and all regions $r$.
2) The global kernel preserves function continuity, or derivative continuity, at the boundary.

Proposition 1: If two regions, labelled 1 and 2, are joined at the boundary $X_B$, and a function defined over the regions is modelled by $K_1$ in region $R_1$ and $K_2$ in $R_2$, and the function is continuous at the boundary, then:

$$K_1(X_B, X_B) = K_2(X_B, X_B) = K_B.$$

The boundary covariance $K_B$ is a hyperparameter which can be learned from the training data.

III. THE MARKOV REGION LINK KERNEL

We assume that the processes internal to each region are conditionally independent given the process at the boundary $X_B$. The corresponding graphical model is shown in Figure 4, where $f(X_1)$ and $f(X_2)$ are the processes internal to the regions labelled 1 and 2 and $f(X_B)$ is the process at the boundary. (In one-dimensional problems the region boundary is often referred to as the changepoint.)

Fig. 4. Non-stationary GP prior graphical model, with nodes $f(X_1)$, $f(X_B)$ and $f(X_2)$.

The process in region 1 and at the boundary is modelled using the GP kernel $K_1$. The rows and columns of $K_1$ correspond to the stacked vector $O_1 = (X_1, X_B)$:

$$K_1 = \begin{pmatrix} K_1(X_1, X_1) & K_1(X_1, X_B) \\ K_1(X_1, X_B)^T & K_1(X_B, X_B) \end{pmatrix}.$$

Similarly, the process in region 2 and at the boundary is modelled using the GP kernel $K_2$, where the rows and columns correspond to the stacked vector $O_2 = (X_B, X_2)$:

$$K_2 = \begin{pmatrix} K_2(X_B, X_B) & K_2(X_2, X_B)^T \\ K_2(X_2, X_B) & K_2(X_2, X_2) \end{pmatrix}.$$

Of course, if the kernels both accurately model the prior covariance of the process at the boundary then:

$$K_1(X_B, X_B) = K_2(X_B, X_B) = K_B.$$

So we condition both $K_1$ and $K_2$ on $K_B$ to yield $\tilde{K}_1$ and $\tilde{K}_2$ respectively:

$$\tilde{K}_1(X_1, X_1) = K_1(X_1, X_1) + G_1 [K_B - K_1(X_B, X_B)] G_1^T,$$
$$\tilde{K}_2(X_2, X_2) = K_2(X_2, X_2) + G_2 [K_B - K_2(X_B, X_B)] G_2^T$$

where $G_1 = K_1(X_1, X_B)\, K_1(X_B, X_B)^{-1}$ and $G_2 = K_2(X_2, X_B)\, K_2(X_B, X_B)^{-1}$. The global prior covariance is then:

$$K = \begin{pmatrix} \tilde{K}_1(X_1, X_1) & K_1(X_1, X_B) & D \\ K_1(X_1, X_B)^T & K_B & K_2(X_2, X_B)^T \\ D^T & K_2(X_2, X_B) & \tilde{K}_2(X_2, X_2) \end{pmatrix}$$

where the rows and columns correspond to the stacked vector $O = (X_1, X_B, X_2)$. The cross-terms, $D$, are:

$$D \triangleq \mathrm{Cov}(\tilde{f}_1(X_1), \tilde{f}_2(X_2))$$

where $\tilde{f}_1$ and $\tilde{f}_2$ are the region function values conditioned on the function at the boundary:

$$\tilde{f}_1(X_1) = K_1(X_1, X_B)\, K_B^{-1}\, f(X_B), \qquad (6)$$
$$\tilde{f}_2(X_2) = K_2(X_2, X_B)\, K_B^{-1}\, f(X_B). \qquad (7)$$

Since $\mathrm{Cov}(f(X_B), f(X_B)) = K_B$, it follows that:

$$D = G_1 K_B G_2^T.$$
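The construction above reduces to a short block-matrix computation. The sketch below is our own illustration, not code from the paper: K1 and K2 are kernel functions taking two arrays of locations, and the boundary covariance K_B defaults to K1(X_B, X_B), although it could equally be treated as a hyperparameter as in Proposition 1:

    import numpy as np

    def mrl_covariance(K1, K2, X1, XB, X2, K_B=None):
        # Markov Region Link prior over the stacked vector O = (X1, XB, X2).
        if K_B is None:
            K_B = K1(XB, XB)
        G1 = K1(X1, XB) @ np.linalg.inv(K1(XB, XB))
        G2 = K2(X2, XB) @ np.linalg.inv(K2(XB, XB))
        # Condition each regional kernel on the boundary covariance K_B.
        K1_tilde = K1(X1, X1) + G1 @ (K_B - K1(XB, XB)) @ G1.T
        K2_tilde = K2(X2, X2) + G2 @ (K_B - K2(XB, XB)) @ G2.T
        D = G1 @ K_B @ G2.T              # cross-covariance between the two regions
        top = np.hstack([K1_tilde, K1(X1, XB), D])
        mid = np.hstack([K1(X1, XB).T, K_B, K2(X2, XB).T])
        bot = np.hstack([D.T, K2(X2, XB), K2_tilde])
        return np.vstack([top, mid, bot])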
As a corollary of this approach we can derive a Gaussian process kernel for 1-D signals with a changepoint at $x_B$.

Corollary 1: If $K_1$ and $K_2$ are two stationary GP kernels (not necessarily from the same family) which model region 1 and region 2 respectively, and $\theta_1$ and $\theta_2$ are their hyperparameters, then:

$$K(x_1, x_2; \theta_1, \theta_2) = \begin{cases} K_1(x_1, x_2; \theta_1) + g_1(x_1)\,[K_B - K_1(x_B, x_B)]\,g_1(x_2) & \text{if } x_1, x_2 < x_B, \\ K_2(x_1, x_2; \theta_2) + g_2(x_1)\,[K_B - K_2(x_B, x_B)]\,g_2(x_2) & \text{if } x_1, x_2 \ge x_B, \\ g_1(x_1)\, K_B\, g_2(x_2) & \text{if } x_1 < x_B \text{ and } x_2 \ge x_B, \end{cases}$$

where:

$$g_1(x_1) = K_1(x_1, x_B)\, K_1(x_B, x_B)^{-1}, \qquad g_2(x_2) = K_2(x_2, x_B)\, K_2(x_B, x_B)^{-1}.$$
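Read pointwise for scalar inputs, Corollary 1 might be implemented as in the sketch below (our own illustration); k1 and k2 are scalar kernel functions, x_b is the changepoint and K_B the boundary variance:

    def changepoint_kernel(x1, x2, k1, k2, x_b, K_B):
        # Corollary 1: piecewise-stationary 1-D kernel with changepoint x_b.
        g1 = lambda x: k1(x, x_b) / k1(x_b, x_b)
        g2 = lambda x: k2(x, x_b) / k2(x_b, x_b)
        if x1 < x_b and x2 < x_b:
            return k1(x1, x2) + g1(x1) * (K_B - k1(x_b, x_b)) * g1(x2)
        if x1 >= x_b and x2 >= x_b:
            return k2(x1, x2) + g2(x1) * (K_B - k2(x_b, x_b)) * g2(x2)
        if x1 < x_b:                        # x1 in region 1, x2 in region 2
            return g1(x1) * K_B * g2(x2)
        return g1(x2) * K_B * g2(x1)        # symmetric case

Setting K_B equal to k1(x_b, x_b) recovers k1 exactly within region 1 while still tying the two regions together at the changepoint.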

To demonstrate the new kernel we return to the problem in Figure 1. Using identical hyperparameters and observations as in Figure 1, the function estimate obtained using the Markov Region Link approach is shown in Figure 5.

Fig. 5. Function and estimate obtained using conditionally independent function segments described by stationary Gibbs Gaussian process kernels joined at the boundary.

IV. DERIVATIVE BOUNDARY CONDITIONS

So far, we have developed covariance functions which preserve function continuity at the boundary. The approach can be extended to assert function derivative continuity at the boundary. The covariance between a function and any of its derivatives can be determined from the GP kernel [7]. For example, the prior covariance between the function and its first derivative is:

$$[\nabla K(X, Y)]_{ij} \triangleq \mathrm{Cov}\!\left(\frac{\partial f(x_i)}{\partial x_i},\, f(y_j)\right) = \frac{\partial K(x_i, y_j)}{\partial x_i}$$

where $x_i \in X$ and $y_j \in Y$. The covariance between the derivatives is:

$$[\nabla K(X, Y)\nabla]_{ij} \triangleq \mathrm{Cov}\!\left(\frac{\partial f(x_i)}{\partial x_i},\, \frac{\partial f(y_j)}{\partial y_j}\right) = \frac{\partial^2 K(x_i, y_j)}{\partial x_i\, \partial y_j}. \qquad (8)$$

The derivative variance at $x_i$ can be obtained by setting $y_j$ to $x_i$ in Eqn (8). In our notation $\nabla K(X, Y)$ denotes partial differentiation with respect to the first parameter, in this case $X$, and $\nabla K(X, Y)\nabla$ denotes double differentiation with respect to both $X$ and then $Y$.

These relationships can be used to define non-stationary GP covariance priors which impose continuous function derivatives at region boundaries. The prior mean and covariance for both the regional and global priors are augmented to include the function derivative. For example, if the first derivative is added to the prior then the prior covariances for regions $R_1$ and $R_2$ become (we shall use a prime to denote the augmented covariances):

$$K_1' = \begin{pmatrix} K_1(X_1, X_1) & K_1(X_1, X_B) & [\nabla K_1(X_B, X_1)]^T \\ K_1(X_B, X_1) & K_1(X_B, X_B) & [\nabla K_1(X_B, X_B)]^T \\ \nabla K_1(X_B, X_1) & \nabla K_1(X_B, X_B) & \nabla K_1(X_B, X_B)\nabla \end{pmatrix}$$

and:

$$K_2' = \begin{pmatrix} K_2(X_B, X_B) & [\nabla K_2(X_B, X_B)]^T & K_2(X_B, X_2) \\ \nabla K_2(X_B, X_B) & \nabla K_2(X_B, X_B)\nabla & \nabla K_2(X_B, X_2) \\ K_2(X_B, X_2)^T & [\nabla K_2(X_B, X_2)]^T & K_2(X_2, X_2) \end{pmatrix}.$$

The rows and columns in $K_1'$ correspond to the stacked vector $O_1 = (X_1, X_B, D(X_B))$, where $D(X_B)$ denotes the function derivative at $X_B$. Similarly, the rows and columns in $K_2'$ correspond to the stacked vector $O_2 = (X_B, D(X_B), X_2)$. We have defined the ordering in this way so that the prior covariances can be slotted into the global covariance prior whose rows and columns represent the stacked vector $O = (X_1, X_B, D(X_B), X_2)$: the function at $X_1$, then the function at $X_B$, then the function derivatives at $X_B$ and finally the function at $X_2$. Consequently, we can use the approach outlined in Section III to construct a global prior for processes which are conditionally independent in each region. This is done by defining $K_B'$ as follows:

$$K_B' = \begin{pmatrix} K_1(X_B, X_B) & [\nabla K_1(X_B, X_B)]^T \\ \nabla K_1(X_B, X_B) & \nabla K_1(X_B, X_B)\nabla \end{pmatrix}$$

and using $K_1'$ and $K_2'$ defined above in place of $K_1$ and $K_2$ in Section III. If it is not desirable to retain estimates for the region boundary derivatives then the corresponding rows and columns can be deleted from the global prior mean and covariance.

Figure 6 shows the effect that the derivative constraint can have on the function estimate. Two stationary Gibbs kernels are used, with $l(x) = 3$ for $x \le 30$ and $l(x) = 1$ for $x > 30$ as in Figure 1. Clearly, the approach which imposes a continuous first derivative on the GP model produces a tighter function estimate at $x = 30$.

Fig. 6. Function estimated using two stationary Gibbs kernels joined at $x = 30$ with the constraint that the function first derivative is continuous at $x = 30$.
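For a concrete kernel the derivative blocks have simple closed forms. As an illustration (ours, not from the paper), for a one-dimensional squared-exponential kernel $k(x, x') = a \exp(-(x - x')^2 / (2 l^2))$ the quantities needed to assemble $K_B'$ at a single boundary point are:

    import numpy as np

    def se_kernel_and_derivatives(x1, x2, a=1.0, l=1.0):
        # k(x1, x2), its derivative w.r.t. x1, and the cross-derivative w.r.t. x1, x2.
        r = x1 - x2
        k = a * np.exp(-0.5 * (r / l) ** 2)
        dk = -(r / l ** 2) * k
        d2k = (1.0 / l ** 2 - (r / l ** 2) ** 2) * k
        return k, dk, d2k

    def augmented_boundary_cov(x_B, a=1.0, l=1.0):
        # K_B' over (f(x_B), df/dx(x_B)); equals [[a, 0], [0, a / l^2]] here.
        k, dk, d2k = se_kernel_and_derivatives(x_B, x_B, a, l)
        return np.array([[k, dk], [dk, d2k]])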

V. APPLICATION 1: TARGET ESTIMATION WITH FAULTY SENSORS

We shall use a GP to estimate a target's position over time $t$ as it is tracked by a simple sensor. The proposed algorithm operates on-line and infers a posterior distribution for the current target position using observations of its previous locations; smoothing from future observations is not considered. The target's trajectory is described by the process $f$ and the sensor is subject to occasional faults. The sensor's fault process $e$ can be either a short-term fixed bias or it can drift over a period of time. The, possibly faulty, observation at time $t_i$ is:

$$y_i = f(t_i) + e(t_i) + \epsilon_i$$

where $\epsilon_i$ is zero-mean Gaussian with variance $\sigma^2$. We wish to estimate the target location $f(t)$ over time. The processes $f$ and $e$ are described by GP kernels $K_f$ and $K_e$. We will assume that $K_f$ is stationary and we will use a simple squared exponential kernel to model the target dynamics. However, the fault is intermittent: it starts at time $t = T_0$ and ends at $t = T_1$. We model the fault process using a non-stationary kernel. Firstly, $e(t)$ is zero over times $t < T_0$ and $t > T_1$, for which there is no fault. For a bias fault, we assume that the bias is a fixed offset and thus assert:

$$K_{bias}(t_i, t_j) = \mu \quad \text{for all } T_0 \le t_i, t_j \le T_1$$

where $\mu$ is a scale parameter representing the magnitude of the bias. We assume that the drift is gradual and describe it via a squared exponential kernel:

$$K_{drift}(t_i, t_j) = \mu \exp\left(-\frac{(t_i - t_j)^2}{2L^2}\right)$$

where $\mu$ and $L$ are scale and length parameters, respectively, and again $T_0 \le t_i, t_j \le T_1$. For simplicity, the scale parameters are assumed known. However, the time parameters and the fault type $ft \in \{\text{bias}, \text{drift}\}$ are inferred using a simple Bayesian multi-hypothesis approach which is outlined later in this section.

The bias fault causes a jump in the observation sequence when the fault sets in at $t = T_0$. The drift fault is gradual and $e(t)$ is zero at $t = T_0$, thus causing the combined process $f(t) + e(t)$ to be continuous (see Figure 7).

Fig. 7. Typical target trajectory and faulty observations for both (a) bias and (b) drift faults.

Thus, for the drift fault, $e(t)$ is continuous at $T_0$, discontinuous at $T_1$ when the fault disappears, and the rate of change of $e$ will be discontinuous at both $T_0$ and $T_1$. We use the Markov Region Link kernel to construct the fault process prior covariance $\tilde{K}_{drift}$ from $K_{drift}$ and impose the continuity boundary condition at $T_0$. Using the approach set out in Section III, the prior covariance for the drift fault becomes the block matrix:

$$\tilde{K}_{drift} = \begin{pmatrix} 0 & 0 & 0 \\ 0 & \tilde{K}_{drift}(X_f, X_f) & 0 \\ 0 & 0 & 0 \end{pmatrix}.$$

The first row and column are zero matrices for times less than $T_0$, corresponding to the period before the fault starts. The last row and column are zero matrices for times greater than $T_1$, corresponding to times after the fault has stopped. The central row and column blocks are prior covariances over the time samples $X_f$ during which the sensor is faulty: $X_f = \{t \mid T_0 < t \le T_1\}$. Continuity of the fault process at $T_0$ imposes $\tilde{K}_{drift}(T_0, T_0) = 0$. Values for $\tilde{K}_{drift}$ are obtained using Corollary 1 in Section III with $x_B = T_0$, $K_1 = 0$, $K_B = \tilde{K}_{drift}(T_0, T_0) = 0$ and $K_2 = K_{drift}$.

The bias kernel is more straightforward:

$$K_{bias} = \begin{pmatrix} 0 & 0 & 0 \\ 0 & \mu\, \mathbf{1}\mathbf{1}^T & 0 \\ 0 & 0 & 0 \end{pmatrix}$$

where the central block has every element equal to $\mu$, with the rows and columns interpreted as for $\tilde{K}_{drift}$.
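The two fault covariances are easy to assemble over a vector of observation times. The sketch below is ours (with illustrative parameter values, not from the paper); it simply zeroes the covariance outside the fault window $[T_0, T_1]$:

    import numpy as np

    def bias_fault_cov(t, T0, T1, mu=1.0):
        # K_bias(t_i, t_j) = mu when both times lie inside the fault window, else 0.
        in_fault = ((t >= T0) & (t <= T1)).astype(float)
        return mu * np.outer(in_fault, in_fault)

    def drift_fault_cov(t, T0, T1, mu=1.0, L=10.0):
        # Squared-exponential drift covariance restricted to the fault window.
        in_fault = ((t >= T0) & (t <= T1)).astype(float)
        d = t[:, None] - t[None, :]
        K = mu * np.exp(-0.5 * (d / L) ** 2)
        return K * np.outer(in_fault, in_fault)

Note that the paper additionally conditions the drift covariance, via the Markov Region Link construction, so that the drift is exactly zero at $T_0$; that conditioning step is omitted from this sketch.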
We assume that the sensor fault process $e$ and the target trajectory process $f$ are independent. We note that the trajectory process $f$ is hidden and thus we use the following slightly modified GP equations to infer the individual processes:

$$\hat{f}_* = K_f(X_*, X)\,[K_f(X, X) + K_e(X, X) + \sigma^2 I]^{-1}\, Y,$$
$$\mathrm{Cov}(f_*) = K_f(X_*, X_*) - K_f(X_*, X)\,[K_f(X, X) + K_e(X, X) + \sigma^2 I]^{-1}\, K_f(X_*, X)^T$$

where $K_e = \tilde{K}_{drift}$ or $K_e = K_{bias}$.
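These modified equations only add the fault covariance to the term being inverted. A minimal sketch (ours), reusing the fault covariances sketched above:

    import numpy as np

    def fault_corrected_posterior(t, Y, t_star, K_f, K_e_train, noise_var):
        # K_f: kernel function for the target trajectory; K_e_train: fault covariance
        # evaluated at the training times, e.g. bias_fault_cov(t, T0, T1, mu).
        S = K_f(t, t) + K_e_train + noise_var * np.eye(len(t))
        K_sx = K_f(t_star, t)
        f_mean = K_sx @ np.linalg.solve(S, Y)
        f_cov = K_f(t_star, t_star) - K_sx @ np.linalg.solve(S, K_sx.T)
        return f_mean, f_cov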

A distribution over the parameters $\theta = \{ft, T_0, T_1\}$ is determined using the procedure outlined in Section I. The likelihood function used in Eqn (4) is:

$$p(Y \mid X, \theta) = N(Y;\, 0,\, K_f(X, X) + K_e(X, X) + \sigma^2 I)$$

where $N$ is the multivariate normal distribution. The hyperparameters are marginalised using Monte-Carlo integration.

Figure 8 shows typical tracks, observations and GP track estimates. The target trajectory and observation sequence are randomly generated, $f \sim N(0, K_f)$ and $y \sim N(f, K_e + \sigma^2 I)$. Notice that the algorithm has successfully corrected for the faulty observations.

Fig. 8. Typical target trajectory and faulty observations for both (a) bias and (b) drift faults.
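The multi-hypothesis inference over $\theta = \{ft, T_0, T_1\}$ amounts to scoring candidate fault types and windows by the Gaussian marginal likelihood above and averaging predictions under the resulting weights. A sketch of that scoring (ours; the flat prior over the sampled hypotheses is an assumption made for illustration):

    import numpy as np

    def log_marginal_likelihood(Y, K):
        # log N(Y; 0, K), evaluated stably via a Cholesky factorisation.
        L = np.linalg.cholesky(K)
        alpha = np.linalg.solve(L.T, np.linalg.solve(L, Y))
        return (-0.5 * Y @ alpha
                - np.sum(np.log(np.diag(L)))
                - 0.5 * len(Y) * np.log(2.0 * np.pi))

    def hypothesis_weights(Y, candidate_covs):
        # Normalised posterior weights for sampled hypotheses (fault type and times),
        # each represented by its covariance K_f + K_e + sigma^2 I at the data times.
        logls = np.array([log_marginal_likelihood(Y, K) for K in candidate_covs])
        w = np.exp(logls - logls.max())
        return w / w.sum()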
VI. APPLICATION 2: EOG ARTIFACT REMOVAL FROM EEG SIGNALS

In this example we use our approach to recover from a drift-type fault, specifically to track EEG signals and to detect and remove EOG artifacts. The recovery of the EEG signal is often treated as a blind source separation problem [8] in which ICA identifies the separate artifact-free EEG signal (which we refer to as EEG*) and the EOG signal. We propose an alternative approach which uses mixtures of Gaussian processes. Our approach allows us to encode any information we have about the shape of the component signals, including signal smoothness and continuity at changepoints. We explicitly encode models for the EEG* and EOG signals and explicitly stipulate that these signals are independent.

The observed EEG signal, $y$, is a mixture of EEG* and EOG artifact signals. The EEG* signal, $s_{eeg}$, is modelled as the combination of a smooth function $m_{eeg}$ (generated by a GP with prior covariance $K_{eeg}$) and residuals $r_{eeg}$. The EOG artifact, $s_{eog}$, is modelled as a piece-wise smooth function, $m_{eog}$ (generated from a GP fault model with prior covariance $K_{eog}$) and, again, with residuals $r_{eog}$:

$$s_{eeg} = m_{eeg} + r_{eeg}, \qquad s_{eog} = m_{eog} + r_{eog}, \qquad y = s_{eeg} + s_{eog}$$

where $m_{eeg} \sim N(0, K_{eeg})$, $r_{eeg} \sim N(0, R_{eeg} I)$, $m_{eog} \sim N(0, K_{eog})$ and $r_{eog} \sim N(0, R_{eog} I)$. The random vectors $m_{eeg}$, $m_{eog}$, $r_{eeg}$ and $r_{eog}$ are assumed to be mutually independent. The residuals, $r_{eeg}$ and $r_{eog}$, are considered to be part of the signals and are therefore not treated as noise and are not filtered out.

We use a simple squared exponential kernel to model the EEG* signal. As for the EOG model, typically the EOG signal is zero everywhere except within a small time window. Within this window the EOG artifact can be modelled as two smooth functions (not necessarily monotonic) which join at a spike near the centre of the window. Thus, the EOG's prior covariance $K_{eog}$ is chosen to be zero everywhere except between the artifact's start and end times, $T_s$ and $T_e$. We use the methods outlined in Section III to build the EOG artifact prior covariance: between the start and end times the EOG artifact signal is modelled by two piece-wise squared exponential kernels joined mid-way between $T_s$ and $T_e$, so that they are continuous at the midpoint and also at $T_s$ and $T_e$.

The following GP equations determine the mean and covariance for the hidden variable $m_{eeg}$:

$$\hat{m}_{eeg}(x_*) = K_{eeg}(x_*, x)\,[K_{eeg}(x, x) + K_{eog}(x, x) + \sigma^2 I]^{-1}\, y(x),$$
$$\mathrm{Cov}_{eeg}(x_*) = K_{eeg}(x_*, x_*) - K_{eeg}(x_*, x)\,[K_{eeg}(x, x) + K_{eog}(x, x) + \sigma^2 I]^{-1}\, K_{eeg}(x_*, x)^T$$

where $\sigma^2 = R_{eeg} + R_{eog}$. Similar expressions can be obtained for $\hat{m}_{eog}$.

To track the EEG signal our algorithm determines $s_{eeg}$ sequentially over time. When $x_*$ is the current time and $x$ are the previous times at which data points were obtained, then:

$$p(s_{eeg}(x_*), s_{eog}(x_*) \mid y(x_*), \hat{m}_{eeg}(x_*), \hat{m}_{eog}(x_*))$$
$$\propto p(y(x_*) \mid s_{eeg}(x_*), s_{eog}(x_*), \hat{m}_{eeg}(x_*), \hat{m}_{eog}(x_*))\; p(s_{eeg}(x_*), s_{eog}(x_*) \mid \hat{m}_{eeg}(x_*), \hat{m}_{eog}(x_*))$$
$$= \delta_{y(x_*),\, s_{eeg}(x_*) + s_{eog}(x_*)}\; p(s_{eog}(x_*) \mid \hat{m}_{eog}(x_*))\; p(s_{eeg}(x_*) \mid \hat{m}_{eeg}(x_*)).$$

Marginalising $s_{eog}$:

$$p(s_{eeg}(x_*) \mid y(x_*), \hat{m}_{eeg}(x_*), \hat{m}_{eog}(x_*)) \propto p(y(x_*) - s_{eeg}(x_*) \mid \hat{m}_{eog}(x_*))\; p(s_{eeg}(x_*) \mid \hat{m}_{eeg}(x_*)).$$

In general, when $s$ is Gaussian distributed then its mean, $\hat{s}$, is the solution to:

$$\frac{\partial \log p(s)}{\partial s} = 0$$

and its variance, $\mathrm{Var}(s)$, is given by:

$$\mathrm{Var}(s) = -\mathrm{E}\!\left[\frac{\partial^2 \log p(s)}{\partial s^2}\right]^{-1}.$$

Thus, defining $P_{eeg}(x_*) \triangleq \mathrm{Cov}_{eeg}(x_*) + R_{eeg}$ and $P_{eog}(x_*) \triangleq \mathrm{Cov}_{eog}(x_*) + R_{eog}$:

$$\hat{s}_{eeg}(x_*) = \frac{P_{eeg}(x_*)\,(y(x_*) - \hat{m}_{eog}(x_*)) + P_{eog}(x_*)\,\hat{m}_{eeg}(x_*)}{P_{eeg}(x_*) + P_{eog}(x_*)}$$

and:

$$\mathrm{Var}_{eeg}(x_*) = \frac{P_{eeg}(x_*)\, P_{eog}(x_*)}{P_{eeg}(x_*) + P_{eog}(x_*)}.$$
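The split of the observed signal into its two components is a product of two Gaussian densities with a closed-form solution. The sketch below (ours) implements both this expression and the analogous EOG expressions given next:

    def split_signal(y, m_eeg, m_eog, P_eeg, P_eog):
        # Posterior means and (shared) variance of s_eeg and s_eog given
        # y = s_eeg + s_eog, s_eeg ~ N(m_eeg, P_eeg), s_eog ~ N(m_eog, P_eog).
        denom = P_eeg + P_eog
        s_eeg_hat = (P_eeg * (y - m_eog) + P_eog * m_eeg) / denom
        s_eog_hat = (P_eog * (y - m_eeg) + P_eeg * m_eog) / denom
        var = P_eeg * P_eog / denom
        return s_eeg_hat, s_eog_hat, var

The two posterior means sum to $y$, so the observed residual is fully apportioned between the artifact-free EEG* signal and the EOG artifact.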

Similar reasoning leads to analogous expressions for the EOG artifact signal:

$$\hat{s}_{eog}(x_*) = \frac{P_{eog}(x_*)\,(y(x_*) - \hat{m}_{eeg}(x_*)) + P_{eeg}(x_*)\,\hat{m}_{eog}(x_*)}{P_{eeg}(x_*) + P_{eog}(x_*)}$$

and:

$$\mathrm{Var}_{eog}(x_*) = \frac{P_{eeg}(x_*)\, P_{eog}(x_*)}{P_{eeg}(x_*) + P_{eog}(x_*)}.$$

These expressions for $\hat{s}_{eeg}$ and $\hat{s}_{eog}$ determine the proportion of the EEG signal residual that is assigned to the EOG artifact signal and to the artifact-free EEG signal (EEG*).

Our model requires eight hyperparameters, collectively referred to as $\theta$: the scale heights and scale lengths for the GP models (we assume that both parts of the EOG model have the same scale heights and lengths), the artifact start and end times, and the residual variances $R_{eeg}$ and $R_{eog}$. The likelihood used in Eqn (4) to determine a distribution over the hyperparameter values is given by:

$$p(y(x_*) \mid \theta) = N[\,y(x_*);\; \hat{m}_{eeg,\theta}(x_*) + \hat{m}_{eog,\theta}(x_*),\; P_{eeg,\theta}(x_*) + P_{eog,\theta}(x_*)\,].$$

Again, the hyperparameters are marginalised using Monte-Carlo sampling.

Figure 9 shows a typical EEG signal which is corrupted by EOG artifacts. It also shows the one-standard-error confidence interval for the artifact-free EEG* signal and the EOG artifact obtained using our algorithm. Figure 10 shows the mean difference between the original EEG signal and the inferred EEG* signal, indicating the expected proportion of the original signal that is retained in the EEG*.

Fig. 9. EEG signal (crosses) and standard error confidence intervals for the EEG* (left panel) and EOG (right panel) signals obtained using the GP mixture model approach.

Fig. 10. Original EEG signal (dots) and difference (line) between the original signal and the mean EEG* obtained using the GP mixture model approach.

VII. CONCLUSIONS

This paper has presented an approach to building piece-wise stationary prior covariance matrices from stationary Gaussian process kernels. Where appropriate, the approach asserts function continuity or continuity of any function derivative at the region boundaries. The approach has been successfully demonstrated on sensor fault detection and recovery and also on EEG signal tracking.

REFERENCES

[1] M. N. Gibbs. Bayesian Gaussian Processes for Regression and Classification. PhD thesis, Department of Physics, Cambridge University, 1997.
[2] C. Guestrin, A. Krause, and A. Singh. Near-optimal sensor placements in Gaussian processes. In ICML, pages 265-272, 2005.
[3] T. C. Haas. Kriging and automated variogram modeling within a moving window. Atmospheric Environment, 24A:1759-1769, 1990.
[4] H.-M. Kim, B. K. Mallick, and C. C. Holmes. Analyzing nonstationary spatial data using piecewise Gaussian processes. Journal of the American Statistical Association (Theory and Methods), 100(470):653-668, June 2005.
[5] M. Osborne, A. Rogers, A. Ramchurn, S. Roberts, and N. R. Jennings. Towards real-time information processing of sensor network data using computationally efficient multi-output Gaussian processes. In IPSN 2008: International Conference on Information Processing in Sensor Networks, St. Louis, Missouri, 2008.
[6] C. J. Paciorek and M. J. Schervish. Nonstationary covariance functions for Gaussian process regression. In Proc. of the Conf. on Neural Information Processing Systems (NIPS). MIT Press, 2004.
[7] C. E. Rasmussen and C. K. I. Williams. Gaussian Processes for Machine Learning. The MIT Press, 2006.
[8] S. J. Roberts, R. Everson, I. Rezek, P. Anderer, and A. Schlogl. Tracking ICA for eye-movement artefact removal. In Proc. EMBEC'99, Vienna, 1999.
[9] P. D. Sampson and P. Guttorp. Nonparametric estimation of nonstationary covariance structure. Journal of the American Statistical Association, 87:108-119, 1992.
[10] V. Tresp. Mixtures of Gaussian processes. In Proc. of the Conf. on Neural Information Processing Systems (NIPS 13), pages 654-660, 2001.
[11] C. K. I. Williams. Computation with infinite neural networks. Neural Computation, 10(5):1203-1216, 1998.