Adaptive modelling of conditional variance function

Juutilainen I. and Röning J.

Intelligent Systems Group, University of Oulu, PO Box 4500, 90014 University of Oulu, Finland
ilmari.juutilainen@ee.oulu.fi, juha.roning@ee.oulu.fi

Summary. We study a situation where the dependence of the conditional variance on explanatory variables varies over time. We point out the possibility and the potential advantages of modelling the conditional variance adaptively. We present approaches for adaptive modelling of the conditional variance function and elaborate two procedures: moving window estimation and on-line quasi-Newton estimation. The proposed methods were tested successfully on a real industrial data set.

Key words: adaptive methods, conditional variance function, variance modelling, time-varying parameter

1 Introduction

In many problems, both the mean and the variance of the response variable depend on several explanatory variables. A model for the variance is needed to draw the right conclusions from the predicted conditional distribution. Modelling of the conditional variance function has been applied in many fields, including industrial quality improvement [Gre93].

Adaptive learning (on-line learning) is commonly used to model time-varying dependence of the response on the explanatory variables, or to increase model accuracy as time passes and new data accumulate. Adaptive methods sequentially adjust the model parameters based on the most recent data. Adaptive models have usually described the conditional distribution of the response through a time-varying relationship between the explanatory variables and the expected value of the response. Some models, such as GARCH and stochastic volatility models, assume a time-varying variance that does not depend on the explanatory variables. To our knowledge, models for time-varying dependence of the conditional variance on the explanatory variables have not been discussed earlier.

Recursive kernels have been proposed for the sequential estimation of conditional variance depending on several explanatory variables [ST95].

The authors, however, assume that the variance function does not change over time. Their model does not adapt well to changes in the variance function, because old observations are never discarded from the model.

In this paper, we propose two methods for adaptive modelling of the conditional variance function: moving window estimation and on-line quasi-Newton estimation. We also discuss the role of mean model estimation in adaptive modelling of variance. We used the proposed methods to predict the conditional distribution of the strength of steel plates based on a large industrial data set.

2 Methods

We denote the $i$th observation of the response variable by $y_i$ and the related vector of inputs by $x_i$. The observations $(y_i, x_i)$, $i = 1, 2, \ldots$ are observed sequentially at times $t_1, t_2, \ldots$, with $t_i < t_{i+1}$. We assume that the $y_i$ are independently normally distributed with mean $\mu_i = \mu(\beta(t_i), x_i)$ and variance $\sigma_i^2 = \sigma^2(\tau(t_i), x_i)$. Both the parameter vector of the mean function, $\beta$, and the parameter vector of the variance function, $\tau$, change with time $t$ and form continuous-time processes $\{\beta(t)\}$ and $\{\tau(t)\}$.

The expectation of the squared error term equals the conditional variance, $E\varepsilon_i^2 = E(y_i - \mu_i)^2 = \sigma_i^2$. When the response variable is normally distributed, the squared error term is gamma-distributed. If we knew the correct mean model, the variance function could be estimated correctly by maximising the gamma log-likelihood

$$L = \sum_i L_i = \sum_i \left[ -\log \sigma^2(\tau, x_i) - \varepsilon_i^2 / \sigma^2(\tau, x_i) \right],$$

using the squared error term $\varepsilon_i^2 = [y_i - \mu(\beta(t_i), x_i)]^2$ as the response [CR88].

2.1 Moving Window Modelling

The moving window is a simple and widely used method for adaptive modelling. In the moving window method, the model is regularly re-estimated using only the most recent observations. The drawback of the method is that the whole model must be re-estimated at each model update. The update formulas developed for linear regression reduce the computational cost essentially [Pol03] and appear to be approximately applicable to gamma generalised linear models through the results of [MW98]. The window width, $w$, can be defined as a time interval or as the number of observations included in the estimation data set. One usual modification is to discount the weight of earlier observations in the model fitting instead of discarding them completely.

The moving window method is easily applicable to the modelling of the variance function. At chosen time moments $t_e$, or after chosen observations $(y_e, x_e)$, the conditional variance function is estimated by maximising the weighted gamma log-likelihood over the set $W$ of the most recent observations,

$$\hat\tau_e = \operatorname*{arg\,max}_{\tau} \sum_{i \in W} \omega_i \left[ -\log \sigma^2(\tau, x_i) - \frac{\varepsilon_i^2}{\sigma^2(\tau, x_i)} \right], \qquad (1)$$

where $W = \{i : t_e - w \le t_i \le t_e\}$ or $W = \{e-w, e-w+1, e-w+2, \ldots, e\}$. One can choose unit weights $\omega_i = 1$ for all $i$ or discount the weight of older observations. The window width and the amount of discounting are set to optimise the speed of adaptation.
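
To make Eq. (1) concrete, the following is a minimal sketch in Python (NumPy/SciPy), assuming the linear-deviation variance form $\sigma_i^2 = (x_i^T\tau)^2$ that we adopt in Sect. 4; the function names and the use of a general-purpose optimiser are illustrative only, not part of the procedure described above.

```python
import numpy as np
from scipy.optimize import minimize

def neg_gamma_loglik(tau, X, eps2, weights):
    """Negative weighted gamma log-likelihood of Eq. (1),
    with the linear-deviation form sigma_i^2 = (x_i^T tau)^2."""
    sigma2 = (X @ tau) ** 2
    return np.sum(weights * (np.log(sigma2) + eps2 / sigma2))

def moving_window_fit(X, eps2, times, t_e, w, tau0):
    """Re-estimate tau from the window W = {i : t_e - w <= t_i <= t_e},
    here with unit weights omega_i = 1 for all i."""
    in_w = (times >= t_e - w) & (times <= t_e)
    weights = np.ones(in_w.sum())
    res = minimize(neg_gamma_loglik, tau0,
                   args=(X[in_w], eps2[in_w], weights))
    return res.x  # tau_hat_e
```

In the application of Sect. 4, such a re-fit was run at regular intervals of about 12 days.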

2.2 Stochastic Gradient

The stochastic gradient (stochastic approximation) method uses each new observation to move the parameter estimates in the direction of the gradient of the loss function at that observation. After that, the observation is discarded, so the model is maintained without the need to store any observations. With a non-shrinking learning rate, the model can adapt to time-varying changes in the modelled phenomenon [MK02]. The methods discussed under the title of on-line learning are often variations of the stochastic gradient.

We propose to apply the stochastic gradient method to adaptive modelling of conditional variance, and we call the proposed method on-line quasi-Newton. It is an adaptive modification of the non-adaptive on-line quasi-Newton algorithm [Bot98] and of the recursive estimation method for generalised linear models [MW98]. The modification that yields the adaptivity is the introduction of the learning rate $\eta\,(i+1)$ in Eq. (2). The update step directions are controlled by the accumulated outer-product approximation of the information matrix, $I_i^{(kl)} = -\sum_{j=1}^{i} E\,\partial^2 L_j / (\partial\tau_k\,\partial\tau_l)$. After each new observation $\varepsilon_i^2$, we update the parameter estimates as in a single quasi-Newton step. At the same time, we keep track of the inverse of the approximated Hessian, $K_i = I_i^{-1}$, by using the well-known matrix equality $(A + BB^T)^{-1} = A^{-1} - (A^{-1}B)(I + B^TA^{-1}B)^{-1}(A^{-1}B)^T$.

We propose to use a constant learning rate $\eta$, because this has been common in the modelling of time-varying dependence [MK02]. Let $\hat\tau_i = \hat\tau(t_i)$, $\hat\sigma_{i+1}^2 = \sigma^2(\hat\tau_i, x_{i+1})$, and let $\delta(\tau, x_i) = (\partial/\partial\tau)\,\sigma^2(\tau, x_i)$ be the vector of partial derivatives. The resulting update formula for the parameter estimates is

$$\hat\tau_{i+1} = \hat\tau_i + \eta\,(i+1)\,K_{i+1}\,\delta(\hat\tau_i, x_{i+1})\left(\frac{\varepsilon_{i+1}^2}{\hat\sigma_{i+1}^2} - 1\right)\frac{1}{\hat\sigma_{i+1}^2}. \qquad (2)$$

Note that $iK_i = O(1)$, so the learning speed remains stable when $\eta$ is constant. The learning rate controls the speed of adaptation and should be selected based on the application. The inverse of the approximated information matrix is updated after each observation by

$$K_{i+1} = K_i - \frac{\bigl[K_i\,\delta(\hat\tau_i, x_{i+1})/\hat\sigma_{i+1}^2\bigr]\bigl[K_i\,\delta(\hat\tau_i, x_{i+1})/\hat\sigma_{i+1}^2\bigr]^T}{1 + \bigl[\delta(\hat\tau_i, x_{i+1})/\hat\sigma_{i+1}^2\bigr]^T K_i \bigl[\delta(\hat\tau_i, x_{i+1})/\hat\sigma_{i+1}^2\bigr]}. \qquad (3)$$

We propose to initialise the algorithm with the results of a maximum likelihood fit in a relatively large initial data set. The initial inverse approximated Hessian is then obtained as $K_0 = \bigl\{\sum_i \bigl[\delta(\hat\tau, x_i)/\hat\sigma_i^2\bigr]\bigl[\delta(\hat\tau, x_i)/\hat\sigma_i^2\bigr]^T\bigr\}^{-1}$.
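
A minimal sketch of one update of Eqs. (2)-(3) in Python/NumPy, again assuming the linear-deviation form $\sigma^2(\tau, x) = (x^T\tau)^2$, so that $\delta(\tau, x) = 2(x^T\tau)\,x$; all names are illustrative.

```python
import numpy as np

def online_qn_update(tau, K, x_new, eps2_new, eta, i):
    """One on-line quasi-Newton step (Eqs. (2)-(3)).
    tau: current parameter estimate tau_hat_i
    K:   current inverse approximated information matrix K_i
    i:   index of the current observation."""
    u = x_new @ tau
    sigma2 = u ** 2                  # sigma_hat_{i+1}^2
    delta = 2.0 * u * x_new          # d sigma^2 / d tau at tau_hat_i
    b = delta / sigma2
    Kb = K @ b
    # Eq. (3): rank-one update of the inverse via the matrix identity
    K_new = K - np.outer(Kb, Kb) / (1.0 + b @ Kb)
    # Eq. (2): quasi-Newton step scaled by the learning rate eta*(i+1)
    step = K_new @ delta * (eps2_new / sigma2 - 1.0) / sigma2
    tau_new = tau + eta * (i + 1) * step
    return tau_new, K_new
```

In practice, $\hat\tau$ and $K$ would be initialised from the maximum likelihood fit on the initial data set, as described above.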

3 Effect of Mean Model Estimation

In practice, the true mean model is not known and has to be estimated. The variance function is then estimated using the squared residuals $\hat\varepsilon_i^2$ as the response variable. The usual practice is to iterate between mean model estimation and variance model estimation [CR88].

We first assume that the true mean model is static, $\beta(t) = \beta$ for all $t$. The accuracy of the mean model can be improved with new data by occasional re-estimation. The response variable for variance function modelling should then be formed from the latest, most accurate mean model. Let $\hat\beta$ denote the current estimator and $\hat\varepsilon_i = y_i - \mu(\hat\beta, x_i)$ the residual. One should, however, notice that

$$E\hat\varepsilon_i^2 = \sigma_i^2 + \mathrm{var}(\hat\mu_i) - 2\,\mathrm{cov}(y_i, \hat\mu_i) + (\mu_i - E\hat\mu_i)^2.$$

The covariance $\mathrm{cov}(y_i, \hat\mu_i)$ is zero if the $i$th observation is not used for mean model fitting, but is otherwise positive. The bias term $(\mu_i - E\hat\mu_i)^2$ is difficult to approximate, and the usual practice is to assume it negligible. If the quantities $\Delta_i = 2\,\mathrm{cov}(y_i, \hat\mu_i)/\sigma_i^2 - \mathrm{var}(\hat\mu_i)/\sigma_i^2$ can be approximated, they should be taken into account in the model fitting by using the corrected response $e_i = \hat\varepsilon_i^2/(1 - \Delta_i)$, which satisfies $Ee_i = \sigma_i^2$. For example, in the linear regression context $y_i = x_i^T\beta + \varepsilon_i$ we have $\mathrm{cov}(y_i, \hat\mu_i) = \mathrm{var}(\hat\mu_i)$, so that $\Delta_i = x_i^T(X^TV^{-1}X)^{-1}(x_i/\sigma_i^2)$, where $V$ is a diagonal matrix with elements $V_{(ii)} = \sigma_i^2$.

When the mean model changes over time, it is much harder to neglect the uncertainty about the mean. We now assume that the true mean model parameters form a continuous-time Lévy process $\{\beta(t)\}$ satisfying $E[\beta(t_i) - \beta(t_a)] = 0$ and $\mathrm{cov}[\beta(t_i) - \beta(t_a)] = B\,|t_i - t_a|$. We use a moving window type estimator $\hat\beta$ that has been estimated from the observations measured around a time $t_a$, so that $E\hat\beta = \beta(t_a)$; the estimator thus follows the true parameter with a delay, as is likely to occur in practice. Conditioned on the time $t_a$, the residual $\hat\varepsilon_i$ is normally distributed with expectation $E\hat\varepsilon_i = 0$ and a variance that depends on $\sigma^2(\tau(t_i), x_i)$, the steepness of $\mu(\beta(t), x)$ around $x_i$, $\mathrm{cov}[\beta(t_i) - \beta(t_a)]$, $\mathrm{var}(\hat\mu_i)$ and $\mathrm{cov}(y_i, \hat\mu_i)$.

We suggest that the fluctuation of the mean model can be taken into account in the estimation of the conditional variance by using an additional offset variable $q_i = \mathrm{var}[\mu(\beta(t_a), x_i) - \mu(\beta(t_i), x_i)]$. The offset variable is approximated using a covariance estimator $\hat B$, the time difference $|t_i - t_a|$ and the form of the regression function around $x_i$. The model is then fitted using the equation $E\hat\varepsilon_i^2 = q_i + \sigma^2(\tau, x_i)$.

The adaptive on-line quasi-Newton method can be applied to the joint likelihood of the mean and variance parameters. Because the information matrix is block-diagonal, the mean and the variance can be treated separately.

As an alternative to the adaptive joint modelling of mean and variance, we sketch a moving window method for the linear case. The mean model is regularly refitted using the moving window method. For each fit, we choose a recent time moment $t_a$ on which the prediction is based. We assumed above that $\mathrm{cov}[\beta(t_i) - \beta(t_a)] = |t_i - t_a|\,B$. Let $b_i = \beta(t_i) - \beta(t_a)$. Our model now becomes $y_i = x_i^T\beta(t_a) + x_i^Tb_i + \varepsilon_i$ with $\mathrm{cov}(b_i, b_j) = \min(|t_i - t_a|, |t_j - t_a|)\,B\,I[\mathrm{sign}(t_i - t_a) = \mathrm{sign}(t_j - t_a)]$, where $I(\cdot)$ denotes the indicator function. As discussed in [CP76], it follows that $\mathrm{cov}(y_i, y_j) = I[\mathrm{sign}(t_i - t_a) = \mathrm{sign}(t_j - t_a)]\,\min(|t_i - t_a|, |t_j - t_a|)\,x_i^TBx_j$ for $i \ne j$. The covariance matrix $B$ can be estimated by maximum likelihood or MINQUE [CP76], and $\beta(t_a)$ by generalised least squares, using the tools available for mixed models. We construct the squared residuals $\hat\varepsilon_i^2 = [y_i - x_i^T\hat\beta(t_a)]^2$ and fit the variance model $\sigma^2(\tau(t_i), x_i)$ using the moving window method. In the variance model fitting, we use the additional offset variable $q_i = |t_i - t_a|\,x_i^T\hat Bx_i$. We predict the distribution of a new observation $x_n$ to be Gaussian with expectation $\mu(\hat\beta(t_a), x_n)$ and variance $\sigma^2(\hat\tau, x_n) + |t_n - t_a|\,x_n^T\hat Bx_n + x_n^T\mathrm{cov}[\hat\beta(t_a)]\,x_n$.
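
As an illustration of the offset correction, the following Python/NumPy sketch fits the variance model through $E\hat\varepsilon_i^2 = q_i + \sigma^2(\tau, x_i)$, with the linear-case offset $q_i = |t_i - t_a|\,x_i^T\hat Bx_i$. The drift covariance estimate $\hat B$ and the linear-deviation variance form are assumed inputs, and all names are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

def drift_offset(times, t_a, X, B_hat):
    """Offset q_i = |t_i - t_a| x_i^T B_hat x_i, accounting for the
    drift of the mean parameters since the fitting time t_a."""
    return np.abs(times - t_a) * np.einsum("ij,jk,ik->i", X, B_hat, X)

def neg_gamma_loglik_offset(tau, X, eps2, q):
    """Negative gamma log-likelihood with mean q_i + sigma^2(tau, x_i),
    using the linear-deviation form sigma^2 = (x^T tau)^2."""
    m = q + (X @ tau) ** 2
    return np.sum(np.log(m) + eps2 / m)

# Usage sketch:
# q = drift_offset(times, t_a, X, B_hat)
# tau_hat = minimize(neg_gamma_loglik_offset, tau0, args=(X, eps2, q)).x
```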

4 Industrial Application

We applied the adaptive methods to predicting the conditional variance of steel strength. The data set consisted of measurements made on the production line of the Ruukki steel plate mill. It included about 200 000 observations, an average of 130 from each of the 1580 days, and covered thousands of different steel plate products. We had two response variables: tensile strength (Rm) and yield strength (ReH). We fitted models for strength (Rm and ReH) using the whole data set and used the resulting series of squared residuals to fit the models for conditional variance. In moving window modelling, we refitted the models at intervals of a million seconds (about 12 days). Based on the results in a smaller validation data set, we decided to use a unit-weighted ($\omega_i = 1$ for all $i$) moving window with width $w = 350$ days, and on-line quasi-Newton with learning rate $\eta = 1/30000$.

We modelled the conditional variance in the framework of generalised linear models and chose a model in which the deviation is linear in the inputs, $\sigma_i^2 = (x_i^T\tau(t_i))^2$. Both variances seemed to depend non-linearly on 12 explanatory variables related to the composition of the steel and to the thickness and thermomechanical treatments of the plate. As a result of our model selection procedure, we ended up with models representing the discovered non-linearities and interactions, with 40 and 32 parameters for ReH and Rm, respectively.

The first 450 days of the data set were used to fit the basic, non-adaptive model and to initialise the adaptive models. The models were then compared on their ability to predict the rest of the data. We used real forecasting: at each time moment, only the earlier observations were available for fitting the model used in prediction. Because variance cannot be observed directly, it is somewhat difficult to measure the goodness of models in predicting variance. Let a model predict the variances $\hat\sigma_i^2 = \sigma^2(\hat\tau(t_{i-1}), x_i)$. We base the comparison on the likelihood of the test data set, assuming that the response variable is normally distributed. It is easy to see that the gamma likelihood of the squared residuals $\hat\varepsilon_i^2$ is equivalent to the full Gaussian likelihood when the mean model is kept fixed. Thus, we measure the goodness of a model in predicting the $i$th observation by the gamma deviance of the squared residual,

$$d_i = 2\left[ -\log(\hat\varepsilon_i^2/\hat\sigma_i^2) + (\hat\varepsilon_i^2 - \hat\sigma_i^2)/\hat\sigma_i^2 \right].$$
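
For clarity, a one-line implementation of this deviance measure in Python/NumPy (a sketch, with illustrative names):

```python
import numpy as np

def gamma_deviance(eps2, sigma2_pred):
    """Gamma deviance d_i of a squared residual eps2_i under the
    predicted variance sigma2_pred_i; lower values are better."""
    r = eps2 / sigma2_pred
    return 2.0 * (r - np.log(r) - 1.0)
```

Averaging $d_i$ over the test set gives the figures reported in Table 1 below.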

4.1 Results

The average prediction accuracies of the models in the test data set are presented in Table 1 and in Fig. 1. The adaptive models performed better than the non-adaptive basic model. On-line quasi-Newton worked better than the moving window method and was also better than the non-adaptive fit to the whole data. The differences between the models are significant, but the non-adaptive model seems fairly adequate. Examination of the time paths of the model parameters revealed that many of them had changed during the examination period; examples of the development of the estimated parameter values are given in Fig. 2. A change in one parameter was often compensated for by a reverse change in another, correlated parameter. We also examined the time paths of the predicted variances of some example steel plates. We found two groups of steel plate products whose variance had slightly decreased during the study period (Fig. 3). For most of the products, we did not find any indication of significant changes in variance.

Table 1. The average test deviances of the models. Note that the fit to the whole data does not measure real prediction performance

Model                 Rm      ReH
Stochastic gradient   2.789   2.655
Moving window         2.801   2.667
Non-adaptive model    2.824   2.689
Constant variance     3.265   2.942
Fit to whole data     2.795   2.666

Fig. 1. The smoothed differences between the average model deviances and the average deviance of the fit to the whole data. Negative values mean that the model predicts better than the fit

Fig. 2. The time paths of two parameter estimates

Fig. 3. The predicted deviations of two steel plates used as examples

4.2 Discussion

One of the main goals of industrial quality improvement is to decrease variance. Variance does not, however, decrease uniformly: changes and variation in the facilities and practices of the production line affect variance in an irregular way. Variance heteroscedasticity may often be explained by differences in the way in which variability in the manner of production appears in the final product.

In industrial applications, a model for variance can be employed in determining an optimal working allowance [JR06]. Adjusting the working allowance to a decreased variance yields economic benefits, and an adaptive variance model can be used to adjust the working allowance automatically and rapidly.

The purpose of the steel strength study was to assess the benefits of adaptive variance modelling in view of a possible implementation in a steel plate mill. The results of the study did not indicate an immediate need for adaptivity. In this application, however, the introduction of new processing methods and novel products creates a need for repeated model updating, and adaptive models are a useful alternative for keeping the models up to date.

5 Conclusion

We introduced the possibility of modelling the conditional variance function adaptively and pointed out the potential advantages of this approach. We developed two adaptive methods for modelling variance and applied them successfully to a large data set.

Acknowledgement. We are grateful to Ruukki for providing the data and the research opportunity.

References

[Bot98] Bottou, L.: Online learning and stochastic approximations. In: Saad, D. (ed) On-Line Learning in Neural Networks. Cambridge University Press, Cambridge (1998)
[CR88] Carroll, R.J., Ruppert, D.: Transformation and Weighting in Regression. Chapman and Hall, New York (1988)
[CP76] Cooley, T.F., Prescott, E.C.: Estimation in the presence of stochastic parameter variation. Econometrica, 44, 167-184 (1976)
[Gre93] Grego, J.M.: Generalized linear models and process variation. J. Qual. Technol., 25, 288-295 (1993)
[JR06] Juutilainen, I., Röning, J.: Planning of strength margins using joint modelling of mean and dispersion. Mater. Manuf. Processes (in press)
[MW98] McGilchrist, C.A., Matawie, K.M.: Recursive residuals in generalised linear models. J. Stat. Plan. Infer., 70, 335-344 (1998)
[MK02] Murata, N., Kawanabe, M., Ziehe, A., Müller, K.R., Amari, S.: On-line learning in changing environments with applications in supervised and unsupervised learning. Neural Networks, 15, 743-760 (2002)
[Pol03] Pollock, D.S.G.: Recursive estimation in econometrics. Comput. Stat. Data An., 44, 37-75 (2003)
[ST95] Stadtmüller, U., Tsybakov, A.B.: Nonparametric recursive variance estimation. Statistics, 27, 55-63 (1995)