International Journal of Performability Engineering, Vol. 1, No. 2, October 2005, pp. 157-165. RAMS Consultants, Printed in India.

Sensor Validation in Nuclear Power Plant using the Method of Boosting

KRISTEDJO KURNIANTO* and TOM DOWNS


School of Information Technology and Electrical Engineering, University of Queensland, Qld. 4072, Australia

(* Currently a Researcher with The Indonesia Nuclear Energy Agency. Corresponding author's email: td@itee.uq.edu.au)

(Received on January 14, 2005)

Abstract: When sensor parameters cannot be directly measured, estimation of their performance can be carried out using other plant variables that can be measured, so long as they are correlated with the sensor parameters. The correlations can cause computational difficulties. Well-established techniques exist for dealing with these difficulties in the case where the relationship between the predictor variables and the response variable is approximately linear, but for cases where the relationship is nonlinear, the situation is less well understood. This paper demonstrates that estimation of sensor performance in the nonlinear case can be reliably achieved using elementary neural networks that are subjected to the method of boosting. We first apply this method to a set of data from a nuclear power plant (NPP) in Florida that has been widely studied elsewhere. We show that for this data, which is close to linear, our boosting method provides estimates that are competitive with those obtained using regularized linear models. We then consider a data set from a NPP in the Netherlands where the relationship between predictor and response variables is considerably nonlinear and show that the boosting method gives excellent estimates.

Key Words: inferential sensing, performance estimation, nonlinear data, neural networks, boosting.

1. Introduction

Operating a power plant safely and economically requires state information, which can be obtained by instrumentation that measures critical plant parameters using sensor systems. The accuracy of these sensor systems often cannot be directly measured and, unless it can be inferred in some way, the sensors have to be taken out of service on a periodic basis for calibration. For some critical sensors, the power plant has to be shut down in order for the calibration to be carried out. This can be very expensive, so some form of inferential sensing is preferable. In inferential sensing, the values of sensor readings are estimated from system variables that are accessible during plant operation.

Variables that can be used to estimate sensor readings are generally highly correlated, causing vectors of such variables to be highly collinear and creating difficulties in the estimation process. When the relationship between the predictor variables and the response variable is linear (or nearly so), these difficulties can be overcome by employing linear models with regularization, as has been demonstrated elsewhere [1].

For cases where the relationship is nonlinear, good predictions can only be obtained by using some form of nonlinear model and, to the best of our knowledge, this paper provides the first description of the application of such a model to a problem where the predictor variables are highly correlated and their relationship with the response variable is nonlinear.

In this paper we consider two sensor validation problems. For one of these a linear model is appropriate and for the other, a nonlinear model is required. The first problem concerns estimation of the condition of a venturi meter used to measure feedwater flow in a nuclear power plant (NPP). This is an important problem because the flow rate is used in the calculation of thermal power and so needs to be known accurately. Venturi meters are used to measure the flow rate in most pressurized water reactors, but over time they gradually accumulate corrosion products and increasingly overestimate the rate of flow. Our objective here is to estimate the drift in meter readings based upon measurements of a set of predictor variables over time. The data for this problem come from an NPP in Florida, USA, and the relationship between the predictor variables and the flow rate is reasonably linear. Drift prediction in this linear case can be carried out using regression methods that incorporate regularization, and this has been previously demonstrated for this data set [2]. We compare the performance obtained in [2] with that obtained using the machine learning method of boosting, applied to feedforward neural networks. Boosting [3] provides a means of improving the performance of systems that learn from data and the technique is explained in Section 2.2. As we shall see, the machine learning method applied to the linear data set performs competitively with the regularization methods. But importantly, the machine learning method can be applied to nonlinear as well as linear data, and the paper goes on to investigate how it performs on a nonlinear data set. A benchmark dataset from a European NPP is available which has a significant nonlinear relationship between the electric power it generates and the available predictor variables, and we use this dataset to test the machine learning methods. For this dataset there is no actual drift in the generated electric power, so a suitable drift is artificially imposed. The effectiveness of our machine learning methods in detecting this drift is then tested.

2. Methods Used

The two sets of data were preprocessed along the lines described in [2]. That is, median filtering was employed for the rejection of outliers and this was followed by range scaling. After preprocessing, the data were assessed for collinearity. Such an assessment can be carried out in a variety of ways using the matrix X of data vectors, with measures such as the condition number and the correlation coefficient [4]. Because no single measure is fully reliable, we employed four different measures and, for both data sets, all four indicated high collinearity. Thus, collinearity is reliably demonstrated.

The fact that the data are collinear means that the regression problem is ill-posed and needs to be dealt with carefully. For instance, estimation of the regression coefficients by ordinary least squares involves inversion of the matrix $X^\top X$ (see equation (2) below) and this is a highly ill-conditioned matrix. The problem can be eased by using some form of biased estimation, which reduces the degrees of freedom in the problem. A well-known example is ridge regression [5] and another is principal component regression [6]. All three of these methods were employed on the linear data set in [2]. In this paper, we compare the performance obtained in [2] with that achieved by the machine learning method of boosting. We then go on to apply all four methods to the nonlinear data set.
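To make the collinearity assessment concrete, the following is a minimal sketch (our illustration, not code from the paper) of two of the measures mentioned above, the condition number and the pairwise correlation coefficient, applied to a data matrix X. The example data and the rough threshold quoted in the comment are assumptions.

```python
import numpy as np

def collinearity_report(X):
    """Two simple collinearity measures for a data matrix X (n samples x m variables)."""
    # Condition number: ratio of largest to smallest singular value of X.
    # Values much larger than ~30 are commonly read as strong collinearity.
    s = np.linalg.svd(X, compute_uv=False)
    cond = s[0] / s[-1]
    # Largest off-diagonal correlation coefficient between predictor variables.
    R = np.corrcoef(X, rowvar=False)
    max_corr = np.max(np.abs(R - np.eye(R.shape[1])))
    return cond, max_corr

# Example with two deliberately near-identical columns.
rng = np.random.default_rng(0)
x1 = rng.normal(size=500)
X = np.column_stack([x1, x1 + 0.01 * rng.normal(size=500), rng.normal(size=500)])
print(collinearity_report(X))  # large condition number, correlation near 1
```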

We will now describe the methods used on the two data sets.

2.1 Linear methods

Only a brief description of the linear methods we use is given here because they are generally quite well known. In linear regression, we are concerned with a relationship of the form

$$y = Xb + \varepsilon \qquad (1)$$

where $y$ is an $n$-vector of observations of the response variable and $X$ is an $n \times m$ matrix made up of $n$ observations of $m$ predictor variables. We seek the regression coefficients $b$ that minimize the vector of residuals $\varepsilon$. The ordinary least squares technique estimates these regression coefficients by the formula

$$\hat{b} = (X^\top X)^{-1} X^\top y \qquad (2)$$

Ridge regression eases the problem of inverting the ill-conditioned matrix $X^\top X$ by introducing a penalty term into the minimization process; this results in the addition of small positive quantities to the diagonal of $X^\top X$, so that (2) becomes

$$\hat{b} = (X^\top X + \lambda I)^{-1} X^\top y \qquad (3)$$

When we used ridge regression in this work, the regularization parameter $\lambda$ was determined by the L-curve method [7], in which the residual norm $\|Xb - y\|$ is plotted against the solution norm $\|b\|$, with the best $\lambda$ located at the corner of this L-shaped plot.

Principal component regression addresses the ill-conditioning problem by replacing the collinear variables with an orthogonal set. These are the principal components of the vectors making up $X$, and they form a matrix we shall denote by $P$. The standard regression model can be transformed by noting that, because the matrix $P$ is orthogonal, the term $Xb$ can be written $XPP^\top b = Q\alpha$, where $Q = XP$ and $\alpha = P^\top b$. Equation (1) can then be written $y = Q\alpha + \varepsilon$, from which $\hat{\alpha} = (Q^\top Q)^{-1} Q^\top y$, and the matrix inversion process is now trivial because of the orthogonality of $Q$. Once $\hat{\alpha}$ has been obtained, the solution to the original regression problem is given by $\hat{b} = P\hat{\alpha}$, which follows from the definition of $\alpha$ and the fact that $P$ is orthogonal.
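For readers who want to reproduce the linear baselines, equations (2) and (3) and the PCR transform translate directly into NumPy. The sketch below is ours, not the authors' code: it assumes $X$ has already been preprocessed as described in Section 2, uses a linear solve in place of an explicit inverse, and leaves the choice of $\lambda$ and of the number of retained components to the caller rather than the L-curve method.

```python
import numpy as np

def ols(X, y):
    # Equation (2): b = (X'X)^{-1} X'y, computed via a linear solve.
    return np.linalg.solve(X.T @ X, X.T @ y)

def ridge(X, y, lam):
    # Equation (3): b = (X'X + lambda*I)^{-1} X'y.
    m = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(m), X.T @ y)

def pcr(X, y, k=None):
    # Principal component regression: the columns of P are the principal
    # directions of X, Q = XP has orthogonal columns, alpha = (Q'Q)^{-1} Q'y,
    # and the original coefficients are recovered as b = P alpha.
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    P = Vt.T if k is None else Vt.T[:, :k]   # optionally keep k leading components
    Q = X @ P
    alpha = np.linalg.solve(Q.T @ Q, Q.T @ y)
    return P @ alpha
```

With all components retained, PCR reproduces the OLS solution; in practice the benefit comes from dropping the trailing, near-null directions that collinearity produces.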

2.2 The nonlinear method: boosting a neural network

The nonlinear method we employ uses an ensemble of elementary neural network learning systems constructed using the method of boosting [3]. The elementary neural networks are simple systems of the feedforward type, trained in the usual way by backpropagation [8]. Details of the neural networks are given below.

Boosting was originally developed to improve the performance of pattern classifiers. When boosting is applied to pattern classification, an ensemble of learning systems is trained in sequence, using a probability distribution to select training examples (with replacement) from the training set. For the first learning system, examples used in training are selected as a bootstrap sample from the available training data. That is, with n examples available for training, a sample of size n is randomly selected, with replacement, so that some examples are duplicated in the training set and some are missing. Then, for the second system, the procedure is modified so that the probability of selecting training examples that were wrongly classified by the first system is increased. In this way, rather more emphasis is placed on learning those examples that the first learning system classified incorrectly. For subsequent systems in the ensemble, the procedure continues in the same vein, ensuring that the examples that are proving more difficult to classify are given increasing emphasis. As the ensemble develops in this way, its classification performance on a separate validation set continuously improves until the point where it begins to overfit to the training data. Training is stopped just prior to this point and the resulting ensemble generally gives state-of-the-art performance. For more details, see [3].

When applied to classification problems, the boosting algorithm operates according to whether a learning system classifies an example correctly or incorrectly. Modification of the algorithm for application to regression problems presents some difficulties because in regression there is no simple right or wrong: the regression function can be wrong by a small amount or by a large amount. The first methods developed for the application of boosting to regression got around this difficulty by transforming the problem into one of classification; see, for instance, [9]. The first method for regression that closely parallels the boosting method for classification was published in [10]. It had been shown in [11] that the boosting method for classification could be considered as a form of gradient descent on a cost function that measures the error made by the ensemble and which takes the form of a weighted sum of exponentials. The method of [10] uses gradient descent on a very similar cost function.

The cost function measures the error in the regression function implemented by the ensemble of learning systems. As each new learning system is added to the ensemble, the error on the training set is reduced and the cost function moves closer to its minimum. For vectors of predictor variables $x_i$ and response variables $y_i$ ($i = 1, \ldots, n$) and for an ensemble of $\tau$ learning systems, the cost function takes the form

$$J_\tau = \sum_{i=1}^{n} w_{i,\tau}\, c_\tau^{-1/2} \exp\!\left(c_\tau \left(f_\tau(x_i) - y_i\right)^2\right) \qquad (4)$$

where

$$w_{i,\tau} = \prod_{t=1}^{\tau-1} c_t^{-1/2} \exp\!\left(c_t \left(f_t(x_i) - y_i\right)^2\right)$$

and $f_t(\cdot)$ is the function implemented by learning system number $t$.

Equation (4) is a weighted exponential function of the regression error made by learning system $\tau$. The weight $w_{i,\tau}$ is a measure of the error made by the earlier learning systems $1, 2, \ldots, \tau-1$ in predicting the value of $y_i$. Such weights are used in modifying the probability distribution used for selecting the training examples employed in the training of learning system $\tau$ (this is explained below). Because the weights are exponential in the squared error, the selection probability shifts rapidly towards the difficult examples: with $c_t = 1$, for instance, an example with error 0.5 contributes a factor $e^{0.25} \approx 1.3$, while one with error 1.5 contributes $e^{2.25} \approx 9.5$. (This is not the only way to place emphasis on the difficult examples. An alternative is to assign a cost to each example and vary the cost according to difficulty; with that procedure, all training examples are used in each boosting round.) The coefficient $c_\tau$ in (4) is chosen so as to minimize $J_\tau$.

Figure 1: Pseudocode of the procedure

1. Input: training set examples $(x_1, y_1), (x_2, y_2), \ldots, (x_n, y_n)$ with $y_i \in \mathbb{R}$; WeakLearn: a learning procedure that produces a hypothesis $f_t(x)$ whose accuracy on the training set is judged according to $J_t$.
2. Choose initial distribution $p_1(x_i) = p_{i,1} = w_{i,1} = 1/n$.
3. Iterate:
   (i) Call WeakLearn.
   (ii) Accept $f_t$ if its weighted error $\varepsilon_t = \sum_i p_{i,t} \exp\!\left((f_t(x_i) - y_i)^2\right) - 1$ is below a prespecified threshold; otherwise retrain.
   (iii) Set $0 < c_t \le 1$ to minimize $J_t$ (using line search).
   (iv) Update the training distribution:
        $w_{i,t+1} = w_{i,t}\, c_t^{-1/2} \exp\!\left(c_t (f_t(x_i) - y_i)^2\right)$,
        $p_{i,t+1} = w_{i,t+1} \big/ \sum_{j=1}^{n} w_{j,t+1}$.
4. Estimate output $y$ on input $x$: $\hat{y} = \sum_t c_t f_t(x) \big/ \sum_t c_t$.

A pseudocode listing of the procedure is given in Figure 1. In this listing, the learning procedure is called WeakLearn to emphasise the fact that boosting generally performs best when the learning systems used to build the ensemble are simple and have relatively weak learning abilities. This is the reason why we employed very elementary neural networks as our learning systems. Details of these networks are given below when we explain our experiments.

The listing in Figure 1 states that the initial (uniform) distribution is employed to select the bootstrap sample of training examples from the available training data. These examples are used to train the first learning system using WeakLearn, so on the first iteration we have t = 1. This system is only accepted for incorporation into the ensemble if its overall regression error $\varepsilon_1$ is below a prespecified threshold. If it is not accepted, the same learning system is retrained from a new starting point. Once training has been completed satisfactorily, the parameter $c_1$ that minimizes $J_1$ is determined using a line search. The probability distribution for selecting the training set for the second learning system is then found using the steps in 3(iv).

Iterations continue until overfitting is detected on a validation set. The final regression function is then given by the expression in step 4 of the listing. A more detailed treatment of the method is given in [10].
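A compact implementation of Figure 1 might look as follows. This is a sketch under stated assumptions rather than the authors' implementation: it uses scikit-learn's MLPRegressor as a stand-in for the small sigmoidal networks described in the experiments, the acceptance threshold and number of rounds are illustrative, and it assumes the targets have been range-scaled (Section 2) so that squared errors stay small.

```python
import numpy as np
from scipy.optimize import minimize_scalar
from sklearn.neural_network import MLPRegressor

def boost_regression(X, y, rounds=20, accept_threshold=1.0, seed=0):
    """Gradient-based boosting for regression in the style of Figure 1 (after [10])."""
    rng = np.random.default_rng(seed)
    n = len(y)
    w = np.full(n, 1.0 / n)              # unnormalized weights w_{i,t}
    learners, coeffs = [], []
    for t in range(rounds):
        p = w / w.sum()                  # training distribution p_{i,t}
        idx = rng.choice(n, size=n, replace=True, p=p)   # sample with replacement
        f = MLPRegressor(hidden_layer_sizes=(3,), activation='logistic',
                         max_iter=2000, random_state=int(rng.integers(1 << 31)))
        f.fit(X[idx], y[idx])
        r = (f.predict(X) - y) ** 2      # squared errors on the full training set
        eps = p @ np.exp(r) - 1.0        # weighted error, step 3(ii)
        if eps >= accept_threshold:
            continue                     # reject (the paper retrains the same learner)

        def J(c):                        # step 3(iii): cost of equation (4)
            return np.sum(w * c ** -0.5 * np.exp(c * r))

        c = minimize_scalar(J, bounds=(1e-6, 1.0), method='bounded').x
        w = w * c ** -0.5 * np.exp(c * r)   # step 3(iv) weight update
        learners.append(f)
        coeffs.append(c)

    def predict(X_new):
        # Step 4: c-weighted average; assumes at least one learner was accepted.
        F = np.array([f.predict(X_new) for f in learners])
        return np.asarray(coeffs) @ F / np.sum(coeffs)
    return predict
```

In practice one would also monitor error on a held-out validation set after each accepted round and stop once it begins to rise, as the paper's stopping rule requires; the fixed `rounds` budget stands in for that here.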

3. The Experiments

The first experiment was aimed at estimating the condition of a venturi meter using data from measurable variables that correlate with feedwater flow. The relationship between these predictor variables and the flow rate is reasonably linear in this case, and a number of experiments using linear models have previously been carried out on this problem [2]. We applied a nonlinear model to the data employed in [2], first to establish that the model could provide performance that was competitive with that obtained from the linear models, and then to investigate the performance of the nonlinear approach on a data set where highly correlated predictor variables have a nonlinear relationship with the variable to be estimated.

3.1 Linear data

The data set studied in [2] is taken from the Florida Power Corporation's Crystal River Nuclear Power Plant. Twenty-four variables that correlate strongly with feedwater flow were employed as the predictor variables. The objective was not only to estimate the drift in the feedwater flow sensor but also to address the ill-posedness created by the correlated nature of the predictor variables. The predictor variables are mainly temperatures and pressures but include other factors such as water levels and pump speeds (see [2] for details). The data consist of 9601 samples recorded at 30-minute intervals over a period of a little over 6 months of operation following full calibration of the venturi meter. For the first few days of operation, the meter can be assumed to be operating correctly, and the first 600 samples (about 12 and a half days) were used as training data.

Given 24 predictor variables and a single response variable, the neural network had 24 inputs and a single neural unit to provide the output. The network had three additional neural units (hidden units) connecting the inputs to the output unit. All neural units had sigmoidal characteristics. After the network has learnt a regression function from the first 600 data points, that function can be used to infer the true value of the feedwater flow for the remainder of the measurement period. For the purposes of this paper, we will concentrate on flow predictions at the very end of the measurement period.

Table 1 presents results for drift estimation obtained using our neural network (NN) on its own (i.e., without boosting) and also using an ensemble of NNs generated by the boosting process. Results obtained using the methods described in [2], namely ordinary least squares (OLS), ridge regression (RR) and principal component regression (PCR), are also included in the table. The term drift here refers to the difference between the true flow rate and the flow rate indicated by the venturi meter; it is measured in kilopounds per hour. The results in Table 1 list mean values of drift estimates along with the standard deviation in each case. These are means and standard deviations of the estimates from 50 regression functions generated by training on bootstrap samples selected from the 600 training examples. Use of the bootstrap technique was recommended in [2] to ensure that the collinearity in the data does not lead to too great an instability in the results. No actual figures were given for standard deviations in [2], so a set was computed in the thesis work of one of the authors [12]. All the results in Table 1 are taken from [12].
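The bootstrap protocol just described can be outlined in a few lines. In the sketch below, `fit` and `estimate_drift` are hypothetical placeholders of our own (one of the five models under test, and the end-of-period drift calculation, respectively); they are not names from the paper.

```python
import numpy as np

def bootstrap_drift(X_train, y_train, fit, estimate_drift, n_boot=50, seed=1):
    # fit(X, y) returns a trained regression function; estimate_drift(f) returns
    # a scalar drift estimate (e.g., indicated flow minus inferred flow at the
    # end of the measurement period). Both are illustrative placeholders.
    rng = np.random.default_rng(seed)
    n = len(y_train)
    drifts = []
    for _ in range(n_boot):
        idx = rng.choice(n, size=n, replace=True)   # bootstrap resample
        f = fit(X_train[idx], y_train[idx])
        drifts.append(estimate_drift(f))
    drifts = np.asarray(drifts)
    return drifts.mean(), drifts.std(ddof=1)
```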

Table 1: Drift Estimation Results for Linear Data

                     OLS     RR      PCR     NN without boosting   NN with boosting
Mean drift           41.88   39.58   39.41   40.41                 39.31
Standard deviation   6.18    0.9     0.15    6.95                  2.9

It can be seen from the table that the mean drift values estimated by RR, PCR and the boosting method are in very close agreement, but that the boosting method is rather less stable on this collinear data set than are RR and PCR.

3.2 Nonlinear data

The second data set is from the Borssele NPP in the Netherlands, which provided benchmark data for SMORN-VII, the 7th Symposium on Nuclear Reactor Surveillance and Diagnostics, held in Avignon, France, in 1995. The full set of data consists of a variety of tasks and just one of these (task-a4) was employed in this experiment. In this case, we seek to predict generated electric power (GEP) using 11 predictor variables that correlate strongly with GEP. These 11 variables are mainly pressures, flow rates and temperature differences (full details are given in [12]). The data consist of 1169 samples, of which the first 600 were employed as the training set.

Unlike the data on feedwater flow dealt with in the previous section, the GEP data do not contain any drift over time. Therefore, in order to create a problem of drift estimation in this case, a drift of 5 MWe (5 megawatts electrical) was artificially introduced into this data. Given that the predictor variables are very strongly correlated with the response variable, insertion of this drift was quite straightforward. The neural network for this data set had 11 inputs, 3 hidden units and one output unit.

Table 2 presents results for drift detection using the same 5 methods as in the previous section. As before, the values of mean and standard deviation were obtained using 50 regression functions obtained from bootstrap samples of the training data.

Table 2: Drift Estimation Results for Nonlinear Data

                     OLS     RR      PCR     NN without boosting   NN with boosting
Mean drift           7.10    8.61    7.6     5.96                  5.03
Standard deviation   0.35    0.18    0.7     1.4                   0.06

The results in Table 2 show that the linear methods give poor estimates of the drift in this case. On the other hand, the neural network with boosting gives a very accurate estimate with excellent stability.
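For illustration, one plausible reading of this experimental setup (the paper does not spell out the mechanics) is sketched below: a constant drift is added to the drift-free indicated readings, and the estimate is the mean gap between the drifted readings and the values inferred by a trained model from the predictor variables, which are unaffected by the drift.

```python
import numpy as np

def inject_and_estimate(model, X_test, y_true, drift=5.0):
    # Impose a constant artificial drift on the indicated readings, then
    # estimate it as indicated minus inferred. `model` maps predictor
    # variables to the inferred true response; all names are illustrative.
    y_indicated = y_true + drift
    y_inferred = model(X_test)
    return np.mean(y_indicated - y_inferred)
```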

4. Concluding Remarks

In this paper we have investigated the problem of inferring sensor readings that are unmeasurable from values of measurable variables that are highly correlated with the sensor readings. For the feedwater flow problem, in which the relationship between the predictor variables and the response variable is linear, we found that linear models with regularization were able to overcome the presence of high collinearity and to give more reliable estimates of sensor readings than neural networks subjected to the method of boosting. For the SMORN benchmark data, where the relationship between the predictor variables and the response variable is nonlinear, the linear models performed poorly while the boosting method performed extremely well.

This is not the first time that nonlinear models have been applied to the linear problem of feedwater flow. In [13], feedforward neural networks were trained in various ways to estimate the drift in venturi readings, and quite mediocre results were obtained. In addition, in [14], a number of kernel methods were applied to the problem and, although this led to rather mixed results, those obtained using the support vector machine were comparable with those obtained by our boosting method. This is unsurprising because boosting and support vector methods have been shown, over a variety of machine learning tasks, to give similar, state-of-the-art performance.

It should be noted that models that can infer drift in the way described in this paper can be combined with a technique such as the sequential probability ratio test [15] to generate an alarm, if needed, for excessive drift. A description of an implementation of such an alarm system is given in [12].

Acknowledgements

We thank A. Gribok, J.W. Hines and A. Urmanov for providing the feedwater flow data and E. Turkcan for providing the SMORN-VII data. We also thank R. Zemel for some helpful discussion regarding the boosting algorithm for regression.

References

[1] Tikhonov, A.N. and Arsenin, V.I., Solutions of Ill-Posed Problems, Halsted Press, 1977.
[2] Gribok, A.V., Attieh, I., Hines, J.W. and Uhrig, R.E., Regularization of Feedwater Flow Rate Evaluation for Venturi Meter Fouling Problem in Nuclear Power Plants, Nuclear Technology, vol. 134, pp. 3-14, 2001.
[3] Freund, Y. and Schapire, R.E., Experiments with a New Boosting Algorithm, Proc. 13th Int. Conf. Machine Learning, pp. 148-156, 1996.
[4] Stewart, G.W., Collinearity and Least Squares Regression, Statistical Science, vol. 2, pp. 68-84, 1987.
[5] Hoerl, A.E. and Kennard, R.W., Ridge Regression: Biased Estimation for Nonorthogonal Problems, Technometrics, vol. 12, pp. 55-67, 1970.
[6] Jolliffe, I.T., Principal Component Analysis, Springer-Verlag, 1986.
[7] Hansen, P.C., Analysis of Discrete Ill-Posed Problems by Means of the L-Curve, SIAM Review, vol. 34, pp. 561-580, 1992.
[8] Haykin, S., Neural Networks: A Comprehensive Foundation, Macmillan, 1994.
[9] Freund, Y. and Schapire, R.E., A Decision-Theoretic Generalization of On-Line Learning and an Application to Boosting, Journal of Computer and System Sciences, vol. 55, pp. 119-139, 1997.
[10] Zemel, R.S. and Pitassi, T., A Gradient-Based Boosting Algorithm for Regression Problems, Advances in Neural Information Processing Systems, vol. 13, pp. 696-702, 2001.

[11] Mason, L., Baxter, J., Bartlett, P. and Frean, M., Boosting Algorithms as Gradient Descent, Advances in Neural Information Processing Systems, vol. 12, pp. 512-518, 2000.
[12] Kurnianto, K., Application of Machine Learning Techniques to Power Plant Sensor Validation, MEngSc Thesis, University of Queensland, 2004.
[13] Hines, J.W., Gribok, A.V., Attieh, I. and Uhrig, R.E., Improved Methods for On-Line Sensor Calibration Verification, Proc. 8th Int. Conf. Nuclear Engineering, 9 pp., 2000.
[14] Gribok, A.V., Hines, J.W. and Uhrig, R.E., Use of Kernel Based Techniques for Sensor Validation in Nuclear Power Plants, Int. Topical Mtg. Nuclear Plant Instrumentation, Controls and Human-Machine Interface Technologies, 15 pp., 2000.
[15] Schoonewelle, H., van der Hagen, T.H.J.J. and Hoogenboom, J.E., Theoretical and Numerical Investigations into the SPRT Method for Anomaly Detection, Annals of Nuclear Energy, vol. 22, pp. 731-742, 1995.