Counter-propagation neural networks in Matlab

Igor Kuzmanovski a,b,*, Marjana Novič b

a Institut za hemija, PMF, Univerzitet Sv. Kiril i Metodij, P.O. Box 162, 1001 Skopje, Macedonia
b National Institute of Chemistry, Ljubljana, Hajdrihova 19, SLO-1115 Ljubljana, Slovenia

Chemometrics and Intelligent Laboratory Systems 90 (2008) 84-91 (Software description); doi:10.1016/j.chemolab.2007.07.003
Available online at www.sciencedirect.com; www.elsevier.com/locate/chemolab
Received 9 May 2007; received in revised form 16 July 2007; accepted 20 July 2007; available online 2 August 2007

* Corresponding author at: Institut za hemija, PMF, Univerzitet Sv. Kiril i Metodij, P.O. Box 162, 1001 Skopje, Macedonia. Tel.: +398 2 3249952; fax: +398 2 3226865. E-mail address: shigor@iunona.pmf.ukim.edu.mk (I. Kuzmanovski).

Abstract

Counter-propagation neural networks have been used by chemometricians for more than fifteen years. This valuable data-analysis tool has been applied to many different chemometric problems. In this paper an implementation of counter-propagation neural networks in the Matlab environment is described. The program presented here is an extension of the Self-Organizing Maps Toolbox for Matlab, which is not yet widely used by chemometricians. Coupled with the excellent visualization tools available in the Self-Organizing Maps Toolbox and with other valuable functions of this environment, the program could be of great interest for the analysis of chemical data. Its use is demonstrated on the development of regression and classification models. © 2007 Elsevier B.V. All rights reserved.

Keywords: Counter-propagation neural networks; Kohonen self-organizing maps; Classification; Regression; Matlab

1. Introduction

Kohonen self-organizing maps (SOM) [1,2] and counter-propagation neural networks (CPNN) [3-8] have become common tools in the chemometric community in the last fifteen to twenty years. Their principle is experience-based modeling, which can be exploited for black-box models; as such, they have shown good results in many different applications in chemistry where an analytical function connecting the input with the output variables does not exist or is impossible to derive (as in QSAR studies), or where there are deviations from linear behaviour, as is often the case in analytical chemistry.

The Self-Organizing Maps Toolbox [9-11] for Matlab [12] is freely available on the internet [10]; nevertheless, the authors have not found many applications of this software in chemometrics [13-16]. Because of its simplicity, its comfortable use, its excellent visualization tools and its easy integration with other toolboxes available in the Matlab environment, we decided to develop a program for counter-propagation neural networks (CPNN) based on this toolbox.

The algorithm for the counter-propagation neural network tool was written for use in the Matlab environment. It was integrated into the SOM Toolbox in order to exploit the advantages of the standard visualization tools available there. The performance of the program is demonstrated on models built for the prediction of the boiling points of a series of substances [17,18] and for the classification of the well-known Italian olive oil data set [19-21].

2. Counter-propagation neural networks algorithm

From a didactical point of view, this type of artificial neural network is usually represented as consisting of two layers (Fig. 1): one layer is the competitive Kohonen layer and the other is the output layer.
The input layer of the CPNN maps the multidimensional input data onto a lower-dimensional array (most often two-dimensional, since the exploration and visualization of SOMs with more than two dimensions is not easy for human perception). The mapping is performed by competitive learning, often called the winner-takes-all strategy. The main steps of the CPNN training procedure are presented in the flow chart given in Fig. 2.
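To make the winner-takes-all step concrete, the following is a minimal Matlab sketch of a competitive lookup in a CPNN represented by two weight matrices. All variable names and sizes here are illustrative assumptions; this is not code taken from som_counter_prop or from the SOM Toolbox.

  % Illustrative sketch of the competitive (winner-takes-all) step;
  % variable names and sizes are assumed for illustration only.
  nNeurons = 20*8;            % e.g. a 20 x 8 Kohonen map, unrolled
  nIn      = 12;              % number of independent (input) variables
  nOut     = 1;               % number of dependent (output) variables

  W = rand(nNeurons, nIn);    % Kohonen-layer weights
  U = rand(nNeurons, nOut);   % output-layer weights

  x = rand(1, nIn);           % one input vector x_s

  % winning (central) neuron c: smallest Euclidean distance in the Kohonen layer
  [dmin, c] = min(sum((W - repmat(x, nNeurons, 1)).^2, 2));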

Fig. 1. Graphical representation of a counter-propagation neural network. The Kohonen layer serves for mapping the (multidimensional) input data x onto a two-dimensional map and for finding the winning neurons, while the weights of both layers (Kohonen and output) are adjusted under the same conditions (learning rate and neighbourhood function) using pairs of input and target vectors (x, y).

Fig. 2. Flow chart of the most important steps of the training process.

The training process of the CPNN is performed in a similar way as the training of Kohonen self-organizing maps [7,8]. This means that the vectors with the N input variables, x_s = (x_{s,1}, ..., x_{s,i}, ..., x_{s,N}), are compared only with the weights w_j = (w_{j,1}, ..., w_{j,i}, ..., w_{j,N}) of the neurons in the Kohonen layer. Once the winning (or central) neuron c is found among the neurons of the Kohonen layer, the weights of both layers (Kohonen and output) are adjusted according to the pairs of input and target vectors (x, y), using a suitably selected learning rate η(t) and neighbourhood function a(d_j - d_c):

  w_{j,i}^{new} = w_{j,i}^{old} + η(t) a(d_j - d_c) (x_i - w_{j,i}^{old})    (1)
  u_{j,i}^{new} = u_{j,i}^{old} + η(t) a(d_j - d_c) (y_i - u_{j,i}^{old})    (2)

The difference d_j - d_c in Eqs. (1) and (2) represents the topological distance between the winning neuron c and the neuron j whose weights are being adjusted. w_{j,i}^{old} and w_{j,i}^{new} are the weights of the Kohonen layer before and after the adjustment, while u_{j,i}^{old} and u_{j,i}^{new} are the weights of the output layer before and after the adjustment.

The learning rate η(t) is a non-increasing function which defines the intensity of the changes of the weights during the training process. Some of the commonly used functions that define the change of the learning rate with time are given in Fig. 3. Besides the weights of the central neuron, the weights of the neighbouring neurons are also corrected. The intensity of this correction is determined by the shape of the neighbourhood function (Fig. 4). The width of the neighbourhood, defined by the topological distance between the central neuron and the furthest neuron affected by the correction, also decreases during the training.

Fig. 3. Commonly used learning rate functions (linear, power and inverse).

Fig. 4. Some of the neighbourhood functions available in the SOM Toolbox: (a) bubble, (b) Gaussian, (c) cut Gaussian.

At the end of this part we should mention that, from the programmer's point of view, the division of the CPNN into two layers is only artificial. What is necessary is to exclude the weight levels which correspond to the dependent variable(s) in the phase of finding the winning neuron. Afterwards, in the correction of the weights, both layers are treated as one. In the test phase, after the winning neuron is found (using only the Kohonen layer), the root-mean-square error of prediction (RMSEP) is calculated using only the weights of the output layer.

After the training is finished, the Kohonen layer serves as a pointing device. When a sample vector x_s is introduced to the CPNN, it is compared with the weights of the Kohonen layer and the position of the winning neuron is determined (this time without further adjustment of the weights, since the training has already finished); the corresponding neuron of the output layer, and the values of the weights stored in it, are selected as the best match for the sample vector x_s. Even if some neurons of the Kohonen layer have never been excited by training samples during the training phase, due to the interactions between neighbouring neurons the corresponding neurons of the output layer will nevertheless store values for samples that were not used during the training. These properties of the CPNN, together with a suitably selected final neighbourhood radius, are important for the development of models with good generalization performance for interpolation of the modelled properties.
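As an illustration of Eqs. (1) and (2), the sketch below performs one weight-update step in Matlab with a Gaussian neighbourhood. It continues the assumed names of the previous sketch (W, U, x, nNeurons, nIn, nOut) and introduces further assumed variables (y, eta, radius, gridPos); it does not reflect the internal implementation of som_counter_prop.

  % Illustrative single training step per Eqs. (1) and (2); assumed names only.
  y = rand(1, nOut);                       % target vector paired with x
  eta = 0.10;  radius = 3;                 % learning rate and neighbourhood radius

  [gr, gc] = ind2sub([20 8], (1:nNeurons)');
  gridPos  = [gr gc];                      % (row, col) position of each neuron on the map

  [dmin, c] = min(sum((W - repmat(x, nNeurons, 1)).^2, 2));            % winning neuron
  d = sqrt(sum((gridPos - repmat(gridPos(c,:), nNeurons, 1)).^2, 2));  % topological distances
  a = exp(-(d.^2) ./ (2*radius^2));                                    % Gaussian neighbourhood

  W = W + eta * (a * ones(1, nIn))  .* (repmat(x, nNeurons, 1) - W);   % Eq. (1), Kohonen layer
  U = U + eta * (a * ones(1, nOut)) .* (repmat(y, nNeurons, 1) - U);   % Eq. (2), output layer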
3. Data sets

As previously stated, the data sets used for the demonstration of this program were taken from the literature [17-20]. The data set used for the development of the regression model consists of 185 saturated acyclic compounds (ethers, diethers, acetals and peroxides, as well as their sulphur analogues) [17,18]. Twelve calculated descriptors were used for the prediction of the boiling points of these substances.

The Italian olive oil data set [19,20], which was used for classification, consists of 572 samples of olive oils produced in nine different regions of Italy (North Apulia, Calabria, South Apulia, Sicily, Inner Sardinia, Coastal Sardinia, East Liguria, West Liguria and Umbria). For each sample the percentages of the following fatty acids were determined: palmitic, palmitoleic, stearic, oleic, linoleic, arachidic, linolenic and eicosenoic. In this case the dependent variables were vectors y of length nine: if a sample belongs to the region labelled k, then y_k = 1 and all other elements of y are set to 0.

In order to perform the analysis, the data sets were randomly divided into training and test sets. For the regression model, 40% of the structures were used as a test set and the remaining structures as a training set. For the classification model, the Italian olive oil data set was divided into a training set consisting of 1/3 of the samples, while all the other samples were used as a test set. In both cases the variables were autoscaled before the optimization started.
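The data preparation described above (one-hot class encoding, random splitting and autoscaling) might be sketched in Matlab as follows. The variable names X (the 572 x 8 matrix of fatty-acid percentages) and regionIdx (the 572 x 1 vector of region labels, 1 to 9) are assumptions, and the actual demo scripts may organize this differently.

  % Illustrative preparation of the classification data; not taken from the demo scripts.
  nSamples = size(X, 1);
  Y = zeros(nSamples, 9);
  Y(sub2ind(size(Y), (1:nSamples)', regionIdx)) = 1;   % y_k = 1 for the sample's region k

  perm = randperm(nSamples);
  nTrn = round(nSamples/3);                 % 1/3 of the samples for training
  trn  = perm(1:nTrn);   tst = perm(nTrn+1:end);

  % autoscaling (zero mean, unit variance); training-set statistics are assumed here
  mu = mean(X(trn,:));  sd = std(X(trn,:));
  Xtrn = (X(trn,:) - repmat(mu, length(trn), 1)) ./ repmat(sd, length(trn), 1);
  Xtst = (X(tst,:) - repmat(mu, length(tst), 1)) ./ repmat(sd, length(tst), 1);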
4. Software specifications and requirements

The CPNN program was developed in Matlab 6.5 (Release 13) [12] on the basis of the SOM Toolbox [9] developed by J. Vesanto et al. All the features available for SOMs in that toolbox are also available for our CPNN program. The Matlab function which executes the program is called som_counter_prop.

The syntax for executing the CPNN program can be found by typing help som_counter_prop in Matlab's Command Window after the installation of the program. The required inputs as well as the outputs are given in the help section of som_counter_prop. The parameters used for defining the shape of the CPNN and for its training are numerical. Only three input/output parameters are not numerical; these are structured variables representing the training data, the test data and the trained CPNN. The supporting documentation and demo scripts available in the SOM Toolbox [9] should be consulted in order to learn how to extract the maximum information from the analyzed data with this counter-propagation neural network program.

The input data file format used in our program is the same as the one used in the SOM Toolbox [9] and is described in detail in the SOM Toolbox documentation [9]. Our CPNN program can handle missing values in the same way as the other functions of the SOM Toolbox; for this purpose the user should replace them with the label NaN.

The training of the CPNN can be performed in two phases: a rough phase (with large learning rate and large neighbourhood radius) and a fine-tuning phase (with smaller learning rate and smaller neighbourhood radius). If one prefers to train the network in only one phase, this can be achieved by setting the number of epochs of one of the phases to zero. Of course, all adjustments of the weights of the map neurons are performed after finding the winning neurons while considering the weight levels of the independent variables only, whereas the RMSEP for the training set and for the test set are calculated by comparing only the weight levels of the output layer with the corresponding values of the dependent variables.

As input variables, the Matlab program presented here accepts almost all the parameters necessary for creating a highly customizable CPNN: training and test data sets, training parameters (number of epochs in the rough and fine-tuning phases, neighbourhood function), size of the CPNN, neighbourhood type (hexagonal or rectangular), shape of the network (sheet, cylinder, toroid), weight-initialization function, the parameter called mask, and the labels which will be used for labelling the trained map. The initialization of the weights of the network can be performed either by assigning random numbers to the weights of all the levels or by initialization along the first two principal components. The latter initialization is preferable, since two consecutive optimizations of the network will then show the same performance. The user may choose between these two initialization functions.

As mentioned earlier, from the programmer's point of view the division of the CPNN into two layers is only artificial. What is important is to exclude the weight levels which correspond to the dependent variable(s) from the process of finding the winning neurons during the training. This is performed using the mask. The mask is an important input vector (m) whose length is equal to the total number of variables, dependent and independent. It is responsible for the division of the variables into dependent and independent ones: if an element of the mask is set to zero, the corresponding variable is excluded from the process of finding the winning neuron (it becomes a dependent variable) and the corresponding weight level becomes part of the output layer of the CPNN. In addition, by setting a proper value (0 < m_i <= 1) for the elements of the mask that correspond to the independent variables, the relative importance of these variables can be adjusted.

The output variables of the trained map are: the RMSEP values (calculated for the training and for the test set), the structured variable which contains the weight levels (called codebook vectors by the authors of the SOM Toolbox) of the trained CPNN, as well as the predicted values of the input and output variables for the training and the test set.

The compatibility of the CPNN program was tested with Matlab versions 6.5 and 7.0, using computers running under the Windows XP operating system. The time required for training the CPNN depends on the size of the analyzed data.
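For orientation, a hypothetical call to som_counter_prop might look like the sketch below. The exact argument list, its order and the returned outputs are assumptions made for illustration only; the authoritative syntax is the one printed by help som_counter_prop. som_data_struct is an existing SOM Toolbox function; Xtrn, Xtst, Y, trn and tst are the assumed variables from the data-preparation sketch above.

  % Hypothetical call sketch only; the real argument list and outputs are
  % documented by 'help som_counter_prop' and may differ from this.
  Ytrn = Y(trn,:);   Ytst = Y(tst,:);      % one-hot class vectors
  % missing values in the data, if any, should be entered as NaN

  sTrain = som_data_struct([Xtrn Ytrn], 'name', 'olive oils, training set');
  sTest  = som_data_struct([Xtst Ytst], 'name', 'olive oils, test set');

  mask = [ones(1, 8) zeros(1, 9)];         % 8 independent variables, 9 dependent ones
                                           % (zero entries define the output layer)

  % assumed parameter order: data, mask, map size, epochs (rough, fine-tuning)
  [sMap, rmsepTrain, rmsepTest] = som_counter_prop(sTrain, sTest, mask, ...
      [22 21], 24, 177);                   % size and epochs as in Table 2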
5. Demonstration

In order to present some features of the CPNN program, two demo scripts (called som_counter_prop_demo_class.m and som_counter_prop_demo_reg.m) are provided together with it. For these demonstrations we used a CPNN with hexagonal neighbourhood, plain boundary condition, Gaussian neighbourhood function and linearly decreasing learning rate. In order to find the most suitable network size and the optimal number of epochs (for both training phases) in an automated manner, the optimization was performed with genetic algorithms [22-28]. An explanation of how this optimization was performed is out of the scope of this article and will be published elsewhere.

5.1. Regression demo

As previously stated, genetic algorithms were used for finding the most suitable network size and training parameters for the prediction of the boiling points with the CPNN. For the sake of the demonstration, in this case as well as in the classification case, the data sets were randomly divided into a training and a test set. However, when using our CPNN program for research, users are advised to perform the separation of the data set using an algorithm suitable for this purpose [29,30]. The optimization procedure was repeated several times. The network size and the training parameters for one of the best solutions are presented in Table 1.

Table 1. CPNN parameters for the presented regression example.
Network size: width 20, length 8.
Rough training phase: 18 epochs; initial neighbourhood radius 3; final neighbourhood radius 1; initial learning rate 0.10.
Fine-tuning phase: 230 epochs; initial neighbourhood radius 2; final neighbourhood radius 1; initial learning rate 0.05.

The agreement between the expected and calculated values for the samples in the test set is presented in Fig. 5. The correlation coefficients, both for the training set and for the test set, are above 0.95 and show that the model presented here has acceptable performance.

Fig. 5. Expected vs. predicted values of the boiling points for the (a) training set and (b) test set.

5.2. Classification demo

After a few repetitions of the optimization, the most suitable network size and training parameters were selected (Table 2).

Table 2. CPNN parameters for the presented classification example.
Network size: width 22, length 21.
Rough training phase: 24 epochs; initial neighbourhood radius 7; final neighbourhood radius 1; initial learning rate 0.1.
Fine-tuning phase: 177 epochs; initial neighbourhood radius 2; final neighbourhood radius 1; initial learning rate 0.05.

The unified distance matrix for this trained network, calculated considering only the weight levels corresponding to the dependent variables, is presented in Fig. 6. One can notice that all the regions are well differentiated on the map. Exceptions are the samples from Sicily, which are divided into two parts. Also, only 3 of the samples in the training set are misclassified or lie in regions not occupied by most of the samples.

Fig. 6. Unified distance matrix, calculated using the weight levels that correspond to the output variables, for the trained CPNN, together with the neurons labelled with the training samples (the number of samples is given in brackets). North Apulia NA, Calabria Ca, South Apulia SA, Sicily Si, Inner Sardinia IS, Coastal Sardinia CS, East Liguria EL, West Liguria WL and Umbria Um.
One of these samples, from South Apulia, lies in the upper right region of the map, which belongs to olive oil samples from Sicily. The second sample is from East Liguria and is placed at the bottom left part of the map; although this neuron lies in the region which belongs to olive oil samples from West Liguria, the topological distance between this neuron and its neighbouring neurons is larger than among the other neurons of this region. The third sample is from Sicily and is found in the part of the network that belongs to North Apulia, in a neuron close to the region of the network which belongs to Sicily. Since this sample is close to the borderline between the two clusters, we decided to examine the weights of this neuron a little more carefully. Table 3 presents the corresponding output-layer weights for this neuron and for the surrounding neurons. Simple inspection of the weights in this table shows the reason for the misclassification of this particular sample.

Table 3. Comparison of the weights that correspond to the dependent variables for the best matching neuron and its neighbouring neurons for the misclassified sample from Sicily (North Apulia NA, Calabria Ca, South Apulia SA, Sicily Si, Inner Sardinia IS, Coastal Sardinia CS, East Liguria EL, West Liguria WL, Umbria Um).

Position of the neuron                NA     Ca     SA     Si     IS     CS     EL     WL     Um
Upper right (NA region)               0.503  0.001  0.000  0.496  0.000  0.000  0.000  0.000  0.000
Upper left (Si region)                0.212  0.011  0.000  0.777  0.000  0.000  0.000  0.000  0.000
Right (NA region)                     0.765  0.000  0.000  0.235  0.000  0.000  0.000  0.000  0.000
Lower right (NA region)               0.773  0.000  0.000  0.227  0.000  0.000  0.000  0.000  0.000
Lower left (NA region)                0.680  0.000  0.000  0.320  0.000  0.000  0.000  0.000  0.000
Left (Si region)                      0.398  0.004  0.000  0.597  0.000  0.000  0.001  0.000  0.000
Best matching neuron                  0.534  0.001  0.000  0.465  0.000  0.000  0.000  0.000  0.000
Dependent variables for the sample    0      0      0      1      0      0      0      0      0

We also tested the trained CPNN with the test samples (Fig. 7). Only 16 samples from the test set (representing only 4.1% of all samples) were misclassified. Most of the misclassified samples originate from Sicily (8 samples), three were from South Apulia and two from Calabria. The remaining three misclassified samples belong to three different regions (East Liguria, North Apulia and Coastal Sardinia).

Fig. 7. Unified distance matrix, calculated using the weight levels that correspond to the output variables, together with the neurons labelled with the samples from the test set.

6. Conclusion

The presented program for counter-propagation neural networks is, in the opinion of the authors of this manuscript, a valuable extension of the SOM Toolbox, and it could be of great interest not only for chemists. Maximum information can be extracted from the data when the program is used in combination with other functions of the SOM Toolbox, especially its excellent capabilities for different kinds of visualization of the data and of the trained networks. Another advantage of this program is the openness of algorithms developed in the Matlab environment for easy integration with functions available from other toolboxes; for example, it could be coupled with functions that perform optimization of its parameters or variable selection using genetic algorithms. As mentioned earlier in this article, the automated optimization procedure of the CPNN using genetic algorithms is in progress in our laboratory.

Additionally, our program can also be applied to data with missing values. Finally, we would like to mention that the program, together with the demo scripts that illustrate some of its features, as well as the Italian olive oil data set, can be downloaded from http://www.hemija.net/chemometrics/.

7. Validation

The chemometric work groups from Skopje, Macedonia, and Ljubljana, Slovenia, have developed a toolbox for analyzing and modeling data sets by means of a counter-propagation neural (CPN) network. A CPN network resembles two associated Kohonen networks which are trained in a supervised way.

The provided readme file describes all the steps necessary to install the toolbox successfully. This was evaluated by running the provided demo script som_counter_prop_demo_class.m. This demo shows in a nutshell how the CPN is trained and how the results for the training and test sets can be inspected nicely in a graphical way. During run time no errors were encountered. The time needed to train the CPN network is only a few seconds (for the relatively small data sets I used). The toolbox was tested with Matlab 6.5.

There are just a few non-severe outstanding issues. I missed a comprehensive tutorial which addresses how the most important scripts must be called, i.e., which input and output parameters and variables must be provided. Moreover, a description of the format of the input data should be provided.

And last, a minor one: in the 3D PCA plot coloured symbols are used for the data objects; I suggest using the same colouring in the legend of the figure as well.

In summary, the CPN toolbox is a useful tool for analyzing and modeling (large) data sets in a supervised way. The toolbox is easy to install and runs fast and error free.

Dr. Willem Melssen
Institute for Molecules and Materials (IMM), Analytical Chemistry
Radboud University Nijmegen, Toernooiveld 1, 6525 ED Nijmegen, The Netherlands
http://www.cac.science.ru.nl/

Acknowledgements

The financial support of the European Union in the framework of the Marie Curie Research Training Network IBAAC project (MCRTN-CT-2003-505020) and of the Ministry of Education, Science, and Sport of Slovenia (grant P1-017), as well as the financial support of the Ministry of Education and Science of Macedonia and the Ministry of Education, Science, and Sport of Slovenia (project grant BI-MK/04-06-12), is gratefully acknowledged.

References

[1] T. Kohonen, Neural Netw. 1 (1988) 3.
[2] T. Kohonen, Self-Organizing Maps, 3rd Edition, Springer, Berlin, 2001.
[3] R. Hecht-Nielsen, Proc. IEEE First Int. Conf. Neural Netw., vol. II (1987) 19.
[4] R. Hecht-Nielsen, Appl. Opt. 26 (1987) 4979.
[5] R. Hecht-Nielsen, Neural Netw. 1 (1988) 131.
[6] J. Dayhof, Neural Network Architectures, An Introduction, Van Nostrand Reinhold, New York, 1990, p. 192.
[7] J. Zupan, M. Novič, I. Ruisánchez, Chemometr. Intell. Lab. Syst. 38 (1997) 1.
[8] J. Zupan, J. Gasteiger, Neural Networks in Chemistry and Drug Design, Wiley, Weinheim/New York, 1999.
[9] J. Vesanto, J. Himberg, E. Alhoniemi, J. Parhankangas, SOM Toolbox for Matlab 5, Technical Report A57, Helsinki University of Technology, 2000.
[10] http://www.cis.hut.fi/projects/somtoolbox/
[11] J. Vesanto, Intell. Data Anal. 6 (1999) 111.
[12] MATLAB 6.5, 1984-1998, The MathWorks.
[13] M.H. Hyvönen, Y. Hiltunen, W. El-Deredy, T. Ojala, J. Vaara, P.T. Kovanen, M. Ala-Korpela, J. Am. Chem. Soc. 123 (2001) 810.
[14] G. Espinoza, A. Arenas, F. Giralt, J. Chem. Inf. Comput. Sci. 42 (2002) 343.
[15] I. Kuzmanovski, M. Trpkovska, B. Šoptrajanov, J. Mol. Struct. 744-747 (2005) 833.
[16] I. Kuzmanovski, S. Dimitrovska-Lazova, S. Aleksovska, Anal. Chim. Acta 595 (2007) 182.
[17] A.T. Balaban, L.B. Kier, N. Joshi, J. Chem. Inf. Comput. Sci. 32 (1992) 237.
[18] H. Lohninger, J. Chem. Inf. Comput. Sci. 33 (1993) 736.
[19] M. Forina, C. Armanino, Ann. Chim. (Rome) 72 (1982) 127.
[20] M. Forina, E. Tiscornia, Ann. Chim. (Rome) 72 (1982) 144.
[21] J. Zupan, M. Novič, X. Li, J. Gasteiger, Anal. Chim. Acta 292 (1994) 219.
[22] D. Jouan-Rimbaud, D.L. Massart, R. Leardi, O.E. de Noord, Anal. Chem. 67 (1995) 4295.
[23] R. Leardi, A.L. Gonzalez, Chemometr. Intell. Lab. Syst. 41 (1998) 195.
[24] K. Hasegawa, Y. Miyashita, K. Funatsu, J. Chem. Inf. Comput. Sci. 37 (1997) 306.
[25] B.M. Smith, P.J. Gemperline, Anal. Chim. Acta 423 (2000) 167.
[26] S.S. So, M. Karplus, J. Med. Chem. 39 (1996) 5246.
[27] H. Handels, T. Roß, J. Kreusch, H.H. Wolff, S.J. Pöppl, Artif. Intell. Med. 16 (1999) 283.
[28] H. Yoshida, R. Leardi, K. Funatsu, K. Varmuza, Anal. Chim. Acta 446 (2001) 485.
[29] R.W. Kennard, L.A. Stone, Technometrics 11 (1969) 137.
[30] M. Novič, J. Zupan, J. Chem. Inf. Comput. Sci. 35 (1995) 454.