Concurrent Self-Organizing Maps for Pattern Classification


Victor-Emil NEAGOE and Armand-Dragos ROPOT
Department of Applied Electronics and Information Engineering, POLITEHNICA University of Bucharest, Bucharest, 77206 Romania
Email: vneagoe@xnet.ro

Abstract

We present a new neural classification model called Concurrent Self-Organizing Maps (CSOM), representing a winner-takes-all collection of small SOM networks. Each SOM of the system is trained individually to provide best results for one class only. We have considered two significant applications: face recognition and multispectral satellite image classification. For the first application, we have used the ORL database of 400 faces (40 classes). With CSOM (40 small linear SOMs), we have obtained a recognition score of 91%, while using a single big SOM one obtains a score of only 83.5%. For the second application, we have classified the multispectral pixels belonging to a LANDSAT TM image with 7 bands into seven thematic categories. The experimental results lead to a recognition rate of 95.2% using CSOM (7 circular SOMs), while with a single big SOM one obtains a 94.31% recognition rate. At the same time, CSOM leads to a significant reduction of training time by comparison to SOM.

1. Introduction

The Self-Organizing Map (SOM), also called the Kohonen network, is characterized by the fact that neighbouring neurons in the network develop adaptively into specific detectors of different vector patterns. The neurons become specifically tuned to various classes of patterns through competitive, unsupervised (self-organizing) learning. Only one cell (neuron) or group of cells at a time gives the active response to the current input. The spatial location of a cell in the network (given by its co-ordinates) corresponds to a particular input vector pattern. One important characteristic of the SOM is that it simultaneously extracts the statistics of the input vectors and performs the classification as well.

Starting from the idea of considering a SOM as a cell characterizing one specific class only, we present a new neural recognition model called Concurrent Self-Organizing Maps (CSOM), proposed by Neagoe in [9], representing a collection of small SOMs which use a global winner-takes-all strategy. Each SOM is used to correctly classify the patterns of one class only, and the number of networks equals the number of classes. We have tested the proposed CSOM model for two significant applications: (1) face recognition; (2) multispectral satellite image classification.

2. Concurrent Self-Organizing Maps for Pattern Classification

Concurrent Self-Organizing Maps (CSOM) are a collection of small SOMs which use a global winner-takes-all strategy. Each network is used to correctly classify the patterns of one class only, and the number of networks equals the number of classes. The CSOM training technique is a supervised one, but for any individual net the specific (unsupervised) SOM training algorithm is used. We built n training pattern sets and applied the SOM training algorithm independently to each of the n SOMs. The CSOM model for training is shown in Fig. 1.

Figure 1. The CSOM model (training phase): the database is partitioned into Pattern Set 1, ..., Pattern Set n, each used to train its own SOM.

For the recognition, the test pattern is applied in parallel to every previously trained SOM. The map providing the least quantization error is declared the winner, and its index is the class index that the pattern belongs to (see Fig. 2).
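To make the training and decision phases just described concrete, the following Python fragment sketches a CSOM built from small 1-D ("linear") Kohonen maps. It is only a minimal illustration: the class names, epoch count, learning rate, and neighbourhood schedule are assumptions for the example, not the settings used in the paper.

```python
import numpy as np

class SmallSOM:
    """A tiny 1-D ("linear") Kohonen map; an illustrative sketch, not the authors' code."""
    def __init__(self, n_neurons, dim, rng):
        self.w = rng.normal(scale=0.01, size=(n_neurons, dim))   # codebook vectors

    def train(self, X, epochs=50, lr0=0.5):
        n = len(self.w)
        radius0 = max(n / 2.0, 1.0)
        for e in range(epochs):
            frac = 1.0 - e / epochs                              # linear decay schedule
            lr, radius = lr0 * frac, max(radius0 * frac, 0.5)
            for x in X:
                bmu = np.argmin(((self.w - x) ** 2).sum(axis=1))  # best-matching unit
                d = np.abs(np.arange(n) - bmu)                    # grid distance to the BMU
                h = np.exp(-(d ** 2) / (2.0 * radius ** 2))       # neighbourhood weights
                self.w += lr * h[:, None] * (x - self.w)

    def quantization_error(self, x):
        return np.sqrt(((self.w - x) ** 2).sum(axis=1)).min()

def train_csom(X_train, y_train, n_classes, neurons_per_map=4, seed=0):
    """Training phase (Fig. 1): one small SOM per class, trained on that class only."""
    rng = np.random.default_rng(seed)
    maps = []
    for c in range(n_classes):
        som = SmallSOM(neurons_per_map, X_train.shape[1], rng)
        som.train(X_train[y_train == c])
        maps.append(som)
    return maps

def csom_classify(maps, x):
    """Classification phase (Fig. 2): the map with the least quantization error wins."""
    return int(np.argmin([som.quantization_error(x) for som in maps]))
```

A test pattern is thus labelled by the index of the winning map, which is exactly the global winner-takes-all rule of the CSOM model.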

Figure 2. The CSOM model (classification phase): the input pattern is applied in parallel to SOM 1, ..., SOM n, and minimization of the quantization error gives the CLASS.

3. CSOM for Face Recognition

Face recognition is a specific topic of computer vision that has been studied for 25 years and has recently become a hot topic. However, face recognition remains a difficult task because of the variation of factors such as lighting conditions, viewpoint, body movement and facial expressions. Face recognition algorithms have numerous potential applications in areas such as visual surveillance, criminal identification, multimedia and visually mediated interaction.

3.1. Face Database

For experimenting with the proposed CSOM model, we have used the ORL Database of Faces, provided by the AT&T Laboratories Cambridge, with 400 images corresponding to 40 subjects (namely, 10 images for each class). We have divided the whole gallery into a training lot (200 pictures) and a test lot (200 pictures). Each image has a size of 92 x 112 pixels with 256 grey levels. For the same subject (class), the images have been taken at different hours, lighting conditions, and facial expressions, with or without glasses. For each class, one chooses five images for training and five images for testing (see Figs. 3-4).

Figure 3. Example of training images (5 classes).

Figure 4. Test images corresponding to the training ones given in Figure 3 (5 classes).
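A hypothetical helper for the 50/50 split described above is sketched below: for every subject, five images go to the training lot and the remaining five to the test lot (the paper does not state how the five are selected, so the "first five" rule here is an assumption, as are the function and variable names).

```python
import numpy as np

def split_gallery(images, labels, per_class_train=5):
    """images: (N, H, W) array of face images; labels: (N,) subject indices."""
    labels = np.asarray(labels)
    train_idx, test_idx = [], []
    for c in np.unique(labels):
        idx = np.flatnonzero(labels == c)
        train_idx.extend(idx[:per_class_train])       # five images per class for training
        test_idx.extend(idx[per_class_train:])        # the remaining five for testing
    X_train = images[train_idx].reshape(len(train_idx), -1).astype(float)  # flatten to vectors
    X_test = images[test_idx].reshape(len(test_idx), -1).astype(float)
    return X_train, labels[train_idx], X_test, labels[test_idx]
```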

3.2. Experimental Results of Face Recognition

For the task of face recognition, we have used a processing cascade with two stages: (a) feature extraction using Principal Component Analysis (PCA); (b) pattern classification using CSOM. We have implemented the proposed technique in software and experimented with the model using the previously mentioned face database of 400 images.

Feature Extraction with PCA

The original pictures of 92 x 112 pixels have been resized to 46 x 56, so that the input space has dimension 2576. The PCA stage is equivalent to the computation of the Karhunen-Loeve Transform [4], [12]; for example, we can reduce the space dimension from 2576 to 158 while preserving 99.02% of the signal energy. We have computed the covariance matrix of the whole training set of 200 vectors X ∈ R^2576, together with its eigenvalues and eigenvectors. We have ordered the eigenvalues λ1 ≥ λ2 ≥ λ3 ≥ ... ≥ λ2575 ≥ λ2576, and have computed the energy preservation factor E obtained by retaining only the first n eigenvalues:

E = 100 · (Σ_{i=1}^{n} λ_i) / (Σ_{i=1}^{2576} λ_i).

In Table 1, the energy preservation factor is given for various n. We have considered the cases n = 158 (E = 99.02%) and n = 10 (E = 65.28%).

Table 1. Energy preservation factor for various n.

Number of features (n): 2576 | 158 | 135 | 117 | 100 | 92 | 56 | 50 | 10
Energy preservation factor E (%): 100 | 99.02 | 98.05 | 97.03 | 95.79 | 95.09 | 90.14 | 88.84 | 65.28

Table 2. Experimental results of face classification with CSOM versus SOM. For the SOM rows, the test-lot score is given as classical calibration / k-NN calibration.

Nr | n (principal components) | Type of classifier | Total number of neurons | Number of networks | Recognition score, training lot (%) | Recognition score, test lot (%) | Training time (s)
1 | 158 | Linear CSOM (40 x 4) | 160 | 40 | 100 | 91 | 15
2 | 158 | Linear SOM | 160 | 1 | 98 | 71 / 83.5 | 225
3 | 158 | Rectangular CSOM [40 x (8 x 5)] | 1600 | 40 | 100 | 88 | 250
4 | 158 | Rectangular SOM (40 x 40) | 1600 | 1 | 100 | 39 / 81 | 3750
5 | 10 | Linear CSOM (40 x 3) | 120 | 40 | 100 | 85 | 2
6 | 10 | Linear SOM | 120 | 1 | 97 | 68 / 77.5 | 30
7 | 10 | Rectangular CSOM [40 x (8 x 5)] | 1600 | 40 | 100 | 85.5 | 25
8 | 10 | Rectangular SOM (40 x 40) | 1600 | 1 | 100 | 16 / 83 | 500
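For reference, the energy preservation factor and the PCA projection can be computed as below. This is a generic sketch of the standard procedure (eigen-decomposition of the training covariance matrix) with illustrative function names, not the authors' implementation.

```python
import numpy as np

def pca_fit(X, n_components):
    """X: (num_samples, dim) training matrix. Returns the mean, the leading eigenvectors,
    and all eigenvalues sorted in descending order."""
    mu = X.mean(axis=0)
    cov = np.cov(X - mu, rowvar=False)              # (dim, dim) covariance matrix
    vals, vecs = np.linalg.eigh(cov)                # eigenvalues in ascending order
    order = np.argsort(vals)[::-1]                  # re-sort descending
    return mu, vecs[:, order[:n_components]], vals[order]

def energy_preservation(eigvals, n):
    """E(n) = 100 * (sum of the n largest eigenvalues) / (sum of all eigenvalues)."""
    return 100.0 * eigvals[:n].sum() / eigvals.sum()

def pca_project(X, mu, components):
    return (X - mu) @ components                    # (num_samples, n_components) features
```

With n = 158 the paper reports E = 99.02%, and with n = 10, E = 65.28%.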

CSOM versus SOM for Face Classification

For the second processing stage of face recognition, we have performed a neural classification using the following techniques:
a. the new CSOM model;
b. the SOM classifier with classical calibration;
c. the SOM classifier with k-NN calibration.
The results of the simulation are given in Table 2 and Figs. 5-8. For the recognition score on the test lot using SOM, both variants of calibration (b/c) are shown.

Figure 5. Recognition rate on the test lot as a function of the total number of neurons (n = 158 features), for CSOM, SOM with classical calibration, and SOM with k-NN calibration.

Figure 6. Recognition rate on the training lot as a function of the total number of neurons (n = 158 features).

Figure 7. Recognition rate on the test lot as a function of the total number of neurons (n = 10 features), for CSOM, SOM with classical calibration, and SOM with k-NN calibration.

Figure 8. Recognition rate on the training lot as a function of the total number of neurons (n = 10 features).
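The two SOM calibration variants compared in Table 2 are not spelled out in detail here. One plausible reading, offered only as an assumption, is that classical calibration labels a neuron by the training patterns that actually map onto it (leaving unused neurons unlabelled), while k-NN calibration labels every neuron from its k nearest training vectors. A sketch under that assumption:

```python
import numpy as np

def calibrate_classical(weights, X_train, y_train, n_classes):
    """Label each neuron by majority vote of the training patterns mapped onto it."""
    bmu = np.array([np.argmin(((weights - x) ** 2).sum(axis=1)) for x in X_train])
    labels = np.full(len(weights), -1)                 # -1 marks an uncalibrated neuron
    for j in range(len(weights)):
        hits = y_train[bmu == j]
        if len(hits):
            labels[j] = np.bincount(hits, minlength=n_classes).argmax()
    return labels

def calibrate_knn(weights, X_train, y_train, n_classes, k=3):
    """Label every neuron by the majority class among its k nearest training vectors."""
    labels = np.empty(len(weights), dtype=int)
    for j, w in enumerate(weights):
        nearest = np.argsort(np.linalg.norm(X_train - w, axis=1))[:k]
        labels[j] = np.bincount(y_train[nearest], minlength=n_classes).argmax()
    return labels

def som_classify(weights, labels, x):
    """A test pattern receives the label of its best-matching neuron (-1 = unclassified)."""
    return int(labels[np.argmin(((weights - x) ** 2).sum(axis=1))])
```

Under this reading, with 1600 neurons and only 200 training images most neurons stay unlabelled in the classical scheme, which would be consistent with the large gap between the two calibration variants for the 1600-neuron SOMs in Table 2.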

4. CSOM for Classification of Multispectral Satellite Imagery

Processing of satellite imagery has wide applications for the generation of various kinds of maps: maps of vegetation, maps of mineral resources of the Earth, land-use maps (civil or military buildings, agricultural fields, woods, rivers, lakes, and highways), and so on. The standard approach to satellite image classification uses statistical methods. A relatively new and promising category of techniques for satellite image classification is based on neural models. The concluding remarks obtained from research on applying neural networks to the classification of satellite imagery are the following: neural classifiers do not require initial hypotheses on the data distribution and are able to learn nonlinear and discontinuous input data; neural networks can adapt easily to input data containing texture information; neural classifiers are generally more accurate than statistical ones; the architecture of a neural network is very flexible, so it can be easily adapted to improve the performance of a particular application.

4.1. Satellite Image Database

For training and testing the software of the proposed CSOM classification model, as well as of the classical SOM (for comparison), we have used a LANDSAT TM image with 7 bands (Figs. 9.a-g), having 368,125 pixels (7-dimensional), out of which 6,331 pixels were classified by an expert into seven thematic categories: A - urban area; B - barren fields; C - bushes; D - agricultural fields; E - meadows; F - woods; G - water (Fig. 10).

Fig. 9.a-g. Spectral Bands 1-7 of the LANDSAT TM image.

Fig. 10. Calibration image.

Figure 11. Classified multispectral pixels (7 categories) using a circular CSOM architecture with 7 x 112 neurons (recognition rate 95.2%).

Figure 12. Classified multispectral pixels (7 categories) using a circular SOM architecture with 784 neurons (recognition rate 94.31%).
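As an illustration of how the per-pixel spectral vectors feed the classifiers, the sketch below flattens a 7-band image into (number of pixels) x 7 vectors and assigns every pixel to the CSOM map with the smallest quantization error. The array layout and names are assumptions for the example; the codebooks could be, for instance, the weight arrays of the per-class maps from the earlier sketch.

```python
import numpy as np

def classify_multispectral_image(bands, codebooks):
    """bands: (7, H, W) array of spectral bands; codebooks: one (neurons, 7) weight
    matrix per thematic class. Returns an (H, W) map of class indices."""
    n_bands, h, w = bands.shape
    pixels = bands.reshape(n_bands, -1).T.astype(float)          # (H*W, 7) spectral vectors
    errors = np.empty((pixels.shape[0], len(codebooks)))
    for c, cb in enumerate(codebooks):
        # squared-distance expansion keeps memory at (num_pixels, num_neurons)
        d2 = (pixels ** 2).sum(1)[:, None] - 2.0 * pixels @ cb.T + (cb ** 2).sum(1)
        errors[:, c] = np.sqrt(np.maximum(d2, 0.0)).min(axis=1)  # quantization error per map
    return errors.argmin(axis=1).reshape(h, w)                   # thematic map
```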

Histogram of the CSOM-classified image: A - 7.21% urban area; B - 32.76% barren fields; C - 20.17% bushes; D - 10.36% agricultural fields; E - 21.64% meadows; F - 4.75% woods; G - 3.11% water; N - 0.00% unclassified.

Figure 13. Histogram of the classified multispectral LANDSAT TM image given in Fig. 11 (using CSOM).

Histogram of the SOM-classified image: A - 5.12% urban area; B - 35.53% barren fields; C - 19.73% bushes; D - 13.13% agricultural fields; E - 18.50% meadows; F - 4.58% woods; G - 2.61% water; N - 0.79% unclassified.

Figure 14. Histogram of the classified multispectral LANDSAT TM image given in Fig. 12 (using SOM).

Table 3. Experimental results of multispectral satellite image classification with CSOM, SOM, and Bayes classifiers (the input vector space has dimension 7).

Nr | Type of classifier | Total number of neurons | Number of networks | Recognition score, training lot (%) | Recognition score, test lot (%) | Training time (s)
1 | Circular CSOM (7 x 112) | 784 | 7 | 98.71 | 95.2 | 100
2 | Circular SOM | 784 | 1 | 96.4 | 94.31 | 3800
3 | Linear CSOM (7 x 112) | 784 | 7 | 98.64 | 95.10 | 50
4 | Linear SOM | 784 | 1 | 97.06 | 94.12 | 3700
5 | Rectangular CSOM [7 x (14 x 8)] | 784 | 7 | 97.8 | 95.07 | 2
6 | Rectangular SOM (28 x 28) | 784 | 1 | 96.53 | 92.80 | 3500
7 | Bayes classifier | - | - | 95.83 | 94.22 | -

4.2. Experimental Results of CSOM Satellite Image Classification

Each multispectral pixel (7 bands) is characterized by a corresponding 7-dimensional vector containing the pixel projections in each band. These vectors are applied to the input of the neural classifier. For classification, we have experimented with the following techniques: the new CSOM model, the classical SOM classifier, and the Bayes classifier (assuming the seven classes have normal distributions). The results of the simulation are given in Tables 3-8. Two classified multispectral images are given in Figs. 11 and 12, and the corresponding histograms are shown in Figs. 13 and 14. The recognition rates for the training lot and for the test lot are shown in Figs. 15-16.
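The Bayes baseline assumes a multivariate normal distribution per class; a generic sketch of such a quadratic discriminant is shown below (maximum a posteriori under Gaussian class models, with a small ridge added for numerical stability — an implementation detail assumed here, not taken from the paper).

```python
import numpy as np

def fit_gaussian_bayes(X, y, n_classes, ridge=1e-6):
    """Fit one multivariate normal per class on the labelled training pixels."""
    params = []
    for c in range(n_classes):
        Xc = X[y == c]
        mu = Xc.mean(axis=0)
        cov = np.cov(Xc, rowvar=False) + ridge * np.eye(X.shape[1])
        params.append((mu, np.linalg.inv(cov), np.linalg.slogdet(cov)[1],
                       np.log(len(Xc) / len(X))))                 # log class prior
    return params

def bayes_classify(params, x):
    """Assign x to the class with the largest Gaussian log-posterior."""
    scores = [log_prior - 0.5 * (logdet + (x - mu) @ inv_cov @ (x - mu))
              for mu, inv_cov, logdet, log_prior in params]
    return int(np.argmax(scores))
```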

Table 4. Comparison of the best pixel classification scores (%) obtained by SOM and CSOM for the training lot, as a function of the number of neurons (Bayes classifier given for reference).

Number of neurons | 4 | 8 | 16 | 32 | 784 | Bayes
SOM | 91.60 | 93.53 | 95.10 | 95.86 | 97.06 | 95.83
CSOM | 93.27 | 95.01 | 96.62 | 97.76 | 98.71 | -

Table 5. Comparison of the best pixel classification scores (%) obtained by SOM and CSOM for the test lot, as a function of the number of neurons (Bayes classifier given for reference).

Number of neurons | 4 | 8 | 16 | 32 | 784 | Bayes
SOM | 92.04 | 93.87 | 93.62 | 94.34 | 94.31 | 95.17
CSOM | 92.86 | 94.7 | 93.71 | 94.85 | 95.2 | -

Figure 15. Recognition rate on the training lot as a function of the total number of neurons (SOM, CSOM, and Bayes).

Figure 16. Recognition rate on the test lot as a function of the total number of neurons (SOM, CSOM, and Bayes).

Table 6. Confusion matrix for the circular SOM with 784 neurons (test lot); entries are percentages of each real class (columns).

Assigned class | A | B | C | D | E | F | G | Total [%]
A | 80.00 | 0.08 | 1.97 | 0.00 | 0.00 | 0.21 | 0.62 | 1.96
B | 8.57 | 99.41 | 0.66 | 0.00 | 0.00 | 0.00 | 0.00 | 37.54
C | 5.71 | 0.17 | 73.68 | 0.33 | 0.48 | 4.95 | 3.73 | 4.80
D | 0.00 | 0.00 | 1.97 | 96.45 | 0.00 | 9.28 | 0.00 | 29.00
E | 0.00 | 0.00 | 0.66 | 0.00 | 98.55 | 0.00 | 0.00 | 6.48
F | 0.00 | 0.00 | 14.47 | 2.77 | 0.00 | 84.74 | 1.86 | 14.57
G | 5.71 | 0.08 | 4.61 | 0.00 | 0.00 | 0.41 | 93.79 | 5.21
Unclassified | 0.00 | 0.25 | 1.97 | 0.44 | 0.97 | 0.41 | 0.00 | 0.44
Total [%] | 2.21 | 37.54 | 4.80 | 28.50 | 6.54 | 15.32 | 5.09 | 100.00

Table 7. Confusion matrix for the circular CSOM with (7 x 112) neurons (test lot); entries are percentages of each real class (columns).

Assigned class | A | B | C | D | E | F | G | Total [%]
A | 90.00 | 0.25 | 0.00 | 0.22 | 0.00 | 0.21 | 0.00 | 2.18
B | 2.86 | 99.58 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 37.44
C | 4.29 | 0.17 | 84.87 | 0.44 | 1.45 | 6.80 | 2.48 | 5.62
D | 0.00 | 0.00 | 1.32 | 95.79 | 0.00 | 6.39 | 0.00 | 28.34
E | 0.00 | 0.00 | 0.00 | 0.00 | 98.55 | 0.00 | 0.00 | 6.45
F | 0.00 | 0.00 | 5.26 | 3.55 | 0.00 | 85.77 | 0.00 | 14.41
G | 2.86 | 0.00 | 8.55 | 0.00 | 0.00 | 0.82 | 97.52 | 5.56
Unclassified | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00
Total [%] | 2.21 | 37.54 | 4.80 | 28.50 | 6.54 | 15.32 | 5.09 | 100.00

Table 8. Training time [s] required by SOM and CSOM as a function of the number of neurons.

Number of neurons | 4 | 8 | 16 | 32 | 784
SOM | 276 | 545 | 1140 | 2040 | 4872
CSOM | 56 | 93 | 171 | 423 | 1020

5. Concluding Remarks

1. The proposed CSOM model uses a collection of small SOMs, each network having the task of correctly classifying the patterns of one class only. The decision is based on a global winner-takes-all strategy.

2. From the experimental results of the considered applications, we can evaluate the advantage of CSOM over SOM both from the point of view of the recognition rate and regarding the training time.

A. Face Recognition

n = 158 principal components

3. By retaining only 158 components in the transformed space (instead of the 2576 components of the vectors in the original space), we preserve 99.02% of the signal energy.

4. Using a CSOM built from a set of 40 small linear SOMs, each with 4 neurons, one obtains a recognition score of 91% (for the test lot), while using a single linear SOM with the same total number of neurons (160), one obtains a recognition score of only 71% for classical calibration and 83.5% for the improved (k-NN) calibration.

5. Using a CSOM consisting of a collection of 40 rectangular SOMs, each with 8 x 5 = 40 neurons, we have obtained a recognition score of 88%, while with a big rectangular SOM having the same total number of neurons (40 x 40 = 1600), we have obtained only a 39% recognition rate with classical calibration and 81% with k-NN calibration.

6. For the CSOM model, the recognition rate on the test lot increases with the number of neurons until it reaches an optimum (for example, 91% for 160 neurons), and then the recognition rate decreases (see Fig. 5).

n = 10 principal components

7. By retaining only 10 components in the transformed space, one preserves 65.28% of the signal energy contained in the original space of dimensionality 2576.

8. Using a CSOM built from a set of 40 small linear SOMs, each with 3 neurons, one obtains a recognition score of 85%, while using a corresponding single linear SOM with the same total number of neurons (120), one obtains a recognition score of only 68% for classical calibration and 77.5% for k-NN calibration.

9. For a rectangular architecture, the CSOM model also leads to better results than SOM (see Table 2).

10. From the point of view of training time, the advantage of CSOM over SOM is obvious. Theoretically, for 40 classes, the training time of CSOM should be about 40 times less than that of the corresponding SOM with the same number of neurons. During the training of CSOM, each input vector is applied only to the specific small SOM corresponding to the vector class; one therefore has to compute only 1/40 of the number of distances computed for SOM. Moreover, the neighbourhood radii are smaller for the CSOM components than for a single big SOM. The results of the simulation are given in Table 2.

B. Classification of Multispectral Satellite Imagery

11. We can note the very good multispectral pixel classification scores of all the experimented classifiers, but the CSOM model leads to slightly better results than SOM and Bayes for all the presented variants.

12. The best result (a classification rate of 95.2%) is obtained using a CSOM model containing 7 circular SOMs with 112 neurons each. Taking into account the architecture variants for the CSOM components, the best variant for this application is the circular one, followed by the linear and then by the rectangular one.

13. Moreover, the CSOM model requires significantly less training time than a single big SOM (Tables 3 and 8).

14. From the histogram of the CSOM-classified image given in Fig. 13, one deduces that there are no unclassified pixels, while for the corresponding SOM there are 0.79% unclassified pixels (Fig. 14).

15. The CSOM model does not require a calibration phase, while SOM does.

16. The classification score increases with the number of neurons (Tables 4 and 5, Figs. 15 and 16).

17. The confusion matrices (Tables 6 and 7) show that there are specific differences regarding the recognition of the seven thematic categories (for example, one can better identify pixels belonging to barren fields than those representing woods).

6. References

[1] T. Kohonen, "The Self-Organizing Map," Proceedings of the IEEE, vol. 78, no. 9, Sept. 1990, pp. 1464-1480.

[2] T. Kohonen, Self-Organizing Maps, Springer-Verlag, Berlin, 1995.

[3] P. W. Hallinan, G. G. Gordon, A. L. Yuille, P. Giblin, and D. Mumford, Two- and Three-Dimensional Patterns of the Face, A K Peters, Natick, Massachusetts, 1999.

[4] S. Gong, S. J. McKenna, and A. Psarrou, Dynamic Vision, Imperial College Press, London, 2000.

[5] R. Chellappa, C. Wilson, and S. Sirohey, "Human and Machine Recognition of Faces: A Survey," Proceedings of the IEEE, vol. 83, pp. 705-740, 1995.

[6] G. A. Carpenter, M. N. Gjaja, S. Gopal, and C. E. Woodcock, "ART Neural Networks for Remote Sensing: Vegetation Classification from LANDSAT TM and Terrain Data," IEEE Transactions on Geoscience and Remote Sensing, vol. 35, no. 2, 1997, pp. 308-325.

[7] G. A. Carpenter, S. Grossberg, N. Markuzon, J. Reynolds, and D. B. Rosen, "Fuzzy ARTMAP: A Neural Network Architecture for Incremental Supervised Learning of Analog Multidimensional Maps," IEEE Transactions on Neural Networks, vol. 3, no. 5, 1992, pp. 698-713.

[8] N. Kopco, P. Sincak, and H. Veregin, "Extended Methods for Classification of Remotely Sensed Images Based on ARTMAP Neural Networks," Computational Intelligence: Theory and Applications (B. Reusch, Ed.), Springer, Berlin-New York, 1999, pp. 206-219.

[9] V. Neagoe, "Concurrent Self-Organizing Maps for Automatic Face Recognition," Proceedings of the 29th International Conference of the Romanian Technical Military Academy, published by the Technical Military Academy, Bucharest, Romania, November 15-16, 2001, Section (Communications), pp. 35-40.

[10] V. Neagoe and I. Fratila, "A Neural Segmentation of Multispectral Satellite Images," Computational Intelligence: Theory and Applications (B. Reusch, Ed.), Springer, Berlin-New York, 1999, pp. 334-341.

[11] V. Neagoe, "A Circular Kohonen Network for Image Vector Quantization," Parallel Computing: State-of-the-Art and Perspectives (E. H. D'Hollander, G. R. Joubert, F. J. Peters, and D. Trystram, Eds.), Vol. 11, Elsevier, Amsterdam-New York, 1996, pp. 677-680.

[12] V. Neagoe and O. Stanasila, Recunoasterea formelor si retele neurale - algoritmi fundamentali (Pattern Recognition and Neural Networks - Fundamental Algorithms), Ed. Matrix Rom, Bucharest, 1999.