Active Sonar Target Classification Using Classifier Ensembles


International Journal of Engineering Research and Technology
ISSN 0974-3154, Volume 11, Number 12 (2018), pp. 2125-2133
International Research Publication House, http://www.irphouse.com

Active Sonar Target Classification Using Classifier Ensembles

Jongwon Seok
Department of Information and Communication, Changwon National University, Korea

ABSTRACT
In this paper, classifier ensemble methods for active sonar target classification are presented with the aim of improving classification performance. Bagging, random selection samples, the random subspace method, and rotation forest are selected as the ensemble methods. Using features extracted from synthesized sonar returns, four different targets are classified with the various classifier ensembles. The experiments carried out in this study illustrate the effectiveness of the ensemble methods compared with a single-classifier scheme.

Keywords: active sonar, classification, fractional Fourier transform, classifier ensemble, backpropagation neural network

I. INTRODUCTION
The problem of underwater target detection and classification has attracted substantial attention from researchers for both military and non-military purposes. The task is complicated by widely varying environmental conditions. A range of pattern recognition approaches to active sonar signals has been studied, but many problems remain open. Most previous research focused on extracting features from the returned sonar signal in the time and frequency domains to increase the classification performance of classifiers such as hidden Markov models (HMM), support vector machines (SVM), and neural networks. In addition, since real data are difficult to collect, most studies have relied on experimentally generated data, such as sonar returns from submerged elastic cylindrical targets in a water tank or lake [1]-[3]. As an alternative, sonar signals synthesized for given target conditions can be used [4]-[6]. In [5], target classification of synthesized active sonar signals using matching pursuit and a multi-aspect hidden Markov model was introduced.

The fractional Fourier transform (FrFT) was applied to synthesized sonar returns in [6] to extract, in the FrFT domain, the shape variation that depends on the aspect of the target.

In this paper, we study classifier ensemble methods for active sonar target classification with the aim of improving classification performance. It is well known that a more reliable mapping can be obtained by combining the outputs of multiple classifiers, and many experimental studies in this field show that combining the results of multiple classifiers reduces the generalization error [7]-[9]. In general, classifier ensembles are most useful for classifiers with relatively large variance, such as decision trees and neural networks. In this study, bagging [10], random selection samples [9], the random subspace method [11], and rotation forest [12] are selected as the ensemble methods. The backpropagation neural network (BPNN), which has been employed efficiently as a pattern classifier in numerous applications, serves as the base classifier. With the FrFT-based features extracted from the synthesized sonar returns as in [6], four different targets are classified using the various classifier ensemble methods.

II. CLASSIFIER ENSEMBLE METHODS
To improve classification performance, many classifier ensemble techniques have been introduced, such as bagging, random selection samples, the random subspace method, and rotation forest. They all resample or modify the training set, train a classifier on each resampled or modified set, and combine the classification results into a final decision by voting, either simple or weighted majority.

Bagging (bootstrap aggregating) [10] is the most popular ensemble method. From the original training set Z = (Z_1, Z_2, ..., Z_n) it draws subsets Z^m = (Z_1^m, Z_2^m, ..., Z_n^m), called bootstrap replicas, by random sampling with replacement, where m is the index of the subset. The subsets created from the original training set generally overlap, and each classifier is trained on one bootstrap replica.

Random selection samples (RSS) [9] is a variant of bagging that uses nearly the same resampling method, differing only in the size of the resampled training set. Bagging takes samples randomly selected with ratio R from the original training data plus additional samples with ratio 1 - R drawn from the already-selected samples, so the resampled training set has the same size as the original. RSS, on the other hand, uses only the samples randomly selected with ratio R.

The random subspace method (RSM) [11] is also similar to bagging from a resampling point of view, but the resampling is performed in feature space: RSM selects an r-dimensional random subspace of the original p-dimensional feature space Z, where r < p. This method suits large, redundant feature sets, since it mitigates the curse of dimensionality. Fig. 1 illustrates the feature-space resampling of RSM.

[Fig. 1 Resampling in feature space in RSM: the p-dimensional original feature set is reduced to an r-dimensional random subspace.]

Rotation forest (RF) [12] is a more recently proposed ensemble method based on principal component analysis (PCA), which transforms the training data by a simple rotation of the coordinate axes while preserving all information. In RF, the feature space is split randomly into several subsets, PCA is applied to each subset separately, and a new set of linearly extracted features is constructed by pooling all principal components. The data are then transformed linearly into the new feature space. A minimal sketch of these four resampling and transformation schemes is given below.
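The following numpy sketch illustrates, under our own simplifications, how each of the four schemes produces a per-member training set or transform. All function names and defaults are ours, PCA is computed via SVD, and the full rotation forest of [12] additionally bootstraps the samples within each feature subset; this is an illustration, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)  # fixed seed so the resampling is reproducible

def bagging_resample(X, y):
    """Bagging: a bootstrap replica of n samples drawn with replacement."""
    n = len(X)
    idx = rng.integers(0, n, size=n)
    return X[idx], y[idx]

def rss_resample(X, y, R=0.75):
    """Random selection samples: a fraction R of the data, drawn without replacement."""
    n = len(X)
    idx = rng.choice(n, size=int(R * n), replace=False)
    return X[idx], y[idx]

def rsm_subspace(X, r=75):
    """Random subspace method: keep r of the p features for this ensemble member."""
    p = X.shape[1]
    feats = rng.choice(p, size=r, replace=False)
    return X[:, feats], feats  # `feats` must be reused on the test data

def rotation_matrix(X, n_subsets=4):
    """Rotation forest (simplified): split the features into random subsets,
    run PCA on each subset, and pool the principal axes into one
    block-diagonal rotation matrix."""
    p = X.shape[1]
    R = np.zeros((p, p))
    for block in np.array_split(rng.permutation(p), n_subsets):
        Xb = X[:, block] - X[:, block].mean(axis=0)        # center the subset
        _, _, Vt = np.linalg.svd(Xb, full_matrices=False)  # PCA via SVD
        R[np.ix_(block, block)] = Vt.T                     # assumes n_samples >= len(block)
    return R  # training and test features are both transformed as X @ R
```

Each ensemble member gets its own resampled set (or its own feature subset or rotation), is trained independently, and the member outputs are later fused by majority voting, as described in Section IV.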

SYNTHESIS OF ACTIVE SONAR RETURNS
For the synthesis of active sonar returns, an underwater environment with direct reflections from the target and indirect reflections from the sea surface and sea bottom was assumed. The water depth was set to 300 m. The source and receiver were located at the same position in the water (monostatic mode), and the target was placed 50 m below the sea surface. A sound velocity profile was adopted to calculate the sound velocity at each depth. Four targets with different shapes were modeled using a 3D highlight model, and the active sonar returns from each target at each aspect were synthesized by a ray tracing method that accounts for the sound velocity profile [13].

Fig. 2 shows the highlight models of the four targets designed for the synthesis of sonar returns. All targets have several highlights lying mainly along a horizontal line, and each highlight is assumed to reflect the acoustic wave in all directions, so the complete echo can be regarded as the sum of the individual echoes from these equivalent scattering points; the underwater target is thus characterized by the highlights distributed over its spatial structure. The acoustic wave propagates while being attenuated and refracted by the varying sound velocity. The synthesized signal is obtained by summing the traced signals from each highlight at the receiver position; a toy version of this summation is sketched below.

[Fig. 2 3D highlight models of the targets for synthesis of active sonar signals: (a) Type 1, (b) Type 2, (c) Type 3, (d) Type 4.]
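As a rough illustration of the highlight summation only, the sketch below adds delayed, scaled copies of the transmitted LFM pulse, one per highlight. It deliberately omits the ray tracing, attenuation, and sound velocity profile that the paper's synthesis includes, and the highlight ranges and amplitudes are invented for the example.

```python
import numpy as np

def lfm_pulse(fs, T, f0, B):
    """LFM pulse with center frequency f0, bandwidth B, duration T."""
    t = np.arange(0, T, 1.0 / fs)
    k = B / T  # chirp rate in Hz/s
    return np.cos(2 * np.pi * ((f0 - B / 2) * t + 0.5 * k * t ** 2))

def synthesize_return(fs, pulse, ranges_m, amps, c=1500.0, duration=0.2):
    """Sum one delayed, scaled pulse copy per highlight (monostatic two-way delay)."""
    out = np.zeros(int(duration * fs))
    for r, a in zip(ranges_m, amps):
        d = int(round(2 * r / c * fs))  # two-way travel time in samples
        n = min(len(pulse), len(out) - d)
        if n > 0:
            out[d:d + n] += a * pulse[:n]
    return out

fs = 31_250.0  # sampling frequency used in the paper
pulse = lfm_pulse(fs, T=0.05, f0=7_000.0, B=400.0)
# three hypothetical highlights at slightly different ranges and strengths
echo = synthesize_return(fs, pulse, ranges_m=[100.0, 101.5, 103.0], amps=[1.0, 0.6, 0.8])
```

The closely spaced highlights yield time-overlapped LFM echoes, which is exactly the situation the FrFT features of the next section are designed to resolve.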

FEATURE EXTRACTION BASED ON FrFT
The FrFT is a generalization of the conventional Fourier transform with a long history in mathematical physics and digital signal processing [14]. The FrFT depends on an order parameter α and can be interpreted as a rotation by an angle απ/2 in the time-frequency plane: for α = 0 it reduces to the identity operator, and for α = 1 it becomes the conventional Fourier transform. The α-th order FrFT of a signal s(t) is

F_\alpha(u) = \sqrt{1 - i\cot(\alpha\pi/2)} \int \exp\!\left\{ i\pi \left[ \cot(\alpha\pi/2)\,u^2 - 2\csc(\alpha\pi/2)\,uv + \cot(\alpha\pi/2)\,v^2 \right] \right\} s(v)\,dv, \quad (1)

where u and v define the axes of the fractional domain. The potential of the FrFT lies in its ability to process chirp-like signals better than the conventional Fourier transform: if the frequency of a signal varies with time, as in an LFM signal, there is an optimal transform order α_opt at which the signal is maximally compressed, i.e., has the smallest bandwidth. The optimum transform order is defined as

\alpha_{opt} = \frac{2}{\pi}\tan^{-1}\!\left(\frac{f_s^2}{2aN}\right), \quad (2)

where 2a is the chirp rate, f_s is the sampling frequency, and N is the total number of time samples. A small numeric check of Eq. (2) follows.
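The snippet below simply evaluates Eq. (2) for the pulse parameters given in Section IV (f_s = 31.25 kHz, a 50 ms LFM with 400 Hz bandwidth). Taking the chirp rate as 2a = B/T and the window length as N = f_s T are our assumptions; the paper does not state how N was chosen.

```python
import math

fs = 31_250.0        # sampling frequency [Hz]
T = 0.05             # LFM pulse duration [s]
B = 400.0            # LFM bandwidth [Hz]
chirp_rate = B / T   # 2a in Eq. (2): 8000 Hz/s (assumed)
N = int(fs * T)      # number of time samples in the window (~1562, assumed)

alpha_opt = (2 / math.pi) * math.atan(fs ** 2 / (N * chirp_rate))
print(alpha_opt)     # ~0.992, i.e. close to the ordinary Fourier transform
```

With such a slow chirp relative to the sampling rate, the optimal order sits near 1, so the optimal fractional domain is only slightly rotated away from the ordinary frequency axis.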

An active sonar return is obtained by summing multiple time-overlapped LFM signals reflected from the highlight points of a target. The FrFT of order α_opt was therefore computed on the signal received from the highlight model: applying the FrFT with the optimal order compresses the time-overlapped LFM components maximally in the FrFT domain, where they appear as multiple peaks. The feature vector is obtained by dividing the FrFT domain into 100 equal bands and computing the energy of each band. This produces 100 FrFT-based features that adequately reflect the shape-dependent variation of the echoes and possess discriminative capability (see the sketch after this section).

IV. EXPERIMENTS AND DISCUSSION
In the synthesis of the active sonar signals, the sampling frequency and LFM pulse duration were set to 31.25 kHz and 50 ms, respectively, and the center frequency and bandwidth of the LFM signal were 7 kHz and 400 Hz, respectively. The signals were synthesized by summing the signals traced from each highlight model as a function of the aspect angle of the target. In total, 1440 active sonar returns were generated from the four highlight models by varying the aspect from 0° to 359° in 1° increments.

Classification tests were carried out with the four ensemble methods (bagging, RSS, RSM, and RF) and their performances compared. In RSS, the random selection ratio R was set to 0.75. The feature dimensions r and p in RSM were set to 75 and 100, corresponding to the same selection ratio. Each ensemble contained 31 classifiers. All networks were trained by gradient-based least-squares learning with the back-propagation algorithm. Each neural network had a 100-24-4 structure: 100 input neurons corresponding to the 100 FrFT-based features, 24 hidden-layer neurons, and four outputs. Training was stopped either when the average error fell to 0.001 or after a maximum of 10,000 epochs, to avoid exhaustive learning of the training data. Of the 1440 samples, 360 were used to train the neural networks and the remaining 1080 to test classification performance. The final decision was made by a simple majority voting scheme, which chooses the class selected by at least one more than half of the classifiers (sketched below). Fig. 3 shows the overall structure of the classifier ensemble used in the experiment.

Table 1 lists the recognition rates of the 31 BPNN classifiers for the four ensemble methods and without an ensemble. Table 2 compares the average recognition rates (R_ave) of the 31 BPNN classifiers with the final majority-voting results (R_mv) for the four ensemble methods.
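A minimal sketch of the two steps just described: the 100-band energy features computed from an FrFT-domain magnitude (computing the FrFT itself is outside this sketch), and majority voting over the 31 member predictions. Shapes and names are illustrative; note also that the paper's rule requires strictly more than half of the votes, whereas the argmax below falls back to plurality when no class reaches that threshold.

```python
import numpy as np

def band_energy_features(frft_mag, n_bands=100):
    """Split the FrFT-domain magnitude into equal bands; each feature is one band's energy."""
    bands = np.array_split(np.asarray(frft_mag) ** 2, n_bands)
    return np.array([b.sum() for b in bands])

def majority_vote(member_preds, n_classes=4):
    """member_preds: (n_members, n_samples) integer labels.
    Returns, per sample, the class chosen by the largest number of members."""
    member_preds = np.asarray(member_preds)
    votes = np.stack([(member_preds == c).sum(axis=0) for c in range(n_classes)])
    return votes.argmax(axis=0)

# e.g. 31 members each predicting 1080 test samples:
# final = majority_vote(preds)   # preds.shape == (31, 1080)
```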

Table 1 Recognition rates of the 31 BPNN classifiers [%]

Index   Without Ensemble   Bagging   RSS     RSM     RF
 1           80.15           81.94    85.56   89.79   86.04
 2           85.75           89.86    82.92   85.63   88.54
 3           83.43           87.71    84.51   83.47   84.38
 4           88.30           87.29    85.42   84.44   88.40
 5           79.88           86.18    83.82   80.49   83.75
 6           81.25           82.22    85.07   85.76   87.99
 7           87.67           86.11    85.00   85.28   90.42
 8           85.35           87.15    86.25   82.92   88.06
 9           86.60           86.60    87.36   83.54   87.15
10           83.61           87.29    62.12   84.44   89.86
11           80.12           87.92    87.64   89.03   84.10
12           87.56           86.46    84.03   88.61   86.81
13           82.15           84.44    87.15   83.61   84.72
14           88.54           87.64    84.86   85.83   85.49
15           77.45           81.11    83.89   87.15   84.58
16           79.67           85.07    84.10   86.53   87.22
17           87.36           83.47    84.17   86.11   88.33
18           80.25           87.36    83.06   86.74   85.21
19           88.68           85.76    83.61   87.43   88.06
20           87.97           85.97    82.71   85.14   87.15
21           75.62           85.28    60.80   84.44   86.67
22           77.78           87.78    87.01   85.21   86.88
23           88.52           83.82    82.78   87.22   86.25
24           83.41           84.31    86.81   84.17   87.43
25           84.15           87.15    85.90   86.25   82.35
26           84.46           86.46    84.86   87.64   89.31
27           86.55           87.92    87.08   86.81   87.01
28           73.57           85.21    85.21   87.29   85.97
29           85.55           86.25    85.28   83.06   86.88
30           83.29           84.72    85.07   85.63   87.64
31           82.31           83.89    82.50   84.72   86.53

Table 2 Comparison of the average recognition rates of the 31 BPNN classifiers (R_ave) and the final majority-voting results (R_mv) [%]

         Without Ensemble   Bagging   RSS     RSM     RF
R_ave         83.45           85.82    83.44   85.63   86.75
R_mv          85.25           88.13    87.43   87.99   88.96

[Fig. 3 Structure of the classifier ensemble for the experiment: the input data set is resampled into training sets 1 through 31, one classifier is trained on each, and the outputs are fused by majority voting into the final decision.]

V. CONCLUSION
This paper has described classifier ensemble methods for active sonar target classification aimed at improving classification performance. With features extracted from synthesized sonar returns, four different targets were classified using various classifier ensemble methods: bagging, random selection samples, the random subspace method, and rotation forest.

Using the four ensemble methods, each based on 31 BPNN classifiers, the classification tests were carried out and the performances compared. Highly reliable classification results were obtained by combining the individual classifiers of each ensemble through majority voting. The experiments carried out in this study illustrate the effectiveness of the ensemble methods compared with a single-classifier scheme.

Acknowledgements
This research was financially supported by Changwon National University in 2017-2018.

REFERENCES
[1] A. Pezeshki, M. R. Azimi-Sadjadi, and L. L. Scharf, "Undersea Target Classification Using Canonical Correlation Analysis," IEEE Journal of Oceanic Engineering, vol. 32, no. 4, pp. 948-955, Oct. 2007.
[2] M. R. Robinson, M. R. Azimi-Sadjadi, and J. Salazar, "Multi-Aspect Target Discrimination Using Hidden Markov Models and Neural Networks," IEEE Transactions on Neural Networks, vol. 16, no. 2, pp. 447-459, Mar. 2005.
[3] M. R. Azimi-Sadjadi, D. Yao, A. A. Jamshidi, and G. J. Dobeck, "Underwater Target Classification in Changing Environments Using an Adaptive Feature Mapping," IEEE Transactions on Neural Networks, vol. 13, no. 5, pp. 1099-1111, Sep. 2002.
[4] B. Kim, H. Lee, and M. Park, "A Study on Highlight Distribution for Underwater Simulated Target," Proceedings of ISIE 2001, vol. 3, pp. 1988-1992, 2001.
[5] T. Kim and K. Bae, "HMM-Based Underwater Target Classification with Synthesized Active Sonar Signals," IEICE Trans. Fundamentals, vol. E94-A, no. 10, pp. 2039-2042, Oct. 2011.
[6] J. Seok and K. Bae, "Target Classification Using Features Based on Fractional Fourier Transform," IEICE Trans. Inf. & Syst., vol. E97-D, no. 9, pp. 2518-2521, Sep. 2014.
[7] J. R. Quinlan, "Bagging, Boosting, and C4.5," in Proceedings of the Thirteenth National Conference on Artificial Intelligence, vol. 1, pp. 725-730, 1996.
[8] E. Bauer and R. Kohavi, "An Empirical Comparison of Voting Classification Algorithms: Bagging, Boosting, and Variants," Machine Learning, vol. 36, pp. 105-139, Jul. 1999.
[9] D. Opitz and R. Maclin, "Popular Ensemble Methods: An Empirical Study," Journal of Artificial Intelligence Research, vol. 11, pp. 169-198, 1999.
[10] L. Breiman, "Bagging Predictors," Machine Learning, vol. 24, no. 2, pp. 123-140, 1996.

[11] T. K. Ho, "The Random Subspace Method for Constructing Decision Forests," IEEE Trans. on Pattern Analysis and Machine Intelligence, vol. 20, no. 8, pp. 832-844, 1998.
[12] J. J. Rodriguez, L. I. Kuncheva, and C. J. Alonso, "Rotation Forest: A New Classifier Ensemble Method," IEEE Trans. on Pattern Analysis and Machine Intelligence, vol. 28, no. 10, pp. 1619-1630, 2006.
[13] M. A. Ainslie, Principles of Sonar Performance Modelling, Springer-Verlag, Berlin, 2010.
[14] V. Namias, "The Fractional Order Fourier Transform and Its Application to Quantum Mechanics," IMA J. Appl. Math., vol. 25, no. 3, pp. 241-265, Mar. 1980.
