Robustness of the Quadratic Discriminant Function to correlated and uncorrelated normal training samples


DOI 10.1186/s40064-016-1718-3
RESEARCH, Open Access

Atinuke Adebanji 1,2, Michael Asamoah Boaheng 2* and Olivia Osei Tutu 1

*Correspondence: asboaheng@yahoo.com. 2 Institute of Research, Innovation and Development (IRID), Kumasi Polytechnic, Box 854, Kumasi, Ghana. Full list of author information is available at the end of the article.

Abstract

This study investigates the asymptotic performance of the Quadratic Discriminant Function (QDF) under correlated and uncorrelated normal training samples. It specifically examines the effect of correlation and its absence on the classification accuracy of the QDF, considering different sample size ratios, numbers of variables and varying group centroid separators (δ = 1, 2, 3, 4, 5), using data simulated from three populations (π_i, i = 1, 2, 3). The three populations differ with respect to their mean vectors and covariance matrices. The results show that the correlated normal distribution exhibits a high coefficient of variation as δ increases. The QDF performed better when the training samples were correlated than when they came from the uncorrelated normal distribution, and its misclassification error rates fell as the group centroid separator increased with non-increasing sample size under correlated training samples.

Keywords: QDF, Correlated normal, Uncorrelated normal, Group centroid

© 2016 Adebanji et al. This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

Background

Discriminant analysis (DA) as a topic in multivariate statistical analysis has attracted much research interest over the years, with the evaluation of discriminant functions when the covariance matrices are unequal and of moderate size well explained by Wahl and Kronmal (1977). The linear discriminant function (LDF) is commonly used by researchers because of its simplicity of form and concept. In spite of theoretical evidence supporting the use of the Quadratic Discriminant Function (QDF) when the covariance matrices are heterogeneous, its actual employment has been sporadic, because there are unanswered questions regarding its performance in the practical situation where the discriminant function must be constructed from training samples that do not satisfy the classical assumptions of the model.

The pioneering work on quadratic discrimination was by Smith (1947), using Fisher's iris data. He provided a full expression for the QDF, and his results showed the QDF outperforming the LDF when the homogeneity of the variance-covariance structure was violated. Marks and Dunn (1974) approached the problem of discrimination by comparing the asymptotic and small-sample performance of the QDF, the best linear discriminant function and Fisher's LDF for both proportional and non-proportional covariance differences under the assumption of normality and unequal covariance matrices.

Two populations were used, with sample sizes chosen from 10 to 100 and with 2 and 10 variables, and Monte Carlo simulation was employed. Their results indicated that for small samples the QDF performed worse than the LDF when the covariances were nearly equal and the dimension was large (i.e., the LDF was satisfactory when the covariance matrices were not too different).

Lawoko (1988) studied the performance of the LDF and QDF under the assumption of correlated training samples, the aim being to allocate an object to one of two groups on the basis of measurements on that object. He found that the discriminant functions formed under the correlated model did not perform better than the statistics W and Z formed under the assumption of independent training observations. The asymptotic expected error rates for W under the model (W_m) and for W were equal when the training observations followed an autoregressive process, but numerical evaluations of the asymptotic expansions showed a slight improvement in the overall error rate when W_m was used instead of W. He concluded that the efficiency of the discriminant analysis estimator is generally lowered by positively correlated training observations.

Mardia et al. (1995) noted that it might be thought that a linear combination of two variables would provide a better discriminator when they are correlated than when they are uncorrelated; however, this is not necessarily so. To show this they considered two bivariate populations π_1 and π_2, with π_1 distributed as N_2(0, Σ) and π_2 as N_2(μ, Σ), where μ = (μ_1, μ_2)' and Σ is known. They indicated that discrimination is improved unless ρ lies between zero and 2f/(1 + f^2), so a small value of ρ can actually harm discrimination.

Adebanji and Nokoe (2004) considered evaluating the quadratic classifier, restricting their attention to two multivariate normal populations of independent variables. In addition to some theoretical results with known parameters, they conducted a Monte Carlo simulation to investigate the error rates. The total error rate increased with the re-substitution estimator for all values of K, while declining across K; the cross-validation estimator showed a steady decline across all values of K and recorded substantially lower error rate estimates than the re-substitution estimator for K = 4 and K = 8.

Kakaï and Pelz (2010) studied the asymptotic error rates of linear and quadratic rules in a Monte Carlo study of 2-, 3- and 5-group discriminant analysis. Hyodo and Kubokawa (2014) studied a variable selection criterion for the linear discriminant rule and its optimality in high-dimensional data, developing a new procedure for selecting the true variable set.

A great deal of research has been carried out since Fisher's (1936) original work on discriminant analysis, with several other researchers tackling similar problems. Estimation methods have been proposed and some sampling properties derived; however, there has been little investigation of the large-sample properties of these functions. Moreover, although a considerable number of studies have examined discriminant analysis, little has been done on the performance of the QDF under correlated and uncorrelated data with varying sample size ratios, different numbers of variables and different centroid separators for three populations.

In this study we therefore investigate the performance of classification functions (specifically, Quadratic Discriminant Functions) when the covariance matrices are heterogeneous and the data of interest are correlated, with unequal sample size ratios, different numbers of variables and varying values of the group centroid separator (δ).

Methods

Simulation design

To evaluate the performance of the QDF for correlated and uncorrelated training samples, we conducted a Monte Carlo study in which multivariate normal correlated random data were generated for three populations with mean vectors μ_1 = (0, ..., 0)', μ_2 = (0, ..., δ)' and μ_3 = (0, ..., 2δ)' respectively. The covariance matrices Σ_i (i = 1, 2, 3) have off-diagonal entries σ_kl = 0.7 for k ≠ l in all groups, and diagonal entries σ_k^2 = i for population i. The covariance matrices were transformed to be uncorrelated (diagonal) in order to generate the uncorrelated data. The QDF was then applied in each case, and the leave-one-out method was used to estimate the proportion of observations misclassified. The factors considered in this study were:

1. The mean vector separator, set at δ = 1 to 5, where δ is determined by the difference between the mean vectors.
2. The sample sizes: 14 values of n_1 set at 30, 60, 100, 150, 200, 250, 300, 400, 500, 600, 700, 800, 1000 and 2000, with n_2 and n_3 determined by the sample size ratios 1:1:1, 1:2:2 and 1:2:3; these ratios also determine the prior probabilities considered.
3. The number of variables, set at 4, 6 and 8, following Murray (1977), who considered these values in the selection of variables in discriminant analysis.
4. The size of population 1 (n_1) is fixed throughout the study, and the sizes n_2 and n_3 of populations 2 and 3 are determined by the sample size ratio under consideration.

Subroutine for QDF

A series of subroutines was written in MATLAB to perform the simulation and discrimination procedures for the QDF; a sketch of the data-generation step is given below.
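As an illustration of the design just described, the following minimal Python sketch generates the three training populations (the study's own subroutines were written in MATLAB; the function name, defaults and seed here are our assumptions, not the published code):

```python
import numpy as np

def make_populations(p=4, delta=1.0, n=(30, 30, 30), correlated=True, seed=0):
    """Draw training samples from the three populations of the study:
    mu_1 = (0,...,0)', mu_2 = (0,...,delta)', mu_3 = (0,...,2*delta)';
    Sigma_i has diagonal entries sigma_k^2 = i and off-diagonal entries
    sigma_kl = 0.7 (set to 0 in the uncorrelated case)."""
    rng = np.random.default_rng(seed)
    X_parts, y_parts = [], []
    for i in (1, 2, 3):
        mu = np.zeros(p)
        mu[-1] = (i - 1) * delta               # last coordinate carries the separation
        Sigma = np.full((p, p), 0.7 if correlated else 0.0)
        np.fill_diagonal(Sigma, float(i))      # variance i for population pi_i
        X_parts.append(rng.multivariate_normal(mu, Sigma, size=n[i - 1]))
        y_parts.append(np.full(n[i - 1], i))
    return np.vstack(X_parts), np.concatenate(y_parts)
```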

Classification into several populations

Generalization of the classification procedure from two to g ≥ 2 discriminating groups is straightforward. However, not much is known about the properties of the corresponding sample classification functions, and in particular their error rates have not been fully investigated. We therefore focus on minimum ECM classification with equal misclassification costs and the minimum TPM rule for multivariate normal populations with unequal covariance matrices (quadratic discriminant analysis).

Minimum ECM classification with equal misclassification costs

Allocate $\mathbf{x}_0$ to $\pi_k$ if

$$ p_k f_k(\mathbf{x}) > p_i f_i(\mathbf{x}) \quad \text{for all } i \neq k \tag{1} $$

or, equivalently,

$$ \ln p_k f_k(\mathbf{x}) > \ln p_i f_i(\mathbf{x}) \quad \text{for all } i \neq k \tag{2} $$

Note that the classification rule in Eq. (1) is identical to the one that maximizes the posterior probability $P(\pi_k \mid \mathbf{x}) = P(\mathbf{x}$ comes from $\pi_k$ given that $\mathbf{x}$ is observed$)$, where

$$ P(\pi_k \mid \mathbf{x}) = \frac{p_k f_k(\mathbf{x})}{\sum_{i=1}^{g} p_i f_i(\mathbf{x})} = \frac{(\text{prior}) \times (\text{likelihood})}{\sum [(\text{prior}) \times (\text{likelihood})]} \tag{3} $$

Therefore one should keep in mind that, in general, the minimum ECM rule requires the prior probabilities, the misclassification costs and the density functions before it can be implemented.

Minimum TPM rule for unequal-covariance normal populations

Suppose that the $\pi_i$ are multivariate normal populations with different mean vectors $\boldsymbol{\mu}_i$ and covariance matrices $\Sigma_i$ $(i = 1, \ldots, g)$, so that the densities are

$$ f_i(\mathbf{x}) = (2\pi)^{-p/2} |\Sigma_i|^{-1/2} \exp\left\{ -\tfrac{1}{2} (\mathbf{x} - \boldsymbol{\mu}_i)' \Sigma_i^{-1} (\mathbf{x} - \boldsymbol{\mu}_i) \right\} $$

With costs $c(i \mid i) = 0$ and $c(k \mid i) = 1$ for $k \neq i$, the rule allocates $\mathbf{x}$ to the population $\pi_k$ for which

$$ \ln p_k f_k(\mathbf{x}) = \ln p_k - \frac{p}{2} \ln(2\pi) - \frac{1}{2} \ln |\Sigma_k| - \frac{1}{2} (\mathbf{x} - \boldsymbol{\mu}_k)' \Sigma_k^{-1} (\mathbf{x} - \boldsymbol{\mu}_k) = \max_i \ln\{ p_i f_i(\mathbf{x}) \} \tag{4} $$

The constant $(p/2)\ln(2\pi)$ can be ignored in Eq. (4), since it is the same for every population. The quadratic discriminant score for the $i$th population is therefore defined as

$$ d_i^Q(\mathbf{x}) = -\frac{1}{2} \ln |\Sigma_i| - \frac{1}{2} (\mathbf{x} - \boldsymbol{\mu}_i)' \Sigma_i^{-1} (\mathbf{x} - \boldsymbol{\mu}_i) + \ln p_i \tag{5} $$

The quadratic score $d_i^Q(\mathbf{x})$ is composed of contributions from the generalized variance $|\Sigma_i|$, the prior probability $p_i$, and the square of the distance from $\mathbf{x}$ to the population mean $\boldsymbol{\mu}_i$. Allocate $\mathbf{x}$ to $\pi_k$ if $d_k^Q(\mathbf{x})$ is the largest of $d_1^Q(\mathbf{x}), d_2^Q(\mathbf{x}), \ldots, d_g^Q(\mathbf{x})$. (6)

In practice, the $\boldsymbol{\mu}_i$ and $\Sigma_i$ are unknown, but a training set of correctly classified observations is often available for the construction of estimates. The relevant sample quantities for population $\pi_i$ are the sample mean vector $\bar{\mathbf{x}}_i$, the sample covariance matrix $S_i$ and the sample size $n_i$. The estimate of the quadratic discriminant score (5) is then

$$ \hat{d}_i^Q(\mathbf{x}) = -\frac{1}{2} \ln |S_i| - \frac{1}{2} (\mathbf{x} - \bar{\mathbf{x}}_i)' S_i^{-1} (\mathbf{x} - \bar{\mathbf{x}}_i) + \ln p_i, \quad i = 1, 2, \ldots, g \tag{7} $$
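To make Eqs. (5)-(7) concrete, here is a hedged Python sketch of the plug-in quadratic scores and the leave-one-out misclassification estimate described in the simulation design (names and structure are illustrative, not the authors' MATLAB routines):

```python
import numpy as np

def qdf_scores(x, means, covs, priors):
    """Plug-in quadratic discriminant scores, Eq. (7), for one observation x."""
    scores = []
    for m, S, p in zip(means, covs, priors):
        diff = x - m
        _, logdet = np.linalg.slogdet(S)         # numerically stable log|S_i|
        maha = diff @ np.linalg.solve(S, diff)   # (x - xbar_i)' S_i^{-1} (x - xbar_i)
        scores.append(-0.5 * logdet - 0.5 * maha + np.log(p))
    return np.array(scores)

def loo_error_rate(X, y):
    """Leave-one-out estimate of the QDF misclassification rate."""
    labels = np.unique(y)
    errors = 0
    for j in range(len(X)):
        mask = np.ones(len(X), dtype=bool)
        mask[j] = False                          # hold out observation j
        Xt, yt = X[mask], y[mask]
        means = [Xt[yt == g].mean(axis=0) for g in labels]
        covs = [np.cov(Xt[yt == g], rowvar=False) for g in labels]
        priors = [np.mean(yt == g) for g in labels]
        pred = labels[np.argmax(qdf_scores(X[j], means, covs, priors))]
        errors += (pred != y[j])
    return errors / len(X)
```

Combined with `make_populations` above, a call such as `loo_error_rate(*make_populations(p=4, delta=3))` yields an error-rate estimate of the kind summarized in Tables 1 and 2, under our assumptions about the design.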

The quadratic classifier ($\Sigma_1 \neq \Sigma_2$)

Suppose that the joint densities of $\mathbf{X} = [X_1, X_2, \ldots, X_p]'$ for populations $\pi_1$ and $\pi_2$ are given by

$$ f_i(\mathbf{x}) = \frac{1}{(2\pi)^{p/2} |\Sigma_i|^{1/2}} \exp\left[ -\tfrac{1}{2} (\mathbf{x} - \boldsymbol{\mu}_i)' \Sigma_i^{-1} (\mathbf{x} - \boldsymbol{\mu}_i) \right] \tag{8} $$

Here the covariance matrices, as well as the mean vectors, differ between the two populations. The regions of minimum expected cost of misclassification (ECM) and minimum total probability of misclassification (TPM) depend on the ratio of the densities, $f_1(\mathbf{x})/f_2(\mathbf{x})$, or equivalently on the natural logarithm of the density ratio, $\ln[f_1(\mathbf{x})/f_2(\mathbf{x})] = \ln f_1(\mathbf{x}) - \ln f_2(\mathbf{x})$. When the multivariate normal densities have different covariance structures, the terms in the density ratio involving $|\Sigma_i|^{1/2}$ do not cancel as they do when the covariance matrices are equal, and the quadratic forms in the exponents of $f_i(\mathbf{x})$ do not combine. Substituting multivariate normal densities with different covariance matrices into Eq. (1), taking natural logarithms and simplifying therefore yields a quadratic function of $\mathbf{x}$: allocate $\mathbf{x}$ to $\pi_1$ if

$$ -\frac{1}{2} \mathbf{x}' (\Sigma_1^{-1} - \Sigma_2^{-1}) \mathbf{x} + (\boldsymbol{\mu}_1' \Sigma_1^{-1} - \boldsymbol{\mu}_2' \Sigma_2^{-1}) \mathbf{x} - k \geq \ln\left[ \left( \frac{c(1 \mid 2)}{c(2 \mid 1)} \right) \left( \frac{p_2}{p_1} \right) \right], \quad \text{where } k = \frac{1}{2} \ln\left( \frac{|\Sigma_1|}{|\Sigma_2|} \right) + \frac{1}{2} \left( \boldsymbol{\mu}_1' \Sigma_1^{-1} \boldsymbol{\mu}_1 - \boldsymbol{\mu}_2' \Sigma_2^{-1} \boldsymbol{\mu}_2 \right) \tag{9} $$

and otherwise allocate $\mathbf{x}$ to $\pi_2$. This function extends readily to three-group classification, where two cut-off points are required for assigning observations to the three groups (Johnson and Wichern 2007).
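As a quick numerical check (our own illustration, with hypothetical parameter values), the two-group rule of Eq. (9) with equal costs should agree with a direct comparison of the quadratic scores of Eq. (5):

```python
import numpy as np

# Hypothetical two-population parameters, for illustration only.
mu1, mu2 = np.array([0.0, 0.0]), np.array([1.0, 2.0])
S1 = np.array([[1.0, 0.7], [0.7, 1.0]])
S2 = np.array([[2.0, 0.7], [0.7, 2.0]])
p1 = p2 = 0.5
x = np.array([0.5, 1.0])

# Constant k and left-hand side of Eq. (9).
k = (0.5 * np.log(np.linalg.det(S1) / np.linalg.det(S2))
     + 0.5 * (mu1 @ np.linalg.solve(S1, mu1) - mu2 @ np.linalg.solve(S2, mu2)))
lhs = (-0.5 * x @ (np.linalg.inv(S1) - np.linalg.inv(S2)) @ x
       + (np.linalg.solve(S1, mu1) - np.linalg.solve(S2, mu2)) @ x - k)

def d_quadratic(x, mu, S, p):
    """Quadratic discriminant score of Eq. (5)."""
    diff = x - mu
    return (-0.5 * np.log(np.linalg.det(S))
            - 0.5 * diff @ np.linalg.solve(S, diff) + np.log(p))

# With equal costs, Eq. (9) allocates x to pi_1 iff lhs >= ln(p2/p1),
# which must agree with d_1^Q(x) >= d_2^Q(x).
same = (lhs >= np.log(p2 / p1)) == (d_quadratic(x, mu1, S1, p1) >= d_quadratic(x, mu2, S2, p2))
print(same)  # True
```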

Results

This section presents the performance of the QDF when the training data are correlated and when they are uncorrelated.

Effects of sample size on QDF under correlated and uncorrelated normal distributions

The effect of sample size on the QDF for the correlated normal distribution with δ = 1 is presented in Fig. 1.

Fig. 1 Average error rates of correlated normal distribution for δ: n_1:n_2:n_3 = 1:1:1

From Fig. 1 it was observed that the average error rates for 4, 6 and 8 variables with δ = 1 were higher than those for the other values of δ, and that, among the sample size ratios used, ratio 1:1:1 gives the lowest average error rates as the sample size increases asymptotically. The results also show that n_1 = 30 gave the highest and n_1 = 2000 the lowest average error rates for 4, 6 and 8 variables. There is a rapid decrease in the average error rate from total sample sizes of 90 to 180 for sample size ratio 1:1:1 with 8 variables for all δ. The results for 4 variables were higher than those for the other numbers of variables, and δ = 5 gave the lowest average error rates as the sample size increased. It was also observed that the difference in average error rates between sample size ratios 1:1:1 and 1:2:2 was marginal for δ = 1; the difference between the ratios decreased as δ increased with the total sample size maintained, and the average error rates decreased as the number of variables increased. At δ = 5 the differences among the three sample size ratios were marginal.

From Table 1, the effects of sample size on the QDF for the various group centroid separators (δ = 1, 2, 3, 4, 5) under correlated samples indicate that, in general, as the sample size increases with increasing group centroid separator, the mean error rate decreases marginally. The standard deviations of the error rate for the correlated normal distribution reveal that, as the sample size increases, sample size ratio 1:1:1 exhibits low standard deviations for δ = 1. For a given δ, the standard deviation decreases as the number of variables increases. From δ = 2 to δ = 5, the standard deviations decrease as the sample size increases asymptotically, with a sharp decrease for sample size ratio 1:1:1.

For the uncorrelated distribution (Table 2), the average error rates were similar to those obtained for the correlated normal distribution, except that the average error rate for sample size ratio 1:1:1 decreased rapidly from total sample sizes of 90 to 180 for 8 variables at all values of δ. The average error rate decreased as the total sample size increased asymptotically, and it also fell as δ increased. The graphical representation of this result for δ = 1 is shown in Fig. 2.

The coefficients of variation generally increased exponentially and then stabilized with increasing total sample size and number of variables; δ = 1 exhibited lower variation than the remaining δs, as shown in Fig. 3. For δ = 4, the coefficients of variation for sample size ratio 1:1:1 decreased, while the remaining ratios showed no particular pattern in the 4-variable case. For the 4-variable case with δ = 5, the coefficients of variation decreased as the total sample size increased, while those for the 6- and 8-variable cases showed no particular pattern as the total sample size increased.

The coefficients of variation for the correlated normal distribution in Fig. 4 increased exponentially and then stabilized, with on average lower variation for sample size ratio 1:2:2 and higher variation for 1:2:3, as the total sample size increased asymptotically. The variation also increased as δ increased. δ = 3 gave steady coefficients of variation as the total sample size increased for 4 variables, with a slight increase followed by stabilization for 4 and 6 variables.

Table 1 Effects of sample size on QDF for correlated normal samples, based on error rates, CV and SD

Centroid   Sample size (n)   SD       CV        Mean error rate
δ = 1      90                0.0530   0.04276   0.1240
           180               0.0536   0.04656   0.1151
           300               0.0543   0.04853   0.1120
           450               0.0547   0.04903   0.1116
           750               0.0534   0.04908   0.1089
           900               0.0526   0.04857   0.1082
           1200              0.0524   0.04868   0.1077
           1500              0.0529   0.04893   0.1081
           1800              0.0526   0.04881   0.1077
           2100              0.0531   0.04913   0.1080
           6000              0.0524   0.04898   0.1069
δ = 2      90                0.0447   0.05084   0.0880
           180               0.0402   0.05215   0.0771
           300               0.0384   0.05224   0.0735
           450               0.0394   0.05439   0.0724
           750               0.0370   0.05172   0.0715
           900               0.0370   0.05217   0.0710
           1200              0.0369   0.05249   0.0703
           1500              0.0367   0.05191   0.0707
           1800              0.0372   0.05265   0.0706
           2100              0.0367   0.05222   0.0702
           6000              0.0365   0.05234   0.0698
δ = 3      90                0.0300   0.6047    0.0479
           180               0.0264   0.5983    0.0442
           300               0.0245   0.5940    0.0412
           450               0.0239   0.5907    0.0495
           750               0.0232   0.5780    0.0401
           900               0.0230   0.5763    0.0400
           1200              0.0226   0.5708    0.0396
           1500              0.0225   0.5727    0.0393
           1800              0.0223   0.5682    0.0393
           2100              0.0226   0.5728    0.0395
           6000              0.0224   0.5698    0.0393
δ = 4      90                0.0200   0.7509    0.0266
           180               0.0159   0.7195    0.0221
           300               0.0144   0.6727    0.0215
           450               0.0139   0.6732    0.0207
           750               0.0135   0.6574    0.0205
           900               0.0131   0.6455    0.0203
           1200              0.0124   0.6286    0.0198
           1500              0.0126   0.6353    0.0199
           1800              0.0128   0.6301    0.0203
           2100              0.0128   0.6382    0.0201
           6000              0.0125   0.6284    0.0200
δ = 5      90                0.0123   0.9798    0.0126
           180               0.0094   0.8717    0.0108
           300               0.0084   0.8252    0.0102
           450               0.0078   0.7838    0.0100

Table 1 (continued)

Centroid   Sample size (n)   SD       CV        Mean error rate
δ = 5      750               0.0071   0.7489    0.0095
           900               0.0069   0.7399    0.0093
           1200              0.0070   0.7158    0.0098
           1500              0.0066   0.7058    0.0093
           1800              0.0066   0.7016    0.0093
           2100              0.0065   0.7002    0.0093
           6000              0.0064   0.6843    0.0094

There was a decline in the coefficients of variation for δ = 4 as the total sample size increased asymptotically in the 4-variable case. The coefficients of variation increased from total sample sizes of 150 to 500 and of 180 to 360 for sample size ratios 1:2:2 and 1:2:3 respectively, for 6 and 8 variables, and then decreased as the total sample size increased asymptotically. From Fig. 4, for δ = 1 there was a sharp decrease in the coefficients of variation for sample size ratio 1:1:1, for all numbers of variables, as the total sample size increased.

Effect of number of variables on QDF under correlated and uncorrelated normal distributions

The effect of the number of variables on the QDF under the correlated and uncorrelated normal distributions is discussed in this subsection. Graphs of the results for sample size ratio 1:1:1 with 4, 6 and 8 variables are shown in Fig. 5. It was observed that, as the number of variables increased, the average error rate fell for the correlated normal distribution; the rate of decrease at δ = 1 for ratio 1:1:1 was better than at the other δs. For increasing sample size ratio, as the number of variables increased, the decrease in the average error rate became marginal as δ increased. The coefficients of variation for ratio 1:1:1 (Fig. 6) reveal that, as the number of variables increased, the coefficients of variation increased for 4, 6 and 8 variables from δ = 1 to 3, but decreased for δ = 4 and 5; the variability exhibited with 8 variables was nevertheless higher than for the rest. For ratio 1:2:2 the coefficients of variation increased from total sample sizes of 150 to 2000 and then stabilized for all δs as the number of variables increased, except for δ = 4, which showed a decline in the coefficients of variation for the 4- and 6-variable cases. At δ = 5, there was a decline in the coefficients of variation as the number of variables increased. Sample size ratio 1:2:3 gave results similar to those for ratio 1:2:2.

From Fig. 7, there was a sharp decline in the average error rate from total sample sizes of 90 to 180 as the number of variables increased for all δs. The figure also reveals that, as the number of variables increased, the average error rate fell for all sample size ratios. The coefficients of variation shown in Fig. 8 indicate that the variability increased exponentially for all δs, except for δ = 4 and 5, for which the 4-variable case declined. In the 8-variable case, variation increased by about 9.65 and 11.91 % from total sample sizes of 90 to 180 for δ = 1 and δ = 2 respectively. For δ = 4 and 5, the coefficients of variation for 4 variables declined from total sample sizes of 90 to 6000, while those for 6 and 8 variables increased. For δ = 5, the coefficients of variation for 8 variables increased from total sample sizes of 90 to 750 and then declined up to 6000.

Table 2 Effects of sample size on QDF for uncorrelated normal samples, based on error rates, CV and SD

Centroid   Sample size (n)   SD       CV       Mean error rate
δ = 1      90                0.0681   0.3483   0.1955
           180               0.0641   0.3920   0.1636
           300               0.0603   0.3965   0.1522
           450               0.0603   0.4092   0.1473
           750               0.0575   0.4059   0.1417
           900               0.0573   0.4065   0.1409
           1200              0.1387   0.4096   0.1387
           1500              0.0571   0.4114   0.1388
           1800              0.0567   0.4123   0.1374
           2100              0.0567   0.4129   0.1374
           6000              0.0559   0.4116   0.1359
δ = 2      90                0.0524   0.3968   0.1321
           180               0.0469   0.4222   0.1111
           300               0.0421   0.4062   0.1037
           450               0.0424   0.4251   0.0997
           750               0.0409   0.4218   0.0970
           900               0.0415   0.4313   0.0962
           1200              0.0402   0.4209   0.0955
           1500              0.0398   0.4219   0.0944
           1800              0.0400   0.4230   0.0946
           2100              0.0399   0.4239   0.0940
           6000              0.0394   0.4223   0.0934
δ = 3      90                0.0345   0.4200   0.0823
           180               0.0296   0.4341   0.0683
           300               0.0277   0.4417   0.0028
           450               0.0263   0.4395   0.0599
           750               0.0256   0.4372   0.0586
           900               0.0257   0.4385   0.0586
           1200              0.0255   0.4393   0.0581
           1500              0.0253   0.4386   0.0578
           1800              0.0248   0.4330   0.0573
           2100              0.0250   0.4361   0.0574
           6000              0.0249   0.4380   0.0568
δ = 4      90                0.0236   0.4866   0.0486
           180               0.0189   0.4991   0.0379
           300               0.0173   0.4889   0.0354
           450               0.0160   0.4802   0.0334
           750               0.0158   0.4815   0.0327
           900               0.0153   0.4744   0.0323
           1200              0.0151   0.4725   0.0319
           1500              0.0151   0.4728   0.0318
           1800              0.0147   0.4660   0.0315
           2100              0.0147   0.4663   0.0315
           6000              0.0146   0.4656   0.0313
δ = 5      90                0.0156   0.5989   0.0260
           180               0.0121   0.5892   0.0205
           300               0.0098   0.5613   0.0174
           450               0.0093   0.5367   0.0173

Table 2 (continued)

Centroid   Sample size (n)   SD       CV       Mean error rate
δ = 5      750               0.0090   0.5507   0.0163
           900               0.0085   0.5262   0.0161
           1200              0.0085   0.5260   0.0162
           1500              0.0085   0.5272   0.0161
           1800              0.0083   0.5182   0.0160
           2100              0.0083   0.5221   0.0159
           6000              0.0081   0.5109   0.0158

Fig. 2 Average error rates of uncorrelated normal distribution: δ = 1

Fig. 3 Coefficients of variation for uncorrelated normal distribution: δ = 1

In general, the coefficients of variation for this distribution increased as the number of variables increased.

Effect of group centroid separator on QDF under correlated and uncorrelated normal distributions

This section presents the results of our investigation of the effect of the Mahalanobis distance (group centroid separator) on the QDF. For the correlated normal distribution in Fig. 9, it was observed that, with increasing total sample size, the average error rate fell as δ increased and also fell as the number of variables increased.

Fig. 4 Coefficients of variation for correlated normal distribution: δ = 1

Fig. 5 Average error rate for correlated normal distribution: n_1:n_2:n_3 = 1:1:1

Fig. 6 Coefficient of variation of correlated normal distribution: n_1:n_2:n_3 = 1:1:1

It can be observed that there was a drop of about 2.37 % in the average error rate from total sample sizes of 90 to 180 for δ = 1 in the case of 8 variables. The average error rate fell as the total sample size increased for all sample size ratios with increasing δ. For the coefficients of variation of sample size ratio 1:1:1 with increasing total sample size (Fig. 10), no uniform behaviour across δ was portrayed.

Fig. 7 Average error rates of uncorrelated normal distribution: n_1:n_2:n_3 = 1:1:1

Fig. 8 Coefficients of variation for uncorrelated normal distribution: n_1:n_2:n_3 = 1:1:1

Fig. 9 Average error rates of correlated normal distribution for δ: n_1:n_2:n_3 = 1:1:1

While the coefficients of variation for δ = 5 and δ = 4 were declining, those of the remaining δs could be increasing or decreasing depending on the particular sample size ratio; overall, with increasing δ, δ = 5 gives higher coefficients of variation. From Fig. 11, we observed that the average error rates of the individual δs fall as the sample size increases, with drops of about 3.19, 5.09 and 6.81 % in the average error rate

Fig. 10 Coefficients of variation for correlated normal distribution: n_1:n_2:n_3 = 1:1:1

Fig. 11 Average error rates of uncorrelated normal distribution for δ: n_1:n_2:n_3 = 1:1:1

for δ = 1 with 4, 6 and 8 variables respectively. The average error rates for δ = 2 with 4, 6 and 8 variables exhibited drops of about 2.00, 3.99 and 6.65 % respectively. In general, the average error rates decreased as δ increased, irrespective of the number of variables and sample size ratio. The coefficients of variation of this distribution for sample size ratio 1:1:1 (Fig. 12) did not show any uniform pattern in the variability as δ increased, but in general, as δ increased, the variability also increased.

Fig. 12 Coefficients of variation for uncorrelated normal distribution: n_1:n_2:n_3 = 1:1:1

Conclusion

The study focused on the asymptotic performance of the QDF under correlated and uncorrelated normally distributed training samples. The performance of the QDF under varying sample size ratios, selected numbers of variables and different group centroid separators was studied extensively. The QDF recorded minimum misclassification error rates and high variability as the sample size increased asymptotically under the correlated normal distribution, thereby increasing the accuracy of classification. The performance of the QDF deteriorated when the sample size ratio was 1:2:3 as δ increased with increasing sample size. However, the performance of the function was appreciably good under both correlated and uncorrelated normal distributions, in that its estimated average misclassification error rate decreased with an increasing number of variables (from 4 to 8). These results show partial conformity with the study of Lawoko (1988), who found that the efficiency of the QDF and other classifiers is generally lowered by positively correlated training observations.

Generally, the study found that the QDF performed better, with reduced misclassification error rates, as the group centroid separator increased with non-increasing sample size under correlated training samples. These results also show partial conformity with the study of Marks and Dunn (1974), who approached the problem of discrimination by comparing the performance of the QDF with other classifiers; although they considered only two populations, the QDF's performance was poor under small sample sizes when the covariance matrices were nearly equal and the dimension was large.

Authors' contributions

AA worked on the literature review and background of the study, suggested the appropriate method and software, and supervised the entire research work. MAB prepared the manuscript and discussed part of the study's findings. OOT conducted the data analysis through simulation using MATLAB and also presented and discussed part of the findings. All authors read and approved the final manuscript.

Author details

1 Department of Mathematics, Kwame Nkrumah University of Science and Technology, PMB KNUST, Kumasi, Ghana. 2 Institute of Research, Innovation and Development (IRID), Kumasi Polytechnic, Box 854, Kumasi, Ghana.

Competing interests

The authors declare that they have no competing interests.

Received: 5 October 2015. Accepted: 13 January 2016

References

Adebanji AO, Nokoe S (2004) Evaluating the quadratic classifier. In: Proceedings of the third international workshop on contemporary problems in mathematical physics
Fisher RA (1936) The use of multiple measurements in taxonomic problems. Ann Eugen 7:179-188
Hyodo M, Kubokawa T (2014) A variable selection criterion for linear discriminant rule and its optimality in high dimensional and large sample data. J Multivar Anal 123:364-379
Johnson RA, Wichern DW (2007) Applied multivariate statistical analysis, 6th edn. Pearson Education, Upper Saddle River, NJ
Kakaï GR, Pelz DR (2010) Asymptotic error rate of linear, quadratic and logistic rules in multi-group discriminant analysis. Int J Appl Math Stat 18(10):70-81
Lawoko CR (1988) Discriminant analysis with correlated training samples. Bull Aust Math Soc 37:313-315
Mardia KV, Kent JT, Bibby JM (1995) Multivariate analysis. Academic Press, Harcourt Brace and Company, New York
Marks S, Dunn OJ (1974) Discriminant functions when covariance matrices are unequal. J Am Stat Assoc 69(346):555-559
Murray GD (1977) A cautionary note on selection of variables in discriminant analysis. Appl Stat 26(3):246-250

Smith C (1947) Some examples of discrimination. Ann Eugen 15:272-282
Wahl PW, Kronmal RA (1977) Discriminant functions when covariance matrices are unequal and sample sizes are moderate. Biometrics 33:479-484