Research Article Combining MLC and SVM Classifiers for Learning Based Decision Making: Analysis and Evaluations


Computational Intelligence and Neuroscience, Volume 2015, 8 pages

Yi Zhang,1 Jinchang Ren,2 and Jianmin Jiang3

1 School of Computer Software, Tianjin University, Tianjin, China
2 Centre for Excellence in Signal and Image Processing, University of Strathclyde, Glasgow G1 1XW, UK
3 School of Computer Science and Software Engineering, Shenzhen University, Shenzhen, China

Correspondence should be addressed to Jinchang Ren; jinchang.ren@strath.ac.uk

Received 24 March 2015; Revised 8 May 2015; Accepted 11 May 2015

Academic Editor: Pietro Aricò

Copyright 2015 Yi Zhang et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Maximum likelihood classifier (MLC) and support vector machines (SVM) are two commonly used approaches in machine learning. MLC is based on Bayesian theory in estimating parameters of a probabilistic model, whilst SVM is an optimization based nonparametric method in this context. Recently, SVM has been found in some cases to be equivalent to MLC in probabilistically modeling the learning process. In this paper, MLC and SVM are combined in learning and classification, which helps to yield probabilistic output for SVM and facilitates soft decision making. In total four groups of data are used for evaluations, covering sonar, vehicle, breast cancer, and DNA sequences. The data samples are characterized in terms of Gaussian/non-Gaussian distributions and balanced/unbalanced samples, which are then further used for performance assessment in comparing the SVM and the combined SVM-MLC classifier. Interesting results are reported to indicate how the combined classifier may work under various conditions.
1. Introduction

Maximum likelihood classification (MLC) is one of the most commonly used approaches in signal classification and identification, and it has been successfully applied in a wide range of engineering applications including classification of digital amplitude-phase modulations [1], remote sensing [2], gene selection for tissue classification [3], nonnative speech recognition [4], chemical analysis in archaeological applications [5], and speaker recognition [6]. On the other hand, support vector machines (SVM) have attracted much attention and can be found in almost all areas where prediction and classification of signals are required, such as scour prediction on grade-control structures [7], fault diagnosis [8], EEG signal classification [9], and fire detection [10], as well as road sign detection and recognition [11].

Based on the principles of Bayesian statistics, MLC provides a parametric approach to decision making, where the model parameters need to be estimated before they are applied for classification. On the contrary, SVM is a nonparametric approach, whose theoretic background is supervised machine learning. Due to the differences between these two classifiers, their performance can differ markedly. Taking applications in remote sensing, for example, Pal and Mather [12] and Huang et al. [13] found that SVM outperforms MLC and several other classifiers. In Waske and Benediktsson [14], SVM produces better results from SAR images, yet in most cases it generates worse results than MLC from TM images. In Szuster et al. [15], SVM yields only slightly better results than MLC for land cover analysis. As a result, detailed assessments of the conditions under which SVM outperforms or appears inferior to MLC are worth further investigation.

Furthermore, there is a trend to combine the principle of MLC, Bayesian theory, with SVM for improved classification. In Ren [16], Bayesian minimum error classification is applied to the predicted outputs of SVM for error-reduced optimal decision making. Similarly, in Vong et al.
[17], Bayesian decision theory is applied in SVM for imbalance measurement and feature optimization for improved performance. In Vega et al. [18], Bayesian statistics are combined with SVM for parameter optimization.
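To make this line of work concrete, the idea of applying a Bayesian minimum-error decision to raw SVM outputs, as in [16], can be sketched roughly as below. This is our own minimal reading for illustration, not the implementation of [16]: the one-dimensional Gaussian models of the SVM score and all function names are assumptions.

```python
import math

def gaussian_pdf(v, mu, sigma):
    """1-D Gaussian density, here used to model the SVM output score per class."""
    return math.exp(-((v - mu) ** 2) / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

def bayes_decision(score, model_pos, model_neg, prior_pos=0.5):
    """Minimum-error (maximum posterior) decision on a raw SVM score.
    model_pos / model_neg are assumed (mu, sigma) pairs of the score
    distribution within each class, estimated from a validation set."""
    p_pos = prior_pos * gaussian_pdf(score, *model_pos)
    p_neg = (1.0 - prior_pos) * gaussian_pdf(score, *model_neg)
    return 1 if p_pos > p_neg else -1
```

With symmetric score models the rule simply picks the class whose score distribution better explains the observed SVM output.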

In Hsu et al. [19], Bayesian inference is applied to estimate the hyperparameters used in SVM learning to speed up the training process. In Foody [20], the relevance vector machine (RVM), a Bayesian extension of SVM, is applied, which enables an estimate of the posterior probability of class membership where conventional SVM fails to do so. Consequently, in-depth analysis of the two classifiers is desirable to discover their pros and cons in machine learning.

In this paper, analysis and evaluations of SVM and MLC are emphasized, using data from various applications. Since the selected data satisfy certain conditions in terms of specific sample distributions, we aim to find out how the performance of the classifiers is connected to the particular data distributions. As a consequence, the work and the results shown in the paper are valuable for understanding how these classifiers work, which can then provide insightful guidance on how to select and combine them in real applications.

The remaining parts of the paper are organized as follows. Section 2 introduces the principles of the two classifiers. Section 3 describes the data and methods that have been used. Experimental results and evaluations are analyzed and discussed in Section 4. Concluding remarks are given in Section 5.

2. MLC and SVM Revisited

In this section, the principles of the two classifiers, SVM and MLC, are discussed. By comparing their theoretic background and implementation details, the two classifiers are characterized in terms of their performance during the training and testing processes. This in turn has motivated our work in the following sections.

2.1. The Maximum Likelihood Classifier (MLC). Let x_i = (x_{i,1}, x_{i,2}, ..., x_{i,N})^T, i ∈ [1, M], be a group of N-dimensional feature vectors derived from M observed samples, and let y_i ∈ [1, C] denote the class label associated with x_i; that is, in total we have C classes denoted as ω_k, k = 1, 2, ..., C.
The basic assumption of MLC is that for each class the feature space satisfies a specified distribution, usually Gaussian, and also that the samples are independent of each other. To this end, the likelihood (probability) for samples within the kth class, ω_k, is given as follows:

    p(x | ω_k) = 1 / ((2π)^{N/2} |S_k|^{1/2}) exp{ -(1/2)(x - μ_k)^T S_k^{-1} (x - μ_k) },    (1)

where μ_k and S_k, respectively, denote the mean vector and covariance of the N_k samples within ω_k, which can be determined using maximum likelihood estimation as

    μ_k = N_k^{-1} Σ_{i=1}^{N_k} x_i,
    S_k = N_k^{-1} Σ_{i=1}^{N_k} (x_i - μ_k)(x_i - μ_k)^T.    (2)

For a given sample x_i, the probability that it belongs to class ω_k can be denoted as p(ω_k | x_i). The class that x_i is determined to belong to is then decided by

    f_MLC(x_i) = argmax_k ( p(ω_k | x_i) ).    (3)

Based on Bayesian theory, we have

    p(ω_k | x_i) = p(ω_k) p(x_i | ω_k) / p(x_i).    (4)

Since p(x_i) is a constant in (4) when x_i is given, (3) can be rewritten as

    f_MLC(x_i) = argmax_k ( p(ω_k) p(x_i | ω_k) ).    (5)

Applying the logarithm to the right side of (5) and letting g_k(x) = ln p(x | ω_k) + ln p(ω_k) be the discriminating function, (5) becomes

    f_MLC(x_i) = argmax_k ( g_k(x_i) ),    (6)

    g_k(x) = -(1/2)(x - μ_k)^T S_k^{-1} (x - μ_k) - (N/2) ln 2π - (1/2) ln |S_k| + ln p(ω_k).    (7)

Again we can ignore the constant term in (7) and simplify the discriminating function as

    g_k(x) = -(1/2)(x - μ_k)^T S_k^{-1} (x - μ_k) - (1/2) ln |S_k| + ln p(ω_k)
           = x^T W_k x + w_k^T x + η_k,    (8)

where W_k = -(1/2) S_k^{-1}, w_k = S_k^{-1} μ_k, and η_k = -(1/2) μ_k^T S_k^{-1} μ_k - (1/2) ln |S_k| + ln p(ω_k). As can be seen, g_k(x) is now a quadratic function of x depending on three parameters, that is, μ_k, S_k, and p(ω_k). When the class is specified, these parameters are determined; hence the quadratic function only depends on the class and the input sample x. It is also worth noting that the third item, η_k, is actually a constant.

In the particular case when p(ω_k) is a constant for all k, that is, when the prior probability that a sample belongs to any one of the classes is equal, ln p(ω_k) in (8) can be ignored; hence the discriminating function can be rewritten as

    g_k(x) = -(x - μ_k)^T S_k^{-1} (x - μ_k) - ln |S_k|,    (9)

where the scalar 1/2 is also ignored, as it makes no difference when (6) is applied for decision making. However, such simplification cannot be made unless we have clear knowledge about the equal distribution of the samples over the C classes.

Based on (9), the decision function can be further simplified if the total number of classes is reduced to two,

where the two classes are denoted as -1 and 1 and the sign function is introduced for simplicity:

    f_MLC(x_i) = sign(g_MLC(x_i)) = { -1, if g_MLC(x_i) < 0,
                                       1, if g_MLC(x_i) > 0,

    g_MLC(x) = (x - μ_-)^T S_-^{-1} (x - μ_-) - (x - μ_+)^T S_+^{-1} (x - μ_+) + ln(|S_-| / |S_+|).    (10)

Moreover, in the special case when S_+ = S_- = S, the quadratic decision function in (10) becomes a linear one (after scaling by 1/2, which does not affect the sign):

    g_MLC(x) = (μ_+ - μ_-)^T S^{-1} x - (1/2)(μ_+^T S^{-1} μ_+ - μ_-^T S^{-1} μ_-).    (11)

2.2. The Support Vector Machine (SVM). SVM was originally developed for two-class classification problems. In Cortes and Vapnik [21], the principles of SVM are comprehensively discussed. Let the two classes be denoted as 1 and -1, similar to the decision function for MLC in (10); the decision function for linear SVM is given by

    y_i = f_SVM(x_i) = { 1, if g_SVM(x_i) ≥ 1,
                        -1, if g_SVM(x_i) ≤ -1,

    g_SVM(x_i) = w^T x_i + b,    (12)

where y_i denotes the labeled value for the input sample x_i, and w and b are parameters to be determined in the training process. Note that the decision function in (12) is actually equivalent to the one in (10) if we adjust the scalar for b, yet (12) is more feasible as it increases the decision margin between the two classes from near zero to 2‖w‖^{-1}. By multiplying y_i on both sides of the discriminating function g_SVM, this can be further simplified as

    y_i g_SVM(x_i) ≥ 1,  that is,  y_i (w^T x_i + b) ≥ 1.    (13)

Hence, the optimal hyperplane to separate the training data with a maximal margin is defined by

    w_o^T x + b_o = 0,    (14)

where w_o and b_o are the determined parameters, and the maximal distance becomes 2‖w_o‖^{-1}. To determine this optimal hyperplane, we need to maximize 2‖w‖^{-1}, or equivalently to minimize (1/2)‖w‖^2, subject to y_i (w^T x_i + b) ≥ 1 for all x_i. Using Lagrangian multipliers, this optimization problem can be solved via

    Ω(w, b, λ_i) = (1/2)‖w‖^2 - Σ_i λ_i (y_i (w^T x_i + b) - 1),  s.t. λ_i ≥ 0.    (15)

Eventually, the parameters w_o and b_o are decided as

    w_o = Σ_i λ_i y_i x_i,
    b_o = y_i - w_o^T x_i,  for any λ_i ≠ 0.    (16)
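The solution in (16) maps directly to code. The sketch below is ours, for illustration only; it assumes the multipliers λ_i have already been obtained by solving the dual of (15) with a QP solver, and merely recovers w_o and b_o before applying the sign decision of (12).

```python
import numpy as np

def svm_params(X, y, lam):
    """Recover w_o and b_o from the multipliers, as in (16).
    X: list of feature vectors, y: labels in {-1, 1}, lam: multipliers
    (assumed to come from solving the dual problem (15))."""
    w = sum(l * yi * xi for l, yi, xi in zip(lam, y, X))
    # any sample with a nonzero multiplier (a support vector) fixes b
    i = next(j for j, l in enumerate(lam) if l != 0)
    b = y[i] - w @ X[i]
    return w, b

def f_svm(x, w, b):
    """Sign decision of the linear SVM, cf. (12)."""
    return 1 if w @ x + b > 0 else -1
```

For the toy problem with support vectors (1, 0) and (-1, 0) and λ = (0.5, 0.5), this recovers the maximum-margin hyperplane x_1 = 0.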
For any nonzero λ_i, the corresponding x_i is denoted as a support vector, which naturally satisfies y_i (w^T x_i + b) = 1. Therefore, w_o is actually a linear combination of all support vectors. Also we have Σ_i λ_i y_i = 0. Eventually, if we combine (16) with (12), the discrimination function for any test sample x becomes

    g_SVM(x) = Σ_i λ_i y_i x_i^T x + b,    (17)

which relies solely on the inner products between the support vectors and the test sample.

For nonlinear problems which are not linearly separable, the discrimination function is extended as

    g_SVM(x) = Σ_i λ_i y_i φ(x_i)^T φ(x) + b = Σ_i λ_i y_i K(x_i, x) + b,    (18)

where φ maps the input samples to another space, thus making them linearly separable. Another important step is to introduce the kernel trick to calculate the inner product of the mapped samples, that is, ⟨φ(x_i), φ(x)⟩, which avoids the difficulty of determining the mapping function φ and also the cost of computing the mapped samples and their inner products. Several typical kernels, including the linear, polynomial, and radial basis function (RBF) kernels, are summarized as follows:

    K(x_i, x_j) = x_i^T x_j                        (linear),
    K(x_i, x_j) = (x_i^T x_j + 1)^p,  p > 0        (polynomial),
    K(x_i, x_j) = exp(-‖x_i - x_j‖^2 / (2σ^2))     (RBF),    (19)

where optimal values for the associated parameters p and σ are determined automatically during the training process.

Though SVM was initially developed for two-class problems, it has been extended to deal with multiclass classification, based either on combining the decision results from multiple two-class classifications or on optimization of multiclass based learning. Some useful further readings can be found in [22-24].

2.3. Analysis and Comparisons. MLC and SVM are two useful tools for classification problems, where both of them

rely on supervised learning to determine the model and its parameters. However, they differ in several ways, as summarized below.

Firstly, MLC is a parametric approach which has a basic assumption that the data satisfy a Gaussian distribution. On the contrary, SVM is a nonparametric approach with no requirement on the prior distribution of the data, yet various kernels can be empirically selected to deal with different problems.

Secondly, for MLC the model parameters, μ_k and S_k, can be directly estimated using the training data before they are applied for testing and prediction. However, SVM relies on supervised machine learning, in an iterative way, to determine a large number of parameters including w_o, b_o, all nonzero λ_i, and their corresponding support vectors.

Thirdly, MLC can be straightforwardly applied to two-class and multiclass problems, yet additional extension is needed for SVM to deal with multiclass problems, as it was initially developed for two-class classification.

Finally, a posterior class probabilistic output for the predicted results can be intuitively generated from MLC, which is a valuable indicator showing how likely a sample belongs to a given class. For SVM, however, this is not an easy task, though some extensions have been introduced to provide such an output based on the predicted value from SVM. In Platt [25], a posterior class probability p_i is estimated by a sigmoid function as follows:

    p_i = P(y = 1 | x_i) ≈ 1 / (1 + exp(A g_SVM(x_i) + B)).    (20)

The parameters A and B are determined by solving a regularized maximum likelihood problem:

    (A*, B*) = argmin_{(A,B)} ( -Σ_i (t_i log(p_i) + (1 - t_i) log(1 - p_i)) ),

    t_i = { (N_1 + 1) / (N_1 + 2),   if y_i = 1,
            1 / (N_{-1} + 2),        if y_i = -1,    (21)

where N_1 and N_{-1} denote the numbers of training samples labeled in classes 1 and -1, respectively. In addition, in Lin et al.
[26], Platt's approach is further improved to avoid numerical difficulty, that is, overflow or underflow, in determining p_i when E_i = A g_SVM(x_i) + B is either too large or too small:

    p_i = { (1 + e^{E_i})^{-1},            if E_i < 0,
            e^{-E_i} (1 + e^{-E_i})^{-1},  otherwise.    (22)

Although there are significant differences between SVM and MLC, the probabilistic model above has uncovered the connection between these two classifiers. Actually, in Franc et al. [27] MLC and SVM are found to be equivalent to each other in linear cases, which can also be confirmed by the similar decision functions in (10) and (12).

3. Data and Methods

In this paper, analysis and evaluations of SVM and MLC are emphasized, using data from various applications. Since the selected data satisfy certain conditions in terms of specific sample distributions, we aim to find out how the performance of the classifiers is connected to the particular data distributions. As a consequence, the work and the results shown in the paper are valuable for understanding how these classifiers work, which can then provide insightful guidance on how to select and combine them in real applications.

3.1. The Datasets. In our experiments, four different datasets, SamplesNew, svmguide3, sonar, and splice, are used. Among these four, SamplesNew is a dataset of suspicious microcalcification clusters extracted from [16] and svmguide3 is a demo dataset of a practical SVM guide [28], whilst the sonar and splice datasets come from the UCI repository of machine learning databases [29]. Two principles are applied in selecting these datasets: the first is how evenly the samples are distributed over the two classes, and the second is whether the feature distributions are Gaussian-alike. As can be seen, the first two datasets are severely imbalanced, especially the first one, as there are far more data samples in one class than in the other. On the other hand, the last two datasets are quite balanced.
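The numerically stable form (22) is straightforward to implement. Below is a minimal sketch of ours, assuming A and B have already been fitted via (21):

```python
import math

def platt_probability(g, A, B):
    """Posterior P(y = 1 | x) from the raw SVM output g = g_SVM(x),
    using the overflow-safe formulation of (22); A and B are assumed
    to be pre-fitted by the regularized maximum likelihood step (21)."""
    E = A * g + B
    if E < 0:
        return 1.0 / (1.0 + math.exp(E))        # exp(E) < 1, no overflow
    return math.exp(-E) / (1.0 + math.exp(-E))  # exp(-E) <= 1, no overflow
```

The two branches are algebraically identical to (20); the split merely keeps the exponent nonpositive so that very large |E_i| never overflows.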
Regarding feature distributions, SamplesNew and svmguide3 are apparently non-Gaussian distributed, yet the other two, sonar and splice, show approximately Gaussian characteristics when the variables are separately observed. This is also validated by Pearson's moment coefficient of skewness [30],

    S_i = E[(x_i - μ_i)^3] / σ_i^3,    (23)

where μ_i and σ_i are the mean and standard deviation for the ith dimension of the dataset and E(·) refers to the mathematical expectation. With the skewness coefficient determined for each data dimension, the maximum, the minimum, and the average skewness coefficients are obtained and shown in Table 1 for comparison.

3.2. The Approach. In our approach, a combined classifier using SVM and MLC is applied, which contains the following three stages. In Stage 1, SVM is used for initial training and classification. The correctly classified results from SVM are employed in Stage 2, where MLC is applied for probability-based modeling. The probability-based models are then utilized in Stage 3 for improved decision making and better classification. Details of these three stages are discussed as follows.

Stage 1 (SVM for initial training and classification). The open source library LIBSVM [28] is used for initial training and classification of the aforementioned four datasets, and both the linear and the Gaussian radial basis function (RBF) kernels are tested. For each dataset, all the data are normalized to [-1, 1] before SVM is applied. Through 5-fold cross validation, the best group of parameters, including the cost and the gamma value, is optimally determined. Eventually,

the optimal parameters are used for classification of our datasets. In our experiments, the training ratios are set at three different levels, that is, %, 65%, and 50%. Basically, there is no overlap between training data and testing data. At a given training ratio, the training data are randomly selected and this selection is repeated five times, which leads to 5 groups of test results. Finally, the average performance over these five runs is used for comparisons.

Table 1: Four datasets used in our experiments.

Dataset      Features   Balance status   Distribution of feature values   Number of samples (class 0/class 1)   Skewness coefficients (Max / Min / Mean)
SamplesNew   39         Unbalanced       Non-Gaussian                     748 (115/633)
svmguide3    21         Unbalanced       Non-Gaussian                     1284 (947/337)
Sonar        31         Balanced         Approx. Gaussian                 209 (97/102)
Splice       60         Balanced         Approx. Gaussian                 1269 (653/616)

Figure 1: Comparing training (a) and testing (b) results using linear SVM and the combined classifier for the four datasets under three different training ratios.

Stage 2 (using MLC for probability-based modeling). The correctly classified samples, which lie in two classes, that is, class 0 and class 1, are taken to decide two probability-based models, in the way discussed for MLC. In other words, samples correctly classified in class 0 are used to determine the mean vector and the corresponding covariance matrix of class 0, and samples correctly classified in class 1 are used to determine the mean vector and the corresponding covariance matrix of class 1. Note that not all samples in class 0 or class 1 are used in calculating the related MLC models, as those which cannot be correctly classified by SVM are treated as outliers and ignored in MLC modeling for robustness.
After MLC modeling, for each sample x, the associated likelihoods that it belongs to the two classes are recalculated and denoted as p_0(x) and p_1(x). As a result, the decision for classification is simplified as

    f_MLC(x) = { 1, if p_1(x) - p_0(x) > τ,
                 0, otherwise,    (24)

where τ is a threshold to be optimally determined to generate the best classification results. Please note that the likelihoods (or probability values) here can also be taken as a probabilistic output of the SVM.

Stage 3 (improved classification). With the estimated MLC models and the optimal threshold τ, all samples are then rechecked for improved classification, using (24) and the determined likelihoods p_0(x) and p_1(x). Interesting results on these four datasets are given and analyzed in detail in the next section.

4. Results and Evaluations

For the four datasets discussed in Section 3, the experimental results are reported and analyzed in this section. Firstly, we discuss results from a combined classifier of MLC and a linear SVM. Then, results from MLC and RBF based SVM are compared. In addition, how different rebalancing strategies affect the performance on unbalanced datasets is also discussed.

4.1. Results from a Linear SVM and the MLC. In this group of experiments, a combined classifier using a linear SVM and the MLC is employed, and the relevant results are presented in Figure 1. In Figure 1, we plot the classification rate as the prediction accuracy against the training ratio, that is, the percentage of data used for training. Three training ratios, %, 65%, and 50%, are used. Please note that, due to degradation of the covariance matrix, the MLC

cannot be used to improve the results for the SampleNew dataset; consequently, the results from the SVM are taken as the output of the combined classifier there. For the other three datasets, the results are summarized and compared as follows.

Figure 2: Comparing training (a) and testing (b) results using RBF-kernelled SVM and the combined classifier for the four datasets under three different training ratios.

Firstly, for the three datasets sonar, splice, and svmguide3, we can see that the combined solution yields significantly improved results in training, especially for the first two. This demonstrates that the combined classifier can indeed achieve more accurate modeling of the datasets. In addition, possibly due to overfitting, the experimental results show that a larger training ratio does not necessarily improve the training performance.

However, the testing results are somewhat different. For the sonar dataset, which is balanced and appears nearly Gaussian distributed, the combined classifier yields much improved results in testing, especially at the % and 50% training ratios. Such results are not surprising, as the MLC is ideal for modeling Gaussian-alike distributed datasets. For the splice dataset, which is balanced and also nearly Gaussian distributed, slightly improved testing results are also produced by the combined classifier at the % and 50% training ratios, but the testing results at the 65% training ratio become slightly worse than those from the SVM.
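For reference, the combined rule evaluated in these experiments, that is, Stage 2 modeling with the decision (24), can be sketched as below. This is a simplified illustration of ours with hypothetical names, assuming the Gaussian models of (1)-(2) are fitted only on the samples the SVM classified correctly.

```python
import numpy as np

def fit_gaussian(X):
    """Mean vector and (biased) covariance for one class, as in (2)."""
    return X.mean(axis=0), np.cov(X, rowvar=False, bias=True)

def likelihood(x, mu, S):
    """Gaussian likelihood p(x | omega_k), as in (1)."""
    d = x - mu
    N = len(x)
    norm = (2 * np.pi) ** (N / 2) * np.sqrt(np.linalg.det(S))
    return float(np.exp(-0.5 * d @ np.linalg.solve(S, d)) / norm)

def combined_decision(x, model0, model1, tau=0.0):
    """Decide class 1 when p1(x) - p0(x) > tau, as in (24); model0/model1
    are (mu, S) pairs fitted on correctly classified samples (Stage 2)."""
    p0 = likelihood(x, *model0)
    p1 = likelihood(x, *model1)
    return 1 if p1 - p0 > tau else 0
```

In practice τ would be tuned on the training split, as described in Stage 2.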
For the more challenging svmguide3 dataset, which is unbalanced and non-Gaussian distributed, although the combined classifier yields improved testing results at the 50% training ratio, the results at the other two training ratios, perhaps due to overfitting, seem inferior to those from the SVM. In nature, the MLC has difficulty in modeling non-Gaussian distributed datasets, and this explains why the combined classifier contributes less on such datasets.

4.2. Results from an RBF-Kernelled SVM and the MLC. In this group of experiments, the RBF kernel is used for the SVM in the combined classifier, as it is popularly used in various classification problems [16, 22]. For the four datasets used, again the training and testing results under the three training ratios are summarized in Figure 2 for comparison.

First of all, the RBF-kernelled SVM (R-SVM) produces much improved results compared with the linear SVM, especially in training. In fact, the combined classifier generates better results than the SVM alone only on the SampleNew dataset, slightly worse results on the sonar and splice datasets, and much degraded results on the svmguide3 dataset. Regarding testing, although the combined classifier generates comparable or slightly worse results on the SampleNew and svmguide3 datasets, R-SVM yields better results on the splice and sonar datasets. The reason is that results from the nonlinear kernel in R-SVM cannot be directly refined using the MLC. Also, occasionally the results from the combined classifier seem more sensitive to the training ratio, especially for the splice dataset, which is perhaps due to the threshold to be determined, which depends more or less on the training data used.

4.3. Testing on Rebalanced Data. In this group of experiments, using the challenging dataset svmguide3, we analyze how various strategies to rebalance the unbalanced data may affect the classification performance.
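The two rebalancing strategies examined here can be sketched as below; a minimal random oversampling/subsampling illustration of ours, not the authors' code, assuming the minority list is never larger than the majority list.

```python
import random

def oversample(minority, majority, rng=None):
    """Randomly duplicate minority samples until both classes match in size."""
    rng = rng or random.Random(0)
    out = list(minority)
    while len(out) < len(majority):
        out.append(rng.choice(minority))
    return out, list(majority)

def subsample(minority, majority, rng=None):
    """Randomly discard majority samples until both classes match in size."""
    rng = rng or random.Random(0)
    return list(minority), rng.sample(list(majority), len(minority))
```

Because the result depends on which samples are duplicated or discarded, the paper repeats each strategy over 10 runs and averages, which the sketch would mirror by varying the random seed.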
For an unbalanced dataset, samples from one class may be overrepresented compared with those in the other class. As a result, we can either oversample the data of the minority class or subsample the data of the majority class to balance the number of samples represented in the training set for better modeling of the data. On the other hand, the test samples remain unbalanced, as it is assumed we have no label information for them.

For oversampling, data samples from the minority class are randomly duplicated and inserted into the dataset; the replication of data items continues until the entire training set becomes balanced. Different from oversampling, subsampling randomly discards samples from the majority class until the training set becomes balanced. Since the performance may be affected by which samples are duplicated or discarded, this process is repeated over 10 times and the average performance is recorded for comparisons.

Using the three training ratios of %, 65%, and 50%, results of balanced learning for the svmguide3 dataset are summarized in Figure 3. Under a given training ratio, both training results and testing results are presented in

groups, where each group contains results from 6 different experimental scenarios. In addition, the results from linear SVM and RBF-kernelled SVM are shown for comparison as well.

Figure 3: Results of balanced learning for the svmguide3 dataset, using linear SVM (a) and R-SVM (b).

When linear SVM is used, as shown in the first row of Figure 3, surprisingly, the results from unbalanced data are much better than those from balanced data. Also, in the majority of cases, the combined classifier outperforms the SVM classifier in both training and testing, even with balanced learning introduced. The testing results from SVM for balanced learning via oversampling seem better than those from subsampling, yet the combined classifier produces better results with subsampling based balanced learning.

For RBF-kernelled SVM, apparently, the training results from SVM via oversampling are among the best, though the testing results are inferior to those from unbalanced training. This indicates that the training process has overfitted in this context. In fact, testing results from the combined classifier are slightly worse than those from the SVM classifier, that is, some degradation. Again, this is caused by the inconsistency between the nonlinear SVM and the linear nature of the MLC.

5. Conclusions

SVM and MLC are two typical classifiers commonly used in many engineering applications.
Although there is a trend to combine MLC with SVM to provide a probabilistic output for SVM, under what conditions the combined classifier works effectively still needs to be explored. In this paper, comprehensive results are presented to answer this question, using four different datasets.

First of all, it is found that the combined classifier works under certain constraints, such as a linear SVM, a balanced dataset, and near Gaussian-distributed data. When an RBF-kernelled SVM is used, the combined classifier may produce degraded results due to the inconsistency between the nonlinear kernel in SVM and the linear nature of MLC. In addition, for a challenging dataset, balanced learning may improve the results of training but not necessarily the testing results.

The reason behind this is that the combined SVM-MLC classifier rests on three assumptions: Gaussian distributed data, interclass separability, and model consistency between training data and testing data. Although the third assumption holds in most cases, the precondition of separable Gaussian-distributed data is a rather strict constraint and is rarely satisfied. As a result, this introduces a fundamental difficulty in combining these two classifiers. However, under certain circumstances, the combined classifier can indeed significantly improve the classification performance. It is worth noting that when more groups are introduced in modelling a given dataset, the efficacy can be severely degraded due to the inconsistency of statistical distributions between groups. Future work will focus on combining other classifiers such as neural networks for applications in medical imaging [31-33] and recognition and classification tasks [34, 35].

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

References

[1] W. Wei and J. M. Mendel, "Maximum-likelihood classification for digital amplitude-phase modulations," IEEE Transactions on Communications, vol. 48, no. 2, 2000.
[2] K. Liu, W. Shi, and H. Zhang, "A fuzzy topology-based maximum likelihood classification," ISPRS Journal of Photogrammetry and Remote Sensing, vol. 66, no. 1, 2011.

[3] H.-L. Huang, C.-C. Lee, and S.-Y. Ho, "Selecting a minimal number of relevant genes from microarray data to design accurate tissue classifiers," BioSystems, vol. 90, no. 1, pp. 78-86.
[4] X. He and Y. Zhao, "Prior knowledge guided maximum expected likelihood based model selection and adaptation for nonnative speech recognition," Computer Speech and Language, vol. 21, no. 2.
[5] M. Hall and S. Minyaev, "Chemical analyses of Xiong-nu pottery: a preliminary study of exchange and trade on the inner Asian steppes," Journal of Archaeological Science, vol. 29, no. 2.
[6] Q. Y. Hong and S. Kwong, "A genetic classification method for speaker recognition," Engineering Applications of Artificial Intelligence, vol. 18, no. 1, pp. 13-19, 2005.
[7] A. Goel and M. Pal, "Application of support vector machines in scour prediction on grade-control structures," Engineering Applications of Artificial Intelligence, vol. 22, no. 2.
[8] I. Yélamos, G. Escudero, M. Graells, and L. Puigjaner, "Performance assessment of a novel fault diagnosis system based on support vector machines," Computers and Chemical Engineering, vol. 33, no. 1, 2009.
[9] A. Subasi and M. I. Gursoy, "EEG signal classification using PCA, ICA, LDA and support vector machines," Expert Systems with Applications, vol. 37, no. 12, 2010.
[10] B. C. Ko, K.-H. Cheong, and J.-Y. Nam, "Fire detection based on vision sensor and support vector machines," Fire Safety Journal, vol. 44, no. 3, 2009.
[11] S. Maldonado-Bascón, S. Lafuente-Arroyo, P. Gil-Jiménez, H. Gómez-Moreno, and F. López-Ferreras, "Road-sign detection and recognition based on support vector machines," IEEE Transactions on Intelligent Transportation Systems, vol. 8, no. 2, 2007.
[12] M. Pal and P. M. Mather, "Assessment of the effectiveness of support vector machines for hyperspectral data," Future Generation Computer Systems, vol. 20, no. 7, 2004.
[13] C. Huang, L. S. Davis, and J. R. G. Townshend, "An assessment of support vector machines for land cover classification," International Journal of Remote Sensing, vol. 23, no. 4.
[14] B. Waske and J. A. Benediktsson, "Fusion of support vector machines for classification of multisensor data," IEEE Transactions on Geoscience and Remote Sensing, vol. 45, no. 12.
[15] B. W. Szuster, Q. Chen, and M. Borger, "A comparison of classification techniques to support land cover and land use analysis in tropical coastal zones," Applied Geography, vol. 31, no. 2.
[16] J. Ren, "ANN vs. SVM: which one performs better in classification of MCCs in mammogram imaging," Knowledge-Based Systems, vol. 26, no. 2, 2012.
[17] C.-M. Vong, P.-K. Wong, and Y.-P. Li, "Prediction of automotive engine power and torque using least squares support vector machines and Bayesian inference," Engineering Applications of Artificial Intelligence, vol. 19, no. 3, 2006.
[18] J. Vega, A. Murari, G. Vagliasindi, and G. A. Rattá, "Automated estimation of L/H transition times at JET by combining Bayesian statistics and support vector machines," Nuclear Fusion, vol. 49, no. 8, Article ID 085023, 11 pages, 2009.
[19] C. C. Hsu, K. S. Wang, and S. H. Chang, "Bayesian decision theory for support vector machines: imbalance measurement and feature optimization," Expert Systems with Applications, vol. 38, no. 5, 2011.
[20] G. M. Foody, "RVM-based multi-class classification of remotely sensed data," International Journal of Remote Sensing, vol. 29, no. 6.
[21] C. Cortes and V. Vapnik, "Support-vector networks," Machine Learning, vol. 20, no. 3, 1995.
[22] C.-W. Hsu and C.-J. Lin, "A comparison of methods for multiclass support vector machines," IEEE Transactions on Neural Networks, vol. 13, no. 2, 2002.
[23] Y. Lee, Y. Lin, and G. Wahba, "Multicategory support vector machines: theory and application to the classification of microarray data and satellite radiance data," Journal of the American Statistical Association, vol. 99, no. 465, pp. 67-81.
[24] K. Crammer and Y.
Singer, On the algorithmi implementation of multilass Kernel-based vetor mahines, Journal of Mahine earning Researh, vol. 2, pp , [25] J. Platt, Probabilisti outputs for support Vetor mahines and omparisons to regularized likelihood methods, in Advanes in arge Margin Classifiers,A.Smola,P.Bartlett,B.Sholkopf,and D. Shuurmans, Eds., MIT, Cambridge, Mass, USA, [26] H. T. in, C. J. in, and R. C. Weng, A note on Platt s probabilisti outputs for support vetor mahines, Mahine earning, vol. 68, no. 3, pp , [27] V. Fran, A. Zien, and B. Sholkopf, Support vetor mahines as probabilisti models, in Proeedings of the 28th International Conferene on Mahine earning (ICM 11), Bellevue,Wash, USA, June-July [28] C.-C. Chang and C.-J. in, IBSVM: a library for support vetor mahines, ACM Transations on Intelligent Systems and Tehnology,vol.2,no.3,artile27,pp.1 27,2011, jlin/libsvm. [29] A. Frank and A. Asunion, UCI Mahine earning Repository, [30] D. P. Doane and. E. Seward, Measuring skewness: a forgotten statisti? JournalofStatistisEduation,vol.19,no.2,pp.1 18, [31] J. Jiang, P. Trundle, and J. Ren, Medial image analysis with artifiial neural networks, Computerized Medial Imaging and Graphis,vol.34,no.8,pp ,2010. [32] J. Ren, ANN vs. SVM: whih one performs better in lassifiation of MCCs in mammogram imaging, Knowledge-Based Systems,vol.26,pp ,2012. [33] J. Ren, D. Wang, and J. Jiang, Effetive reognition of MCCs in mammograms using an improved neural lassifier, Engineering Appliations of Artifiial Intelligene,vol.24,no.4,pp , [34]C.Zhao,X.i,J.Ren,andS.Marshall, Improvedsparse representation using adaptive spatial support for effetive target detetion in hyperspetral imagery, International Journal of Remote Sensing, vol. 34, no. 24, pp , [35] T. Kelman, J. Ren, and S. Marshall, Effetive lassifiation of Chinese tea samples in hyperspetral imaging, Artifiial Intelligene Researh,vol.2,no.4,2013.


A Heuristic Approach for Design and Calculation of Pressure Distribution over Naca 4 Digit Airfoil IOSR Journal of Engineering (IOSRJEN) ISSN (e): 2250-3021, ISSN (p): 2278-8719 PP 11-15 www.iosrjen.org A Heuristi Approah for Design and Calulation of Pressure Distribution over Naa 4 Digit Airfoil G.

More information

Armenian Theory of Special Relativity (Illustrated) Robert Nazaryan 1 and Haik Nazaryan 2

Armenian Theory of Special Relativity (Illustrated) Robert Nazaryan 1 and Haik Nazaryan 2 29606 Robert Nazaryan Haik Nazaryan/ Elixir Nulear & Radiation Phys. 78 (205) 29606-2967 Available online at www.elixirpublishers.om (Elixir International Journal) Nulear Radiation Physis Elixir Nulear

More information

OPPORTUNISTIC SENSING FOR OBJECT RECOGNITION A UNIFIED FORMULATION FOR DYNAMIC SENSOR SELECTION AND FEATURE EXTRACTION

OPPORTUNISTIC SENSING FOR OBJECT RECOGNITION A UNIFIED FORMULATION FOR DYNAMIC SENSOR SELECTION AND FEATURE EXTRACTION OPPORTUNISTIC SENSING FOR OBJECT RECOGNITION A UNIFIED FORMULATION FOR DYNAMIC SENSOR SELECTION AND FEATURE EXTRACTION Zhaowen Wang, Jianhao Yang, Nasser Nasrabadi and Thomas Huang Bekman Institute, University

More information

Modeling of discrete/continuous optimization problems: characterization and formulation of disjunctions and their relaxations

Modeling of discrete/continuous optimization problems: characterization and formulation of disjunctions and their relaxations Computers and Chemial Engineering (00) 4/448 www.elsevier.om/loate/omphemeng Modeling of disrete/ontinuous optimization problems: haraterization and formulation of disjuntions and their relaxations Aldo

More information

estimated pulse sequene we produe estimates of the delay time struture of ripple-red events. Formulation of the Problem We model the k th seismi trae

estimated pulse sequene we produe estimates of the delay time struture of ripple-red events. Formulation of the Problem We model the k th seismi trae Bayesian Deonvolution of Seismi Array Data Using the Gibbs Sampler Eri A. Suess Robert Shumway, University of California, Davis Rong Chen, Texas A&M University Contat: Eri A. Suess, Division of Statistis,

More information

Normative and descriptive approaches to multiattribute decision making

Normative and descriptive approaches to multiattribute decision making De. 009, Volume 8, No. (Serial No.78) China-USA Business Review, ISSN 57-54, USA Normative and desriptive approahes to multiattribute deision making Milan Terek (Department of Statistis, University of

More information

Neuro-Fuzzy Control of Chemical Reactor with Disturbances

Neuro-Fuzzy Control of Chemical Reactor with Disturbances Neuro-Fuzzy Control of Chemial Reator with Disturbanes LENK BLHOÁ, JÁN DORN Department of Information Engineering and Proess Control, Institute of Information Engineering, utomation and Mathematis Faulty

More information

UNCERTAINTY RELATIONS AS A CONSEQUENCE OF THE LORENTZ TRANSFORMATIONS. V. N. Matveev and O. V. Matvejev

UNCERTAINTY RELATIONS AS A CONSEQUENCE OF THE LORENTZ TRANSFORMATIONS. V. N. Matveev and O. V. Matvejev UNCERTAINTY RELATIONS AS A CONSEQUENCE OF THE LORENTZ TRANSFORMATIONS V. N. Matveev and O. V. Matvejev Joint-Stok Company Sinerta Savanoriu pr., 159, Vilnius, LT-315, Lithuania E-mail: matwad@mail.ru Abstrat

More information

Phase Diffuser at the Transmitter for Lasercom Link: Effect of Partially Coherent Beam on the Bit-Error Rate.

Phase Diffuser at the Transmitter for Lasercom Link: Effect of Partially Coherent Beam on the Bit-Error Rate. Phase Diffuser at the Transmitter for Laserom Link: Effet of Partially Coherent Beam on the Bit-Error Rate. O. Korotkova* a, L. C. Andrews** a, R. L. Phillips*** b a Dept. of Mathematis, Univ. of Central

More information

Singular Event Detection

Singular Event Detection Singular Event Detetion Rafael S. Garía Eletrial Engineering University of Puerto Rio at Mayagüez Rafael.Garia@ee.uprm.edu Faulty Mentor: S. Shankar Sastry Researh Supervisor: Jonathan Sprinkle Graduate

More information

Modeling of Threading Dislocation Density Reduction in Heteroepitaxial Layers

Modeling of Threading Dislocation Density Reduction in Heteroepitaxial Layers A. E. Romanov et al.: Threading Disloation Density Redution in Layers (II) 33 phys. stat. sol. (b) 99, 33 (997) Subjet lassifiation: 6.72.C; 68.55.Ln; S5.; S5.2; S7.; S7.2 Modeling of Threading Disloation

More information

An Integrated Architecture of Adaptive Neural Network Control for Dynamic Systems

An Integrated Architecture of Adaptive Neural Network Control for Dynamic Systems An Integrated Arhiteture of Adaptive Neural Network Control for Dynami Systems Robert L. Tokar 2 Brian D.MVey2 'Center for Nonlinear Studies, 2Applied Theoretial Physis Division Los Alamos National Laboratory,

More information

Aircraft CAS Design with Input Saturation Using Dynamic Model Inversion

Aircraft CAS Design with Input Saturation Using Dynamic Model Inversion International Journal of Control, Automation, and Systems Vol., No. 3, September 003 35 Airraft CAS Design with Input Saturation sing Dynami Model Inversion Sangsoo Lim and Byoung Soo Kim Abstrat: This

More information

Two Points Hybrid Block Method for Solving First Order Fuzzy Differential Equations

Two Points Hybrid Block Method for Solving First Order Fuzzy Differential Equations Journal of Soft Computing and Appliations 2016 No.1 (2016) 43-53 Available online at www.ispas.om/jsa Volume 2016, Issue 1, Year 2016 Artile ID jsa-00083, 11 Pages doi:10.5899/2016/jsa-00083 Researh Artile

More information

Optimal Control of Air Pollution

Optimal Control of Air Pollution Punjab University Journal of Mathematis (ISSN 1016-2526) Vol. 49(1)(2017) pp. 139-148 Optimal Control of Air Pollution Y. O. Aderinto and O. M. Bamigbola Mathematis Department, University of Ilorin, Ilorin,

More information