Research Article An Optimized Classification Algorithm by Neural Network Ensemble Based on PLS and OLS


Mathematical Problems in Engineering, Article ID , 8 pages

Research Article
An Optimized Classification Algorithm by Neural Network Ensemble Based on PLS and OLS

Weikuan Jia,1 Dean Zhao,1 Yuyang Tang,1 Chanli Hu,1 and Yuyan Zhao2
1 School of Electrical and Information Engineering, Jiangsu University, Zhenjiang , China
2 Changzhou College of Information Technology, Changzhou , China

Correspondence should be addressed to Dean Zhao; dazhao@ujs.edu.cn

Received 19 May 2014; Accepted 30 June 2014; Published 20 July 2014

Academic Editor: Shifei Ding

Copyright 2014 Weikuan Jia et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Using a neural network to classify data with high dimension and few samples is problematic: too many feature inputs complicate the structure design of the network, and too few samples cause incomplete training or overfitting, both of which clearly limit recognition precision. To make neural network classification practical in this setting, this paper proposes a neural network ensemble classification algorithm optimized with PLS and OLS. The new algorithm uses partial least squares (PLS) to reduce the feature dimension of small-sample data, obtaining low-dimensional data with stronger explanatory power, and uses ordinary least squares (OLS) theory to determine the weight of each neural network in the ensemble learning system. Feature dimension reduction simplifies the neural network's structure and improves operating efficiency; ensemble learning compensates for the information loss caused by dimension reduction and, at the same time, improves the recognition precision of the classification system. Finally, case analysis shows that the operating efficiency and recognition precision of the new algorithm are greatly improved, making it worthy of further promotion.

1. Introduction

A neural network (NN) is an intelligent computing technique that uses computer networks to imitate biological neural networks; it is powerful in dealing with nonlinear processing and large-scale computing. A classification system is essentially an input/output system whose transformational relations mainly involve three aspects, namely numerical fitting, logical reasoning, and fuzzy transformation, all of which can be well expressed by a neural network. Research on neural network classification has found widespread application in many fields [1]. However, classifying small samples with a neural network is a big challenge: too many feature inputs affect the structure design of the network and limit operating efficiency, while too few samples easily lead to incomplete training or overfitting, which limits the final classification precision. Thus, in this paper, we introduce feature dimension reduction and ensemble learning into the neural network algorithm.

The principle of feature dimension reduction [2, 3] is to extract the most useful features from many complex feature variables, eliminating the influence of repeated or correlated factors; that is, the feature dimension is reduced as much as possible on the premise that the problem can still be solved normally. Dimension reduction causes some loss of information, but it must not affect the problem solving. Reducing the inputs of the neural network makes its structure easier to design and simpler. Optimized neural network algorithms based on feature dimension reduction have achieved gratifying results in many fields [4-6]. Traditional feature extraction algorithms that depend on measures of variance (such as principal component analysis and factor analysis) find it hard to obtain ideal low-dimensional data for small samples, which have high dimension and few observations. In this paper, we adopt partial least squares (PLS) for the feature dimension reduction algorithm [7, 8], which can take

the correlation between the feature variables and the dependent variables into consideration and obtain relatively ideal low-dimensional data with strong interpretability; it has a unique advantage in dealing with small-sample data. Using PLS to optimize classification algorithms and neural networks has already made some progress [9-11].

Hansen and Salamon first proposed neural network ensemble learning [12]. They showed experimentally that the performance of a series of integrated neural networks is better than that of the best single neural network; the generalization ability and recognition precision of a multiple-classifier integration system improve obviously. To raise classification precision, multiple-classifier integration has therefore received much attention; ensemble learning reached a climax, a series of concepts and algorithms were proposed [13-15], and they have been applied in many fields. Neural networks have a history of more than 70 years; hundreds of models have been proposed, and different models have their own advantages for different problems, yet none is perfect, owing to the nature of neural networks. In this paper we choose three neural networks, BP, RBF, and Elman, as subclassifiers. The integrator design is the key point of the ensemble optimization algorithm, since it determines the final recognition precision of the integrated classification system. The key to ensemble learning is how to determine the weight of each subclassifier, which has attracted many scholars' research interest [16-18]. This study applies the ordinary least squares (OLS) principle to ensemble learning for the classification system, establishing a regression equation to determine the weights of the three subclassifiers.

Brief and to the point: in view of the nature of small samples and neural networks, this paper proposes an ensemble-optimized neural network classification algorithm based on PLS and OLS. The new algorithm improves operating efficiency through PLS dimension reduction and recognition precision through OLS ensemble learning, aiming to provide a high-efficiency and high-precision classification system. Finally, tests on two data sets, one from actual production and the other from the UCI standard data sets, suggest that the new algorithm is valid and worthy of further popularization and application.

2. PLS Dimension Reduction Algorithm

Partial least squares is a characteristic development of ordinary least squares (OLS). Its basic idea is that the feature variable matrix X is reduced while taking the correlation with the dependent variable matrix Y into account. Suppose there are n feature variables x_1, x_2, ..., x_n and p dependent variables y_1, y_2, ..., y_p. After preprocessing, matrix X is decomposed as

X = T P^T + E. (1)

Here T is the score matrix, P is the load matrix, and E is the residual error matrix. The product T P^T can be expressed as the sum of products of the score vectors t_i (the ith column of T) and the load vectors p_i (the ith column of P), so the formula above can be written as

X = Σ_i t_i p_i^T + E, i = 1, 2, ..., n. (2)

Similarly, matrix Y is decomposed into

Y = U Q^T + F. (3)

Here U is the score matrix, Q is the load matrix, and F is the residual error matrix. The product U Q^T can be expressed as the sum of products of the score vectors u_j (the jth column of U) and the load vectors q_j (the jth column of Q), so the formula above can be written as

Y = Σ_j u_j q_j^T + F, j = 1, 2, ..., r. (4)

PLS analysis separately extracts the scores t and u from the corresponding X and Y; they are linear combinations of the feature variables and the dependent variables, respectively. Both scores carry the maximum variation information of the feature variables and the dependent variables, and the covariance between them is the largest. The regression equation between the scores is

u_j = b_k t_i, (5)

where b_k is the regression coefficient. In matrix form,

Y = B X, (6)

with the coefficient matrix

B = W (P^T W)^{-1} Q^T, (7)

where W is the weight matrix. PLS iterates dimension by dimension, each side using the other's information: in each round, t_i and u_j are adjusted according to the residual information of X and Y and extracted again, until the absolute values of the residual matrix elements are approximately zero and the precision satisfies the requirements; then the algorithm stops. In the iterative process, t_i and u_j maximize the expressed variance of X and Y simultaneously. PLS regression does not need all the components to establish the regression equation; selecting only the front l components (0 <= l <= n) already gives a good regression equation. Generally, K-fold cross-validation is used to calculate the prediction residual sum of squares and determine the number of components extracted, thereby achieving the purpose of dimension reduction.

3. Neural Network Ensemble Optimization

3.1. Subclassifier (Individual Neural Network). We select BP, RBF, and Elman regression networks, three different types of neural networks,

as subclassifiers in this paper. These three subclassifiers, to a certain extent, play complementary roles. The topology structures of the three networks are shown in Figure 1.

Figure 1: The topology structures of the BP, RBF, and Elman neural networks.

The BP neural network is denoted subclassifier I; it adopts error back-propagation learning. A three-layer BP network can imitate any nonlinear function with arbitrary precision and has good adaptive robustness and generalization ability. With the expanding scope of application, the defects of the BP network have emerged gradually: a fixed learning rate that causes long training times, easy trapping in local minima, the overfitting illusion, and uncertainty about the hidden layer and its number of neurons. All of these are inherent defects of the BP network.

The RBF neural network is denoted subclassifier II; it is a feed-forward model that can adjust the hidden layer adaptively during training according to the specific problem. The distribution of hidden-layer units is determined by the capacity, categories, and distribution of the training samples; the hidden layer, and even its centers and widths, can be determined dynamically, and convergence is fast. Its biggest advantage is that a linear learning algorithm completes work otherwise done by a nonlinear learning algorithm while maintaining the high precision of the nonlinear algorithm; it has the best-approximation and global-optimum properties. It attains a global approximation of the training data as a sum of local approximations, so the data fitting can be finished with a sum of low-order local approximations. During training, however, overfitting and weak learning ability easily appear; these deficiencies lead to larger prediction errors and, in turn, affect the recognition precision of the RBF network.

The Elman regression neural network is denoted subclassifier III; it is a feedback model that adds a context layer to the hidden layer of a BP network. The context layer acts as a delay operator that delays and stores the output of the hidden layer, providing memory; the system thus adapts to time-varying dynamic characteristics and has strong global stability. Since Elman optimization is always based on the BP network, it inevitably inherits the BP network's inherent defects, which leads to unsatisfactory operating efficiency.

3.2. Construction of the Neural Network Ensemble Algorithm. For complex pattern classification problems, a single classifier usually has difficulty achieving ideal recognition precision and has deficiencies of its own; a multiple-classifier integrated system is clearly more outstanding in generalization ability and recognition precision. Each classifier is regarded as a subclassifier of the integrated system. The main idea of ensemble learning is to use many subclassifiers to solve the same problem and to integrate the outputs of each subclassifier, finally obtaining the result of the integrated classification system; the purpose is to improve the generalization and recognition ability of the learning algorithm. A neural network ensemble is a pattern classification system that integrates a series of single neural networks; the performance of the integrated system is better than that of any single network, and its main purpose is to improve the recognition precision of the classification system. Obviously, determining the weight of each subclassifier is the key of the ensemble algorithm: the main task is to seek the weights based on the characteristics of the subclassifiers and to reduce the recognition error of the integrated classification system.

There are three subclassifiers for ensemble learning; to deal with the small sample, dimension reduction is adopted before ensemble learning to optimize the original data, and then we establish the neural network ensemble learning model. Figure 2 presents the established model. Its operating principle is as follows: reduce the dimension of the small sample by the PLS algorithm, take the low-dimensional data as the input of each subclassifier, and regard the output of each subclassifier as the input of the integrator; the integrator learns weights for the subclassifier outputs, and the system finally obtains a relatively optimal classification result.

Figure 2 presents the basic flow chart of the neural network ensemble algorithm; this study applies three subclassifiers as an example. We use the three subclassifiers to recognize the pending sample data; the recognition results of subclassifiers I, II, and III are denoted A, B, and C, respectively. The three subclassifiers are independent of each other, and so are their results. The weights of the three subclassifiers are denoted w_1, w_2, and w_3; the output of the integrated system is denoted Y, and the value of Y is obtained by the sum

of the weighted A, B, and C.

Figure 2: Optimized classification algorithm assembled with neural networks.

The ensemble learning model is established as

Y = A w_1 + B w_2 + C w_3. (8)

Model (8) clearly reflects the following. When w_1 = 1, w_2 = 0, w_3 = 0, only subclassifier I works, and the system output is determined by subclassifier I. When w_1 = 0, w_2 = 1, w_3 = 0, only subclassifier II works, and the system output is determined by subclassifier II. When w_1 = 0, w_2 = 0, w_3 = 1, only subclassifier III works, and the system output is determined by subclassifier III. Optimizing the integrated system means finding the optimal weights of each subclassifier so that the recognition results of model (8) achieve the optimal state.

3.3. Optimized Weights. Studying neural network ensemble learning, that is, optimizing model (8), comes down to determining the weights w_1, w_2, w_3 of the three individual neural networks. We use the idea of multiple regression to establish a ternary regression equation for the outputs of the three subclassifiers and determine the optimal estimates of w_1, w_2, w_3 by the OLS algorithm.

Suppose the recognition results of the three subclassifiers for the ith training sample are a_i, b_i, c_i, respectively; the output of the integrated system is y_i, and the actual value of the sample is y_i^o, where i = 1, 2, ..., n. With the actual values Y^o of the training samples, the ternary regression equation is established as

Y = θ + αA + βB + γC + ε, ε ~ N(0, σ²). (9)

Here α, β, γ are the partial regression coefficients. Estimating the unknown parameters α, β, γ, θ by maximum likelihood requires minimizing the residual error Q:

min Q = Σ_{i=1}^n (y_i^o − y_i)², (10)

that is,

Q = Σ_{i=1}^n (y_i^o − θ − αa_i − βb_i − γc_i)². (11)
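The weighted combination in model (8) and the residual objective Q can be written directly in code. A minimal numpy sketch follows; the subclassifier outputs, actual values, and weights below are illustrative values, not data from the paper:

```python
import numpy as np

# Hypothetical outputs of the three subclassifiers on three pending samples.
A = np.array([1.0, 2.0, 3.0])   # subclassifier I  (BP)
B = np.array([1.1, 1.9, 3.2])   # subclassifier II (RBF)
C = np.array([0.9, 2.1, 2.8])   # subclassifier III (Elman)
y_actual = np.array([1.0, 2.0, 3.0])

w1, w2, w3 = 0.2, 0.3, 0.5      # illustrative weights summing to 1

# Model (8): the integrated output is the weighted sum of the three outputs.
Y = A * w1 + B * w2 + C * w3

# Objective (10): the residual sum of squares the optimal weights minimize.
Q = np.sum((y_actual - Y) ** 2)
```

Any fixed choice of weights gives some value of Q; the OLS estimation below picks the coefficients that make it minimal.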
To make Q minimal, α, β, γ, θ should satisfy the following equation set (setting the partial derivatives of Q to zero):

∂Q/∂α: Σ_{i=1}^n (y_i^o − θ − αa_i − βb_i − γc_i) a_i = 0,
∂Q/∂β: Σ_{i=1}^n (y_i^o − θ − αa_i − βb_i − γc_i) b_i = 0,
∂Q/∂γ: Σ_{i=1}^n (y_i^o − θ − αa_i − βb_i − γc_i) c_i = 0,
∂Q/∂θ: Σ_{i=1}^n (y_i^o − θ − αa_i − βb_i − γc_i) = 0. (12)

Set

X = [ 1 a_1 b_1 c_1
      1 a_2 b_2 c_2
      ...
      1 a_n b_n c_n ]_{n×4},    η = [ θ, α, β, γ ]^T_{4×1}. (13)
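The stationarity conditions (12) are the normal equations of ordinary least squares, so the coefficient vector η can be obtained numerically from the design matrix (13). A minimal sketch with made-up training data (the arrays below are hypothetical, not from the paper):

```python
import numpy as np

# Made-up recognition results a_i, b_i, c_i of the three subclassifiers
# and actual values y_i for five training samples.
a = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
b = np.array([1.2, 1.8, 3.1, 3.9, 5.2])
c = np.array([0.8, 2.2, 2.9, 4.1, 5.1])
y = np.array([1.0, 2.0, 3.0, 4.0, 5.0])

# Design matrix (13): a column of ones for the intercept theta,
# followed by the three subclassifier outputs.
X = np.column_stack([np.ones_like(a), a, b, c])

# Solving the normal equations X^T X eta = X^T y; lstsq is the numerically
# stable route to the least squares estimate eta = (theta, alpha, beta, gamma).
eta, *_ = np.linalg.lstsq(X, y, rcond=None)
theta, alpha, beta, gamma = eta
```

The coefficients α, β, γ are then normalized into the ensemble weights w_1, w_2, w_3 as in formula (16).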

The equation set (12) translates into the matrix form

X^T X η = X^T Y^o. (14)

The least squares estimate of the coefficient vector η is

η = (X^T X)^{-1} X^T Y^o. (15)

From η we obtain the regression coefficients α, β, γ, and the weights w_1, w_2, w_3 can be further obtained:

w_1 = α/(α+β+γ),  w_2 = β/(α+β+γ),  w_3 = γ/(α+β+γ). (16)

Substituting the weights (16) into formula (8) makes the output value Y closest to the actual value Y^o, so the residual error Q reaches its minimum:

min Q = Σ_{i=1}^n (y_i^o − y_i)² = Σ_{i=1}^n (a_i w_1 + b_i w_2 + c_i w_3 − y_i^o)². (17)

Formula (16) gives the optimal weights of the three subclassifiers after training; model (8) achieves the optimum, and the ensemble optimization algorithm is thus established.

3.4. Elementary Parameters of the Ensemble Optimized Algorithm. To sum up, using PLS for small-sample dimension reduction realizes a preliminary optimization of the classification system, and the weights of the subclassifiers are then optimized by OLS. For high-dimensional, small-sample problems, we may as well establish the ensemble-optimized neural network classification algorithm based on PLS and OLS. Some parameter settings of the new algorithm are as follows.

(1) Data Preprocessing. Data preprocessing eliminates the incommensurability caused by different index distributions and numerical differences, ensuring the quality of the data at the source. A standardizing transformation yields data that follow the N(0, 1) distribution:

x'_ij = (x_ij − x̄_j) / sqrt( (1/n) Σ_{i=1}^n (x_ij − x̄_j)² ). (18)

(2) Parameters of PLS. PLS feature dimension reduction uses the K-fold cross-validation (K-CV) method to calculate the prediction residual sum of squares; this effectively avoids over-learning or under-fitting and gives a more persuasive result.

(3) Parameters of the BP Neural Network. The number of hidden-layer neurons is determined by Gao's empirical formula [19]:

s = (0.43mn + 0.12m² + 0.35)^{1/2}, (19)

where s, n, and m denote the numbers of neurons in the hidden layer, input layer, and output layer, respectively. Training uses the trainlm (LM) algorithm, a combination of gradient descent and the quasi-Newton method: gradient descent converges rapidly at the beginning of training, and the quasi-Newton step quickly produces an ideal search direction near the extremum. Connection weights and thresholds are learned with the learngdm algorithm.

(4) Setting the Centers of the RBF Neural Network. The centers of the basis functions are chosen empirically: as long as the distribution of the training samples can represent the given problem, s uniformly distributed centers with spacing d can be chosen, and the width of the Gaussian basis function is

σ = d / sqrt(2s). (20)

The basis functions can also be selected by K-means clustering, using the cluster centers as the centers of the basis functions.

(5) Parameters of the Elman Neural Network. The Elman network is an optimization of the BP network; the number of context-layer units is the same as that of the hidden layer, and the parameter settings are the same as for the BP network.

3.5. Steps of the Ensemble Optimized Algorithm. The basic steps of the neural network ensemble algorithm based on PLS and OLS are as follows.

Step 1. Normalize the original data according to formula (18); obtain the feature variable matrix X and the dependent variable matrix Y.

Step 2. Extract the first pair of components t_1 and u_1 from X and Y, respectively, so that their correlation is maximal, and establish the regression equations of X on t_1 and Y on u_1.

Step 3. Use the residual error matrices E and F in place of X and Y and repeat Step 2 until the absolute values of the residual matrix elements are close to zero.

Step 4. With the K-CV method, determine the number of components to extract by the principles of cross-validation and the residual sum of squares.

Step 5. From the perspective of information feature compression, obtain the compressed matrices X and Y as new samples.
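Steps 1 through 3 above can be sketched in numpy. This is a minimal illustration under simplifying assumptions (random stand-in data, and the first component taken from the SVD of X^T Y, which is equivalent to one converged NIPALS pass), not the authors' implementation:

```python
import numpy as np

def standardize(X):
    """Step 1 / formula (18): center each column and scale to unit variance."""
    mean = X.mean(axis=0)
    std = np.sqrt(((X - mean) ** 2).mean(axis=0))
    return (X - mean) / std

def first_pls_scores(X, Y):
    """Step 2: extract the first pair of PLS scores t1, u1.

    The leading singular vectors of X^T Y give the weight vectors that
    maximize the covariance between the scores t = Xw and u = Yc.
    """
    U, s, Vt = np.linalg.svd(X.T @ Y)
    w, c = U[:, 0], Vt[0, :]
    return X @ w, Y @ c

# Stand-in data: 6 samples, 4 feature variables, 2 dependent variables.
rng = np.random.default_rng(0)
X = standardize(rng.normal(size=(6, 4)))
Y = standardize(rng.normal(size=(6, 2)))
t1, u1 = first_pls_scores(X, Y)

# Step 3 deflates and repeats: X <- X - t1 p1^T with p1 = X^T t1 / (t1 @ t1),
# and likewise for Y, until the residual matrices are close to zero.
```

The covariance-maximizing property means t1 @ u1 equals the leading singular value of X^T Y, which is a convenient sanity check on any PLS implementation.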

Step 6. Divide the new samples into training samples and simulation samples according to the needs of the problem.

Step 7. Set up the three subclassifiers and train them for classification; the outputs of the three subclassifiers are A, B, and C.

Step 8. From the outputs of the three subclassifiers, establish the ternary regression model (9).

Step 9. Transform (9) into the equation set (12) by maximum likelihood estimation.

Step 10. Using formula (13) and OLS theory, solve the equation set (12).

Step 11. From the regression coefficients and formula (16), obtain the weights of the three subclassifiers.

Step 12. Obtain the optimal solution of the integrated model (8) and terminate the algorithm.

4. Case Analysis

We use the three subclassifiers, the PLS-Elman neural network, and the neural network ensemble algorithm to test the data sets and contrast the results. The performance of each algorithm (model) is evaluated from three aspects: convergence steps, sum of squared errors (SSE), and recognition accuracy rate. For convergence steps, the experiment is run 10 times; we record the best one and list it in the tables. The SSE, the sum of squared differences between predicted and actual values, is usually used to estimate the closeness between recognized and actual values; under the same simulation accuracy, the smaller the SSE, the higher the precision of the algorithm. The accuracy rate, the ratio of correctly recognized samples to simulation samples, reflects the recognition accuracy of each algorithm. To better illustrate the validity of the new algorithm, we use two data sets for testing: one is agricultural pest forecasting data from actual production [20], and the other is the ionosphere radar data subset from the UCI machine learning standard data sets [21].

Table 1: Comparison of each classifier's performance for Test 1 (columns: network model, weights, training steps, SSE, accuracy %; rows: subclassifier I (BP NN), subclassifier II (RBF NN), subclassifier III (Elman NN), PLS-Elman, assembling system).

4.1. Test 1. These data come from agricultural production: meteorological factors are used to predict the occurrence degree of wheat midge. The data set includes 60 samples from 1941 to 2000 and regards 14 feature variables (meteorological factors) as the input of the neural network; the single output presents the occurrence degree of wheat midge.

4.1.1. Algorithm Performance. Select the last 30 samples for testing, with 15 training samples and 15 simulation samples, in accordance with the characteristics of small samples. By PLS dimension reduction the data yield 6 extracted features; that is, the dimension is reduced from 14 to 6. The simulation test results are listed in Table 1.

4.1.2. Recognition Precision of the Algorithm. To better illustrate the classification ability of the new algorithm, we continue with the following test: divide the selected samples into 25 training samples and 5 simulation samples. The simulation results are compared with the actual values in Table 2.

4.2. Test 2. We use another UCI data set to test the new algorithm. The radar data include 351 samples with 34 characteristics each, used to predict the quality of the radar returns; that is, 34 inputs and 1 output.

4.2.1. Performance of Each Algorithm. We select the front 40 samples, with 20 training samples and 20 simulation samples, in accordance with the characteristics of small samples. By PLS dimension reduction the data yield 19 extracted features; that is, the dimension is reduced from 34 to 19. The simulation test results are listed in Table 3.

4.2.2. Recognition Precision of Each Algorithm. To better illustrate the classification ability of the new algorithm, we continue with the following test and divide the selected samples into 35 training samples and 5 simulation samples. The simulation results are compared with the actual values in Table 4.

4.3. Results. According to the results of Tables 1 to 4, among the three subclassifiers, the RBF neural network trains fastest, but its recognition accuracy is the worst; the Elman neural network's recognition accuracy is the best; BP

neural network training is the slowest; in training speed and recognition accuracy, Elman is slightly better than BP. Comparing the PLS-Elman algorithm with the traditional Elman neural network, it trains faster and is slightly better in recognition precision, showing that PLS optimization improves the operating efficiency without harming the recognition precision of the classifier. The classification ability of the ensemble optimization is the best, as the data of Tables 1 and 3 prove. Compared with the other traditional neural networks, the ensemble learning algorithm has the highest recognition accuracy rate and the minimum error, suggesting that the new algorithm has the highest recognition precision. The experimental results reflect that the recognition precision of the ensemble algorithm is clearly higher than that of any subclassifier and, to a certain degree, compensates for the information loss caused by data dimension reduction. The recognition results meet the ideal requirements, which means the new algorithm is effective.

Table 2: Comparison of each classifier's accuracy for Test 1 (columns: network model, weights, training steps, SSE, actual value; rows: subclassifier I (BP NN), subclassifier II (RBF NN), subclassifier III (Elman NN), PLS-Elman, assembling system).

Table 3: Comparison of each classifier's performance for Test 2 (columns: network model, weights, training steps, SSE, accuracy %; rows as in Table 1).

Table 4: Comparison of each classifier's accuracy for Test 2 (columns: network model, weights, training steps, SSE, actual value; rows as in Table 1).

5. Conclusion and Discussion

The two groups of experimental results above show that the classification algorithm optimized by PLS improves the training speed of the subclassifiers without reducing recognition accuracy. The new algorithm reduces the inputs of the neural network by PLS feature dimension reduction, which is convenient for designing the network structure and improves the operating efficiency. The recognition accuracy rate and test error of the integrated system are greatly improved, which shows that the classification precision of the ensemble algorithm is higher than that of any subclassifier. The purpose of the neural network ensemble is to improve the recognition precision of pattern classification, and the point of ensemble learning is to determine the weights of each

subclassifier effectively. In this paper, the OLS principle is used to design the integrator, establish the multiple regression model, and further determine the weight of each subclassifier.

In view of the small-sample classification problem, this paper proposes an ensemble-optimized neural network classification algorithm based on PLS and OLS. PLS has a unique advantage in reducing the dimension of small-sample data, obtaining ideal low-dimensional data; the OLS algorithm performs the neural network ensemble learning to determine the weight of each subclassifier. The new algorithm has higher operating efficiency and classification precision, which shows that it is worthy of further popularization and application.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

This work is supported by the National Natural Science Foundation of China (Nos. , , and ); the Priority Academic Program Development of Jiangsu Higher Education Institutions; Major Projects in the National Science & Technology Pillar Program during the Twelfth Five-Year Plan Period (No. 2011BAD20B06); Major Projects in the Jiangsu Province Science & Technology Pillar Program (Industrial) (No. BE ); the Specialized Research Fund for the Doctoral Program of Higher Education of China (No. ); and the Ordinary University Graduate Student Research Innovation Projects of Jiangsu Province (No. KYLX ).

References

[1] S. F. Ding, W. K. Jia, C. Y. Su, L. W. Zhang, and Z. Z. Shi, "Neural network research progress and applications in forecast," in Advances in Neural Networks - ISNN 2008, vol. 5264 of Lecture Notes in Computer Science, pp. , Springer, New York, NY, USA, 2008.
[2] S. F. Ding, H. Zhu, W. K. Jia, and C. Y. Su, "A survey on feature extraction for pattern recognition," Artificial Intelligence Review, vol. 37, no. 3, pp. , 2012.
[3] F. X. Song, X. M. Gao, S. H. Liu, and J. Y. Yang, "Dimensionality reduction in statistical pattern recognition and low loss dimensionality reduction," Chinese Journal of Computers, vol. 28, no. 11, pp. , 2005.
[4] B. Garlik and M. Křivan, "Identification of type daily diagrams of electric consumption based on cluster analysis of multidimensional data by neural network," Neural Network World, vol. 23, no. 3, pp. , 2013.
[5] K. R. Janes, S. X. Yang, and R. R. Hacker, "Pork farm odour modelling using multiple-component multiple-factor analysis and neural networks," Applied Soft Computing Journal, vol. 6, no. 1, pp. 53-61, 2005.
[6] J. Zhou, A. H. Guo, B. Celler, and S. Su, "Fault detection and identification spanning multiple processes by integrating PCA with neural network," Applied Soft Computing, vol. 14, pp. 4-11, 2014.
[7] H. W. Wang, Partial Least Squares Regression Method and Application, National Defense Industry Press, Beijing, China, 2000.
[8] G. Z. Li, R. W. Zhao, H. N. Qu, and M. Y. You, "Model selection for partial least squares based dimension reduction," Pattern Recognition Letters, vol. 33, no. 5, pp. , 2012.
[9] J. Marques and D. Erik, "Texture analysis by a PLS based method for combined feature extraction and selection," in Machine Learning in Medical Imaging, vol. 7009 of Lecture Notes in Computer Science, pp. , 2011.
[10] S. F. Ding, W. K. Jia, X. Z. Xu, and C. Su, "Elman neural network algorithm based on PLS," Acta Electronica Sinica, vol. 38, no. 2, pp. 71-75, 2010.
[11] X. G. Tuo, M. Z. Liu, and L. Wang, "A PLS-based weighted artificial neural network approach for alpha radioactivity prediction inside contaminated pipes," Mathematical Problems in Engineering, vol. 2014, Article ID 517605, 5 pages, 2014.
[12] L. K. Hansen and P. Salamon, "Neural network ensembles," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 12, no. 10, pp. , 1990.
[13] D. Opitz and R. Maclin, "Popular ensemble methods: an empirical study," Journal of Artificial Intelligence Research, vol. 11, pp. , 1999.
[14] Z. H. Zhou and S. F. Chen, "Neural network ensemble," Chinese Journal of Computers, vol. 25, no. 1, pp. 1-8, 2002.
[15] N. Kourentzes, D. K. Barrow, and S. F. Crone, "Neural network ensemble operators for time series forecasting," Expert Systems with Applications, vol. 41, no. 9, pp. , 2014.
[16] C. X. Zhang and J. S. Zhang, "A survey of selective ensemble learning algorithms," Chinese Journal of Computers, vol. 34, no. 8, pp. , 2011.
[17] Z. Zhou, J. Wu, and W. Tang, "Ensembling neural networks: many could be better than all," Artificial Intelligence, vol. 137, no. 1-2, pp. , 2002.
[18] M. Alhamdoosh and D. Wang, "Fast decorrelated neural network ensembles with random weights," Information Sciences, vol. 264, pp. , 2014.
[19] D. Q. Gao, "On structures of supervised linear basis function feedforward three-layered neural networks," Chinese Journal of Computers, vol. 21, no. 1, pp. 80-86, 1998.
[20] Y. M. Zhang, The Application of Artificial Neural Network in the Forecasting of Wheat Midge, Northwest A&F University, Shaanxi Province, China, 2003.
[21] UCI Machine Learning Repository, Ionosphere data set.



More information

Electricity consumption forecasting method based on MPSO-BP neural network model Youshan Zhang 1, 2,a, Liangdong Guo2, b,qi Li 3, c and Junhui Li2, d

Electricity consumption forecasting method based on MPSO-BP neural network model Youshan Zhang 1, 2,a, Liangdong Guo2, b,qi Li 3, c and Junhui Li2, d 4th Iteratioal Coferece o Electrical & Electroics Egieerig ad Computer Sciece (ICEEECS 2016) Electricity cosumptio forecastig method based o eural etwork model Yousha Zhag 1, 2,a, Liagdog Guo2, b,qi Li

More information

Open Access Research of Strip Flatness Control Based on Legendre Polynomial Decomposition

Open Access Research of Strip Flatness Control Based on Legendre Polynomial Decomposition Sed Orders for Reprits to reprits@bethamsciece.ae The Ope Automatio ad Cotrol Systems Joural, 215, 7, 23-211 23 Ope Access Research of Strip Flatess Cotrol Based o Legedre Polyomial Decompositio Ya Liu,

More information

Evapotranspiration Estimation Using Support Vector Machines and Hargreaves-Samani Equation for St. Johns, FL, USA

Evapotranspiration Estimation Using Support Vector Machines and Hargreaves-Samani Equation for St. Johns, FL, USA Evirometal Egieerig 0th Iteratioal Coferece eissn 2029-7092 / eisbn 978-609-476-044-0 Vilius Gedimias Techical Uiversity Lithuaia, 27 28 April 207 Article ID: eviro.207.094 http://eviro.vgtu.lt DOI: https://doi.org/0.3846/eviro.207.094

More information

REGRESSION (Physics 1210 Notes, Partial Modified Appendix A)

REGRESSION (Physics 1210 Notes, Partial Modified Appendix A) REGRESSION (Physics 0 Notes, Partial Modified Appedix A) HOW TO PERFORM A LINEAR REGRESSION Cosider the followig data poits ad their graph (Table I ad Figure ): X Y 0 3 5 3 7 4 9 5 Table : Example Data

More information

Research Article Approximate Riesz Algebra-Valued Derivations

Research Article Approximate Riesz Algebra-Valued Derivations Abstract ad Applied Aalysis Volume 2012, Article ID 240258, 5 pages doi:10.1155/2012/240258 Research Article Approximate Riesz Algebra-Valued Derivatios Faruk Polat Departmet of Mathematics, Faculty of

More information

Information-based Feature Selection

Information-based Feature Selection Iformatio-based Feature Selectio Farza Faria, Abbas Kazeroui, Afshi Babveyh Email: {faria,abbask,afshib}@staford.edu 1 Itroductio Feature selectio is a topic of great iterest i applicatios dealig with

More information

Interval Intuitionistic Trapezoidal Fuzzy Prioritized Aggregating Operators and their Application to Multiple Attribute Decision Making

Interval Intuitionistic Trapezoidal Fuzzy Prioritized Aggregating Operators and their Application to Multiple Attribute Decision Making Iterval Ituitioistic Trapezoidal Fuzzy Prioritized Aggregatig Operators ad their Applicatio to Multiple Attribute Decisio Makig Xia-Pig Jiag Chogqig Uiversity of Arts ad Scieces Chia cqmaagemet@163.com

More information

Week 1, Lecture 2. Neural Network Basics. Announcements: HW 1 Due on 10/8 Data sets for HW 1 are online Project selection 10/11. Suggested reading :

Week 1, Lecture 2. Neural Network Basics. Announcements: HW 1 Due on 10/8 Data sets for HW 1 are online Project selection 10/11. Suggested reading : ME 537: Learig-Based Cotrol Week 1, Lecture 2 Neural Network Basics Aoucemets: HW 1 Due o 10/8 Data sets for HW 1 are olie Proect selectio 10/11 Suggested readig : NN survey paper (Zhag Chap 1, 2 ad Sectios

More information

Research Article Two Expanding Integrable Models of the Geng-Cao Hierarchy

Research Article Two Expanding Integrable Models of the Geng-Cao Hierarchy Abstract ad Applied Aalysis Volume 214, Article ID 86935, 7 pages http://d.doi.org/1.1155/214/86935 Research Article Two Epadig Itegrable Models of the Geg-Cao Hierarchy Xiurog Guo, 1 Yufeg Zhag, 2 ad

More information

10-701/ Machine Learning Mid-term Exam Solution

10-701/ Machine Learning Mid-term Exam Solution 0-70/5-78 Machie Learig Mid-term Exam Solutio Your Name: Your Adrew ID: True or False (Give oe setece explaatio) (20%). (F) For a cotiuous radom variable x ad its probability distributio fuctio p(x), it

More information

Reliability model of organization management chain of South-to-North Water Diversion Project during construction period

Reliability model of organization management chain of South-to-North Water Diversion Project during construction period Water Sciece ad Egieerig, Dec. 2008, Vol. 1, No. 4, 107-113 ISSN 1674-2370, http://kkb.hhu.edu.c, e-mail: wse@hhu.edu.c Reliability model of orgaizatio maagemet chai of South-to-North Water Diversio Project

More information

Similarity Solutions to Unsteady Pseudoplastic. Flow Near a Moving Wall

Similarity Solutions to Unsteady Pseudoplastic. Flow Near a Moving Wall Iteratioal Mathematical Forum, Vol. 9, 04, o. 3, 465-475 HIKARI Ltd, www.m-hikari.com http://dx.doi.org/0.988/imf.04.48 Similarity Solutios to Usteady Pseudoplastic Flow Near a Movig Wall W. Robi Egieerig

More information

ME 539, Fall 2008: Learning-Based Control

ME 539, Fall 2008: Learning-Based Control ME 539, Fall 2008: Learig-Based Cotrol Neural Network Basics 10/1/2008 & 10/6/2008 Uiversity Orego State Neural Network Basics Questios??? Aoucemet: Homework 1 has bee posted Due Friday 10/10/08 at oo

More information

Comparison Study of Series Approximation. and Convergence between Chebyshev. and Legendre Series

Comparison Study of Series Approximation. and Convergence between Chebyshev. and Legendre Series Applied Mathematical Scieces, Vol. 7, 03, o. 6, 3-337 HIKARI Ltd, www.m-hikari.com http://d.doi.org/0.988/ams.03.3430 Compariso Study of Series Approimatio ad Covergece betwee Chebyshev ad Legedre Series

More information

Research on Dependable level in Network Computing System Yongxia Li 1, a, Guangxia Xu 2,b and Shuangyan Liu 3,c

Research on Dependable level in Network Computing System Yongxia Li 1, a, Guangxia Xu 2,b and Shuangyan Liu 3,c Applied Mechaics ad Materials Olie: 04-0-06 ISSN: 66-748, Vols. 53-57, pp 05-08 doi:0.408/www.scietific.et/amm.53-57.05 04 Tras Tech Publicatios, Switzerlad Research o Depedable level i Network Computig

More information

Research Article Nonexistence of Homoclinic Solutions for a Class of Discrete Hamiltonian Systems

Research Article Nonexistence of Homoclinic Solutions for a Class of Discrete Hamiltonian Systems Abstract ad Applied Aalysis Volume 203, Article ID 39868, 6 pages http://dx.doi.org/0.55/203/39868 Research Article Noexistece of Homocliic Solutios for a Class of Discrete Hamiltoia Systems Xiaopig Wag

More information

Properties and Hypothesis Testing

Properties and Hypothesis Testing Chapter 3 Properties ad Hypothesis Testig 3.1 Types of data The regressio techiques developed i previous chapters ca be applied to three differet kids of data. 1. Cross-sectioal data. 2. Time series data.

More information

Outline. Linear regression. Regularization functions. Polynomial curve fitting. Stochastic gradient descent for regression. MLE for regression

Outline. Linear regression. Regularization functions. Polynomial curve fitting. Stochastic gradient descent for regression. MLE for regression REGRESSION 1 Outlie Liear regressio Regularizatio fuctios Polyomial curve fittig Stochastic gradiet descet for regressio MLE for regressio Step-wise forward regressio Regressio methods Statistical techiques

More information

RAINFALL PREDICTION BY WAVELET DECOMPOSITION

RAINFALL PREDICTION BY WAVELET DECOMPOSITION RAIFALL PREDICTIO BY WAVELET DECOMPOSITIO A. W. JAYAWARDEA Departmet of Civil Egieerig, The Uiversit of Hog Kog, Hog Kog, Chia P. C. XU Academ of Mathematics ad Sstem Scieces, Chiese Academ of Scieces,

More information

Intermittent demand forecasting by using Neural Network with simulated data

Intermittent demand forecasting by using Neural Network with simulated data Proceedigs of the 011 Iteratioal Coferece o Idustrial Egieerig ad Operatios Maagemet Kuala Lumpur, Malaysia, Jauary 4, 011 Itermittet demad forecastig by usig Neural Network with simulated data Nguye Khoa

More information

Mathematical Modeling of Optimum 3 Step Stress Accelerated Life Testing for Generalized Pareto Distribution

Mathematical Modeling of Optimum 3 Step Stress Accelerated Life Testing for Generalized Pareto Distribution America Joural of Theoretical ad Applied Statistics 05; 4(: 6-69 Published olie May 8, 05 (http://www.sciecepublishiggroup.com/j/ajtas doi: 0.648/j.ajtas.05040. ISSN: 6-8999 (Prit; ISSN: 6-9006 (Olie Mathematical

More information

Session 5. (1) Principal component analysis and Karhunen-Loève transformation

Session 5. (1) Principal component analysis and Karhunen-Loève transformation 200 Autum semester Patter Iformatio Processig Topic 2 Image compressio by orthogoal trasformatio Sessio 5 () Pricipal compoet aalysis ad Karhue-Loève trasformatio Topic 2 of this course explais the image

More information

The Sampling Distribution of the Maximum. Likelihood Estimators for the Parameters of. Beta-Binomial Distribution

The Sampling Distribution of the Maximum. Likelihood Estimators for the Parameters of. Beta-Binomial Distribution Iteratioal Mathematical Forum, Vol. 8, 2013, o. 26, 1263-1277 HIKARI Ltd, www.m-hikari.com http://d.doi.org/10.12988/imf.2013.3475 The Samplig Distributio of the Maimum Likelihood Estimators for the Parameters

More information

Expectation-Maximization Algorithm.

Expectation-Maximization Algorithm. Expectatio-Maximizatio Algorithm. Petr Pošík Czech Techical Uiversity i Prague Faculty of Electrical Egieerig Dept. of Cyberetics MLE 2 Likelihood.........................................................................................................

More information

ECE 8527: Introduction to Machine Learning and Pattern Recognition Midterm # 1. Vaishali Amin Fall, 2015

ECE 8527: Introduction to Machine Learning and Pattern Recognition Midterm # 1. Vaishali Amin Fall, 2015 ECE 8527: Itroductio to Machie Learig ad Patter Recogitio Midterm # 1 Vaishali Ami Fall, 2015 tue39624@temple.edu Problem No. 1: Cosider a two-class discrete distributio problem: ω 1 :{[0,0], [2,0], [2,2],

More information

Direction of Arrival Estimation Method in Underdetermined Condition Zhang Youzhi a, Li Weibo b, Wang Hanli c

Direction of Arrival Estimation Method in Underdetermined Condition Zhang Youzhi a, Li Weibo b, Wang Hanli c 4th Iteratioal Coferece o Advaced Materials ad Iformatio Techology Processig (AMITP 06) Directio of Arrival Estimatio Method i Uderdetermied Coditio Zhag Youzhi a, Li eibo b, ag Hali c Naval Aeroautical

More information

Research Article Some E-J Generalized Hausdorff Matrices Not of Type M

Research Article Some E-J Generalized Hausdorff Matrices Not of Type M Abstract ad Applied Aalysis Volume 2011, Article ID 527360, 5 pages doi:10.1155/2011/527360 Research Article Some E-J Geeralized Hausdorff Matrices Not of Type M T. Selmaogullari, 1 E. Savaş, 2 ad B. E.

More information

Clustering. CM226: Machine Learning for Bioinformatics. Fall Sriram Sankararaman Acknowledgments: Fei Sha, Ameet Talwalkar.

Clustering. CM226: Machine Learning for Bioinformatics. Fall Sriram Sankararaman Acknowledgments: Fei Sha, Ameet Talwalkar. Clusterig CM226: Machie Learig for Bioiformatics. Fall 216 Sriram Sakararama Ackowledgmets: Fei Sha, Ameet Talwalkar Clusterig 1 / 42 Admiistratio HW 1 due o Moday. Email/post o CCLE if you have questios.

More information

Support vector machine revisited

Support vector machine revisited 6.867 Machie learig, lecture 8 (Jaakkola) 1 Lecture topics: Support vector machie ad kerels Kerel optimizatio, selectio Support vector machie revisited Our task here is to first tur the support vector

More information

Research Article Robust Linear Programming with Norm Uncertainty

Research Article Robust Linear Programming with Norm Uncertainty Joural of Applied Mathematics Article ID 209239 7 pages http://dx.doi.org/0.55/204/209239 Research Article Robust Liear Programmig with Norm Ucertaity Lei Wag ad Hog Luo School of Ecoomic Mathematics Southwester

More information

Random Variables, Sampling and Estimation

Random Variables, Sampling and Estimation Chapter 1 Radom Variables, Samplig ad Estimatio 1.1 Itroductio This chapter will cover the most importat basic statistical theory you eed i order to uderstad the ecoometric material that will be comig

More information

Nonlinear regression

Nonlinear regression oliear regressio How to aalyse data? How to aalyse data? Plot! How to aalyse data? Plot! Huma brai is oe the most powerfull computatioall tools Works differetly tha a computer What if data have o liear

More information

THE KALMAN FILTER RAUL ROJAS

THE KALMAN FILTER RAUL ROJAS THE KALMAN FILTER RAUL ROJAS Abstract. This paper provides a getle itroductio to the Kalma filter, a umerical method that ca be used for sesor fusio or for calculatio of trajectories. First, we cosider

More information

Four-dimensional Vector Matrix Determinant and Inverse

Four-dimensional Vector Matrix Determinant and Inverse I.J. Egieerig ad Maufacturig 013 30-37 Published Olie Jue 01 i MECS (http://www.mecs-press.et) DOI: 10.5815/iem.01.03.05 vailable olie at http://www.mecs-press.et/iem Four-dimesioal Vector Matrix Determiat

More information

Research Article Nonautonomous Discrete Neuron Model with Multiple Periodic and Eventually Periodic Solutions

Research Article Nonautonomous Discrete Neuron Model with Multiple Periodic and Eventually Periodic Solutions Discrete Dyamics i Nature ad Society Volume 21, Article ID 147282, 6 pages http://dx.doi.org/1.11/21/147282 Research Article Noautoomous Discrete Neuro Model with Multiple Periodic ad Evetually Periodic

More information

TMA4205 Numerical Linear Algebra. The Poisson problem in R 2 : diagonalization methods

TMA4205 Numerical Linear Algebra. The Poisson problem in R 2 : diagonalization methods TMA4205 Numerical Liear Algebra The Poisso problem i R 2 : diagoalizatio methods September 3, 2007 c Eiar M Røquist Departmet of Mathematical Scieces NTNU, N-749 Trodheim, Norway All rights reserved A

More information

Mixtures of Gaussians and the EM Algorithm

Mixtures of Gaussians and the EM Algorithm Mixtures of Gaussias ad the EM Algorithm CSE 6363 Machie Learig Vassilis Athitsos Computer Sciece ad Egieerig Departmet Uiversity of Texas at Arligto 1 Gaussias A popular way to estimate probability desity

More information

A collocation method for singular integral equations with cosecant kernel via Semi-trigonometric interpolation

A collocation method for singular integral equations with cosecant kernel via Semi-trigonometric interpolation Iteratioal Joural of Mathematics Research. ISSN 0976-5840 Volume 9 Number 1 (017) pp. 45-51 Iteratioal Research Publicatio House http://www.irphouse.com A collocatio method for sigular itegral equatios

More information

Discrete Orthogonal Moment Features Using Chebyshev Polynomials

Discrete Orthogonal Moment Features Using Chebyshev Polynomials Discrete Orthogoal Momet Features Usig Chebyshev Polyomials R. Mukuda, 1 S.H.Og ad P.A. Lee 3 1 Faculty of Iformatio Sciece ad Techology, Multimedia Uiversity 75450 Malacca, Malaysia. Istitute of Mathematical

More information

Modified Decomposition Method by Adomian and. Rach for Solving Nonlinear Volterra Integro- Differential Equations

Modified Decomposition Method by Adomian and. Rach for Solving Nonlinear Volterra Integro- Differential Equations Noliear Aalysis ad Differetial Equatios, Vol. 5, 27, o. 4, 57-7 HIKARI Ltd, www.m-hikari.com https://doi.org/.2988/ade.27.62 Modified Decompositio Method by Adomia ad Rach for Solvig Noliear Volterra Itegro-

More information

1 Duality revisited. AM 221: Advanced Optimization Spring 2016

1 Duality revisited. AM 221: Advanced Optimization Spring 2016 AM 22: Advaced Optimizatio Sprig 206 Prof. Yaro Siger Sectio 7 Wedesday, Mar. 9th Duality revisited I this sectio, we will give a slightly differet perspective o duality. optimizatio program: f(x) x R

More information

AN OPEN-PLUS-CLOSED-LOOP APPROACH TO SYNCHRONIZATION OF CHAOTIC AND HYPERCHAOTIC MAPS

AN OPEN-PLUS-CLOSED-LOOP APPROACH TO SYNCHRONIZATION OF CHAOTIC AND HYPERCHAOTIC MAPS http://www.paper.edu.c Iteratioal Joural of Bifurcatio ad Chaos, Vol. 1, No. 5 () 119 15 c World Scietific Publishig Compay AN OPEN-PLUS-CLOSED-LOOP APPROACH TO SYNCHRONIZATION OF CHAOTIC AND HYPERCHAOTIC

More information

BIOINF 585: Machine Learning for Systems Biology & Clinical Informatics

BIOINF 585: Machine Learning for Systems Biology & Clinical Informatics BIOINF 585: Machie Learig for Systems Biology & Cliical Iformatics Lecture 14: Dimesio Reductio Jie Wag Departmet of Computatioal Medicie & Bioiformatics Uiversity of Michiga 1 Outlie What is feature reductio?

More information

CS434a/541a: Pattern Recognition Prof. Olga Veksler. Lecture 5

CS434a/541a: Pattern Recognition Prof. Olga Veksler. Lecture 5 CS434a/54a: Patter Recogitio Prof. Olga Veksler Lecture 5 Today Itroductio to parameter estimatio Two methods for parameter estimatio Maimum Likelihood Estimatio Bayesia Estimatio Itroducto Bayesia Decisio

More information

10/2/ , 5.9, Jacob Hays Amit Pillay James DeFelice

10/2/ , 5.9, Jacob Hays Amit Pillay James DeFelice 0//008 Liear Discrimiat Fuctios Jacob Hays Amit Pillay James DeFelice 5.8, 5.9, 5. Miimum Squared Error Previous methods oly worked o liear separable cases, by lookig at misclassified samples to correct

More information

Optimally Sparse SVMs

Optimally Sparse SVMs A. Proof of Lemma 3. We here prove a lower boud o the umber of support vectors to achieve geeralizatio bouds of the form which we cosider. Importatly, this result holds ot oly for liear classifiers, but

More information

Research Article The PVC Stripping Process Predictive Control Based on the Implicit Algorithm

Research Article The PVC Stripping Process Predictive Control Based on the Implicit Algorithm Mathematical Problems i Egieerig, Article ID 838404, 8 pages http://dx.doi.org/10.1155/2014/838404 Research Article The PVC Strippig Process Predictive Cotrol Based o the Implicit Algorithm Shuzhi Gao

More information

Introduction to Machine Learning DIS10

Introduction to Machine Learning DIS10 CS 189 Fall 017 Itroductio to Machie Learig DIS10 1 Fu with Lagrage Multipliers (a) Miimize the fuctio such that f (x,y) = x + y x + y = 3. Solutio: The Lagragia is: L(x,y,λ) = x + y + λ(x + y 3) Takig

More information

ECE 901 Lecture 12: Complexity Regularization and the Squared Loss

ECE 901 Lecture 12: Complexity Regularization and the Squared Loss ECE 90 Lecture : Complexity Regularizatio ad the Squared Loss R. Nowak 5/7/009 I the previous lectures we made use of the Cheroff/Hoeffdig bouds for our aalysis of classifier errors. Hoeffdig s iequality

More information

A statistical method to determine sample size to estimate characteristic value of soil parameters

A statistical method to determine sample size to estimate characteristic value of soil parameters A statistical method to determie sample size to estimate characteristic value of soil parameters Y. Hojo, B. Setiawa 2 ad M. Suzuki 3 Abstract Sample size is a importat factor to be cosidered i determiig

More information

Since X n /n P p, we know that X n (n. Xn (n X n ) Using the asymptotic result above to obtain an approximation for fixed n, we obtain

Since X n /n P p, we know that X n (n. Xn (n X n ) Using the asymptotic result above to obtain an approximation for fixed n, we obtain Assigmet 9 Exercise 5.5 Let X biomial, p, where p 0, 1 is ukow. Obtai cofidece itervals for p i two differet ways: a Sice X / p d N0, p1 p], the variace of the limitig distributio depeds oly o p. Use the

More information

TRACEABILITY SYSTEM OF ROCKWELL HARDNESS C SCALE IN JAPAN

TRACEABILITY SYSTEM OF ROCKWELL HARDNESS C SCALE IN JAPAN HARDMEKO 004 Hardess Measuremets Theory ad Applicatio i Laboratories ad Idustries - November, 004, Washigto, D.C., USA TRACEABILITY SYSTEM OF ROCKWELL HARDNESS C SCALE IN JAPAN Koichiro HATTORI, Satoshi

More information

11 Correlation and Regression

11 Correlation and Regression 11 Correlatio Regressio 11.1 Multivariate Data Ofte we look at data where several variables are recorded for the same idividuals or samplig uits. For example, at a coastal weather statio, we might record

More information

Modeling and Estimation of a Bivariate Pareto Distribution using the Principle of Maximum Entropy

Modeling and Estimation of a Bivariate Pareto Distribution using the Principle of Maximum Entropy Sri Laka Joural of Applied Statistics, Vol (5-3) Modelig ad Estimatio of a Bivariate Pareto Distributio usig the Priciple of Maximum Etropy Jagathath Krisha K.M. * Ecoomics Research Divisio, CSIR-Cetral

More information

Research Article Moment Inequality for ϕ-mixing Sequences and Its Applications

Research Article Moment Inequality for ϕ-mixing Sequences and Its Applications Hidawi Publishig Corporatio Joural of Iequalities ad Applicatios Volume 2009, Article ID 379743, 2 pages doi:0.55/2009/379743 Research Article Momet Iequality for ϕ-mixig Sequeces ad Its Applicatios Wag

More information

Research Article A Note on Ergodicity of Systems with the Asymptotic Average Shadowing Property

Research Article A Note on Ergodicity of Systems with the Asymptotic Average Shadowing Property Discrete Dyamics i Nature ad Society Volume 2011, Article ID 360583, 6 pages doi:10.1155/2011/360583 Research Article A Note o Ergodicity of Systems with the Asymptotic Average Shadowig Property Risog

More information

PAijpam.eu ON TENSOR PRODUCT DECOMPOSITION

PAijpam.eu ON TENSOR PRODUCT DECOMPOSITION Iteratioal Joural of Pure ad Applied Mathematics Volume 103 No 3 2015, 537-545 ISSN: 1311-8080 (prited versio); ISSN: 1314-3395 (o-lie versio) url: http://wwwijpameu doi: http://dxdoiorg/1012732/ijpamv103i314

More information

Study the bias (due to the nite dimensional approximation) and variance of the estimators

Study the bias (due to the nite dimensional approximation) and variance of the estimators 2 Series Methods 2. Geeral Approach A model has parameters (; ) where is ite-dimesioal ad is oparametric. (Sometimes, there is o :) We will focus o regressio. The fuctio is approximated by a series a ite

More information

Statistical Inference (Chapter 10) Statistical inference = learn about a population based on the information provided by a sample.

Statistical Inference (Chapter 10) Statistical inference = learn about a population based on the information provided by a sample. Statistical Iferece (Chapter 10) Statistical iferece = lear about a populatio based o the iformatio provided by a sample. Populatio: The set of all values of a radom variable X of iterest. Characterized

More information

Statistical Pattern Recognition

Statistical Pattern Recognition Statistical Patter Recogitio Classificatio: No-Parametric Modelig Hamid R. Rabiee Jafar Muhammadi Sprig 2014 http://ce.sharif.edu/courses/92-93/2/ce725-2/ Ageda Parametric Modelig No-Parametric Modelig

More information

The Method of Least Squares. To understand least squares fitting of data.

The Method of Least Squares. To understand least squares fitting of data. The Method of Least Squares KEY WORDS Curve fittig, least square GOAL To uderstad least squares fittig of data To uderstad the least squares solutio of icosistet systems of liear equatios 1 Motivatio Curve

More information

Machine Learning for Data Science (CS 4786)

Machine Learning for Data Science (CS 4786) Machie Learig for Data Sciece CS 4786) Lecture & 3: Pricipal Compoet Aalysis The text i black outlies high level ideas. The text i blue provides simple mathematical details to derive or get to the algorithm

More information

Linear Regression Demystified

Linear Regression Demystified Liear Regressio Demystified Liear regressio is a importat subject i statistics. I elemetary statistics courses, formulae related to liear regressio are ofte stated without derivatio. This ote iteds to

More information

New Particle Swarm Neural Networks Model Based Long Term Electrical Load Forecasting in Slovakia

New Particle Swarm Neural Networks Model Based Long Term Electrical Load Forecasting in Slovakia New Particle Swarm Neural Networks Model Based Log Term Electrical Load Forecastig i Slovakia S.H. OUDJANA 1, A. HELLAL 2 1, 2 Uiversity of Laghouat, Departmet of Electrical Egieerig, Laboratory of Aalysis

More information

Estimation of Population Mean Using Co-Efficient of Variation and Median of an Auxiliary Variable

Estimation of Population Mean Using Co-Efficient of Variation and Median of an Auxiliary Variable Iteratioal Joural of Probability ad Statistics 01, 1(4: 111-118 DOI: 10.593/j.ijps.010104.04 Estimatio of Populatio Mea Usig Co-Efficiet of Variatio ad Media of a Auxiliary Variable J. Subramai *, G. Kumarapadiya

More information

A RANK STATISTIC FOR NON-PARAMETRIC K-SAMPLE AND CHANGE POINT PROBLEMS

A RANK STATISTIC FOR NON-PARAMETRIC K-SAMPLE AND CHANGE POINT PROBLEMS J. Japa Statist. Soc. Vol. 41 No. 1 2011 67 73 A RANK STATISTIC FOR NON-PARAMETRIC K-SAMPLE AND CHANGE POINT PROBLEMS Yoichi Nishiyama* We cosider k-sample ad chage poit problems for idepedet data i a

More information

Topic 9: Sampling Distributions of Estimators

Topic 9: Sampling Distributions of Estimators Topic 9: Samplig Distributios of Estimators Course 003, 2016 Page 0 Samplig distributios of estimators Sice our estimators are statistics (particular fuctios of radom variables), their distributio ca be

More information

Review Article Incomplete Bivariate Fibonacci and Lucas p-polynomials

Review Article Incomplete Bivariate Fibonacci and Lucas p-polynomials Discrete Dyamics i Nature ad Society Volume 2012, Article ID 840345, 11 pages doi:10.1155/2012/840345 Review Article Icomplete Bivariate Fiboacci ad Lucas p-polyomials Dursu Tasci, 1 Mirac Ceti Firegiz,

More information

Live Line Measuring the Parameters of 220 kv Transmission Lines with Mutual Inductance in Hainan Power Grid

Live Line Measuring the Parameters of 220 kv Transmission Lines with Mutual Inductance in Hainan Power Grid Egieerig, 213, 5, 146-151 doi:1.4236/eg.213.51b27 Published Olie Jauary 213 (http://www.scirp.org/joural/eg) Live Lie Measurig the Parameters of 22 kv Trasmissio Lies with Mutual Iductace i Haia Power

More information

1 Review of Probability & Statistics

1 Review of Probability & Statistics 1 Review of Probability & Statistics a. I a group of 000 people, it has bee reported that there are: 61 smokers 670 over 5 960 people who imbibe (drik alcohol) 86 smokers who imbibe 90 imbibers over 5

More information

ECON 3150/4150, Spring term Lecture 3

ECON 3150/4150, Spring term Lecture 3 Itroductio Fidig the best fit by regressio Residuals ad R-sq Regressio ad causality Summary ad ext step ECON 3150/4150, Sprig term 2014. Lecture 3 Ragar Nymoe Uiversity of Oslo 21 Jauary 2014 1 / 30 Itroductio

More information

Linear regression. Daniel Hsu (COMS 4771) (y i x T i β)2 2πσ. 2 2σ 2. 1 n. (x T i β y i ) 2. 1 ˆβ arg min. β R n d

Linear regression. Daniel Hsu (COMS 4771) (y i x T i β)2 2πσ. 2 2σ 2. 1 n. (x T i β y i ) 2. 1 ˆβ arg min. β R n d Liear regressio Daiel Hsu (COMS 477) Maximum likelihood estimatio Oe of the simplest liear regressio models is the followig: (X, Y ),..., (X, Y ), (X, Y ) are iid radom pairs takig values i R d R, ad Y

More information

Chapter 7. Support Vector Machine

Chapter 7. Support Vector Machine Chapter 7 Support Vector Machie able of Cotet Margi ad support vectors SVM formulatio Slack variables ad hige loss SVM for multiple class SVM ith Kerels Relevace Vector Machie Support Vector Machie (SVM)

More information

POWER SERIES SOLUTION OF FIRST ORDER MATRIX DIFFERENTIAL EQUATIONS

POWER SERIES SOLUTION OF FIRST ORDER MATRIX DIFFERENTIAL EQUATIONS Joural of Applied Mathematics ad Computatioal Mechaics 4 3(3) 3-8 POWER SERIES SOLUION OF FIRS ORDER MARIX DIFFERENIAL EQUAIONS Staisław Kukla Izabela Zamorska Istitute of Mathematics Czestochowa Uiversity

More information

Lecture 4. Hw 1 and 2 will be reoped after class for every body. New deadline 4/20 Hw 3 and 4 online (Nima is lead)

Lecture 4. Hw 1 and 2 will be reoped after class for every body. New deadline 4/20 Hw 3 and 4 online (Nima is lead) Lecture 4 Homework Hw 1 ad 2 will be reoped after class for every body. New deadlie 4/20 Hw 3 ad 4 olie (Nima is lead) Pod-cast lecture o-lie Fial projects Nima will register groups ext week. Email/tell

More information

1 Inferential Methods for Correlation and Regression Analysis

1 Inferential Methods for Correlation and Regression Analysis 1 Iferetial Methods for Correlatio ad Regressio Aalysis I the chapter o Correlatio ad Regressio Aalysis tools for describig bivariate cotiuous data were itroduced. The sample Pearso Correlatio Coefficiet

More information

Run-length & Entropy Coding. Redundancy Removal. Sampling. Quantization. Perform inverse operations at the receiver EEE

Run-length & Entropy Coding. Redundancy Removal. Sampling. Quantization. Perform inverse operations at the receiver EEE Geeral e Image Coder Structure Motio Video (s 1,s 2,t) or (s 1,s 2 ) Natural Image Samplig A form of data compressio; usually lossless, but ca be lossy Redudacy Removal Lossless compressio: predictive

More information

First, note that the LS residuals are orthogonal to the regressors. X Xb X y = 0 ( normal equations ; (k 1) ) So,

First, note that the LS residuals are orthogonal to the regressors. X Xb X y = 0 ( normal equations ; (k 1) ) So, 0 2. OLS Part II The OLS residuals are orthogoal to the regressors. If the model icludes a itercept, the orthogoality of the residuals ad regressors gives rise to three results, which have limited practical

More information

OBJECTIVES. Chapter 1 INTRODUCTION TO INSTRUMENTATION FUNCTION AND ADVANTAGES INTRODUCTION. At the end of this chapter, students should be able to:

OBJECTIVES. Chapter 1 INTRODUCTION TO INSTRUMENTATION FUNCTION AND ADVANTAGES INTRODUCTION. At the end of this chapter, students should be able to: OBJECTIVES Chapter 1 INTRODUCTION TO INSTRUMENTATION At the ed of this chapter, studets should be able to: 1. Explai the static ad dyamic characteristics of a istrumet. 2. Calculate ad aalyze the measuremet


Analytical solutions for multi-wave transfer matrices in layered structures

Analytical solutions for multi-wave transfer matrices in layered structures Joural of Physics: Coferece Series PAPER OPEN ACCESS Aalytical solutios for multi-wave trasfer matrices i layered structures To cite this article: Yu N Belyayev 018 J Phys: Cof Ser 109 01008 View the article


THE SYSTEMATIC AND THE RANDOM. ERRORS - DUE TO ELEMENT TOLERANCES OF ELECTRICAL NETWORKS

THE SYSTEMATIC AND THE RANDOM. ERRORS - DUE TO ELEMENT TOLERANCES OF ELECTRICAL NETWORKS R775 Philips Res. Repts 26,414-423, 1971' THE SYSTEMATIC AND THE RANDOM. ERRORS - DUE TO ELEMENT TOLERANCES OF ELECTRICAL NETWORKS by H. W. HANNEMAN Abstract Usig the law of propagatio of errors, approximated


Sequences of Definite Integrals, Factorials and Double Factorials

Sequences of Definite Integrals, Factorials and Double Factorials 47 6 Joural of Iteger Sequeces, Vol. 8 (5), Article 5.4.6 Sequeces of Defiite Itegrals, Factorials ad Double Factorials Thierry Daa-Picard Departmet of Applied Mathematics Jerusalem College of Techology


Machine Learning Regression I Hamid R. Rabiee [Slides are based on Bishop Book] Spring

Machine Learning Regression I Hamid R. Rabiee [Slides are based on Bishop Book] Spring Machie Learig Regressio I Hamid R. Rabiee [Slides are based o Bishop Book] Sprig 015 http://ce.sharif.edu/courses/93-94//ce717-1 Liear Regressio Liear regressio: ivolves a respose variable ad a sigle predictor


Principle Of Superposition

Principle Of Superposition ecture 5: PREIMINRY CONCEP O RUCUR NYI Priciple Of uperpositio Mathematically, the priciple of superpositio is stated as ( a ) G( a ) G( ) G a a or for a liear structural system, the respose at a give


Non-negative Matrix Factorization for Filtering Chinese Document *

Non-negative Matrix Factorization for Filtering Chinese Document * No-egative Matrix Factorizatio for Filterig Chiese Documet * Jiaiag Lu,,3, Baowe Xu,, Jixiag Jiag, ad Dazhou Kag Departmet of Computer Sciece ad Egieerig, Southeast Uiversity, Naig, 0096, Chia Jiagsu Istitute


Vector Quantization: a Limiting Case of EM

Vector Quantization: a Limiting Case of EM . Itroductio & defiitios Assume that you are give a data set X = { x j }, j { 2,,, }, of d -dimesioal vectors. The vector quatizatio (VQ) problem requires that we fid a set of prototype vectors Z = { z


Chapter 12 EM algorithms The Expectation-Maximization (EM) algorithm is a maximum likelihood method for models that have hidden variables eg. Gaussian

Chapter 12 EM algorithms The Expectation-Maximization (EM) algorithm is a maximum likelihood method for models that have hidden variables eg. Gaussian Chapter 2 EM algorithms The Expectatio-Maximizatio (EM) algorithm is a maximum likelihood method for models that have hidde variables eg. Gaussia Mixture Models (GMMs), Liear Dyamic Systems (LDSs) ad Hidde


Because it tests for differences between multiple pairs of means in one test, it is called an omnibus test.

Because it tests for differences between multiple pairs of means in one test, it is called an omnibus test. Math 308 Sprig 018 Classes 19 ad 0: Aalysis of Variace (ANOVA) Page 1 of 6 Itroductio ANOVA is a statistical procedure for determiig whether three or more sample meas were draw from populatios with equal


Lecture 22: Review for Exam 2. 1 Basic Model Assumptions (without Gaussian Noise)

1. Basic Model Assumptions (without Gaussian Noise). We model one continuous response variable Y as a linear function of p numerical predictors, plus noise: Y = b0 + b1 X1 + ... + bp Xp +


CALCULATION OF FIBONACCI VECTORS

CALCULATION OF FIBONACCI VECTORS. Stuart D. Anderson, Department of Physics, Ithaca College, 953 Danby Road, Ithaca NY 14850, USA, email: saderso@ithaca.edu, and Dani Novak, Department of Mathematics, Ithaca College
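A Fibonacci vector pairs consecutive Fibonacci numbers, (F_k, F_{k+1}), which the shift matrix [[0, 1], [1, 1]] advances by one step. A minimal sketch of that iteration (standard recurrence, not code from the cited paper):

```python
def fib_vector(n):
    """Advance the Fibonacci vector (F_k, F_{k+1}) from (0, 1) by n steps,
    i.e. apply the matrix [[0, 1], [1, 1]] n times."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return (a, b)

print(fib_vector(10))  # (55, 89)
```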


OPTIMAL PIECEWISE UNIFORM VECTOR QUANTIZATION OF THE MEMORYLESS LAPLACIAN SOURCE

OPTIMAL PIECEWISE UNIFORM VECTOR QUANTIZATION OF THE MEMORYLESS LAPLACIAN SOURCE Joural of ELECTRICAL EGIEERIG, VOL. 56, O. 7-8, 2005, 200 204 OPTIMAL PIECEWISE UIFORM VECTOR QUATIZATIO OF THE MEMORYLESS LAPLACIA SOURCE Zora H. Perić Veljo Lj. Staović Alesadra Z. Jovaović Srdja M.


LINEAR REGRESSION ANALYSIS. MODULE IX Lecture Multicollinearity

LINEAR REGRESSION ANALYSIS, MODULE IX, Lecture 9: Multicollinearity. Dr. Shalabh, Department of Mathematics and Statistics, Indian Institute of Technology Kanpur. Multicollinearity diagnostics: An important question that
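A standard multicollinearity diagnostic is the variance inflation factor. With just two predictors it reduces to VIF = 1/(1 - r^2), where r is their sample correlation. A minimal sketch of that special case (illustrative, nearly collinear data):

```python
def vif_two_predictors(x1, x2):
    """VIF for either of two predictors: 1 / (1 - r^2)."""
    n = len(x1)
    m1, m2 = sum(x1) / n, sum(x2) / n
    sxy = sum((a - m1) * (b - m2) for a, b in zip(x1, x2))
    sxx = sum((a - m1) ** 2 for a in x1)
    syy = sum((b - m2) ** 2 for b in x2)
    r2 = sxy ** 2 / (sxx * syy)
    return 1.0 / (1.0 - r2)

x1 = [1.0, 2.0, 3.0, 4.0, 5.0]
x2 = [1.1, 2.0, 2.9, 4.2, 5.0]   # nearly collinear with x1 -> large VIF
print(vif_two_predictors(x1, x2))
```

Values far above 1 (rules of thumb often say above 10) flag problematic collinearity.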


Optimal Design of Accelerated Life Tests with Multiple Stresses

Optimal Design of Accelerated Life Tests with Multiple Stresses. Y. Zhu and E. A. Elsayed, Department of Industrial and Systems Engineering, Rutgers University. 2009 Quality & Productivity Research Conference, IBM T.


Algorithm of Superposition of Boolean Functions Given with Truth Vectors

Algorithm of Superposition of Boolean Functions Given with Truth Vectors IJCSI Iteratioal Joural of Computer Sciece Issues, Vol 9, Issue 4, No, July ISSN (Olie: 694-84 wwwijcsiorg 9 Algorithm of Superpositio of Boolea Fuctios Give with Truth Vectors Aatoly Plotikov, Aleader


Research Article Global Exponential Stability of Discrete-Time Multidirectional Associative Memory Neural Network with Variable Delays

Research Article Global Exponential Stability of Discrete-Time Multidirectional Associative Memory Neural Network with Variable Delays Iteratioal Scholarly Research Network ISRN Discrete Mathematics Volume 202, Article ID 8375, 0 pages doi:0.5402/202/8375 Research Article Global Expoetial Stability of Discrete-Time Multidirectioal Associative
