Combining Dialectical Optimization and Gradient Descent Methods for Improving the Accuracy of Straight Line Segment Classifiers
Rosario A. Medina Rodriguez and Ronaldo Fumio Hashimoto
University of São Paulo, Institute of Mathematics and Statistics (IME-USP), Department of Computer Science
Rua do Matão, 1010 - São Paulo, Brazil
rosarior@ime.usp.br and ronaldo@ime.usp.br

Abstract—A recently published pattern recognition technique called Straight Line Segment (SLS) classification uses two sets of straight line segments to classify a set of points from two different classes; it is based on the distances between these points and each set of straight line segments. It has been demonstrated that, using this technique, it is possible to generate classifiers which reach high accuracy rates for supervised pattern classification. However, a critical issue in this technique is finding the optimal positions of the straight line segments given a training data set. This paper proposes a method that combines the dialectical optimization method (DOM) and the gradient descent technique for solving this optimization problem. The main advantage of DOM, as with any evolutionary algorithm, is its capability of escaping from local optima by multi-point stochastic searching. The strength of the gradient descent method, on the other hand, is its ability to find a local optimum by pointing in the direction that maximizes the objective function. Our hybrid method combines the main characteristics of these two methods. We have applied our combined approach to several data sets obtained from artificial distributions and from UCI databases. These experiments show that, in most cases, the proposed algorithm achieves higher classification rates than both the single gradient descent method and the combination of gradient descent with genetic algorithms.

Keywords—straight line segments; gradient descent technique; dialectical optimization; genetic algorithms; pattern recognition.

I.
INTRODUCTION

In the Image Processing and Computer Vision areas we frequently find many Pattern Recognition problems, such as optical character recognition (OCR), handwriting recognition, human face recognition, industrial inspection and medical diagnosis [1], [2]. Recently, a new technique for Pattern Recognition (focused on supervised binary classification), called the Straight Line Segment (SLS) classifier [3], [4], [5], was published. The key issue in this technique is to find the optimal positions of the straight line segments given a training data set. An algorithm for finding these optimal positions is called a training algorithm. In this paper, we combine the traditional gradient descent method with a novel evolutionary algorithm [6], [7] called the Dialectical Optimization Method (DOM) to solve this optimization problem. On one hand, the main advantage of DOM is its capability of escaping from local optima by multi-point stochastic searching. On the other hand, the strength of the gradient descent method [8] is its ability to find a local optimum by pointing in the direction that maximizes the objective function. Thus, based on these characteristics, we propose a technique that combines the gradient descent and dialectical optimization methods for solving the optimization problem in the training algorithm of the SLS classifier. In order to verify the viability of our algorithm, we have applied it to several data sets obtained both from artificial probability distributions (with known probability density functions) and from UCI databases. Based on the results of these experiments, the proposed algorithm obtained, in most cases, higher classification rates when compared to the solutions obtained using a single gradient descent method and the solutions obtained from the combination of gradient descent with genetic algorithms. In conclusion, this hybrid technique improves the accuracy of SLS classifiers, providing better solutions for supervised Pattern Recognition problems that appear in Image Processing or Computer Vision.
Following this brief introduction, Sections II and III briefly recall, respectively, the SLS classifier and the Dialectical Optimization Method. In Section IV, we present our hybrid approach. Section V shows the results of the experiments we have conducted. Finally, we give some conclusions and directions for future work in Section VI.

II. CLASSIFIER BASED ON STRAIGHT LINE SEGMENTS

A recent publication on Pattern Recognition presents a new technique based on straight line segments [3], [4], [5]. Its main contribution is to introduce a new type of classifier based on distances between a set of points and two sets of straight line segments. So far, this technique is focused on binary classification, so there are only two possible classes in the data
set. For the sake of completeness, we briefly describe the SLS classifier in this section.

A. Basic Definitions for SLS Classifiers

Let p, q ∈ R^{d+1}. The straight line segment (SLS) with extremities p and q is defined as:

L_{p,q} = {x ∈ R^{d+1} : x = p + λ(q − p), 0 ≤ λ ≤ 1}    (1)

Given a point x ∈ R^d, let x^e = (x, 0) denote the extension of x to R^{d+1}. Given a point x ∈ R^d and an SLS L = L_{p,q} ⊂ R^{d+1}, the pseudo-distance between x and L is given by:

distP(x, L) = [dist(x^e, p) + dist(x^e, q) − dist(p, q)] / 2    (2)

where dist(a, b) denotes the Euclidean distance between two points a, b ∈ R^{d+1}. Let L denote a set of SLSs, defined as:

L = {L_{p_i,q_i} : p_i, q_i ∈ R^{d+1}, i = 1, ..., m}    (3)

where m represents the number of SLSs for each class. Given a point x ∈ R^d, the discriminative function is defined as:

T_{L_0,L_1}(x) = Σ_{L ∈ L_1} 1/(distP(x, L) + ε) − Σ_{L ∈ L_0} 1/(distP(x, L) + ε)    (4)

where ε is a small positive constant to avoid division by zero. Considering the discriminative function, the classification function is defined as:

F_{L_0,L_1}(x) = 0, if S_{L_0,L_1}(x) < 0.5; 1, otherwise    (5)

where S_{L_0,L_1}(x) is a sigmoid function denoted by:

S_{L_0,L_1}(x) = 1 / (1 + e^{−g(T_{L_0,L_1}(x))})    (6)

The following relationships hold among the discriminative, sigmoid and classification functions:

If x is at approximately the same distance from both L_0 and L_1 (that is, for all L ∈ L_0 and L' ∈ L_1: distP(x, L) ≈ distP(x, L') and vice-versa) then, by Eq. 4, T_{L_0,L_1}(x) ≈ 0 and, consequently, by Eq. 6, S_{L_0,L_1}(x) ≈ 0.5. So, the more equally separated x is from L_0 and L_1, the more difficult it is to discriminate whether x belongs to class 0 or 1.

The closer x is to L_1 (that is, there is L ∈ L_1 with distP(x, L) → 0) while, at the same time, x is far from L_0 (that is, for all L ∈ L_0: distP(x, L) → +∞), the bigger T_{L_0,L_1}(x) is (by Eq. 4, it tends to +∞); consequently, by Eq. 6, the closer S_{L_0,L_1}(x) is to 1 and, therefore, by Eq. 5, the more x can be discriminated as belonging to class 1 (that is, the classification function F_{L_0,L_1}(x) equals 1).
Analogously, the closer x is to L_0 and, at the same time, the farther it is from L_1, the closer the classification function F_{L_0,L_1}(x) is to 0.

B. Training Algorithm

Given a set of n examples E_n = {(x_i, y_i) ∈ R^d × {0, 1} : i = 1, 2, ..., n}, the main purpose of the training algorithm, in the context of SLS classifiers, is to find two sets of SLSs, L_0 and L_1, that minimize the mean square error function defined as:

E(F_{L_0,L_1}) = (1/n) Σ_{i=1}^{n} [F_{L_0,L_1}(x_i) − y_i]^2    (7)

This algorithm can be divided into two phases [4]:

1) Placing: This phase consists of pre-allocating (finding initial positions of) the SLSs (in L_0 and L_1), based on the fact that points x closer to L_0 (or L_1, respectively) and farther from L_1 (or L_0, respectively) lead the classification function F_{L_0,L_1}(x) to 0 (or 1, respectively). To achieve this goal, the set of examples E_n is split into two groups, X_i = {x ∈ R^d : (x, y) ∈ E_n and y = i} (for i = 0, 1), and the k-means clustering algorithm is applied to each group. Then, to obtain the initial extremities of the SLS for each cluster, the k-means algorithm (with k = 2) is applied again, this time to each cluster obtained from the previous k-means application.

2) Tuning: The purpose of this phase is to minimize the mean square error function. One possible way to accomplish this task is to use the gradient descent technique [8] to find positions of the SLSs in L_0 and L_1 at which the derivative of the mean square error function is equal to zero. Although the gradient descent method does not guarantee a global minimum, and the final solution (positions of the SLSs) depends on the initial placing phase, this method was successfully applied in [4].

III. DIALECTICAL OPTIMIZATION METHOD

The Dialectical Optimization Method (DOM), introduced in [6], [7], is an evolutionary method based on materialist dialectics for solving search and optimization problems. DOM is based on the dynamics of contradictions between its integrating dialectical poles [7], where each pole is considered a possible solution to the problem.
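As a concrete illustration of the definitions in Section II, the pseudo-distance (Eq. 2), the discriminative function (Eq. 4), the sigmoid and classification functions (Eqs. 5–6) and the MSE of Eq. (7) can be sketched in Python. This is a minimal sketch; all function names are ours, not the paper's, and an SLS is stored simply as a pair (p, q) of extremities in R^{d+1}:

```python
import numpy as np

def sls_pseudo_distance(x, p, q):
    """Pseudo-distance distP(x, L_{p,q}) of Eq. (2)."""
    xe = np.append(x, 0.0)  # extension x^e = (x, 0) of x to R^{d+1}
    return (np.linalg.norm(xe - p) + np.linalg.norm(xe - q)
            - np.linalg.norm(p - q)) / 2.0

def discriminative(x, L0, L1, eps=1e-9):
    """T_{L0,L1}(x) of Eq. (4): reciprocal pseudo-distances to L1 minus L0."""
    t1 = sum(1.0 / (sls_pseudo_distance(x, p, q) + eps) for p, q in L1)
    t0 = sum(1.0 / (sls_pseudo_distance(x, p, q) + eps) for p, q in L0)
    return t1 - t0

def classify(x, L0, L1, g=1.0):
    """Sigmoid S of Eq. (6) thresholded at 0.5 as in Eq. (5)."""
    t = np.clip(g * discriminative(x, L0, L1), -500.0, 500.0)  # avoid overflow
    s = 1.0 / (1.0 + np.exp(-t))
    return 1 if s >= 0.5 else 0

def mse(X, y, L0, L1):
    """Mean square error E(F_{L0,L1}) of Eq. (7) over a labeled sample."""
    preds = np.array([classify(x, L0, L1) for x in X])
    return float(np.mean((preds - np.asarray(y)) ** 2))
```

A point is assigned to class 1 as soon as the sigmoid of its discriminative value crosses 0.5, mirroring Eq. (5); a point lying on a segment of L_1 and far from every segment of L_0 drives T to a large positive value and is classified as 1.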
These poles are involved in a pole struggle and are affected by revolutionary crises, in which some poles may disappear or be absorbed by other poles, and new poles may emerge from new periods of revolutionary crisis. These two processes make the system tend to stability [6]. Since this method is based on philosophical concepts, the definitions given in [6] are briefly described in this section.

A pole is the fundamental integrating unit of a dialectical system; it corresponds to a candidate solution to the problem and is represented by a vector of conditions with n inputs. Given a set of poles Ω = {w_1, w_2, ..., w_m}, each pole w_i is associated with a vector of weights defined as:

w_i = (w_{i,1}, w_{i,2}, ..., w_{i,n})^T    (8)
where m is the number of poles and n is the system dimensionality. The social force of each pole w_i is associated with the objective function of the optimization problem and, from now on, will be represented by f(w_i). This is the fundamental idea of the dialectical method. The contradiction between two poles w_p and w_q is defined as:

δ_{p,q} = dist(w_p, w_q)    (9)

where dist(a, b) denotes the Euclidean distance between two points a and b.

A historical phase consists of two stages: (i) evolution and (ii) revolutionary crisis. These stages correspond to the dynamics between poles through contradiction evaluations. At the pole struggle stage, pole w_i holds the hegemony at instant t when

f_P(t) = f(w_i(t)) = max_{1 ≤ j ≤ m} f(w_j(t))    (10)

and f_P(t) is known as the present hegemony at time t. In the same way, the historic hegemony up to time t, denoted by f_H(t), is defined as the maximum social force value obtained in each historical phase up to time t. Given a pole w_i such that a ≤ w_i ≤ b, where a, b ∈ R, the opposite, or absolute antithesis, of w_i is defined as:

w̄_i = b − w_i + a    (11)

According to the dialectical method, the synthesis of two poles is the resolution of the contradiction between them (thesis and antithesis). As a result, one new pole is obtained, with characteristics inherited from both of them.

The set of parameters that must be provided by the user is: the number of poles (m), the number of historical phases (η_P) and the duration of each historical phase (η_H). As can be seen in the pseudocode presented in Algorithm 1, the parameters η_P and η_H work as thresholds for the loops described at Lines 2 and 9. In addition, the two stages of DOM (evolution and revolutionary crisis) are represented at Lines 3–8 and Lines 10–15, respectively.

IV. HYBRID OF DIALECTICAL OPTIMIZATION AND GRADIENT DESCENT METHODS

On one hand, the main advantage of DOM is its capability of escaping from local optima by multi-point stochastic searching.
On the other hand, the strength of the gradient descent method [8] is its ability to find a local optimum by pointing in the direction that maximizes the objective function. Thus, based on these properties, the combination of gradient descent and dialectical optimization methods may improve solution quality in optimization problems. The idea behind the hybrid method is shown in Fig. 1: the main goal of DOM is to assist the gradient descent method by providing it with a new set of initial positions (the output of the dialectical optimization method) for the two sets of SLSs, L_0 and L_1.

Algorithm 1 Dialectical Optimization Method
Require: m, η_P, η_H: set of initial parameters.
Ensure: dGlobalWinValue: optimized value.
 1: Generation of m initial poles (w[0 : m])
 2: while (iNumPhases < η_P) do
 3:   if (iNumPhases > 0) then
 4:     deleteSimilarPoles()
 5:     generateSynthesis()
 6:     generateAntithesis()
 7:     addExternalEffects()
 8:   end if
 9:   while (iPhaseDuration < η_H) do
10:     while (iNumPoles < m) do
11:       iWinPole ← 0
12:       if (f(w_{iNumPoles}) < f(w_{iWinPole})) then
13:         iWinPole ← iNumPoles
14:         dGlobalWinValue ← f(w_{iWinPole})
15:       end if
16:     end while
17:     addCrisisEffect()
18:   end while
19: end while

The following steps summarize the idea of this hybrid method:

1) Generate initial poles as described in DOM.
2) For each phase of DOM, apply gradient descent to each pole in the population in order to obtain one local optimum per pole.
3) Proceed with the next steps of DOM (i.e. pole struggle, synthesis, etc.) as described in Section III.

Our approach uses an evolutionary method to escape from the local optimum provided by the gradient descent technique and to find another value (possibly better than the previous one) which may be the new global optimum. Then, at the moment the gradient descent stops at a local minimum, DOM provides a new starting point, helping the gradient to escape from that valley.
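The three steps above can be sketched as a single training loop. This is a sketch under assumptions, not the paper's implementation: `social_force` is the objective to maximize (e.g. the negated MSE), `local_opt` is any gradient-descent-style local tuner for a pole, and the crisis effect is modeled as a simple uniform perturbation; none of these names come from the paper.

```python
import random

def hybrid_dom_gd(init_poles, social_force, local_opt,
                  n_phases=20, phase_len=5, crisis=0.2):
    poles = [local_opt(p) for p in init_poles]      # tune every initial pole
    best = max(poles, key=social_force)             # historic hegemony
    for _ in range(n_phases):                       # historical phases
        for _ in range(phase_len):                  # evolution: pole struggle
            poles = [local_opt(p) for p in poles]   # step 2: local tuning
            hegemon = max(poles, key=social_force)  # present hegemony
            if social_force(hegemon) > social_force(best):
                best = hegemon
        # revolutionary crisis: perturb every pole except the hegemon,
        # giving gradient descent fresh starting points outside its valley
        poles = [p if p == best else
                 [w + random.uniform(-crisis, crisis) for w in p]
                 for p in poles]
    return best
```

For instance, with one-dimensional poles, `social_force = lambda p: -(p[0] - 3) ** 2` and a tuner that moves halfway toward the optimum, the returned pole converges toward w = 3; in the SLS setting a pole would instead be the flattened segment extremities and `local_opt` the tuning phase of Section II-B.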
It is important to emphasize that the initial population (set of poles) includes the solution obtained from a single application of gradient descent, so that our hybrid approach generates a solution that is equal to or better than the one obtained by using gradient descent alone. In this sense, our approach is conservative.

A. Adaptations to DOM Concepts

Some modifications have been made to the DOM concepts presented in Section III to adapt them to the problem of finding the best positions of the SLSs in L_0 and L_1. Since a pole corresponds to a possible solution, we define a pole as a vector consisting of the extremities of the SLSs belonging to both L_0 and L_1. Thus, a pole can be represented by a vector obtained from the concatenation of L_0 with L_1 (i.e. [L_0 L_1]). The initial population is built in the following way: half of the initial population is randomly generated with values between the minimum and maximum values found in the
Fig. 1. Data flow representation of the proposed hybrid method (training phase: generation of poles (thesis/antithesis), normalization from poles into SLSs, application of gradient descent, evaluation of contradictions, search for the hegemonies, removal and synthesis of poles, update of poles and crisis effects; testing phase: estimation of distances from points to SLSs, estimation of the objective function value and calculation of the mean square error). Each block represents a process; the system input is a training data set and the system output is the two sets of SLSs and the classified data set.

training data set. The other half of the initial population is generated with the opposite poles (antitheses). The antithesis of a pole [L_0 L_1] (described in Section III) is redefined as [L_1 L_0]. Since our problem deals with sets of SLSs which divide two classes of points, the opposite vector is equivalent to inverting the classes of the SLS positions, meaning that the SLSs in L_0 and L_1 now represent classes 1 and 0, respectively. The social force is represented by the MSE, defined as

R_{E_n}(L_0, L_1) = (1/n) Σ_{i=1}^{n} (y_i − F_{L_0,L_1}(x_i))^2    (12)

where F_{L_0,L_1}(x_i) is the classification function of the SLS classifier (defined in Eq. 5) and y_i is the class value. The contradiction between two poles was defined in Section III as their Euclidean distance. In our work, the contradiction is redefined as the absolute difference between the MSEs obtained for the two poles involved. Therefore, the contradiction represents the difference in classification error between two poles. The pseudocode for this hybrid method is the same as presented in Algorithm 1, with the only difference that, right before Line 3, the gradient descent method is applied to each pole, before executing the steps of the DOM method. Fig. 1 makes this hybrid method easier to understand by following the data flow, which is divided into two processes.
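The adapted pole representation just described can be sketched as follows. The function names are ours, not the paper's: a pole concatenates the extremities of all SLSs in L_0 and L_1 into one flat vector [L_0 L_1], the antithesis swaps the two halves to give [L_1 L_0], and the contradiction compares the poles' MSEs.

```python
import numpy as np

def encode_pole(L0, L1):
    """Flatten two lists of (p, q) extremity pairs into one vector [L0 L1]."""
    return np.concatenate([np.concatenate([p, q]) for p, q in L0 + L1])

def antithesis(pole):
    """Opposite pole redefined as the class swap [L1 L0].
    Assumes both classes use the same number of SLSs, as in the paper,
    so the two halves of the vector have equal length."""
    half = len(pole) // 2
    return np.concatenate([pole[half:], pole[:half]])

def contradiction(mse_p, mse_q):
    """Contradiction redefined as the absolute difference of two poles' MSEs."""
    return abs(mse_p - mse_q)
```

With this encoding, generating the antithesis half of the initial population costs only a vector slice-and-swap, and the contradiction no longer depends on the dimensionality of the pole vector, only on the two classification errors.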
First, given a training data set, the population (theses and antitheses) is generated. Then, the gradient descent method is applied to each member of the population. Finally, the remaining DOM functions, grouped as Evolution (first loop) and Revolutionary Crisis (second loop), are applied. At the end of the training phase, two sets of SLSs are obtained: the ones with minimum MSE. Second, given a test data set, the distances between the new examples and the SLSs obtained from the training set are computed to obtain the MSE. As a final result, an image of the example classification is obtained, showing the SLS positions.

V. EXPERIMENTAL RESULTS AND DISCUSSION

In order to evaluate the proposed hybrid method for training SLS classifiers, we conducted two experiments. In the first one, using artificial data, we applied our hybrid method to data sets drawn from four artificial probability density functions, named here F, S, Simple and X, and described in [4].

A. Artificial Data Sets

In these data sets, each region has a uniform probability density function. This property makes it possible to apply the Bayes classifier [1] to obtain the ideal classification rate and compare it with our result, as proposed in [4]. Since the probability density function is known, it is possible to use numerical integration to calculate the classification rate of the SLS classifier obtained by applying our hybrid method in the training phase. For these experiments, the parameters used for the gradient descent method are the same as those used in [4]: number of iterations = 1000,
initial value = 0.1, displacement decrement = 0.5, displacement increment = 0.1, minimum value = 10^{-5}; and the parameters for DOM are: number of poles = 30, number of phases = 20, number of iterations = 5, minimum value = 10^{-3}, learning rate = 0.99, crisis effect value = 0.2.

We generated four samples with 100, 200, 400 and 800 examples, respectively. For each sample, we applied our hybrid method with 1, 2, 3 and 4 SLSs per class. To show the results, we use a graphic representation (a grayscale image) of the classification regions computed by the classification function F_{L_0,L_1}(x_i) of the SLS classifier. In each image, the black and white regions represent the regions for classes 0 and 1, respectively. The SLSs in L_0 and L_1 are indicated in the opposite color (i.e. black regions with white SLSs and white regions with black SLSs). The SLSs are in R^{d+1}; for the sake of visualization, when the SLSs are in R^3, they are projected onto R^2 in the image results. In order to evaluate the hybrid method, we computed the classification rate based on the Bayes classifier for each sample. The bold numbers in Table I indicate the best classification rate for each sample (i.e. the best classifier with n SLSs per class). It is important to emphasize that all the results are close to the ideal classification rate and, in most cases, better than the results obtained in [4].

B. Discussion

The probability density function F is represented in the first row of Fig. 2. Looking at it, we can observe that the regions and SLS positions are progressively better fitted to the probability density function: it is possible to see the improvement from GD in the second column, through GD with genetic algorithms (GD-AG) in the third column, to our method in the fourth column. In the last three columns we present the best results obtained for each method: GD, GD-AG and GD-DOM. It can be seen that the GD and GD-DOM results are very similar, but GD uses 3 SLSs while GD-DOM uses only 2, which is better.
We can also observe, among the classification rates presented in Table I, that for larger numbers of examples, such as 400 and 800, the best classification rate is obtained by an SLS classifier with 3 SLSs per class. The classification rate is about 87.4% for 800 examples, which is very close to the ideal classification rate (with the Bayes classifier) of 87.65%.

In the same way, the probability density function S is represented in the second row of Fig. 2, and once again the best representation of the data distribution is in column (d), which is the result of our hybrid method. From Table I, we can say that the best classification rates were reached using 3 SLSs for samples with 200 and 400 examples (columns (d) and (e) of Fig. 2), while for samples with 800 examples, the best classifier uses 2 SLSs (column (f)). In all results, the classification rates are close to the ideal classification rate.

The Simple distribution is the one with the best results among the four distributions proposed in [4], as can be seen in Fig. 2 and in the classification rates of Table I. We can highlight that all classification rate percentages are very close to the ideal classification rate (93.90%), and one of them is only 0.40% lower than the ideal one. This corresponds to the case of the SLS classifier with 3 SLSs obtained from a sample with 800 examples. It can also be seen that for samples with 100 and 400 examples, the percentage of correct classification is about 93%.

In contrast, the X distribution, which is the well-known XOR distribution, presents the weakest classification rates among all the distributions for our proposed method, as can be seen in Table I. For this distribution, we did not obtain a large improvement in classification rates (sometimes they are equal to those of the gradient descent method, and sometimes they are worse). As can be seen in Fig. 2 (last three columns), the best results for this distribution were reached with just 1 SLS for GD and GD-DOM, while GD-AG needed 3 SLSs to reach its best classification rate.

C. Gradient Descent vs.
Gradient Descent with DOM

In this section, we present a line diagram in Fig. 3 to compare the classification rates obtained by both methods, GD and GD-DOM, when an SLS classifier with 3 SLSs is applied to the four distributions F, S, X and Simple presented previously, drawn in different colors. The gradient descent method is represented with dotted lines and our hybrid method with continuous lines. The figure shows the best classification rates obtained in the experiments conducted in this work with 100, 200, 400 and 800 examples. This graphic makes it much easier to compare gradient descent with the proposed method. Thus, for the F distribution (plus signs), the improvement from using our approach is between 1% and 4%. In the same way, for the S and Simple distributions (circles and asterisks, respectively), the improvement in classification rates varies between 0.50% and 8%. Finally, for the X distribution (letter x), the most complicated one for our method, the percentage of improvement is between 1% and 4%. Furthermore, the graph suggests that, in this work, the distribution with the best classification rate is the Simple distribution, with percentages for both GD and GD-DOM over 90%; the F distribution has a percentage of correct classification between 80% and 90%, and for the other two distributions, S and X, the percentages of correct classification are between 60% and 70%. It is worth noting that, for all the distributions presented in this work, our method increases the percentage of correct classification when the number of examples is greater, as can be seen in Fig. 3, where for 400 and 800 examples per sample
Fig. 2. Graphic representation of the results obtained for the distributions mentioned before. The first four columns were obtained using 800 examples and represent: (a) data distribution (F, S, Simple and X); (b), (c) and (d) results with the three methods used in this work, GD (1 SLS), GD-AG (1 SLS) and GD-DOM (1 SLS). The last three columns represent the best classification rate obtained for each distribution using (e) GD, (f) GD-AG and (g) GD-DOM, respectively.

TABLE I. Classification rates obtained using: (i) GD (gradient descent); (ii) GD-DOM (GD with dialectical optimization) and (iii) GD-AG (GD with genetic algorithms), for each distribution, with the ideal classification rate (Bayes classifier) shown in parentheses: F (87.65%), S (68.78%), Simple (93.90%) and X (66.70%). Rows correspond to samples with 100, 200, 400 and 800 examples; column groups correspond to 1, 2, 3 and 4 SLSs per class. [The numeric body of the table could not be reliably recovered from the source.]
our method obtained a percentage equal to or greater than the one obtained by the gradient descent method.

Fig. 3. Comparison between the two methods, GD and GD-DOM, using the classification rates (%) obtained by applying the SLS classifier with 3 SLSs to the four distributions described before (dist_F, dist_S, dist_Simple and dist_X), plotted against the number of examples.

D. Gradient Descent with DOM vs. Gradient Descent with Genetic Algorithms

Some experiments were performed to compare our hybrid method with another optimization algorithm; in this case we chose Genetic Algorithms [9], [10], [11]. These techniques are commonly applied to find approximate solutions to optimization problems, and they belong to a particular class of evolutionary algorithms inspired by evolutionary biology, with mechanisms such as natural selection, mutation and inheritance. The algorithm operates by iteratively updating a set of individuals, called the population. The population consists of many individuals called chromosomes. All members of the population are evaluated by the fitness function on each iteration. A new population is then generated by probabilistically selecting the fittest chromosomes from the current population. Some of the selected chromosomes are added to the new generation, while others are selected as parent chromosomes. Parent chromosomes are used to create new offspring by applying genetic operators such as crossover and mutation [12]. Here, the population is generated in the same way as in DOM: each chromosome represents a potential solution to the optimization problem (i.e. [L_0 L_1]). Consequently, a population is a set of possible solutions, initially randomly generated. As in the DOM method, there are some parameters in genetic algorithms that must be properly tuned, in particular the number of chromosomes in the initial population (set to 200) and the number of iterations (set to 100). Table I shows the classification rates obtained by applying the SLS classifier trained, in this case, with a combination of gradient descent and genetic algorithms.
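The gradient descent / genetic algorithm combination just described can be sketched as follows. This is a sketch under assumptions, not the paper's implementation: `fitness` is the objective to maximize, `local_opt` is the gradient-descent tuner, chromosomes are flat lists of numbers, and one-point crossover with Gaussian mutation stands in for whatever operators [12] used; none of these names come from the paper.

```python
import random

def hybrid_ga_gd(init_pop, fitness, local_opt,
                 n_iters=100, mut_rate=0.1, mut_scale=0.2):
    """(i) generate the population; (ii) tune each chromosome by gradient
    descent; (iii) apply selection, crossover and mutation iteratively."""
    pop = [local_opt(c) for c in init_pop]            # steps (i) and (ii)
    for _ in range(n_iters):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:max(2, len(pop) // 2)]         # fittest half survive
        children = []
        while len(parents) + len(children) < len(pop):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, len(a))         # one-point crossover
            child = [g + random.gauss(0.0, mut_scale)
                     if random.random() < mut_rate else g
                     for g in a[:cut] + b[cut:]]      # per-gene mutation
            children.append(local_opt(child))         # re-tune each offspring
        pop = parents + children
    return max(pop, key=fitness)
```

Structurally this mirrors the DOM hybrid: the evolutionary layer proposes new starting points and the gradient descent layer pulls each one into the nearest local optimum, which is why the two hybrids can be compared under the same population size and iteration budget.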
For the F distribution, the rates differ by about 5% to 7% from the classification rates obtained with GD-DOM. For the S distribution, this new combination achieved the highest rate among all the methods, but only for 100 examples and 4 SLSs. The same happened with the Simple distribution, but this time with 400 examples and 2 SLSs. In addition, for the Simple and X distributions, the rates were closer to the ideal classification rate in some cases. These methods were combined in the same way as gradient descent with DOM: (i) generate the population; (ii) apply gradient descent to each chromosome in the population; and (iii) apply the remaining GA functions. The results suggest that our proposed method obtained better classification rates than the combination of gradient descent with genetic algorithms, since DOM performs more iterations over the possible solutions and the DOM operators were redefined to adapt them to our problem.

E. Public Data Set

The proposed hybrid method was also applied to one public data set from the UCI Machine Learning Repository [13]: the Breast Cancer Wisconsin (Diagnostic) data set, with 2 classes (B for benign and M for malignant) and 10 attributes (features). The experiment was done using a sample with 400 examples for training and a sample with 300 examples for testing. The applied SLS classifiers have 1, 2, 3 and 4 SLSs per class. Table II presents and compares the results obtained by our method with those obtained in [4] for the same data set, but with a sample of 682 examples, when using the gradient descent method. As we can see in Table II, our hybrid method performed better using SLS classifiers with 2 and 3 SLSs. It was not possible to produce figures representing the classification for this data set, because it is 10-dimensional.

TABLE II. Classification rate for the Breast Cancer data set.

Method      | 1 SLS  | 2 SLSs | 3 SLSs | 4 SLSs
GradDesc    | 96.78% | 96.92% | 96.34% | 96.78%
GradDescDOM | 96.66% | 97.32% | 96.99% | 82.94%

VI.
CONCLUSIONS

This paper presents a new method for training SLS classifiers: a hybrid of the gradient descent and dialectical optimization methods. The goal is to find the best positions for the SLSs in order to minimize the MSE. For that purpose, [4] applied the gradient descent method to optimize the search for these optimal positions. In this work, our main contribution is to improve the training phase (optimization
of SLS positions) by combining gradient descent with an evolutionary optimization method based on philosophical concepts such as dialectics. Comparing the methods previously mentioned, all of them achieve high classification rates, but in the case of the hybrid method (GD-DOM), the distributions are better fitted by the obtained SLS positions. In addition, we can say that the best SLS classifier for the four proposed artificial distributions and the public data set should have 3 SLSs per class, since this improved the classification rate by an average of 2%. The proposed hybrid method has shown very good classification for the F and S distributions, although the classification rate was barely improved for the X distribution, which has the lowest improvement percentage, about 0.5%. While this method improves the classification rate, the computation time of the training algorithm increases because of the multiple iterations of the evolutionary DOM and the gradient descent method. The use of threads in the implementation has been studied to reduce the training time. Although in this paper we use some predefined distributions and one public data set, the presented results indicate that the SLS classifier with the proposed hybrid method can potentially be used in Computer Vision problems. We plan to carry out this analysis in future work and also to extend the SLS binary classifier to a multiclass classifier.

ACKNOWLEDGMENT

Financial support for this research has been provided by FAPESP and CNPq. The authors are grateful to the anonymous reviewers for their critical comments and valuable suggestions.

REFERENCES

[1] R. Duda, P. Hart, and D. Stork, Pattern Classification, 2nd ed. New York: Wiley, 2001.
[2] A. Jain, R. Duin, and J. Mao, "Statistical pattern recognition: a review," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 22, no. 1, pp. 4-37, Jan. 2000.
[3] J. Ribeiro and R. Hashimoto, "A new machine learning technique based on straight line segments," in ICMLA 2006: 5th International Conference on Machine Learning and Applications, Proceedings. IEEE Computer Society, 2006.
[4] J. Ribeiro and R. Hashimoto, "A new training algorithm for pattern recognition technique based on straight line segments," in Computer Graphics and Image Processing, SIBGRAPI '08. XXI Brazilian Symposium on, Oct. 2008.
[5] J. Ribeiro and R. Hashimoto, Pattern Recognition, Recent Advances. I-Tech, 2010, ch. "Pattern Recognition Based on Straight Line Segments."
[6] W. D. Santos and F. D. Assis, "Optimization based on dialectics," in IJCNN, 2009.
[7] W. D. Santos, F. D. Assis, R. D. Souza, P. Mendes, H. Monteiro, and H. Alves, "Dialectical non-supervised image classification," in Proceedings of the Eleventh Conference on Congress on Evolutionary Computation, ser. CEC '09. Piscataway, NJ, USA: IEEE Press, 2009.
[8] D. Michie, D. Spiegelhalter, and C. Taylor, Introduction to Optimization Theory, 1973.
[9] D. Goldberg, Genetic Algorithms in Search, Optimization and Machine Learning, 1st ed. Boston, MA, USA: Addison-Wesley Longman Publishing Co., Inc., 1989.
[10] R. Haupt and S. Haupt, Practical Genetic Algorithms. Wiley-Interscience.
[11] J. Holland, Adaptation in Natural and Artificial Systems. Cambridge, MA, USA: MIT Press, 1992.
[12] R. Chandra and C. Omlin, "The comparison and combination of genetic and gradient descent learning in recurrent neural networks: An application to speech phoneme classification," in Artificial Intelligence and Pattern Recognition, 2007.
[13] A. Asuncion and D. J. Newman, UCI Machine Learning Repository, 2007.