Neural network based Boolean factor analysis of parliament voting
Frolov A.A. 1, Polyakov P.Y. 2, Husek D. 3, and Rezankova H. 4

1 Institute of Higher Nervous Activity and Neurophysiology of the Russian Academy of Sciences, Butlerova 5a, Moscow, Russia, aafrolov@mail.ru
2 Institute of Optical Neural Technologies of the Russian Academy of Sciences, Vavilova 44, Moscow, Russia, labsteclo@mail.ru
3 Institute of Computer Science, Academy of Sciences of the Czech Republic, Pod Vodárenskou věží 2, Prague, Czech Republic, dusan@cs.cas.cz
4 University of Economics, nám. Churchilla, Prague 3, Czech Republic, rezanka@vse.cz

Summary. The sparse encoded Hopfield-like neural network is modified to perform Boolean factor analysis. A new, more efficient method of sequential factor extraction, based on the characteristic behavior of the Lyapunov function, is introduced. The efficiency of this approach is shown not only on simulated data but also on real data from the Russian parliament.

Key words: Boolean factor analysis, neural networks, social networks

1 Introduction

Our theoretical analysis and computer simulations [FSH04] revealed that Hopfield-like neural networks are capable of performing Boolean factor analysis (BFA) of signals of high dimension and complexity. Factor analysis is a procedure which maps original signals into the space of factors. Principal component analysis (PCA) is a classical example of such a mapping in the linear case. Linear factor analysis implies that each original N-dimensional signal can be presented as

X = FS + ε,    (1)

where F is an N × L matrix of factor loadings, S is an L-dimensional vector of factor scores, and ε is an error. Each component of S gives the contribution of the corresponding factor to the original signal. The columns of the loading matrix F are vectors presenting the corresponding factors in the signal space; these vectors are termed factors in the following. The mapping of the original space to the factor space means that signals are represented by vectors S instead of the original vectors X. Since the dimensionality of the vectors S is lower than the dimensionality of the signals X, factor analysis provides high compression of the original signals.
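As a minimal numerical illustration of the linear model (1) (a numpy sketch with arbitrary small sizes; the matrices here are random toys, not data from the paper), note that recovering the scores S from X and F is an ordinary least-squares problem:

```python
import numpy as np

rng = np.random.default_rng(0)
N, L = 6, 2                               # illustrative sizes only
F = rng.standard_normal((N, L))           # factor loadings (columns = factors)
S = rng.standard_normal(L)                # factor scores
eps = 0.01 * rng.standard_normal(N)       # small error term
X = F @ S + eps                           # eq. (1): X = FS + epsilon

# Mapping back to factor space: least-squares estimate of S given F and X.
S_hat = np.linalg.lstsq(F, X, rcond=None)[0]
```

The L-dimensional `S_hat` is the compressed representation of the N-dimensional signal `X`.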
The BFA implies that a complex vector signal has the form of a Boolean sum of weighted binary factors:

X = ⋁_{l=1}^{L} S_l F^(l).    (2)

In this case, original signals, factor scores and factor loadings are binary, and mapping of the original signal to the factor space means identification of the factors that were mixed in the signal. The mean number of factors mixed in a signal we term the signal complexity C. For the case of large dimensionality and complexity of signals, it was a challenge [FSH04] to utilize for the BFA the Hopfield-like neural network with parallel dynamics. Binary patterns X of the signal space are treated as activities of N binary neurons (1 - active, 0 - nonactive) with gradually ranging synaptic connections between them. During the learning stage, patterns X^(m) are stored in the matrix of synaptic connections J according to the correlational Hebbian rule:

J_ij = Σ_{m=1}^{M} (X_i^(m) − q^(m))(X_j^(m) − q^(m)),  i ≠ j,  J_ii = 0,    (3)

where M is the number of patterns in the learning set and the bias q^(m) = Σ_{i=1}^{N} X_i^(m) / N is the total activity of the m-th pattern. This form of bias corresponds to the biologically plausible global inhibition being proportional to the overall neuronal activity. In addition to the N principal neurons of the Hopfield network described above, we introduced one special inhibitory neuron, activated during the presentation of every pattern of the learning set and connected with all principal neurons by bidirectional connections. Patterns of the learning set are stored in the vector J of these connections according to the Hebbian rule:

J_i = Σ_{m=1}^{M} (X_i^(m) − q^(m)) = M(q_i − q),    (4)

where q_i = Σ_{m=1}^{M} X_i^(m) / M is the mean activity of the i-th neuron in the learning set and q is the mean activity of all neurons in the learning set. It is also supposed that the excitability of the introduced inhibitory neuron decreases inversely proportionally to the size of the learning set, being 1/M after storing all its patterns.
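The learning rules (3) and (4) can be sketched as follows (an illustrative numpy implementation under our own naming, not the authors' code; `X` is an M × N matrix of binary learning patterns):

```python
import numpy as np

def hebbian_connections(X):
    """Store binary patterns X (M x N) in a Hopfield-like connection
    matrix via the bias-corrected Hebbian rule (3): each pattern's own
    mean activity q^(m) is subtracted before the outer product."""
    M, N = X.shape
    q = X.mean(axis=1, keepdims=True)     # q^(m): total activity of pattern m
    Xc = X - q                            # bias-corrected patterns
    J = Xc.T @ Xc                         # sum of outer products, eq. (3)
    np.fill_diagonal(J, 0.0)              # J_ii = 0
    # Inhibitory-neuron weights, eq. (4): J_i = sum_m (X_i^(m) - q^(m))
    J_inh = Xc.sum(axis=0)
    return J, J_inh

rng = np.random.default_rng(0)
X = (rng.random((20, 50)) < 0.1).astype(float)   # sparse random patterns
J, J_inh = hebbian_connections(X)
```

By construction `J` is symmetric with a zero diagonal, and `J_inh` equals M(q_i − q), as eq. (4) states.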
Due to the Hebbian learning rule (3), neurons which represent one factor, and therefore tend to fire together, become more tightly connected than neurons belonging to different factors, constituting an attractor of the network dynamics. This property of factors is the basis of the proposed two-run procedure of factor search. Its initialization starts by presentation of a random initial pattern X^(in) with k^(in) = r^(in) N active neurons. The activity k^(in) is supposed to be much smaller than the activity of all factors. On presentation of X^(in), the network activity X evolves to some attractor. The evolution is determined by the parallel dynamics equation in discrete time. At each time step:

X_i(t + 1) = Θ(h_i(t) − T(t)),  i = 1, …, N,  X(0) = X^(in),    (5)
where h_i are components of the vector of synaptic excitations

h_i(t) = Σ_{j=1}^{N} J_ij X_j(t) − (1/M) J_i Σ_{j=1}^{N} J_j X_j(t),    (6)

Θ is the step function, and T(t) is the activation threshold. The first term in (6) gives the synaptic excitations provided by the principal neurons of the Hopfield network, and the second one those provided by the additional inhibitory neuron. The use of the inhibitory neuron is equivalent to the subtraction of (1/M) J_i J_j = M(q_i − q)(q_j − q) from J_ij. Thus (6) can be rewritten as h_i(t) = Σ_{j=1}^{N} J'_ij X_j(t), where J' = J − M q' q'^T, q' is the vector with components q_i − q, and q'^T is the transposed q'. As shown in [FSH04], the replacement of the common connection matrix J by J', first, completely suppressed the two global attractors which dominate the network dynamics for large signal complexity C, and second, made the size of the attractor basins around factors independent of C.

At each time step of the recall process the threshold T(t) was chosen in such a way that the level of network activity was kept constant and equal to k^(in). Thus, at each time step k^(in) winners (neurons with the greatest synaptic excitation) were chosen, and only they were active at the next time step. To avoid uncertainty in the choice of winners when several neurons had synaptic excitations at the level of the activation threshold, a small random noise was added to the activation threshold of each individual neuron. The amplitude of the noise was set to be less than the smallest increment of the synaptic excitation given by formula (6). This ensured that the neurons with the highest excitations remained winners in spite of the random noise added to the neurons' thresholds. The noise to individual neurons was fixed during the whole recall process to provide its convergence. As shown in [PFH06], this choice of activation thresholds allows the network activity to stabilize in a point attractor or a cyclic attractor of length two.
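One step of the winner-take-all dynamics (5)-(6) can be sketched like this (a numpy illustration in which a random symmetric matrix stands in for the corrected connections J'; the function and variable names are ours):

```python
import numpy as np

def recall_step(J_corr, x, k, noise):
    """One parallel-dynamics step, eqs. (5)-(6): compute the synaptic
    excitations h = J' x and keep exactly the k winners active.
    `noise` is a tiny fixed per-neuron jitter that breaks ties at the
    activation threshold; it stays constant over the whole recall so
    that the dynamics can converge."""
    h = J_corr @ x + noise
    winners = np.argsort(h)[-k:]          # k neurons with largest excitation
    x_next = np.zeros_like(x)
    x_next[winners] = 1.0
    return x_next

rng = np.random.default_rng(1)
N, k = 40, 5
J_corr = rng.standard_normal((N, N))
J_corr = (J_corr + J_corr.T) / 2          # symmetric stand-in for J'
np.fill_diagonal(J_corr, 0.0)
noise = 1e-9 * rng.random(N)              # fixed tie-breaking jitter
x0 = np.zeros(N)
x0[rng.choice(N, k, replace=False)] = 1.0 # random initial pattern X^(in)
x1 = recall_step(J_corr, x0, k, noise)
```

Iterating `recall_step` at fixed `k` settles into a point attractor or a two-cycle, as described above.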
When the activity stabilizes at the initial level of activity k^(in), the k^(in) + 1 neurons with maximal synaptic excitation are chosen for the next iteration step, and the network activity evolves to some attractor at the new level of activity k^(in) + 1. Then the level of activity increases to k^(in) + 2, and so on, until the number of active neurons reaches the final level r_f N with r_f > p. Thus, one trial of the recall procedure contains (r_f − r^(in)) N external steps and several steps inside each external step to reach some attractor for a fixed level of activity. At the end of each external step the relative Lyapunov function was calculated by the formula

Λ = X^T(t + 1) J X(t) / (rN),    (7)

where X^T(t + 1) and X(t) are the two network states in the cyclic attractor (for a point attractor X^T(t + 1) = X(t)). The relative Lyapunov function is the mean synaptic excitation of the neurons belonging to the attractor at the end of the external step with k = rN neurons. Attractors with the highest Lyapunov function would obviously be the winners in most trials of the recall process. Thus, more and more trials are required to obtain a new attractor with a relatively small value of the Lyapunov function. To overcome this problem, the dominant attractors should be deleted from the network memory. The deletion was performed according to the Hebbian unlearning rule, by subtraction of ΔJ_ij from the synaptic connections J_ij, where
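A single trial of the recall procedure, raising the activity level one neuron at a time and recording the relative Lyapunov function (7) at each external step, might look as follows (an illustrative sketch, not the original implementation; the inner loop detects point attractors exactly and otherwise simply uses the last two states, as eq. (7) allows):

```python
import numpy as np

def winners_step(J, x, k, noise):
    # One parallel step at fixed activity: keep the k most excited neurons.
    h = J @ x + noise
    x_next = np.zeros_like(x)
    x_next[np.argsort(h)[-k:]] = 1.0
    return x_next

def trial(J, x0, k_in, k_fin, noise, inner=50):
    """One trial of the two-run recall: raise the activity level one
    neuron at a time (external steps) and record the relative Lyapunov
    function, eq. (7), at each attractor.  Returns (k, Lambda) pairs;
    the kink of the Lambda(k) curve marks a factor."""
    N = len(x0)
    x = x0
    curve = []
    for k in range(k_in, k_fin + 1):
        prev = x
        for _ in range(inner):            # settle into point/2-cycle attractor
            prev, x = x, winners_step(J, x, k, noise)
            if np.array_equal(x, prev):   # point attractor reached
                break
        r = k / N
        curve.append((k, x @ J @ prev / (r * N)))   # eq. (7)
    return curve

rng = np.random.default_rng(2)
N = 60
J = rng.standard_normal((N, N)); J = (J + J.T) / 2
np.fill_diagonal(J, 0.0)
noise = 1e-9 * rng.random(N)
x0 = np.zeros(N); x0[rng.choice(N, 3, replace=False)] = 1.0
curve = trial(J, x0, k_in=3, k_fin=10, noise=noise)
```

On a matrix learned from factored data, a true trajectory shows the kink of `Lambda(k)` at k = pN; the random matrix here only exercises the mechanics.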
ΔJ_ij = (η J̄(X) / 2) [(X_i(t − 1) − r)(X_j(t) − r) + (X_j(t − 1) − r)(X_i(t) − r)],    (8)

J̄(X) is the average synaptic connection between the active neurons of the attractor, X(t − 1) and X(t) are the patterns of network activity at the last time steps of the iteration process, r is the level of activity, and η is an unlearning rate. For a point attractor X(t) = X(t − 1), and for a cyclic attractor X(t − 1) and X(t) are the two states of the attractor. If the unlearning rule (8) is applied, the time to reveal all factors is proportional to the number of elements in the connection matrix, i.e. N².

There are three important similarities between the described procedure of the BFA and linear PCA. First, PCA is based on a covariance matrix similar to the connection matrix of the Hopfield network. Second, factor search in PCA can be performed by an iteration procedure similar to that described by (5) and (6), but the binarization of the synaptic excitations by the step function must be replaced by their normalization: X(t + 1) = h(t)/‖h(t)‖. Then the iteration procedure starting from any random state converges to the eigenvector f_1 of the covariance matrix with the largest eigenvalue Λ_1. Just this eigenvector is treated as the first factor in PCA. Third, to obtain the next factor, the first factor must be deleted from the covariance matrix by the subtraction of Λ_1 f_1 f_1^T, and so on. This subtraction is similar to the Hebbian unlearning (8).

However, the BFA by a Hopfield-like network has one principal difference from linear PCA. The attractors of the iteration procedure in PCA are always factors, while in Hopfield-like networks the iteration procedure can converge to factors (true attractors) and to spurious attractors which are far from all factors. Thus, two main questions arise in view of the BFA by the Hopfield-like network. First, how often would the network activity converge to one of the factors starting from a random state? Second, is it possible to distinguish true and spurious attractors when the network activity converges to some stable state? Both these questions are answered in the next section.
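Hebbian unlearning (8) amounts to subtracting a symmetrized outer product of the attractor's centered states from the connection matrix. A possible numpy sketch (names are ours; the pairwise mean used for J̄(X) is our reading of "average synaptic connection between active neurons"):

```python
import numpy as np

def unlearn(J, x_prev, x_curr, r, eta):
    """Hebbian unlearning, eq. (8): delete a found attractor from memory.
    x_prev, x_curr are the attractor's two states (equal for a point
    attractor); r is the activity level, eta the unlearning rate."""
    active = x_curr > 0
    n_act = int(active.sum())
    # J_bar(X): mean connection over ordered pairs of distinct active neurons
    J_bar = J[np.ix_(active, active)].sum() / (n_act * (n_act - 1))
    a, b = x_prev - r, x_curr - r
    dJ = eta * J_bar / 2 * (np.outer(a, b) + np.outer(b, a))  # eq. (8)
    J_new = J - dJ
    np.fill_diagonal(J_new, 0.0)          # keep J_ii = 0
    return J_new

rng = np.random.default_rng(3)
N = 30
J = rng.standard_normal((N, N)); J = (J + J.T) / 2
np.fill_diagonal(J, 0.0)
x = np.zeros(N); x[:6] = 1.0              # a point attractor, for illustration
J_new = unlearn(J, x, x, r=0.2, eta=0.5)
```

The update is symmetric, so the modified matrix remains a valid Hopfield connection matrix.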
2 Artificial signals

To reveal the peculiarities of true and spurious attractors we performed computer experiments with artificial signals. Each pattern of the learning set is supposed to be a Boolean superposition of exactly C factors, and each factor is supposed to contain exactly n = pN 1-s and (1 − p)N 0-s. Thus, each factor f^(l) ∈ B_n^N and, for each pattern of the learning set, the vector of factor scores S ∈ B_C^L, where B_n^N = {X : X ∈ {0, 1}^N, Σ_{i=1}^{N} X_i = n}. We supposed the factor loadings and factor scores to be statistically independent.

As an example, Fig. 1 demonstrates the changes of the relative Lyapunov function for N = 3000, L = 5300, p = 0.02 and C = 10. A recall process started at r^(in) = . The trajectories of the network dynamics form two separate groups. As shown in Fig. 2, the trajectories with higher values of the Lyapunov function are true and those with lower ones are spurious. This figure relates the values of the Lyapunov function for patterns of network activity at the points r = p to the maximal overlaps of these patterns with factors. The overlap between two patterns X^(1) and X^(2) with pN active neurons was calculated by the formula

m(X^(1), X^(2)) = (1/(Np(1 − p))) Σ_{i=1}^{N} (X_i^(1) − p)(X_i^(2) − p).
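The construction of the artificial learning set, and the overlap measure just defined, can be sketched as follows (illustrative numpy code with deliberately small sizes; the paper itself uses N = 3000, L = 5300):

```python
import numpy as np

def make_signals(M, N, L, p, C, seed=0):
    """Generate the artificial learning set of Sec. 2: L binary factors
    with exactly n = p*N ones each; every pattern is the Boolean (OR)
    superposition of exactly C randomly chosen factors, eq. (2)."""
    rng = np.random.default_rng(seed)
    n = int(p * N)
    F = np.zeros((L, N), dtype=int)
    for l in range(L):
        F[l, rng.choice(N, n, replace=False)] = 1
    X = np.zeros((M, N), dtype=int)
    S = np.zeros((M, L), dtype=int)
    for m in range(M):
        chosen = rng.choice(L, C, replace=False)
        S[m, chosen] = 1
        X[m] = F[chosen].max(axis=0)      # Boolean sum of the C factors
    return X, S, F

def overlap(x1, x2, p):
    """Normalized overlap between two binary patterns with p*N active
    neurons: 1 for identical patterns, ~0 for independent ones."""
    N = len(x1)
    return ((x1 - p) * (x2 - p)).sum() / (N * p * (1 - p))

X, S, F = make_signals(M=100, N=500, L=40, p=0.02, C=5)
```

Identical patterns give overlap exactly 1, which is how the true trajectories in Fig. 2 are recognized.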
Fig. 1. Relative Lyapunov function λ in dependence on the relative network activity r.

Fig. 2. Values of the normalized Lyapunov function in relation to overlaps with the closest factors.

According to this formula, the overlap between equal patterns is equal to 1 and the mean overlap between independent patterns is equal to 0. Patterns with a high Lyapunov function have a high overlap with one of the factors, while the patterns with a low Lyapunov function are far from all the factors. It is shown that true and spurious trajectories are separated by the values of their Lyapunov functions. In Figs. 1 and 2 the values of the Lyapunov function are normalized by the mean value of this function over the true attractors at the point r = p.

The second characteristic feature of the true trajectories is the existence of a kink at the point r = p, where the level of network activity coincides with that in the factors (see Fig. 1). When r < p, an increase of r results in an almost linear increase of the relative Lyapunov function. The increase of r occurs in this case due to the joining of neurons belonging to the factor, which are strongly connected with the other neurons of the factor. The joining of new neurons then results in a proportional increase of the mean synaptic excitation to the active neurons of the factor, which is just equal to their relative Lyapunov function. When r > p, the increase of r occurs due to the joining of random neurons that are connected with the factor by weak connections. Thus, the increase of the relative Lyapunov function for a true trajectory sharply slows, and it tends to the values of the Lyapunov function for spurious trajectories. The use of these two features of true trajectories provides a reliable tool for the recognition of factors, and if the unlearning rule is applied, the time to reveal all factors is proportional to the number of elements in the connection matrix, i.e. N².
This phenomenon was clearly confirmed in our previous papers [FHP04, HFR05, PFH06], where our method was used for textual data analysis.

3 Analysis of parliament voting

There are many real problems [DLW03] for which the BFA is helpful. We proved its contribution in information retrieval [FHP04, HFR05, PFH06], analyzing textual data. For the following analysis we used as a data source the record of deputies' voting in the Russian parliament in 2004 [PAT05]. Each polling is encoded as a
binary vector with component 1 if the corresponding deputy voted affirmatively and 0 if negatively. The number of voting cases during the year was . The number of deputies (and consequently the dimensionality of the signal space and the network size) amounts to 430 (20 deputies who voted fewer than 10 times were excluded from the analysis).

Fig. 3 shows the Lyapunov function along trajectories starting from 1500 random initial states. All these states converge to four trajectories. Two of them have obvious kinks and were therefore identified as two factors. The factor with the highest Lyapunov function contains 50 deputies and completely coincides with the fraction of the Communist Party (CPRF). The other factor contains 36 deputies. All of them belong to the fraction of the Liberal-Democratic Party (LDPR), which contains 37 deputies in total. Thus one of the members of this fraction fell out of the corresponding factor. The pointed kinks on the corresponding trajectories give evidence that these fractions are the most disciplined and that their members vote coherently.

Fig. 3. Relative Lyapunov function λ in dependence on the number of active neurons. Thick points are the kinks of the first and second factors.

Fig. 4. The same as in Fig. 3 after deleting the first two factors. The thick point is the kink of the third factor.

Fig. 4 demonstrates the trajectories after deletion of the two factors. Starting from 1500 initial states, they converge to only two trajectories. One of them has a kink, but it is not as strict as for the CPRF and LDPR factors. We supposed that the point where the second derivative of the Lyapunov function with respect to k has its minimum marks the third factor. The factor contains 37 deputies. All of them belong to the fraction Motherland (ML), which contains 41 deputies in total. Thus 4 of its members fell out of the factor.
The fuzziness of the kink on the trajectory gives evidence that this fraction is not as homogeneous as the first two; indeed, the fraction actually split into two fractions in . Matching the neurons along the second trajectory in Fig. 4 with the list of deputies has shown that it corresponds to the fraction United Russia (UR). This fraction is the largest, containing 285 deputies in total, but it is less homogeneous. Therefore the Lyapunov function along the trajectory is low and it has no kink at all. Fig. 5 shows the trajectories of the neurodynamics after additionally deleting the third factor from the network. The two remaining trajectories contain members of UR and independent deputies (ID). The upper trajectory contains only members of UR, and the lower one mainly ID, but also members of UR. This is additional evidence of the heterogeneity of UR. The factors
UR and ID were identified by the minimums of the second derivatives along the corresponding trajectories. The general relation between the parliament fractions and the obtained factors is shown in Table 1.

Table 1. Relation between parliament fractions and factors (each cell gives the factor / cluster membership counts).

fractions/factors
UR            283 /       / 0     0 / 0    0 / 0    2 / 0
CPRF          0 / 0     51 / 49   0 / 0    0 / 2    0 / 0
LDPR          1 / 2      0 / 0   36 / 35   0 / 0    0 / 0
ML            3 / 3      0 / 0    0 / 0   37 / 38   1 / 0
Independent   1 / 14     0 / 0    0 / 1    0 / 1   15 / 0

The fit between the fractions and the factors was estimated by the F-measure. Averaged over all fractions it amounted to . Since the obtained factors do not overlap, we may interpret them as clusters and compare our results with those obtained by some traditional methods of clustering [MPP05].

Fig. 5. The same as in Figs. 3 and 4 after deleting the first three factors.

Fig. 6. 2D map of the electoral college. Thin lines are the borders of clusters; markers denote UR, CPRF, LDPR, ML and ID.

First, we tried clustering methods based on a similarity matrix. The similarity between two deputies was calculated by comparison of the vectors of their voting. We used different measures of similarity: Euclidean distance, cosine, Jaccard and Dice. Both hierarchical and K-means clustering gave clusters far from the parliament fractions: all fractions intersected in clusters, and the fraction LDPR could not be separated from UR at all. Second, we performed mapping of the parliament members by the method of multidimensional scaling. The results are shown in Fig. 6. The obtained map was clustered; the borders of the clusters are depicted by thin lines. Generally, as with the factors obtained before, the clusters coincide with the parliament fractions, except for the independent deputies. The results of clustering and factorization are compared in Table 1. The mean F-measure amounted to 0.95, which is slightly smaller than that obtained for the factors.
4 Conclusion

We have shown that the modified recurrent neural network is capable of performing the BFA of signals of high dimension and complexity. The new, more efficient method of sequential factor extraction, based on the characteristic behavior of the Lyapunov function, was described and used for analytical tasks. The efficiency of this approach is shown not only on simulated data but on real data as well. In our previous papers we showed its high efficiency in application to textual data; here its ability is demonstrated in the field of politics. The resulting factors were not overlapping because of the nature of the data. This allowed us to compare the results obtained by the BFA with those obtained by some traditional methods of clustering. None of them gave fully appropriate results. Only the results obtained by the multidimensional scaling method were partially successful, but still worse than the BFA.

Acknowledgement

This work was partially supported by grant RFBR No. , by the projects No. 1ET , 201/05/0079, and by the Institutional Research Plan AVOZ awarded by the Grant Agency of the Czech Republic.

References

[DLW03] De Leeuw, J.: Principal component analysis of binary data. Application to roll-call analysis. (2003)
[FHM03] Frolov, A.A., Husek, D., Muraviev, I.P.: Informational efficiency of sparsely encoded Hopfield-like autoassociative memory. Optical Memory & Neural Networks, 12, (2003)
[FSH04] Frolov, A.A., Sirota, A.M., Husek, D., Muraviev, I.P., Polyakov, P.Y.: Binary factorization in Hopfield-like neural networks: single-step approximation and computer simulations. Neural Networks World, 14, (2004)
[FHP04] Frolov, A.A., Husek, D., Polyakov, P.A., Rezankova, H., Snasel, V.: Binary Factorization of Textual Data by Hopfield-Like Neural Network. In: Antoch, J. (ed) Computational Statistics. Physica Verlag, Heidelberg, (2004)
[HFR05] Husek, D., Frolov, A.A., Rezankova, H., Snasel, V., Polyakov, P.Y.: Neural Network Nonlinear Factor Analysis of High Dimensional Binary Signals. In: SITIS, University of Bourgogne, Dijon, (2005)
[PFH06] Polyakov, P.Y., Frolov, A.A., Husek, D.: Binary factor analysis by Hopfield network and its application to automatic text classification. In: Neuroinformatics, MIFI, Moscow, Russia (2006)
[PAT05]
[MPP05]
More informationPerron Vectors of an Irreducible Nonnegative Interval Matrix
Perron Vectors of an Irreducble Nonnegatve Interval Matrx Jr Rohn August 4 2005 Abstract As s well known an rreducble nonnegatve matrx possesses a unquely determned Perron vector. As the man result of
More informationConvexity preserving interpolation by splines of arbitrary degree
Computer Scence Journal of Moldova, vol.18, no.1(52), 2010 Convexty preservng nterpolaton by splnes of arbtrary degree Igor Verlan Abstract In the present paper an algorthm of C 2 nterpolaton of dscrete
More informationWeek3, Chapter 4. Position and Displacement. Motion in Two Dimensions. Instantaneous Velocity. Average Velocity
Week3, Chapter 4 Moton n Two Dmensons Lecture Quz A partcle confned to moton along the x axs moves wth constant acceleraton from x =.0 m to x = 8.0 m durng a 1-s tme nterval. The velocty of the partcle
More informationThe Geometry of Logit and Probit
The Geometry of Logt and Probt Ths short note s meant as a supplement to Chapters and 3 of Spatal Models of Parlamentary Votng and the notaton and reference to fgures n the text below s to those two chapters.
More informationNeural Networks & Learning
Neural Netorks & Learnng. Introducton The basc prelmnares nvolved n the Artfcal Neural Netorks (ANN) are descrbed n secton. An Artfcal Neural Netorks (ANN) s an nformaton-processng paradgm that nspred
More informationParametric fractional imputation for missing data analysis. Jae Kwang Kim Survey Working Group Seminar March 29, 2010
Parametrc fractonal mputaton for mssng data analyss Jae Kwang Km Survey Workng Group Semnar March 29, 2010 1 Outlne Introducton Proposed method Fractonal mputaton Approxmaton Varance estmaton Multple mputaton
More informationSUPPLEMENTARY INFORMATION
do: 0.08/nature09 I. Resonant absorpton of XUV pulses n Kr + usng the reduced densty matrx approach The quantum beats nvestgated n ths paper are the result of nterference between two exctaton paths of
More informationAppendix B: Resampling Algorithms
407 Appendx B: Resamplng Algorthms A common problem of all partcle flters s the degeneracy of weghts, whch conssts of the unbounded ncrease of the varance of the mportance weghts ω [ ] of the partcles
More informationModule 3 LOSSY IMAGE COMPRESSION SYSTEMS. Version 2 ECE IIT, Kharagpur
Module 3 LOSSY IMAGE COMPRESSION SYSTEMS Verson ECE IIT, Kharagpur Lesson 6 Theory of Quantzaton Verson ECE IIT, Kharagpur Instructonal Objectves At the end of ths lesson, the students should be able to:
More informationAutomatic Object Trajectory- Based Motion Recognition Using Gaussian Mixture Models
Automatc Object Trajectory- Based Moton Recognton Usng Gaussan Mxture Models Fasal I. Bashr, Ashfaq A. Khokhar, Dan Schonfeld Electrcal and Computer Engneerng, Unversty of Illnos at Chcago. Chcago, IL,
More informationFREQUENCY DISTRIBUTIONS Page 1 of The idea of a frequency distribution for sets of observations will be introduced,
FREQUENCY DISTRIBUTIONS Page 1 of 6 I. Introducton 1. The dea of a frequency dstrbuton for sets of observatons wll be ntroduced, together wth some of the mechancs for constructng dstrbutons of data. Then
More informationUnsupervised Learning
Unsupervsed Learnng Kevn Swngler What s Unsupervsed Learnng? Most smply, t can be thought of as learnng to recognse and recall thngs Recognton I ve seen that before Recall I ve seen that before and I can
More informationAn Improved multiple fractal algorithm
Advanced Scence and Technology Letters Vol.31 (MulGraB 213), pp.184-188 http://dx.do.org/1.1427/astl.213.31.41 An Improved multple fractal algorthm Yun Ln, Xaochu Xu, Jnfeng Pang College of Informaton
More informationCHAPTER 5 NUMERICAL EVALUATION OF DYNAMIC RESPONSE
CHAPTER 5 NUMERICAL EVALUATION OF DYNAMIC RESPONSE Analytcal soluton s usually not possble when exctaton vares arbtrarly wth tme or f the system s nonlnear. Such problems can be solved by numercal tmesteppng
More informationHidden Markov Models & The Multivariate Gaussian (10/26/04)
CS281A/Stat241A: Statstcal Learnng Theory Hdden Markov Models & The Multvarate Gaussan (10/26/04) Lecturer: Mchael I. Jordan Scrbes: Jonathan W. Hu 1 Hdden Markov Models As a bref revew, hdden Markov models
More informationQuantum and Classical Information Theory with Disentropy
Quantum and Classcal Informaton Theory wth Dsentropy R V Ramos rubensramos@ufcbr Lab of Quantum Informaton Technology, Department of Telenformatc Engneerng Federal Unversty of Ceara - DETI/UFC, CP 6007
More informationDesign and Optimization of Fuzzy Controller for Inverse Pendulum System Using Genetic Algorithm
Desgn and Optmzaton of Fuzzy Controller for Inverse Pendulum System Usng Genetc Algorthm H. Mehraban A. Ashoor Unversty of Tehran Unversty of Tehran h.mehraban@ece.ut.ac.r a.ashoor@ece.ut.ac.r Abstract:
More informationNON-LINEAR CONVOLUTION: A NEW APPROACH FOR THE AURALIZATION OF DISTORTING SYSTEMS
NON-LINEAR CONVOLUTION: A NEW APPROAC FOR TE AURALIZATION OF DISTORTING SYSTEMS Angelo Farna, Alberto Belln and Enrco Armellon Industral Engneerng Dept., Unversty of Parma, Va delle Scenze 8/A Parma, 00
More informationQuantitative Discrimination of Effective Porosity Using Digital Image Analysis - Implications for Porosity-Permeability Transforms
2004, 66th EAGE Conference, Pars Quanttatve Dscrmnaton of Effectve Porosty Usng Dgtal Image Analyss - Implcatons for Porosty-Permeablty Transforms Gregor P. Eberl 1, Gregor T. Baechle 1, Ralf Weger 1,
More informationA Fast Computer Aided Design Method for Filters
2017 Asa-Pacfc Engneerng and Technology Conference (APETC 2017) ISBN: 978-1-60595-443-1 A Fast Computer Aded Desgn Method for Flters Gang L ABSTRACT *Ths paper presents a fast computer aded desgn method
More informationRELIABILITY ASSESSMENT
CHAPTER Rsk Analyss n Engneerng and Economcs RELIABILITY ASSESSMENT A. J. Clark School of Engneerng Department of Cvl and Envronmental Engneerng 4a CHAPMAN HALL/CRC Rsk Analyss for Engneerng Department
More informationCS 468 Lecture 16: Isometry Invariance and Spectral Techniques
CS 468 Lecture 16: Isometry Invarance and Spectral Technques Justn Solomon Scrbe: Evan Gawlk Introducton. In geometry processng, t s often desrable to characterze the shape of an object n a manner that
More informationCSci 6974 and ECSE 6966 Math. Tech. for Vision, Graphics and Robotics Lecture 21, April 17, 2006 Estimating A Plane Homography
CSc 6974 and ECSE 6966 Math. Tech. for Vson, Graphcs and Robotcs Lecture 21, Aprl 17, 2006 Estmatng A Plane Homography Overvew We contnue wth a dscusson of the major ssues, usng estmaton of plane projectve
More informationArmy Ants Tunneling for Classical Simulations
Electronc Supplementary Materal (ESI) for Chemcal Scence. Ths journal s The Royal Socety of Chemstry 2014 electronc supplementary nformaton (ESI) for Chemcal Scence Army Ants Tunnelng for Classcal Smulatons
More information10-701/ Machine Learning, Fall 2005 Homework 3
10-701/15-781 Machne Learnng, Fall 2005 Homework 3 Out: 10/20/05 Due: begnnng of the class 11/01/05 Instructons Contact questons-10701@autonlaborg for queston Problem 1 Regresson and Cross-valdaton [40
More informationLifetime prediction of EP and NBR rubber seal by thermos-viscoelastic model
ECCMR, Prague, Czech Republc; September 3 th, 2015 Lfetme predcton of EP and NBR rubber seal by thermos-vscoelastc model Kotaro KOBAYASHI, Takahro ISOZAKI, Akhro MATSUDA Unversty of Tsukuba, Japan Yoshnobu
More information2.3 Nilpotent endomorphisms
s a block dagonal matrx, wth A Mat dm U (C) In fact, we can assume that B = B 1 B k, wth B an ordered bass of U, and that A = [f U ] B, where f U : U U s the restrcton of f to U 40 23 Nlpotent endomorphsms
More informationUncertainty and auto-correlation in. Measurement
Uncertanty and auto-correlaton n arxv:1707.03276v2 [physcs.data-an] 30 Dec 2017 Measurement Markus Schebl Federal Offce of Metrology and Surveyng (BEV), 1160 Venna, Austra E-mal: markus.schebl@bev.gv.at
More informationStructure and Drive Paul A. Jensen Copyright July 20, 2003
Structure and Drve Paul A. Jensen Copyrght July 20, 2003 A system s made up of several operatons wth flow passng between them. The structure of the system descrbes the flow paths from nputs to outputs.
More informationTransfer Functions. Convenient representation of a linear, dynamic model. A transfer function (TF) relates one input and one output: ( ) system
Transfer Functons Convenent representaton of a lnear, dynamc model. A transfer functon (TF) relates one nput and one output: x t X s y t system Y s The followng termnology s used: x y nput output forcng
More informationNotes on Frequency Estimation in Data Streams
Notes on Frequency Estmaton n Data Streams In (one of) the data streamng model(s), the data s a sequence of arrvals a 1, a 2,..., a m of the form a j = (, v) where s the dentty of the tem and belongs to
More informationQueueing Networks II Network Performance
Queueng Networks II Network Performance Davd Tpper Assocate Professor Graduate Telecommuncatons and Networkng Program Unversty of Pttsburgh Sldes 6 Networks of Queues Many communcaton systems must be modeled
More informationAssignment 4. Adsorption Isotherms
Insttute of Process Engneerng Assgnment 4. Adsorpton Isotherms Part A: Compettve adsorpton of methane and ethane In large scale adsorpton processes, more than one compound from a mxture of gases get adsorbed,
More informationHomework Assignment 3 Due in class, Thursday October 15
Homework Assgnment 3 Due n class, Thursday October 15 SDS 383C Statstcal Modelng I 1 Rdge regresson and Lasso 1. Get the Prostrate cancer data from http://statweb.stanford.edu/~tbs/elemstatlearn/ datasets/prostate.data.
More informationPop-Click Noise Detection Using Inter-Frame Correlation for Improved Portable Auditory Sensing
Advanced Scence and Technology Letters, pp.164-168 http://dx.do.org/10.14257/astl.2013 Pop-Clc Nose Detecton Usng Inter-Frame Correlaton for Improved Portable Audtory Sensng Dong Yun Lee, Kwang Myung Jeon,
More informationNon-linear Canonical Correlation Analysis Using a RBF Network
ESANN' proceedngs - European Smposum on Artfcal Neural Networks Bruges (Belgum), 4-6 Aprl, d-sde publ., ISBN -97--, pp. 57-5 Non-lnear Canoncal Correlaton Analss Usng a RBF Network Sukhbnder Kumar, Elane
More informationA Bayes Algorithm for the Multitask Pattern Recognition Problem Direct Approach
A Bayes Algorthm for the Multtask Pattern Recognton Problem Drect Approach Edward Puchala Wroclaw Unversty of Technology, Char of Systems and Computer etworks, Wybrzeze Wyspanskego 7, 50-370 Wroclaw, Poland
More informationEEL 6266 Power System Operation and Control. Chapter 3 Economic Dispatch Using Dynamic Programming
EEL 6266 Power System Operaton and Control Chapter 3 Economc Dspatch Usng Dynamc Programmng Pecewse Lnear Cost Functons Common practce many utltes prefer to represent ther generator cost functons as sngle-
More informationSome Comments on Accelerating Convergence of Iterative Sequences Using Direct Inversion of the Iterative Subspace (DIIS)
Some Comments on Acceleratng Convergence of Iteratve Sequences Usng Drect Inverson of the Iteratve Subspace (DIIS) C. Davd Sherrll School of Chemstry and Bochemstry Georga Insttute of Technology May 1998
More informationTesting for seasonal unit roots in heterogeneous panels
Testng for seasonal unt roots n heterogeneous panels Jesus Otero * Facultad de Economía Unversdad del Rosaro, Colomba Jeremy Smth Department of Economcs Unversty of arwck Monca Gulett Aston Busness School
More informationSPANC -- SPlitpole ANalysis Code User Manual
Functonal Descrpton of Code SPANC -- SPltpole ANalyss Code User Manual Author: Dale Vsser Date: 14 January 00 Spanc s a code created by Dale Vsser for easer calbratons of poston spectra from magnetc spectrometer
More informationx = , so that calculated
Stat 4, secton Sngle Factor ANOVA notes by Tm Plachowsk n chapter 8 we conducted hypothess tests n whch we compared a sngle sample s mean or proporton to some hypotheszed value Chapter 9 expanded ths to
More informationReport on Image warping
Report on Image warpng Xuan Ne, Dec. 20, 2004 Ths document summarzed the algorthms of our mage warpng soluton for further study, and there s a detaled descrpton about the mplementaton of these algorthms.
More informationGlobal Sensitivity. Tuesday 20 th February, 2018
Global Senstvty Tuesday 2 th February, 28 ) Local Senstvty Most senstvty analyses [] are based on local estmates of senstvty, typcally by expandng the response n a Taylor seres about some specfc values
More informationUNR Joint Economics Working Paper Series Working Paper No Further Analysis of the Zipf Law: Does the Rank-Size Rule Really Exist?
UNR Jont Economcs Workng Paper Seres Workng Paper No. 08-005 Further Analyss of the Zpf Law: Does the Rank-Sze Rule Really Exst? Fungsa Nota and Shunfeng Song Department of Economcs /030 Unversty of Nevada,
More informationComposite Hypotheses testing
Composte ypotheses testng In many hypothess testng problems there are many possble dstrbutons that can occur under each of the hypotheses. The output of the source s a set of parameters (ponts n a parameter
More informationUsing Immune Genetic Algorithm to Optimize BP Neural Network and Its Application Peng-fei LIU1,Qun-tai SHEN1 and Jun ZHI2,*
Advances n Computer Scence Research (ACRS), volume 54 Internatonal Conference on Computer Networks and Communcaton Technology (CNCT206) Usng Immune Genetc Algorthm to Optmze BP Neural Network and Its Applcaton
More informationComputational Biology Lecture 8: Substitution matrices Saad Mneimneh
Computatonal Bology Lecture 8: Substtuton matrces Saad Mnemneh As we have ntroduced last tme, smple scorng schemes lke + or a match, - or a msmatch and -2 or a gap are not justable bologcally, especally
More informationMMA and GCMMA two methods for nonlinear optimization
MMA and GCMMA two methods for nonlnear optmzaton Krster Svanberg Optmzaton and Systems Theory, KTH, Stockholm, Sweden. krlle@math.kth.se Ths note descrbes the algorthms used n the author s 2007 mplementatons
More informationLOW BIAS INTEGRATED PATH ESTIMATORS. James M. Calvin
Proceedngs of the 007 Wnter Smulaton Conference S G Henderson, B Bller, M-H Hseh, J Shortle, J D Tew, and R R Barton, eds LOW BIAS INTEGRATED PATH ESTIMATORS James M Calvn Department of Computer Scence
More informationDifference Equations
Dfference Equatons c Jan Vrbk 1 Bascs Suppose a sequence of numbers, say a 0,a 1,a,a 3,... s defned by a certan general relatonshp between, say, three consecutve values of the sequence, e.g. a + +3a +1
More information