An Algorithm of Two-Phase Learning for Elman Neural Networks to Avoid the Local Minima Problem


IJCSNS International Journal of Computer Science and Network Security, VOL.7 No.4, April 2007

An Algorithm of Two-Phase Learning for Elman Neural Networks to Avoid the Local Minima Problem

Zhiqiang Zhang, Zheng Tang
Faculty of Engineering, Toyama University, Toyama-shi, Japan

Summary
Elman Neural Networks (ENNs) have been efficient identification tools in many areas (classification and prediction fields) because they have dynamic memories. However, one of the problems often associated with this type of network is the local minima problem, which usually occurs during learning. To solve this problem and speed up convergence, we propose an improved algorithm with two phases: a backpropagation phase and a gradient ascent phase. When the network gets stuck in a local minimum, the gradient ascent phase is performed in an attempt to fill up the local minimum valley by modifying the gain parameters in a gradient ascent direction of the energy function. We apply this method to Boolean Series Prediction Questions to demonstrate its validity. The simulation results show that the proposed method can avoid the local minima problem, largely accelerate convergence, and produce good results on the prediction tasks.
Key words: Elman Neural Network (ENN), Local Minima Problem, Gain Parameter, Boolean Series Prediction Questions (BSPQ)

1. Introduction

The Elman Neural Network (ENN) is one type of partial recurrent neural network, a class that also includes Jordan neural networks [1-2]. The ENN consists of a two-layer backpropagation network with an additional feedback connection from the output of the hidden layer to its input. The advantage of this feedback path is that it allows the ENN to recognize and generate temporal and spatial patterns. This means that, after training, interrelations between the current input and the internal states are processed to produce the output and to represent the relevant past information in the internal states [3-4].
As a result, the ENN has been widely used in various fields, from a temporal version of the Exclusive-OR function to the discovery of syntactic or semantic categories in natural language data [1]. However, the ENN is a local recurrent network, so when learning a problem it needs more hidden neurons and more training time than other methods actually require for a solution. Furthermore, the ENN is less able to find the most appropriate weights for the hidden neurons and often settles into sub-optimal areas because the error gradient is approximated [5]. Therefore, starting with a fair number of neurons makes it more likely that the hidden neurons will begin dividing up the input space in useful ways. Since the ENN uses backpropagation (BP) to process its signals, it has been shown to suffer from the sub-optimal solution problem [6-10], and the same problem occurs in prediction tasks for the ENN. The efficiency of the ENN is limited to low-order systems due to insufficient memory capacity [11-12]. The ENN has failed to identify even second-order linear systems, as reported in the references [1, 6, 13-14], and several approaches have been suggested in the literature to increase its performance with simple modifications [15-19]. One of the modified methods, proposed by Pham and Liu, adds a self-connection weight (fixed between 0.0 and 1.0 before the training process) to the context layer. Furthermore, two methods, the output-input feedback Elman network (a context layer added between the output layer and the input layer) and the output-hidden feedback Elman network (a context layer added between the output layer and the hidden layer), are classical improvements of the ENN [20]. These two methods have largely enhanced the memory capacity of the network and found broad application by adding extra context units.
The modifications of the ENN suggested in the literature have mostly been able to improve certain kinds of problems, but it is not yet clear which network architecture is best suited to dynamic system identification or prediction [21]. At the same time, these methods usually change or add elements or connections in the network and increase the complexity of the computation. These improved modifications attempt to add further feedback connections to the model in order to increase the memory capacity and thereby overcome the tendency to sink into local minima. Random perturbations of the search direction and various kinds of stochastic adjustment to the current set of weights are largely ineffective at enabling a network to escape from local minima, and they often make the network fail to converge to a global minimum within a reasonable number of iterations [22-23]. So the local minimum problem remains a serious one that occurs in various applications.

Manuscript received April 5, 2007. Manuscript revised April 25, 2007.

In this paper, we propose a novel approach to supervised learning for the multilayer artificial ENN. The learning model has two phases: a backpropagation phase and a gradient ascent phase. The backpropagation phase performs steepest descent on a surface in weight space whose height at any point equals an error measure, and finds a set of weights minimizing that error measure. When backpropagation gets stuck in a local minimum, the gradient ascent phase attempts to fill up the valley by modifying the gain parameters in a gradient ascent direction of the error measure. The two phases are repeated until the network gets out of the local minimum. The learning model has been applied to Boolean Series Prediction Question (BSPQ) problems, including the "11", "111" and "00" problems. The proposed algorithm is shown to be capable of escaping from local minima and achieves better simulation results than the original ENN and the improved ENN. Since a three-layer network is capable of forming arbitrarily close approximations to any continuous nonlinear mapping [24-25], we use three layers for all training networks.

2. Structure of the ENN

Fig. 1 shows the structure of a simple ENN. After the hidden units are computed, their values are used to compute the output of the network and are also stored as "extra inputs" (called context units) to be used the next time the network is operated; the recurrent units of the context layer correspond one-to-one to the units of the hidden layer. Thus, the context units provide a weighted sum of the previous values of the hidden units as input to the hidden units. As shown in Fig. 1, the activations are copied from the hidden layer to the context layer on a one-for-one basis, with a fixed weight of 1.0 (w = 1.0). The forward connection weights between the context units and the hidden units are trainable like the other weights [1].
If self-connections are introduced to the context units and the values of the self-connection weights (a) are fixed between 0.0 and 1.0 (usually 0.5) before the training process, the result is the improved ENN proposed by Pham and Liu [6]. When the weights (a) are 0, the network is the original ENN. Fig. 2 shows the internal learning process of the ENN under the error backpropagation algorithm. From Fig. 2 we can see that training such a network is not straightforward, since the output of the network depends not only on the current inputs but on all previous inputs to the network. So the previous values must be traced through the recurrent connections. One approach used in machine learning is to unroll the process through time, as shown in Fig. 3.

Fig. 1. The Structure of the ENN

Fig. 3 shows a long feedforward network in which backpropagation can calculate the derivatives of the error (at each output unit) by unrolling the network back to the beginning. When the input at the next time step t+1 is presented, the context units contain values which are exactly the hidden unit values at time t (and, through them, times t-1, t-2, ...); thus, these context units provide the network with memory [28]. Therefore, the ENN is converted into a dynamical network that makes efficient use of the temporal information in the input sequence, both for classification [29-30] and for prediction [31-32].

Fig. 2. Internal Process Analysis of the ENN (input, hidden, and output layers with context layers; error backpropagation runs against the information stream toward the expected output)

Based on Fig. 2 and Fig. 3 it is obvious that the ENN is identical to a feed-forward network except that the hidden units take inputs consisting of weighted sums of the previous values of the hidden units. In a word, the ENN is not merely a function of its inputs: it also computes some value from its past values, is able to recall it when required, and then uses it again. However, these representations of temporal context need not be literal; they represent a memory which is highly task- and stimulus-dependent [1].
So the algorithm for the network is almost the same as the backpropagation algorithm, except for the context layers, which bring memory to the network.
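As a sketch of this structure, the forward pass of a minimal Elman network can be written as follows. This is a non-authoritative illustration: the class name, the `step` interface, and the seed are our own, though the 0.0-1.0 weight initialization and the fixed 1.0 copy-back to the context layer follow the text.

```python
import numpy as np

def sigmoid(x, gain=1.0):
    # Sigmoid activation with a "gain" parameter g (used later in Eq. 15).
    return 1.0 / (1.0 + np.exp(-gain * x))

class ElmanSketch:
    """Minimal Elman network: input -> hidden (with context feedback) -> output."""

    def __init__(self, n_in, n_hid, n_out, rng=None):
        rng = rng or np.random.default_rng(0)
        self.W_xh = rng.uniform(0.0, 1.0, (n_hid, n_in))   # input -> hidden
        self.W_ch = rng.uniform(0.0, 1.0, (n_hid, n_hid))  # context -> hidden (trainable r)
        self.W_ho = rng.uniform(0.0, 1.0, (n_out, n_hid))  # hidden -> output
        self.b_h = np.zeros(n_hid)
        self.b_o = np.zeros(n_out)
        self.context = np.zeros(n_hid)  # copy of previous hidden activations

    def step(self, x):
        # Hidden net input sums the external input and the context feedback.
        net_h = self.W_xh @ x + self.W_ch @ self.context + self.b_h
        h = sigmoid(net_h)
        self.context = h.copy()  # one-for-one copy back, fixed weight 1.0
        return sigmoid(self.W_ho @ h + self.b_o), h
```

Running `step` twice on the same input gives different outputs, which is exactly the memory effect of the context layer: the second call sees the first call's hidden activations.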

Fig. 3. Unrolling the ENN through time (hidden units and inputs at times T, T-1, T-2, each with its context layer and output Out_T, Out_{T-1}, Out_{T-2})

Fig. 4 is a flowchart of the proposed learning algorithm: Phase I is the backpropagation phase in weight space, and Phase II is the gradient ascent phase in gain parameter space. Beginning with Phase I, if the error measure is less than the error criterion the algorithm ends; otherwise Phase II is performed and the algorithm returns to Phase I.

Fig. 4. Flowchart of the proposed algorithm

3. Two-Phase Learning Algorithm

3.1 Backpropagation Phase for the ENN

The backpropagation ENN learning algorithm and its variations are widely used, for instance for discovering syntactic or semantic categories in natural language data. The backpropagation algorithm for the ENN tries to find a set of weights and thresholds that minimizes an overall error measure

E = \sum_{t=0}^{T} E^t    (1)

where t indexes the presentations of the training set over the time interval [0, T]. In our paper, the time element is updated by the next input of the pattern training set, so we obtain

E = \sum_{t=0}^{T} E^t = \sum_{t=0}^{T} \sum_{p=1}^{P} E_p^t    (2)

where E_p is defined by

E_p = \frac{1}{2} \sum_{k} (t_{pk} - o_{pk})^2    (3)

(the time superscript is omitted where it is clear from context), where t_{pk} is the target value (desired output) of the k-th component of the output for pattern p, o_{pk} is the k-th unit of the actual output pattern produced by the presentation of input pattern p, and k indexes the output units. To get the correct generalization of the delta rule, the BP algorithm for the ENN sets

\Delta w_{ji} = -\eta \frac{\partial E}{\partial w_{ji}}    (4)

where w_{ji} is the weight connecting unit i to unit j. The net input of each unit j in a layer (except the input layer) is given by

net_j^p = \theta_j + \sum_i w_{ji} o_i^p + \sum_i r_{ji} u_i^p    (5)

where net_j^p is the net input to unit j produced by the presentation of pattern p at the current time, \theta_j is the threshold of unit j, and o_i^p is the output value of unit i for pattern p.
Let r_{ji} represent the forward weights from the context layer to the hidden layer, and u_i^p the recurrent value fed from the hidden layer to the context layer; the term \sum_i r_{ji} u_i^p is present only when net_j^p is the net input of a unit in the hidden layer, and is 0 otherwise. The number of neuron units in the hidden layer and in the context layer is the same for the ENN. The output of unit j for pattern p is

o_j^p = f(net_j^p)    (6)

where f(x) is a semilinear activation function which is differentiable and nondecreasing. According to the definition of the ENN, the output value of the hidden layer at time t-1 is the input value of the context layer at time t, so

u_i^{p,t} = o_i^{p,t-1}    (7)

The rule for changing the weights (and thresholds) typically used in backpropagation is given by

\Delta w_{ji} = -\eta \frac{\partial E}{\partial w_{ji}} = -\eta \frac{\partial E}{\partial net_j^p} \frac{\partial net_j^p}{\partial w_{ji}} = \eta \delta_j^p o_i^p    (8)

where \Delta w_{ji} is the change to be made to w_{ji} following the presentation of pattern p, \eta is called the learning rate, and \delta_j^p is defined as -\partial E / \partial net_j^p at the current time. When unit j is an output unit, \delta_j^p is given by

\delta_j^p = (t_{pj} - o_{pj}) f'(net_j^p)    (9)

and when unit j is a hidden unit, \delta_j^p is given by

\delta_j^p = f'(net_j^p) \sum_m \delta_m^p w_{mj}    (10)

where f'(net_j^p) is the derivative of the activation function of unit j and w_{mj} are the weights between the output layer and the hidden layer. For the weight r_{ji} between the context layer and the hidden layer, the adjustment is given by

\Delta r_{ji} = -\eta \frac{\partial E}{\partial r_{ji}} = -\eta \frac{\partial E}{\partial o_j^p} \frac{\partial o_j^p}{\partial r_{ji}}    (11)

where o_j^p is the output value of hidden unit j, and the two factors are obtained as follows:

\frac{\partial E}{\partial o_j^p} = \sum_m \frac{\partial E}{\partial net_m^p} \frac{\partial net_m^p}{\partial o_j^p} = -\sum_m \delta_m^p w_{mj}    (12)

where net_m^p is the net input of output unit m and w_{mj} is the weight between hidden unit j and output unit m, and

\frac{\partial o_j^p}{\partial r_{ji}} = \frac{\partial}{\partial r_{ji}} f\Big(\theta_j + \sum_i w_{ji} o_i^p + \sum_i r_{ji} u_i^p\Big) = u_i^p f'(net_j^p)    (13)

where u_i^p is the input value from the context layer to hidden unit j. So we obtain the final computation equation

\Delta r_{ji} = -\eta \frac{\partial E}{\partial r_{ji}} = \eta \Big(\sum_m \delta_m^p w_{mj}\Big) f'(net_j^p) u_i^p = \eta \delta_j^p u_i^p    (14)

Thus, the backpropagation rule (referred to as the generalized delta rule) causes each iteration to modify the weights so as to approximate steepest descent. But if backpropagation descends into a minimum that achieves insufficient accuracy in the functional approximation, it fails to learn. To help the network escape from such local minima, we add a gradient ascent phase in gain space; the details are described in the next section. First, we must consider when a gradient ascent phase should be started. In the backpropagation phase, the absolute value of the change of the error measure is accumulated over every 50 weight corrections, where one weight correction corresponds to a backpropagation modification of the weights for all patterns.
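A minimal sketch of one Phase I weight correction for a single pattern, following Eqs. (8)-(10) and (14), is given below. The function name and flat argument list are our own; the gain g is kept explicit so the same derivative f'(net) = g·o·(1-o) used later in Phase II applies, with g = 1 recovering the standard sigmoid.

```python
import numpy as np

def sigmoid(x, g=1.0):
    # f(x) = 1 / (1 + exp(-g*x)), the gained sigmoid of Eq. (15)
    return 1.0 / (1.0 + np.exp(-g * x))

def phase1_step(x, target, context, W_xh, W_ch, W_ho, b_h, b_o, eta=1.0, g=1.0):
    """One backpropagation weight correction for one pattern (Eqs. 8-10, 14)."""
    # forward pass (Eqs. 5-6): the hidden net input includes the context term
    h = sigmoid(W_xh @ x + W_ch @ context + b_h, g)
    o = sigmoid(W_ho @ h + b_o, g)
    # output-layer delta, Eq. (9), with f'(net) = g*o*(1-o)
    delta_o = (target - o) * g * o * (1.0 - o)
    # hidden-layer delta, Eq. (10)
    delta_h = g * h * (1.0 - h) * (W_ho.T @ delta_o)
    # weight and threshold updates, Eq. (8); W_ch plays the role of r, Eq. (14)
    W_ho += eta * np.outer(delta_o, h)
    b_o  += eta * delta_o
    W_xh += eta * np.outer(delta_h, x)
    W_ch += eta * np.outer(delta_h, context)
    b_h  += eta * delta_h
    return o, h, 0.5 * np.sum((target - o) ** 2)  # pattern error, Eq. (3)
```

Applied repeatedly to a fixed pattern, these updates drive the error measure down, which is the steepest-descent behavior described above.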
If the accumulated change of E over the 50 weight corrections is less than a small pre-selected constant (0.001 in our paper) while the current error measure E is still larger than the error criterion (E = 0.1 or E = 0.01, where E is the sum-of-squares error over the full training set), backpropagation learning (Phase I) is considered to be trapped in a local minimum. The backpropagation phase then stops, and the algorithm proceeds to the gradient ascent phase.

3.2 Gradient Ascent Phase for the ENN

It is well known that backpropagation in weight space for the ENN may converge to either a local minimum or the global minimum, and there is no generally effective way for the network to move from a local minimum to the global minimum. We propose a gradient ascent learning method that attempts to fill up a local minimum valley by increasing the error measure of the local minimum along the best path. Usually, the activation function of unit j is a sigmoid function f(x) with a "gain" parameter g:

f(x) = \frac{1}{1 + e^{-gx}}    (15)

The error measure is also a function of the gain parameters of the neurons. Therefore we can intentionally increase the error measure by modifying the gain parameters in a gradient ascent direction of the error measure in gain parameter space, and thereby drive backpropagation out of the local minimum. We consider the gain parameters as variables that can be modified during Phase II, just as the weights and thresholds are in Phase I. Suppose that a vector \mathbf{g} collects the gain parameters of the neurons. Since the modification requires the change of the gains to be in the positive gradient direction, we take

\Delta \mathbf{g} = \varepsilon \nabla E(\mathbf{g})    (16)

where \varepsilon is a positive learning rate constant and \nabla E(\mathbf{g}) is the gradient of the error measure E with respect to the gain parameters. If the constant \varepsilon is small enough, the change of the gain parameters results in an increase of the error measure (Phase II).
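The overall schedule of the two phases, including the stall test above, can be sketched as the following control loop. The `net` object with `total_error`, `backprop_epoch` and `gain_ascent_step` methods is hypothetical, and the 0.02 escape threshold is an assumed constant taken from the simulation settings described later; this is an illustration of the schedule, not the authors' implementation.

```python
def two_phase_train(net, patterns, max_steps=30000, criterion=0.1,
                    window=50, stall_tol=1e-3, escape_tol=0.02):
    """Two-phase schedule: Phase I backpropagation; when E stalls above the
    criterion, Phase II gain ascent raises E until it has moved by at least
    escape_tol, then Phase I resumes. Returns True for a successful run."""
    acc = 0.0
    prev_E = net.total_error(patterns)
    for step in range(max_steps):
        if prev_E < criterion:
            return True                       # converged: successful run
        net.backprop_epoch(patterns)          # Phase I: one weight correction
        E = net.total_error(patterns)
        acc += abs(E - prev_E)
        prev_E = E
        if (step + 1) % window == 0:
            if acc < stall_tol and E > criterion:
                # trapped in a local minimum: Phase II, gradient ascent on gains
                raised = 0.0
                while raised < escape_tol:
                    raised += abs(net.gain_ascent_step(patterns))
                prev_E = net.total_error(patterns)
            acc = 0.0
    return False                              # step limit reached: unsuccessful
```

A toy stand-in for `net` (backpropagation that shrinks E, gain ascent that raises it) is enough to exercise both the success path and the 30000-step failure path.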
The derivative of the error measure with respect to the gain parameter g_j of neuron j can be calculated using the chain rule as follows (here we do not consider the time index). The gain update is

\Delta g_j = \varepsilon \frac{\partial E}{\partial g_j}    (17)

Now define the gained net input \widetilde{net}_j = g_j net_j, so that o_j = 1/(1 + e^{-\widetilde{net}_j}); then

\Delta g_j = \varepsilon \frac{\partial E}{\partial \widetilde{net}_j} \frac{\partial \widetilde{net}_j}{\partial g_j} = \varepsilon \frac{\partial E}{\partial \widetilde{net}_j} net_j    (18)

because

\frac{\partial \widetilde{net}_j}{\partial g_j} = net_j = \frac{\widetilde{net}_j}{g_j}    (19)

and the error signal \delta_j of Phase I satisfies

\frac{\partial E}{\partial \widetilde{net}_j} = -\frac{\delta_j}{g_j}    (20)

Finally,

\Delta g_j = \varepsilon \frac{\partial E}{\partial g_j} = -\varepsilon \delta_j \frac{net_j}{g_j}    (21)

Here, \delta_j can be computed in the same way as in conventional backpropagation: for an output unit j,

\delta_j = (t_j - o_j) f'(net_j)    (22)

and for a unit j that is not an output unit,

\delta_j = f'(net_j) \sum_m \delta_m w_{mj}    (23)

We take f(x) = 1/(1 + e^{-gx}) as the activation function, in which case

f'(net_j) = g_j o_j (1 - o_j)    (24)

Hence for an output unit

\delta_j = g_j (t_j - o_j) o_j (1 - o_j)    (25)

and for a hidden unit

\delta_j = g_j o_j (1 - o_j) \sum_m \delta_m w_{mj}    (26)

where w_{mj} represents the weights between the hidden layer and the output layer. We can calculate \Delta g_j through the above \delta rules, applied as in the gradient descent phase. In Phase II, the absolute value of the change of the error measure E is accumulated over the gain corrections; once the accumulated value exceeds a pre-selected small constant (0.02 in our simulation), the algorithm returns to Phase I.

To explain simply how the proposed algorithm helps a network escape from a local minimum, we use a two-dimensional graph of the error measure of a neural network with a local minimum and a global minimum, as shown in Fig. 5(a). The vertical axis represents the error produced by the network, corresponding to some state in weight space on the horizontal axis (to facilitate understanding, the weight space is expressed in one dimension). The initial network defines a point (e.g., point A) on the slope of a specific "valley"; the bottom of this valley (point B) is found by error minimization through backpropagation in weight space (Fig. 5(b)). Then the error measure is evaluated. If the error measure is less than the error criterion, the algorithm stops. If the connection weights cannot be corrected further even though the network still has error values larger than the error criterion, the algorithm goes to Phase II. Phase II tends to increase the error measure in the gradient ascent direction of the gain space.
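The chain of Eqs. (15)-(25) can be checked numerically for a single output unit: the analytic derivative dE/dg = -delta * net / g implied by Eqs. (21) and (25) should match a central finite-difference estimate. The concrete values of net, g and t below are arbitrary choices for the check, not values from the paper.

```python
import math

def f(x, g):
    # sigmoid with gain, Eq. (15)
    return 1.0 / (1.0 + math.exp(-g * x))

net, g, t = 0.7, 1.0, 1.0             # arbitrary net input, gain, target
o = f(net, g)
delta = g * (t - o) * o * (1.0 - o)   # output-unit delta, Eq. (25)
analytic = -delta * net / g           # dE/dg from Eq. (21)

E = lambda gg: 0.5 * (t - f(net, gg)) ** 2   # single-unit error, Eq. (3)
eps = 1e-6
numeric = (E(g + eps) - E(g - eps)) / (2 * eps)
assert abs(analytic - numeric) < 1e-8
```

With these values the derivative is negative, so the ascent step of Eq. (16) lowers the gain in order to raise the error, which is exactly the valley-filling behavior described above.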
Fig. 5. The relationship between the error measure and the weight space

After the gradient ascent phase (Phase II), the error measure at point B is raised, and point B again becomes a point on the slope of a "valley" in the weight landscape (Fig. 5(c)). Backpropagation then seeks a new minimum (point C) in the weight landscape (Fig. 5(d)). Thus, the repetition of the backpropagation phase in weight space and the gradient ascent phase in gain space may result in a movement out of a local minimum, as shown in Fig. 5(e)(f). In this way, Phases I and II are repeated until the error measure is less than the error criterion, which is counted as a successful run; conversely, when the iterative steps reach an upper limit of 30000, the run is counted as unsuccessful.

4. Simulation

To test the effectiveness of the proposed method, we compared its performance with those of the original ENN algorithm (proposed by Elman [1]) and the improved ENN algorithm (proposed by Pham and Liu [6]) on a series of BSPQ problems, including the "11", "111" and "00" problems. To compare them, each algorithm was used to train a network on a given problem with identical starting weight values and momentum values. In our proposed method, when the network was trapped in a local minimum, it went to Phase II and the gain parameters of all neurons in the hidden layer were modified to help the network escape from the local minimum, which we found to be the more effective scheme for the prediction tasks. Three aspects of training algorithm performance, "success rate", "iterative steps" and "training time", were assessed for each algorithm. Every method was run for 100 trials. For the proposed ENN algorithm, the iterative steps comprise two parts: the backpropagation phase and the gradient ascent phase. All simulations were performed on a PC (Pentium 4, 2.8 GHz, 1 GB). We used the modified backpropagation algorithm with momentum 0.9, and the learning rates η = 1.0 for backpropagation and ε = 1.0 for the gradient ascent were selected. The weights and thresholds were initialized randomly from 0.0 to 1.0, and the gain parameters of all neurons were set to 1.0 initially. Under these conditions the requested error criterion was demanding: if the network reached the error criterion (E = 0.1 or E = 0.01), all patterns in the training set fell within a tolerance of 0.05 for each target element. We then used the well-trained network to perform the final prediction on the sequence P1 to test its prediction capacity. For all trials, 150 patterns were provided, both to satisfy the equilibrium of the training set and to ensure a reasonable running time for all algorithms.

4.1 The "11" question

The Boolean Series Prediction Question is a problem of time-sequence prediction; first let us see the definition of the BSPQ [10]. Suppose that we want to train a network with an input P and targets T as defined below.
P = … and T = …

Here T is defined to be 0, except when two 1's occur in P, in which case T is 1; we call this the "11" problem (one kind of BSPQ). Likewise, when "00" or "111" (two 0's or three 1's) occurs, the problem is named the "00" or "111" problem. In this paper we define the prediction set P1 randomly, as the 20-figure sequence below.

P1 = …

First, we deal with the "11" question and analyze the effect of the memory from the context layer of the network. Table 1 compares the simulation results of the three algorithms; our proposed method succeeded almost 100% of the time in reaching the convergence criterion. The original ENN was able to predict the requested input test, but its training success rate was low: it succeeded only 75% of the time when E was set to 0.1. The improved ENN increased the dynamic memorization ability of the network because of the self-connection weights (a = 0.5). Although the improved ENN could accelerate convergence of the learning process (its iterative steps were fewer than the original ENN's), it could not fundamentally avoid the local minima problem; its success rate was only 68% when the error criterion was tightened. Fig. 6 shows the learning characteristics of the three methods with the same initial weights for the network (1-5-1) when E was set to 0.1. From the simulation results we can see that the proposed ENN algorithm needed only about 170 iterations to succeed, whereas the original ENN and the improved ENN did not reach the goal value and fell into local minima at points A and B, respectively. The improved ENN only accelerated the learning process; it could not escape the local minima problem. Our proposed ENN algorithm, however, reached the convergence point by avoiding the local minima.
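Although the concrete 0/1 sequences for P and T did not survive in this copy, the BSPQ target rule itself is easy to state in code. The following sketch (the function name is ours) builds the target series for any of the "11", "111" and "00" problems and draws a random 150-bit training input, as described above.

```python
import random

def bspq_targets(p, pattern="11"):
    """BSPQ targets: T[i] = 1 exactly when the last len(pattern) inputs,
    ending at position i, match the pattern (e.g. two consecutive 1's
    for the "11" problem), and T[i] = 0 otherwise."""
    n = len(pattern)
    bits = [int(c) for c in pattern]
    return [1 if i + 1 >= n and p[i - n + 1:i + 1] == bits else 0
            for i in range(len(p))]

# a random training input of 150 patterns, as in the simulations above
p = [random.randint(0, 1) for _ in range(150)]
t = bspq_targets(p, "11")
```

For example, `bspq_targets([0, 1, 1, 1, 0], "11")` yields `[0, 0, 1, 1, 0]`: a 1 appears at every position that completes a run of two 1's.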
From the "roosed ENN" line we can see that at time 1, after the Bacroagation learning about 127 iterative stes, the error measure E decreased from 3.12 to 2.22 and got stuc in a local mimimum. Then the gradient ascent learning rocess (Phase II) was erformed, and error is increased from 2.22 to 2.24 (2). Then the Bacroagation learning settled to a new local minimum (3). Bacroagation learning escaed from the local minimum and converged to the global minimum (4). For the rediction set P 1, we can get its corresonding exected results with below T 1 T 1 = Fig. 7 is the simulation rediction result of P 1 for the "11" question with our roosed ENN algorithm. In the Fig. 7, the two lines reresented the T 1 line and the rediction results line resectively. From the Fig. 7 we can see that the tolerance of every attern was less than required standard So the networ has enough ability to do the rediction of the given tas as desired.

Fig. 6. Training error curves of the three ENN algorithms

For the same problem, as we gradually increased the number of neurons in the hidden layer, the original ENN and the improved ENN were able to reach the convergence point within a finite number of iterative steps; our proposed ENN, however, reached the error criterion 100% of the time by avoiding the local minima. The results are shown in Table 2.

Fig. 7. Expected output T1 and the prediction result with the proposed ENN

To further test the effectiveness of the proposed algorithm, we continued to increase the number of neuron units in the hidden layer. From Table 3 we can see that the proposed method achieved an almost 100% success rate, higher than the other two algorithms, by avoiding the local minima problems, although its iterative steps were sometimes greater than those of the improved ENN algorithm. This is because the gradient ascent phase (Phase II) adds to the network's iterative steps.

4.2 The "111" and "00" questions

By changing the rules of the input sequence, we can continue to test the validity of our proposed ENN algorithm. The specific parameter settings of the network were the same as for the "11" question. Table 4 gives the detailed comparison results on the "111" question for the three algorithms. From Table 4 we can see that, for the network (1-7-1), the success rate of the proposed ENN was lower (only 90% when the error criterion was set to 0.01) because of insufficient capacity with fewer hidden unit nodes. As the number of hidden unit nodes was increased, our proposed ENN algorithm succeeded almost 100% of the time, more often than the original ENN and improved ENN algorithms, although its iterative steps were greater than those of the other two methods. The proposed ENN successfully escaped from the local minima problems through the gradient ascent phase. Table 5 gives the detailed comparison results on the "00" question for the three algorithms.
From Table 5 we can see that, as the complexity of the problem increased, the training time and the iterative steps also increased for all three algorithms, but our proposed algorithm was much more effective at avoiding the local minima problem on the complicated BSPQ problems. Although the BSPQ problem is only a simple prediction task for the ENN, other problems are also compatible with our proposed algorithm, such as the detection of wave amplitude [10]. Our algorithm is also appropriate for the Jordan network and other partially modified recurrent neural networks, such as the network proposed by Shi [20].

5. Conclusion

In this paper, we proposed a supervised learning method for the Elman Neural Network to avoid the local minima problem. The approach was shown to converge to the global minimum more often than the original ENN learning algorithm and the improved ENN algorithm in network training. The algorithm was applied to the BSPQ problems, including the "11", "111" and "00" problems. Through the analysis of the results on the various Boolean Series Prediction problems, we can see that the proposed algorithm is effective at avoiding the local minima problems and attains a high success rate. Although we applied the algorithm only to the basic ENN, we regard this improvement as a benchmark and will make the effort to study the application of the algorithm to recurrent neural networks such as the Jordan network and the fully recurrent network.

Acknowledgment

I would like to thank Professor Zheng Tang, my supervisor, for his many suggestions and constant support in the whole course of this research.

Table 1. Experiment results for the "11" question with 5 neurons in the hidden layer (1-5-1 network)

Method        | Success rate (100 trials) | Iterative (average steps) | Average CPU time (s)
              | E=0.1    E=0.01           | E=0.1    E=0.01           | E=0.1    E=0.01
Original ENN  | 75%      66%              | …        …                | …        …
Improved ENN  | 87%      65%              | …        …                | …        …
Proposed ENN  | 100%     100%             | …        …                | …        …

Table 2. Experiment results for the "11" question with 7 neurons in the hidden layer (1-7-1 network)

Method        | Success rate (100 trials) | Iterative (average)       | Average CPU time (s)
              | E=0.1    E=0.01           | E=0.1    E=0.01           | E=0.1    E=0.01
Original ENN  | 89%      78%              | …        …                | …        …
Improved ENN  | 95%      90%              | …        …                | …        …
Proposed ENN  | 100%     100%             | …        …                | …        …

Table 3. Experiment results for the "11" question with 10 neurons in the hidden layer (1-10-1 network)

Method        | Success rate (100 trials) | Iterative (average)       | Average CPU time (s)
              | E=0.1    E=0.01           | E=0.1    E=0.01           | E=0.1    E=0.01
Original ENN  | 91%      89%              | …        …                | …        …
Improved ENN  | 95%      91%              | …        …                | …        …
Proposed ENN  | 100%     98%              | …        …                | …        …

Table 4. Experiment results for the "111" question with different neuron units in the hidden layer

Network       | Method        | Success rate (100 trials) | Iterative (average) | Average CPU time (s)
              |               | E=0.1    E=0.01           | E=0.1    E=0.01     | E=0.1    E=0.01
1-7-1 network | Original ENN  | 56%      43%              | …        …          | …        …
              | Improved ENN  | 75%      60%              | …        …          | …        …
              | Proposed ENN  | 95%      90%              | …        …          | …        …
… network     | Original ENN  | 78%      74%              | …        …          | …        …
              | Improved ENN  | 85%      79%              | …        …          | …        …
              | Proposed ENN  | 100%     99%              | …        …          | …        …
… network     | Original ENN  | 85%      75%              | …        …          | …        …
              | Improved ENN  | 90%      81%              | …        …          | …        …
              | Proposed ENN  | 100%     98%              | …        …          | …        …

Table 5. Experiment results for the "00" question with different neuron units in the hidden layer

Network       | Method        | Success rate (100 trials) | Iterative (average) | Average CPU time (s)
              |               | E=0.1    E=0.01           | E=0.1    E=0.01     | E=0.1    E=0.01
… network     | Original ENN  | 78%      71%              | …        …          | …        …
              | Improved ENN  | 90%      84%              | …        …          | …        …
              | Proposed ENN  | 100%     97%              | …        …          | …        …
… network     | Original ENN  | 85%      81%              | …        …          | …        …
              | Improved ENN  | 95%      90%              | …        …          | …        …
              | Proposed ENN  | 99%      99%              | …        …          | …        …
… network     | Original ENN  | 95%      89%              | …        …          | …        …
              | Improved ENN  | 98%      92%              | …        …          | …        …
              | Proposed ENN  | 100%     100%             | …        …          | …        …

References
[1] J. L. Elman, "Finding Structure in Time", Cognitive Science, 14, 1990.
[2] M. I. Jordan, "Attractor dynamics and parallelism in a connectionist sequential machine", in Proceedings of the 8th Conference on Cognitive Science, 1986.
[3] C. W. Omlin, C. L. Giles, "Extraction of rules from discrete-time recurrent neural networks", Neural Networks, 9(1), 41-52, 1996.
[4] P. Stagge, B. Sendhoff, "Organisation of past states in recurrent neural networks: implicit embedding", in: M. Mohammadian (Ed.), Computational Intelligence for Modelling, Control & Automation, IOS Press, Amsterdam, 1999.
[5] G. Cybenko, "Approximation by superpositions of a sigmoidal function", Mathematics of Control, Signals, and Systems, 2, 1989.
[6] D. T. Pham and X. Liu, "Identification of linear and nonlinear dynamic systems using recurrent neural networks", Artificial Intelligence in Engineering, Vol. 8, pp. 90-97, 1993.
[7] A. Smith, "Branch Prediction with Neural Networks: Hidden Layers and Recurrent Connections", Department of Computer Science, University of California, San Diego, La Jolla, CA 92307.
[8] G. Cybenko, "Approximation by superpositions of a sigmoidal function", Mathematics of Control, Signals, and Systems, 2, 1989.
[9] "Multilayer Network Learning Algorithm Based on Pattern Search Method", IEICE Transactions on Fundamentals of Electronics, Communications and Computer Science, E86-A.
[10] http://…
[11] C. L. Giles, C. B. Miller, D. Chen, G. Z. Sun, H. H. Chen, Y. C. Lee, "Extracting and learning an unknown grammar with recurrent neural networks", in: J. E. Moody, S. J. Hanson, R. P. Lippmann (Eds.), Advances in Neural Information Processing Systems 4, Morgan Kaufmann Publishers, San Mateo, CA.
[12] R. L. Watrous, G. M. Kuhn, "Induction of finite-state automata using second-order recurrent neural networks", in: J. E. Moody, S. J. Hanson, R. P. Lippmann (Eds.), Advances in Neural Information Processing Systems 4, Morgan Kaufmann Publishers, San Mateo, CA.
[13] A. Kalinli and D. Karaboga, "Training recurrent neural networks by using parallel tabu search algorithm based on crossover operation", Engineering Applications of Artificial Intelligence, Vol. 17, 2004.
[14] D. T. Pham and X. Liu, "Training of Elman networks and dynamic system modelling", International Journal of Systems Science, Vol. 27, 1996.
[15] J. W. Dong, J. X. Qian and Y. X. Sun, "Recurrent neural networks for recursive identification of nonlinear dynamic process", in Proceedings of the IEEE International Conference on Industrial Technology, 1994.
[16] D. P. Kwok, P. Wang, and K. Zhou, "Process identification using a modified Elman neural network", International Symposium on Speech, Image Processing and Neural Networks, 1994.
[17] X. Z. Gao, X. M. Gao, and S. J. Ovaska, "A modified Elman neural network model with application to dynamical systems identification", in Proceedings of the IEEE International Conference on Systems, Man and Cybernetics, Vol. 2, 1996.
[18] W. Chagra, R. B. Abdennour, F. Bouani, M. Ksouri, and G. Favier, "A comparative study on the channel modeling using feedforward and recurrent neural network structures", in Proceedings of the IEEE International Conference on Systems, Man and Cybernetics, Vol. 4, 1998.
[19] J. E. W. Holm and N. J. H. Kotze, "Training recurrent neural networks with leapfrog", in Proceedings of the International Conference on Industrial Electronics, 1998.
[20] X. H. Shi, Y. C. Liang, X. Xu, "An improved Elman model and recurrent back-propagation control neural network", Journal of Software, 14(6), 2003.
[21] A. Kalinli and S. Sagiroglu, "Elman Network with Embedded Memory for System Identification", Journal of Information Science and Engineering, 22, 2006.
[22] D. Ingman and Y. Merlis, "Local minimization escape using thermodynamic properties of Neural Networks", Neural Networks, vol. 4, no. 3, 1991.
[23] C. Wang, J. C. Principe, "Training neural networks with additive noise in the desired signal", IEEE Transactions on Neural Networks, vol. 10, no. 6.
[24] D. Servan-Schreiber, H. Printz and J. D. Cohen, "A network model of neuromodulatory effects: Gain, signal-to-noise ratio, and behavior", Science, 249, 1990.
[25] G. Cybenko, "Approximation by superpositions of a sigmoidal function", Mathematics of Control, Signals, and Systems, vol. 2, 1989.
[26] K. Funahashi, "On the approximate realization of continuous mappings by neural networks", Neural Networks, vol. 2.
[27] K. Hornik, M. Stinchcombe and H. White, "Multilayer feedforward networks are universal approximators", Neural Networks, vol. 2.
[28] D. Servan-Schreiber, H. Printz and J. D. Cohen, "A network model of neuromodulatory effects: Gain, signal-to-noise ratio, and behavior", Science, 249, 1990.
[29] C. L. Giles, C. B. Miller, D. Chen, G. Z. Sun, H. H. Chen, Y. C. Lee, "Extracting and learning an unknown grammar with recurrent neural networks", in: J. E. Moody, S. J. Hanson, R. P. Lippmann (Eds.), Advances in Neural Information Processing Systems 4, Morgan Kaufmann Publishers, San Mateo, CA.
[30] S. Lawrence, C. L. Giles, S. Fong, "Natural language grammatical inference with recurrent neural networks", IEEE Trans. Knowledge and Data Engineering, 12(1).
[31] J. T. Connor, D. Martin, L. E. Atlas, "Recurrent neural networks and robust time series prediction", IEEE Trans. Neural Networks, 5(2).
[32] E. W. Saad, D. V. Prokhorov, D. C. Wunsch, "Comparative study of stock trend prediction using time delay, recurrent and probabilistic neural networks", IEEE Trans. Neural Networks, 9(6).
[33] D. Servan-Schreiber, H. Printz and J. D. Cohen, "A network model of neuromodulatory effects: Gain, signal-to-noise ratio, and behavior", Science, vol. 249, 1990.
[34] T. L. Burrows and M. Niranjan, "The use of feed-forward and recurrent neural networks for system identification", Technical Report, Cambridge University Engineering Department.

Zhiqiang Zhang received his B.S. degree and M.S. degree from Shandong University, Jinan, Shandong, China, in 2002 and 2005, in computer science and management science, respectively. He is currently working toward his Ph.D. degree at Toyama University, Japan. His main research interests are neural networks and optimizations.

Zheng Tang received the B.S. degree from Zhejiang University, Zhejiang, China in 1982, and an M.S. degree and a D.E. degree from Tsinghua University, Beijing, China in 1984 and 1988, respectively. From 1988 to 1989, he was an Instructor in the Institute of Microelectronics at Tsinghua University. From 1990 to 1999, he was an associate professor in the Department of Electrical and Electronic Engineering, Miyazaki University, Miyazaki, Japan. In 2000, he joined Toyama University, Toyama, Japan, where he is currently a professor in the Department of Intellectual Information Systems. His current research interests include intellectual information technology, neural networks, and optimizations.

More information

CONSTRUCTIVE NEURAL NETWORKS IN FORECASTING WEEKLY RIVER FLOW

CONSTRUCTIVE NEURAL NETWORKS IN FORECASTING WEEKLY RIVER FLOW CONSTRUCTIVE NEURAL NETWORKS IN FORECASTING WEEKLY RIVER FLOW MÊUSER VALENÇA Universidade Salgado de Oliveira - UNIVERSO /Comanhia Hidroelétrica do São Francisco - Chesf Rua Gasar Peres, 47/4 CDU, Recife,

More information

Lending Direction to Neural Networks. Richard S. Zemel North Torrey Pines Rd. La Jolla, CA Christopher K. I.

Lending Direction to Neural Networks. Richard S. Zemel North Torrey Pines Rd. La Jolla, CA Christopher K. I. Lending Direction to Neural Networks Richard S Zemel CNL, The Salk Institute 10010 North Torrey Pines Rd La Jolla, CA 92037 Christoher K I Williams Deartment of Comuter Science University of Toronto Toronto,

More information

A Unified 2D Representation of Fuzzy Reasoning, CBR, and Experience Based Reasoning

A Unified 2D Representation of Fuzzy Reasoning, CBR, and Experience Based Reasoning University of Wollongong Research Online Faculty of Commerce - aers (Archive) Faculty of Business 26 A Unified 2D Reresentation of Fuzzy Reasoning, CBR, and Exerience Based Reasoning Zhaohao Sun University

More information

Distributed K-means over Compressed Binary Data

Distributed K-means over Compressed Binary Data 1 Distributed K-means over Comressed Binary Data Elsa DUPRAZ Telecom Bretagne; UMR CNRS 6285 Lab-STICC, Brest, France arxiv:1701.03403v1 [cs.it] 12 Jan 2017 Abstract We consider a networ of binary-valued

More information

Churilova Maria Saint-Petersburg State Polytechnical University Department of Applied Mathematics

Churilova Maria Saint-Petersburg State Polytechnical University Department of Applied Mathematics Churilova Maria Saint-Petersburg State Polytechnical University Deartment of Alied Mathematics Technology of EHIS (staming) alied to roduction of automotive arts The roblem described in this reort originated

More information

Minimax Design of Nonnegative Finite Impulse Response Filters

Minimax Design of Nonnegative Finite Impulse Response Filters Minimax Design of Nonnegative Finite Imulse Resonse Filters Xiaoing Lai, Anke Xue Institute of Information and Control Hangzhou Dianzi University Hangzhou, 3118 China e-mail: laix@hdu.edu.cn; akxue@hdu.edu.cn

More information

A MIXED CONTROL CHART ADAPTED TO THE TRUNCATED LIFE TEST BASED ON THE WEIBULL DISTRIBUTION

A MIXED CONTROL CHART ADAPTED TO THE TRUNCATED LIFE TEST BASED ON THE WEIBULL DISTRIBUTION O P E R A T I O N S R E S E A R C H A N D D E C I S I O N S No. 27 DOI:.5277/ord73 Nasrullah KHAN Muhammad ASLAM 2 Kyung-Jun KIM 3 Chi-Hyuck JUN 4 A MIXED CONTROL CHART ADAPTED TO THE TRUNCATED LIFE TEST

More information

On Fractional Predictive PID Controller Design Method Emmanuel Edet*. Reza Katebi.**

On Fractional Predictive PID Controller Design Method Emmanuel Edet*. Reza Katebi.** On Fractional Predictive PID Controller Design Method Emmanuel Edet*. Reza Katebi.** * echnology and Innovation Centre, Level 4, Deartment of Electronic and Electrical Engineering, University of Strathclyde,

More information

Percolation Thresholds of Updated Posteriors for Tracking Causal Markov Processes in Complex Networks

Percolation Thresholds of Updated Posteriors for Tracking Causal Markov Processes in Complex Networks Percolation Thresholds of Udated Posteriors for Tracing Causal Marov Processes in Comlex Networs We resent the adative measurement selection robarxiv:95.2236v1 stat.ml 14 May 29 Patric L. Harrington Jr.

More information

Uncorrelated Multilinear Discriminant Analysis with Regularization and Aggregation for Tensor Object Recognition

Uncorrelated Multilinear Discriminant Analysis with Regularization and Aggregation for Tensor Object Recognition Uncorrelated Multilinear Discriminant Analysis with Regularization and Aggregation for Tensor Object Recognition Haiing Lu, K.N. Plataniotis and A.N. Venetsanooulos The Edward S. Rogers Sr. Deartment of

More information

Evaluating Process Capability Indices for some Quality Characteristics of a Manufacturing Process

Evaluating Process Capability Indices for some Quality Characteristics of a Manufacturing Process Journal of Statistical and Econometric Methods, vol., no.3, 013, 105-114 ISSN: 051-5057 (rint version), 051-5065(online) Scienress Ltd, 013 Evaluating Process aability Indices for some Quality haracteristics

More information

Distributed Rule-Based Inference in the Presence of Redundant Information

Distributed Rule-Based Inference in the Presence of Redundant Information istribution Statement : roved for ublic release; distribution is unlimited. istributed Rule-ased Inference in the Presence of Redundant Information June 8, 004 William J. Farrell III Lockheed Martin dvanced

More information

Research on Evaluation Method of Organization s Performance Based on Comparative Advantage Characteristics

Research on Evaluation Method of Organization s Performance Based on Comparative Advantage Characteristics Vol.1, No.10, Ar 01,.67-7 Research on Evaluation Method of Organization s Performance Based on Comarative Advantage Characteristics WEN Xin 1, JIA Jianfeng and ZHAO Xi nan 3 Abstract It as under the guidance

More information

Analysis of execution time for parallel algorithm to dertmine if it is worth the effort to code and debug in parallel

Analysis of execution time for parallel algorithm to dertmine if it is worth the effort to code and debug in parallel Performance Analysis Introduction Analysis of execution time for arallel algorithm to dertmine if it is worth the effort to code and debug in arallel Understanding barriers to high erformance and redict

More information

Ensemble Forecasting the Number of New Car Registrations

Ensemble Forecasting the Number of New Car Registrations Ensemble Forecasting the Number of New Car Registrations SJOERT FLEURKE Radiocommunications Agency Netherlands Emmasingel 1, 9700 AL, Groningen THE NETHERLANDS sjoert.fleurke@agentschatelecom.nl htt://www.agentschatelecom.nl

More information

Uncertainty Modeling with Interval Type-2 Fuzzy Logic Systems in Mobile Robotics

Uncertainty Modeling with Interval Type-2 Fuzzy Logic Systems in Mobile Robotics Uncertainty Modeling with Interval Tye-2 Fuzzy Logic Systems in Mobile Robotics Ondrej Linda, Student Member, IEEE, Milos Manic, Senior Member, IEEE bstract Interval Tye-2 Fuzzy Logic Systems (IT2 FLSs)

More information

Probability Estimates for Multi-class Classification by Pairwise Coupling

Probability Estimates for Multi-class Classification by Pairwise Coupling Probability Estimates for Multi-class Classification by Pairwise Couling Ting-Fan Wu Chih-Jen Lin Deartment of Comuter Science National Taiwan University Taiei 06, Taiwan Ruby C. Weng Deartment of Statistics

More information

Improved Capacity Bounds for the Binary Energy Harvesting Channel

Improved Capacity Bounds for the Binary Energy Harvesting Channel Imroved Caacity Bounds for the Binary Energy Harvesting Channel Kaya Tutuncuoglu 1, Omur Ozel 2, Aylin Yener 1, and Sennur Ulukus 2 1 Deartment of Electrical Engineering, The Pennsylvania State University,

More information

Mobility-Induced Service Migration in Mobile. Micro-Clouds

Mobility-Induced Service Migration in Mobile. Micro-Clouds arxiv:503054v [csdc] 7 Mar 205 Mobility-Induced Service Migration in Mobile Micro-Clouds Shiiang Wang, Rahul Urgaonkar, Ting He, Murtaza Zafer, Kevin Chan, and Kin K LeungTime Oerating after ossible Deartment

More information