Fuzzy Automata Induction using Construction Method
Journal of Mathematics and Statistics 2 (2): 395-400, 2006
ISSN 1549-3644
Science Publications

Fuzzy Automata Induction using Construction Method

1,2 Mo Zhi Wen and 2 Wan Min
1 Center of Intelligent Control and Development, Southwest Jiaotong University, China
2 College of Mathematics and Software Science, Sichuan Normal University, China

Abstract: Recurrent neural networks have recently been demonstrated to have the ability to learn simple grammars. In particular, networks using second-order units have been successful at this task. However, it is often difficult to predict the optimal neural network size to induce an unknown automaton from examples. Instead of just adjusting the weights in a network of fixed topology, we adopt dynamic networks (i.e., the topology and weights can be changed simultaneously during training) for this application. We apply the idea of maximizing correlation from the cascade-correlation algorithm to the second-order single-layer recurrent neural network to generate a new construction algorithm and use it to induce fuzzy finite-state automata. The experiment indicates that such a dynamic network performs well.

Key words: Fuzzy automaton, construction method, dynamic network

INTRODUCTION

Choosing an appropriate architecture for a learning task is an important issue in training neural networks. Because general methods for determining a good network architecture prior to training are generally lacking, algorithms that adapt the network architecture during training have been developed. A constructive algorithm determines both the architecture of the network and the parameters (weights, thresholds, etc.) necessary for learning the data. Compared to algorithms such as back-propagation, constructive algorithms have the following potential advantages:

* They grow the architecture of the neural network in addition to finding the parameters required.
* They can reduce training to single-node learning, substantially simplifying and accelerating the training process.
* Constructive algorithms such as cascade-correlation can achieve at least an order of magnitude improvement in learning speed over back-propagation.
* Some constructive algorithms are guaranteed to converge, unlike the back-propagation algorithm, for example.
* Algorithms such as back-propagation can suffer from catastrophic interference when learning new data; that is, the storage of new information seriously disrupts the retrieval of previously stored data. The incremental learning strategy of constructive algorithms offers a possible solution to this problem.

Rather than growing neural networks, destructive or pruning [1] algorithms remove neurons or weights from large, trained networks and retrain the reduced networks in an attempt to improve their generalization performance. Approaches include removing the smallest-magnitude or most insensitive weights, or adding a penalty term to the energy function to encourage weight decay. However, it is difficult to guess an initial network that is certain to solve the problem.

Recurrent neural networks have been demonstrated to have the ability to learn simple grammars [2-6]. However, the problem associated with recurrent networks of fixed topology is apparent: the number of hidden neurons has to be chosen empirically, and there is no general method for determining an appropriate topology for a specific application. Consequently, convergence cannot be guaranteed. Constructive or destructive methods that add or subtract neurons, layers, connections, etc. might offer a solution to this problem, though the two approaches are complementary. We propose a construction algorithm for a second-order single-layer recurrent neural network which adds one second-order recurrent hidden neuron at a time, adapting the topology of the network in addition to changing its weights; we use it to learn fuzzy finite-state machines and experimentally find it effective.
Fuzzy finite-state automata: We begin with the class of fuzzy automata which we are interested in learning.

Definition: A fuzzy finite-state automaton (FFA) is a 6-tuple < Σ, Q, Z, q_0, δ, ω >, where Σ is a finite input alphabet, Q is the set of states, Z is a finite output alphabet, q_0 is an initial state, δ: Σ × Q × [0,1] → Q is the fuzzy transition map and ω: Q → Z is the output map.

Corresponding Author: Mo Zhi Wen, Center of Intelligent Control and Development, Southwest Jiaotong University
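As a concrete illustration of the 6-tuple, a small FFA can be rendered directly as a data structure. The states, symbols and weights below are invented for illustration; they are not the automaton of Fig. 1:

```python
# A fuzzy finite-state automaton as a 6-tuple (illustrative values only).
# delta maps (symbol, state, weight) -> next state, per the definition above.
ffa = {
    "Sigma": {"a", "b"},                 # finite input alphabet
    "Q": {1, 2},                         # set of states
    "Z": {0.0, 0.3, 0.7},                # finite output alphabet
    "q0": 1,                             # initial state
    "delta": {("a", 1, 0.7): 2,          # fuzzy transition map
              ("b", 1, 0.3): 1,
              ("a", 2, 1.0): 2,
              ("b", 2, 0.3): 1},
    "omega": {1: 0.0, 2: 0.7},           # output map Q -> Z
}

def step(ffa, state, symbol):
    """Follow every transition from `state` on `symbol`, keeping its weight."""
    return [(nxt, w) for (s, q, w), nxt in ffa["delta"].items()
            if s == symbol and q == state]

print(step(ffa, 1, "a"))
```

The weight attached to each transition is what the FFA-to-DFA transformation discussed next turns into membership degrees on accepting states.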
It should be noted that a fuzzy finite automaton reduces to a conventional (crisp) one when all the transition degrees are equal to 1. The following result is the basis for mapping FFAs into corresponding deterministic recurrent neural networks [7]:

Theorem: Given an FFA, there exists a deterministic finite-state automaton (DFA) M with output alphabet Z ⊆ {θ : θ is a production weight} ∪ {0} which computes the membership function µ: Σ* → [0,1] of the language L(M).

An example of the FFA-to-DFA transformation is shown in Fig. 1a and b [8].

Second-order recurrent neural network used to induce FFA: A second-order recurrent neural network linked to an output layer has been shown to be a good technique for fuzzy automaton induction [9]. It consists of three parts (Fig. 2): an input layer, a single recurrent layer with N neurons and two output layers, both with M neurons, where M is determined by the level of accuracy desired in the output. If r is the accuracy required (r = 0.1 if the accuracy is decimal, r = 0.01 if it is centesimal, etc.), then M = 1/r + 1 (e.g., M = 11 for decimal accuracy). The hidden recurrent layer is activated by both the input neurons and the recurrent layer itself. The first output layer receives the values of the recurrent neurons at time m, where m is the length of the input string. The neurons in this layer then compete, and the winner determines the output of the network. The dynamics of the neural network are as follows:

S_i^{t+1} = g(Ξ_i^t),  Ξ_i^t = b_i + Σ_{j,k} W_{ijk} S_j^t I_k^t,  i = 1, ..., N  (1)

O_o = g(σ_o),  σ_o = Σ_{i=0}^{N-1} u_{oi} S_i^m,  o = 1, ..., M  (2)

out_i = 1 if O_i = max{O_0, ..., O_{M-1}}, and 0 otherwise  (3)

Fig. 1: Example of a transformation of a specific FFA into its corresponding DFA. (a) A fuzzy finite-state automaton with weighted state transitions. State 1 is the automaton's start state; accepting states are drawn with double circles. A transition from state q_j to q_i on input symbol a with weight θ is represented as a directed arc from q_j to q_i labeled a/θ. (b) The corresponding deterministic finite-state automaton, which computes the membership function of strings. The accepting states are labeled with their degrees of membership.
Notice that all transitions in the DFA have weight one.

Here g is a sigmoid discriminant function and b_i is the bias associated with the hidden recurrent state neurons S_i; u_{oi} connects S_i to neuron O_o of the first output layer and W_{ijk} is the second-order connection weight between the recurrent layer and the input layer. The membership degree associated with the input string is determined by the last output layer: µ_L(x) = i·r, where r is the desired accuracy and i is the index of the non-zero neuron of the last output layer.

The weights are updated by a pseudo-gradient descent algorithm. The error of each output neuron is

E_i = (1/2)(T_i − O_i)^2,

where T_i is the expected value for the ith output neuron and O_i is the provisional value given by equation (2). The total error for the network is

E = Σ_{i=0}^{M-1} E_i  (4)

When E > ε, the weights are modified as follows:

Δu_{ij} = −α ∂E/∂u_{ij},  Δw_{lon} = −α ∂E/∂w_{lon},

where ε is the error tolerance of the neural network and α is the learning rate. Further,

∂E_i/∂u_{ij} = −(T_i − O_i) O_i (1 − O_i) S_j^m
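A minimal sketch of the forward dynamics (1)-(3) follows. The start-state encoding of S^0, the sizes and the random weights are illustrative assumptions, not the paper's exact setup:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def membership(W, u, b, inputs, r=0.1):
    """Forward pass per equations (1)-(3). W has shape (N, N, L), u has
    shape (M, N), b has shape (N,); `inputs` is the string encoded as a
    sequence of one-hot vectors of length L."""
    N = W.shape[0]
    S = np.zeros(N)
    S[0] = 1.0                                # assumed start-state encoding
    for I in inputs:                          # equation (1), one step per symbol
        S = sigmoid(b + np.einsum('ijk,j,k->i', W, S, I))
    O = sigmoid(u @ S)                        # equation (2), first output layer
    i = int(np.argmax(O))                     # equation (3), winner take all
    return i * r                              # degree read off the last layer

rng = np.random.default_rng(0)
N, L, M = 3, 2, 11                            # M = 11 for decimal accuracy
W = rng.normal(size=(N, N, L))
u = rng.normal(size=(M, N))
b = np.zeros(N)
x = [np.eye(L)[0], np.eye(L)[1]]              # the string "ab", one-hot coded
print(membership(W, u, b, x))
```

With untrained random weights the returned degree is arbitrary, but it always lies on the grid {0, r, 2r, ..., 1}, which is exactly what the competing output layer enforces.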
Fig. 2: Neural network used for fuzzy grammar inference

∂E/∂w_{lon} = −Σ_{r=0}^{M-1} (T_r − O_r) g'(σ_r) Σ_{j=0}^{N-1} u_{rj} ∂S_j^m/∂w_{lon}

with the recurrence

∂S_j^m/∂w_{lon} = g'(Ξ_j^{m−1}) [ δ_{jl} S_o^{m−1} I_n^{m−1} + Σ_{i=0}^{N-1} Σ_{k=0}^{L-1} w_{jik} I_k^{m−1} ∂S_i^{m−1}/∂w_{lon} ],

initialized with ∂S_j^0/∂w_{lon} = 0.

Dynamic adaptation of the recurrent neural network architecture: Constructive algorithms dynamically grow the structure during the network's training. Various types of network-growing methods have been proposed [10-14]; however, few of them are for recurrent networks. The approach we propose in this paper is a constructive algorithm for second-order single-layer recurrent neural networks. In the course of inducing an unknown fuzzy finite-state automaton from training samples, our algorithm topologically changes the network in addition to adjusting the weights of the second-order recurrent neural network. We use the idea of maximizing correlation from the cascade-correlation algorithm to obtain prior knowledge at each stage of the dynamic training, which is then used to initialize the newly generated network for the subsequent learning.

Before describing the constructive learning, the following criteria need to be kept in mind:

* when the network structure needs to change;
* how to connect the newly created neuron to the existing network;
* how to assign initial values to the newly added connection weights.

We initialize the network with only one hidden unit. The sizes of the input and output layers are determined by the I/O representation the experimenter has chosen. We present the whole set of training examples to the recurrent neural network. When no significant error reduction has occurred, or the training epochs have reached a preset number, we measure the error over the entire training set to obtain the residual error signal according to (4). Here an epoch is defined as a training cycle in which the network sees all training samples.
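The pseudo-gradient recurrence above can be sketched end to end as one training step on a single example. The start-state encoding, the sizes and the learning rate below are illustrative assumptions:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_step(W, u, b, inputs, T, alpha=0.3):
    """One pseudo-gradient descent step on W and u for one example,
    following equations (1)-(4) and the recurrence for dS_j/dW_lon.
    Shapes: W (N, N, L), u (M, N), b (N,); `inputs` is the string as
    one-hot vectors; T (M,) is the target output vector."""
    N, _, L = W.shape
    S = np.zeros(N)
    S[0] = 1.0                            # assumed start-state encoding
    P = np.zeros((N, N, N, L))            # P[j, l, o, n] = dS_j / dW_lon
    for I in inputs:
        Xi = b + np.einsum('ijk,j,k->i', W, S, I)
        S_new = sigmoid(Xi)
        gp = S_new * (1.0 - S_new)        # g'(Xi) for the sigmoid
        # dS_i/dW_lon = g'(Xi_i) [delta_il S_o I_n + sum_jk W_ijk I_k P_jlon]
        P_new = np.einsum('ijk,k,jlon->ilon', W, I, P)
        P_new[np.arange(N), np.arange(N)] += np.outer(S, I)
        P = gp[:, None, None, None] * P_new
        S = S_new
    O = sigmoid(u @ S)                    # equation (2)
    err = T - O
    E = 0.5 * np.sum(err ** 2)            # equation (4)
    dE_du = -np.outer(err * O * (1.0 - O), S)
    dE_dW = -np.einsum('r,rj,jlon->lon', err * O * (1.0 - O), u, P)
    return W - alpha * dE_dW, u - alpha * dE_du, E

rng = np.random.default_rng(0)
N, L, M = 2, 2, 3
W = rng.normal(size=(N, N, L))
u = rng.normal(size=(M, N))
b = np.zeros(N)
x = [np.eye(L)[0], np.eye(L)[1]]          # the string "ab", one-hot coded
T = np.zeros(M); T[1] = 1.0
W1, u1, E = train_step(W, u, b, x, T)
print(E >= 0.0, W1.shape == W.shape)
```

Because P carries the derivative of every state with respect to every second-order weight forward in time, the update needs no unrolled backward pass, which is the point of the pseudo-gradient scheme.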
If the residual error is small enough, we stop; if not, we attempt to add new hidden units one by one to the single hidden layer, using the unit-creation algorithm to maximize the magnitude of the correlation between the residual error and the candidate unit's output. For the second criterion, the newly added hidden neuron receives second-order connections from the inputs and the pre-existing hidden units and then outputs to the output layer and the recurrent layer, as shown in Fig. 3.

Now we turn to the unit-creation algorithm in our method; it differs from the cascade-correlation case because of the different structure we use. We define S as in [10]:

S = Σ_o | Σ_p (V_p − V̄)(E_{p,o} − Ē_o) |  (5)

where V_p is the candidate unit's output value when the network processes the pth training example and E_{p,o} is the residual error observed at output neuron o when the network structure needs to change; V̄ and Ē_o are the values of V and E averaged over the whole set of training examples. During the maximization of S, all the pre-existing weights are frozen other than the input weights of the candidate unit and its connections with the hidden units. The following training algorithm updates the weights at the end of the presentation of the whole set of training examples. Suppose we are ready to add the hth neuron and denote by l_p the length of the pth example; then V_p = S_h^{l_p}, i.e., the output of the hth hidden neuron at time l_p. The input weights of the candidate are updated by

Δw_{hjk} = β ∂S/∂w_{hjk} = β Σ_{p,o} σ_o (E_{p,o} − Ē_o) ∂V_p/∂w_{hjk},  j = 1, ..., h;  k = 1, ..., Σ  (6)
Fig. 4: The evolution of the training process for the 1-hidden-neuron network

Fig. 3: The network starts with neuron 1 and grows incrementally to n neurons

Δw_{ihk} = β ∂S/∂w_{ihk} = β Σ_{p,o} σ_o (E_{p,o} − Ē_o) ∂V_p/∂w_{ihk},  i = 1, ..., h−1;  k = 1, ..., Σ  (7)

where σ_o is the sign of the correlation between the candidate's value and output o (due to the modulus in (5)), β is the learning rate, w_{hjk} are the trainable input weights of the newly added neuron h, w_{ihk} connect h to the pre-existing ith hidden unit and Σ is the number of input units. According to (1), the derivatives of the candidate's output are obtained by recurrences of the same form:

∂V^l/∂w_{hfm} = g'(Ξ_h^{l−1}) [ S_f^{l−1} I_m^{l−1} + Σ_k w_{hhk} I_k^{l−1} ∂V^{l−1}/∂w_{hfm} ],  f ≠ h  (8)

∂V^l/∂w_{hhm} = g'(Ξ_h^{l−1}) [ V^{l−1} I_m^{l−1} + Σ_k w_{hhk} I_k^{l−1} ∂V^{l−1}/∂w_{hhm} ]  (9)

initialized with ∂V^1/∂w = 0, where g' is the derivative, for the pth example, of the pre-existing or candidate unit's activation function with respect to the sum of its inputs.

Fig. 5: The evolution of the training process for the current network

We present the whole set of training examples to the current network and train w_{hjk} and w_{ihk} according to the update rules described in (6)-(9) until we reach a preset number of training epochs or S stops increasing. At this point, we fix the trained w_{hjk} and w_{ihk} as the initial weights of the newly added neuron; its connections to the output layer are set to zero or very small random numbers so that they initially have little or no effect on training. The old weights start with their previously trained values. Compared with cascade-correlation, in which only the output weights of the new neuron are trainable, we allow all the weights to be trainable. In order to preserve as much of the knowledge already learned as possible, we use different learning rates for the weights obtained earlier and for the output weights of the new neuron connecting to the output layer, the latter being larger than the former. With this, the third criterion is resolved.
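The correlation measure S of equation (5) can be computed directly from the candidate outputs and the residual errors. The toy values below are invented for illustration:

```python
import numpy as np

def correlation_score(V, E):
    """Candidate score S = sum_o | sum_p (V_p - Vbar)(E_po - Ebar_o) |,
    per equation (5). V has shape (P,): the candidate unit's output on
    each of the P training examples. E has shape (P, M): the residual
    error at each output neuron for each example."""
    Vc = V - V.mean()                 # center the candidate outputs
    Ec = E - E.mean(axis=0)           # center each output's residual error
    return np.abs(Vc @ Ec).sum()      # |covariance| summed over outputs

V = np.array([0.1, 0.9, 0.2, 0.8])
E = np.array([[0.30, -0.10],
              [-0.20, 0.40],
              [0.25, -0.05],
              [-0.15, 0.35]])
print(correlation_score(V, E))
```

A candidate whose output swings in step with the residual error (in either direction, hence the modulus) scores high, which is exactly the unit worth installing.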
Instead of a single candidate unit, a pool of candidates may be used in the unit-creation algorithm; the candidates are trained in parallel from different random initial weights. They all receive the same input signals and see the same residual error for each training example. Whenever we decide that no further progress on S is being made, we install the candidate whose correlation score is the best, with its trained input weights.

Fig. 6: The automaton extracted from the learned network

Simulation experiment: In this part, we have tested the construction algorithm on fuzzy grammar inference, where the training examples are generated from the fuzzy automaton shown in Fig. 1a. The experiment has been designed in the same way as in [9].

1. Generation of examples: We start from the fuzzy automaton M shown in Fig. 1a and generate a set of examples recognized by M. Each example consists of a pair (P_i, µ_i), where P_i is a random sequence formed by symbols of the alphabet Σ = {a, b} and µ_i is its membership degree in the fuzzy language L(M); see [9] for details. The sample strings are ordered according to their length.

2. Forgetting M, we train a dynamic network to induce an unknown automaton which recognizes the training examples. The initial recurrent neural network is composed of 1 recurrent hidden neuron and 11 neurons in the output layer, i.e., decimal accuracy is required. The parameters for the network learning are: learning rate α = 0.3 and error tolerance ε = 2.5×10^-5. Figure 4 shows the evolution of the training process for the 1-hidden-unit network. After 285 epochs, no significant error reduction has occurred and the residual error signal is still much larger than ε, so we add a neuron to the hidden layer.

We use 8 candidate units, each with a different set of random initial weights. The learning rate β is set to 0.3. Following (5)-(9), we maximize S in parallel until a preset number of epochs is reached or S stops increasing, and we install the candidate whose correlation score is the best. After 225 epochs, no significant increase of S has appeared. We initialize the new network as described earlier and train it with learning rate 0.15 for the weights connecting the newly added neuron to the output layer and 0.08 for the rest. The neural network learned all the examples in 348 epochs. Figure 5 shows the evolution of the error during this training process. We adopt extraction algorithm 1 of [8], partitioning of the output space, to extract a fuzzy finite automaton from the learned network.

CONCLUSION

An appropriate topology made up of a second-order recurrent neural network linked to an output layer was proposed in [8] to perform fuzzy grammatical inference. However, an appropriate number of hidden units is difficult to predict. Instead of just adjusting the weights in a network of fixed topology, we adopt the construction method for this application. We apply the idea of maximizing correlation from the cascade-correlation algorithm to the second-order recurrent neural network to generate a new construction method and use it to infer fuzzy grammars from examples. At every stage of the dynamic training, we obtain prior knowledge to initialize the newly generated network and train it until no further decrease of the error is made. This removes the empirical guesswork of the fixed-topology case and generates an appropriate topology automatically. An experiment shows that this algorithm is practical.

REFERENCES

1. Mozer, M.C. and P. Smolensky, 1989. Skeletonization: A technique for trimming the fat from a network via relevance assessment. Connection Sci., 11.
2. Giles, C.L., C.B. Miller, D. Chen, H.H. Chen, G.Z. Sun and Y.C. Lee, 1992. Learning and extracting finite state automata with second-order recurrent neural networks. Neural Computation, 4.
3. Zeng, Z., R. Goodman and P. Smyth, 1993. Learning finite state machines with self-clustering recurrent networks. Neural Computation, 5.
4. Elman, J.L., 1991. Distributed representations, simple recurrent networks and grammatical structure. Machine Learning, 7.
5. Cleeremans, A., D. Servan-Schreiber and J.L. McClelland, 1989. Finite state automata and simple recurrent networks. Neural Computation, 1.
6. Williams, R.J. and D. Zipser, 1989. A learning algorithm for continually running fully recurrent neural networks. Neural Computation, 1.
7. Thomason, M. and P. Marinos, 1974. Deterministic acceptors of regular fuzzy languages. IEEE Trans. Systems, Man and Cybernetics, 3.
8. Omlin, C.W., K.K. Thornber and C.L. Giles, 1998. Fuzzy finite-state automata can be deterministically encoded into recurrent neural networks. IEEE Trans. Fuzzy Systems, 6.
9. Blanco, A., M. Delgado and M.C. Pegalajar, 2001. Fuzzy automaton induction using neural network. Intl. J. Approximate Reasoning, 27.
10. Fahlman, S.E., 1990. The cascade-correlation learning architecture. In: Advances in Neural Information Processing Systems 2 (D. Touretzky, ed.). Morgan Kaufmann, San Mateo, CA.
11. Ash, T., 1989. Dynamic node creation in back-propagation networks. Connection Sci., 1.
12. Frean, M., 1990. The upstart algorithm: A method for constructing and training feedforward neural networks. Neural Computation, 2.
13. Gallant, S.I., 1986. Three constructive algorithms for network learning. Proc. 8th Ann. Conf. Cognitive Science Society.
14. Hirose, Y., K. Yamashita and S. Hijiya, 1991. Back-propagation algorithm which varies the number of hidden units. Neural Networks, 4.
More informationOn Fractional Predictive PID Controller Design Method Emmanuel Edet*. Reza Katebi.**
On Fractional Predictive PID Controller Design Method Emmanuel Edet*. Reza Katebi.** * echnology and Innovation Centre, Level 4, Deartment of Electronic and Electrical Engineering, University of Strathclyde,
More informationq-ary Symmetric Channel for Large q
List-Message Passing Achieves Caacity on the q-ary Symmetric Channel for Large q Fan Zhang and Henry D Pfister Deartment of Electrical and Comuter Engineering, Texas A&M University {fanzhang,hfister}@tamuedu
More informationFE FORMULATIONS FOR PLASTICITY
G These slides are designed based on the book: Finite Elements in Plasticity Theory and Practice, D.R.J. Owen and E. Hinton, 1970, Pineridge Press Ltd., Swansea, UK. 1 Course Content: A INTRODUCTION AND
More informationSystem Reliability Estimation and Confidence Regions from Subsystem and Full System Tests
009 American Control Conference Hyatt Regency Riverfront, St. Louis, MO, USA June 0-, 009 FrB4. System Reliability Estimation and Confidence Regions from Subsystem and Full System Tests James C. Sall Abstract
More informationDistributed Rule-Based Inference in the Presence of Redundant Information
istribution Statement : roved for ublic release; distribution is unlimited. istributed Rule-ased Inference in the Presence of Redundant Information June 8, 004 William J. Farrell III Lockheed Martin dvanced
More informationA New GP-evolved Formulation for the Relative Permittivity of Water and Steam
ew GP-evolved Formulation for the Relative Permittivity of Water and Steam S. V. Fogelson and W. D. Potter rtificial Intelligence Center he University of Georgia, US Contact Email ddress: sergeyf1@uga.edu
More informationApproximating min-max k-clustering
Aroximating min-max k-clustering Asaf Levin July 24, 2007 Abstract We consider the roblems of set artitioning into k clusters with minimum total cost and minimum of the maximum cost of a cluster. The cost
More informationA Recursive Block Incomplete Factorization. Preconditioner for Adaptive Filtering Problem
Alied Mathematical Sciences, Vol. 7, 03, no. 63, 3-3 HIKARI Ltd, www.m-hiari.com A Recursive Bloc Incomlete Factorization Preconditioner for Adative Filtering Problem Shazia Javed School of Mathematical
More informationA Comparison between Biased and Unbiased Estimators in Ordinary Least Squares Regression
Journal of Modern Alied Statistical Methods Volume Issue Article 7 --03 A Comarison between Biased and Unbiased Estimators in Ordinary Least Squares Regression Ghadban Khalaf King Khalid University, Saudi
More informationBayesian Model Averaging Kriging Jize Zhang and Alexandros Taflanidis
HIPAD LAB: HIGH PERFORMANCE SYSTEMS LABORATORY DEPARTMENT OF CIVIL AND ENVIRONMENTAL ENGINEERING AND EARTH SCIENCES Bayesian Model Averaging Kriging Jize Zhang and Alexandros Taflanidis Why use metamodeling
More informationInformation collection on a graph
Information collection on a grah Ilya O. Ryzhov Warren Powell October 25, 2009 Abstract We derive a knowledge gradient olicy for an otimal learning roblem on a grah, in which we use sequential measurements
More informationFig. 4. Example of Predicted Workers. Fig. 3. A Framework for Tackling the MQA Problem.
217 IEEE 33rd International Conference on Data Engineering Prediction-Based Task Assignment in Satial Crowdsourcing Peng Cheng #, Xiang Lian, Lei Chen #, Cyrus Shahabi # Hong Kong University of Science
More informationUnit 1 - Computer Arithmetic
FIXD-POINT (FX) ARITHMTIC Unit 1 - Comuter Arithmetic INTGR NUMBRS n bit number: b n 1 b n 2 b 0 Decimal Value Range of values UNSIGND n 1 SIGND D = b i 2 i D = 2 n 1 b n 1 + b i 2 i n 2 i=0 i=0 [0, 2
More informationDIFFERENTIAL evolution (DE) [3] has become a popular
Self-adative Differential Evolution with Neighborhood Search Zhenyu Yang, Ke Tang and Xin Yao Abstract In this aer we investigate several self-adative mechanisms to imrove our revious work on [], which
More informationEXACTLY PERIODIC SUBSPACE DECOMPOSITION BASED APPROACH FOR IDENTIFYING TANDEM REPEATS IN DNA SEQUENCES
EXACTLY ERIODIC SUBSACE DECOMOSITION BASED AROACH FOR IDENTIFYING TANDEM REEATS IN DNA SEUENCES Ravi Guta, Divya Sarthi, Ankush Mittal, and Kuldi Singh Deartment of Electronics & Comuter Engineering, Indian
More informationGenetic Algorithm Based PID Optimization in Batch Process Control
International Conference on Comuter Alications and Industrial Electronics (ICCAIE ) Genetic Algorithm Based PID Otimization in Batch Process Control.K. Tan Y.K. Chin H.J. Tham K.T.K. Teo odelling, Simulation
More informationSTABILITY ANALYSIS TOOL FOR TUNING UNCONSTRAINED DECENTRALIZED MODEL PREDICTIVE CONTROLLERS
STABILITY ANALYSIS TOOL FOR TUNING UNCONSTRAINED DECENTRALIZED MODEL PREDICTIVE CONTROLLERS Massimo Vaccarini Sauro Longhi M. Reza Katebi D.I.I.G.A., Università Politecnica delle Marche, Ancona, Italy
More informationFinite-State Verification or Model Checking. Finite State Verification (FSV) or Model Checking
Finite-State Verification or Model Checking Finite State Verification (FSV) or Model Checking Holds the romise of roviding a cost effective way of verifying imortant roerties about a system Not all faults
More informationPCA fused NN approach for drill wear prediction in drilling mild steel specimen
PCA fused NN aroach for drill wear rediction in drilling mild steel secimen S.S. Panda Deartment of Mechanical Engineering, Indian Institute of Technology Patna, Bihar-8000, India, Tel No.: 9-99056477(M);
More informationNUMERICAL AND THEORETICAL INVESTIGATIONS ON DETONATION- INERT CONFINEMENT INTERACTIONS
NUMERICAL AND THEORETICAL INVESTIGATIONS ON DETONATION- INERT CONFINEMENT INTERACTIONS Tariq D. Aslam and John B. Bdzil Los Alamos National Laboratory Los Alamos, NM 87545 hone: 1-55-667-1367, fax: 1-55-667-6372
More informationCHAPTER-II Control Charts for Fraction Nonconforming using m-of-m Runs Rules
CHAPTER-II Control Charts for Fraction Nonconforming using m-of-m Runs Rules. Introduction: The is widely used in industry to monitor the number of fraction nonconforming units. A nonconforming unit is
More informationCOMMUNICATION BETWEEN SHAREHOLDERS 1
COMMUNICATION BTWN SHARHOLDRS 1 A B. O A : A D Lemma B.1. U to µ Z r 2 σ2 Z + σ2 X 2r ω 2 an additive constant that does not deend on a or θ, the agents ayoffs can be written as: 2r rθa ω2 + θ µ Y rcov
More informationA New Method of DDB Logical Structure Synthesis Using Distributed Tabu Search
A New Method of DDB Logical Structure Synthesis Using Distributed Tabu Search Eduard Babkin and Margarita Karunina 2, National Research University Higher School of Economics Det of nformation Systems and
More informationInformation collection on a graph
Information collection on a grah Ilya O. Ryzhov Warren Powell February 10, 2010 Abstract We derive a knowledge gradient olicy for an otimal learning roblem on a grah, in which we use sequential measurements
More informationState Estimation with ARMarkov Models
Deartment of Mechanical and Aerosace Engineering Technical Reort No. 3046, October 1998. Princeton University, Princeton, NJ. State Estimation with ARMarkov Models Ryoung K. Lim 1 Columbia University,
More informationDynamic System Eigenvalue Extraction using a Linear Echo State Network for Small-Signal Stability Analysis a Novel Application
Dynamic System Eigenvalue Extraction using a Linear Echo State Network for Small-Signal Stability Analysis a Novel Alication Jiaqi Liang, Jing Dai, Ganesh K. Venayagamoorthy, and Ronald G. Harley Abstract
More informationResearch of PMU Optimal Placement in Power Systems
Proceedings of the 5th WSEAS/IASME Int. Conf. on SYSTEMS THEORY and SCIENTIFIC COMPUTATION, Malta, Setember 15-17, 2005 (38-43) Research of PMU Otimal Placement in Power Systems TIAN-TIAN CAI, QIAN AI
More informationMultivariable PID Control Design For Wastewater Systems
Proceedings of the 5th Mediterranean Conference on Control & Automation, July 7-9, 7, Athens - Greece 4- Multivariable PID Control Design For Wastewater Systems N A Wahab, M R Katebi and J Balderud Industrial
More informationCombining Logistic Regression with Kriging for Mapping the Risk of Occurrence of Unexploded Ordnance (UXO)
Combining Logistic Regression with Kriging for Maing the Risk of Occurrence of Unexloded Ordnance (UXO) H. Saito (), P. Goovaerts (), S. A. McKenna (2) Environmental and Water Resources Engineering, Deartment
More informationCOMPARISON OF VARIOUS OPTIMIZATION TECHNIQUES FOR DESIGN FIR DIGITAL FILTERS
NCCI 1 -National Conference on Comutational Instrumentation CSIO Chandigarh, INDIA, 19- March 1 COMPARISON OF VARIOUS OPIMIZAION ECHNIQUES FOR DESIGN FIR DIGIAL FILERS Amanjeet Panghal 1, Nitin Mittal,Devender
More information#A64 INTEGERS 18 (2018) APPLYING MODULAR ARITHMETIC TO DIOPHANTINE EQUATIONS
#A64 INTEGERS 18 (2018) APPLYING MODULAR ARITHMETIC TO DIOPHANTINE EQUATIONS Ramy F. Taki ElDin Physics and Engineering Mathematics Deartment, Faculty of Engineering, Ain Shams University, Cairo, Egyt
More informationComputer arithmetic. Intensive Computation. Annalisa Massini 2017/2018
Comuter arithmetic Intensive Comutation Annalisa Massini 7/8 Intensive Comutation - 7/8 References Comuter Architecture - A Quantitative Aroach Hennessy Patterson Aendix J Intensive Comutation - 7/8 3
More informationLower Confidence Bound for Process-Yield Index S pk with Autocorrelated Process Data
Quality Technology & Quantitative Management Vol. 1, No.,. 51-65, 15 QTQM IAQM 15 Lower onfidence Bound for Process-Yield Index with Autocorrelated Process Data Fu-Kwun Wang * and Yeneneh Tamirat Deartment
More informationGenetic Algorithms, Selection Schemes, and the Varying Eects of Noise. IlliGAL Report No November Department of General Engineering
Genetic Algorithms, Selection Schemes, and the Varying Eects of Noise Brad L. Miller Det. of Comuter Science University of Illinois at Urbana-Chamaign David E. Goldberg Det. of General Engineering University
More informationLorenz Wind Disturbance Model Based on Grey Generated Components
Energies 214, 7, 7178-7193; doi:1.339/en7117178 Article OPEN ACCESS energies ISSN 1996-173 www.mdi.com/journal/energies Lorenz Wind Disturbance Model Based on Grey Generated Comonents Yagang Zhang 1,2,
More informationEstimating function analysis for a class of Tweedie regression models
Title Estimating function analysis for a class of Tweedie regression models Author Wagner Hugo Bonat Deartamento de Estatística - DEST, Laboratório de Estatística e Geoinformação - LEG, Universidade Federal
More informationUniversal Finite Memory Coding of Binary Sequences
Deartment of Electrical Engineering Systems Universal Finite Memory Coding of Binary Sequences Thesis submitted towards the degree of Master of Science in Electrical and Electronic Engineering in Tel-Aviv
More information1-way quantum finite automata: strengths, weaknesses and generalizations
1-way quantum finite automata: strengths, weaknesses and generalizations arxiv:quant-h/9802062v3 30 Se 1998 Andris Ambainis UC Berkeley Abstract Rūsiņš Freivalds University of Latvia We study 1-way quantum
More informationRe-entry Protocols for Seismically Active Mines Using Statistical Analysis of Aftershock Sequences
Re-entry Protocols for Seismically Active Mines Using Statistical Analysis of Aftershock Sequences J.A. Vallejos & S.M. McKinnon Queen s University, Kingston, ON, Canada ABSTRACT: Re-entry rotocols are
More informationOn Line Parameter Estimation of Electric Systems using the Bacterial Foraging Algorithm
On Line Parameter Estimation of Electric Systems using the Bacterial Foraging Algorithm Gabriel Noriega, José Restreo, Víctor Guzmán, Maribel Giménez and José Aller Universidad Simón Bolívar Valle de Sartenejas,
More informationCOMPLEX-VALUED ASSOCIATIVE MEMORIES WITH PROJECTION AND ITERATIVE LEARNING RULES
JAISCR, 2018, Vol 8, o 3, 237 249 101515/jaiscr-2018-0015 COMPLEX-VALUED ASSOCIATIVE MEMORIES WITH PROJECTIO AD ITERATIVE LEARIG RULES Teijiro Isokawa 1, Hiroki Yamamoto 1, Haruhiko ishimura 2, Takayuki
More informationBest approximation by linear combinations of characteristic functions of half-spaces
Best aroximation by linear combinations of characteristic functions of half-saces Paul C. Kainen Deartment of Mathematics Georgetown University Washington, D.C. 20057-1233, USA Věra Kůrková Institute of
More informationModeling and Estimation of Full-Chip Leakage Current Considering Within-Die Correlation
6.3 Modeling and Estimation of Full-Chi Leaage Current Considering Within-Die Correlation Khaled R. eloue, Navid Azizi, Farid N. Najm Deartment of ECE, University of Toronto,Toronto, Ontario, Canada {haled,nazizi,najm}@eecg.utoronto.ca
More informationDEPARTMENT OF ECONOMICS ISSN DISCUSSION PAPER 20/07 TWO NEW EXPONENTIAL FAMILIES OF LORENZ CURVES
DEPARTMENT OF ECONOMICS ISSN 1441-549 DISCUSSION PAPER /7 TWO NEW EXPONENTIAL FAMILIES OF LORENZ CURVES ZuXiang Wang * & Russell Smyth ABSTRACT We resent two new Lorenz curve families by using the basic
More informationAR PROCESSES AND SOURCES CAN BE RECONSTRUCTED FROM. Radu Balan, Alexander Jourjine, Justinian Rosca. Siemens Corporation Research
AR PROCESSES AND SOURCES CAN BE RECONSTRUCTED FROM DEGENERATE MIXTURES Radu Balan, Alexander Jourjine, Justinian Rosca Siemens Cororation Research 7 College Road East Princeton, NJ 8 fradu,jourjine,roscag@scr.siemens.com
More informationAn efficient method for generalized linear multiplicative programming problem with multiplicative constraints
DOI 0.86/s40064-06-2984-9 RESEARCH An efficient method for generalized linear multilicative rogramming roblem with multilicative constraints Yingfeng Zhao,2* and Sanyang Liu Oen Access *Corresondence:
More informationHidden Predictors: A Factor Analysis Primer
Hidden Predictors: A Factor Analysis Primer Ryan C Sanchez Western Washington University Factor Analysis is a owerful statistical method in the modern research sychologist s toolbag When used roerly, factor
More information