Prediction of chenille yarn and fabric abrasion resistance using radial basis function neural network models


Neural Comput & Applic (2007) 16 · ORIGINAL ARTICLE

Erhan Kenan Çeven · Sezai Tokat · Özcan Özdemir

Prediction of chenille yarn and fabric abrasion resistance using radial basis function neural network models

Received: 5 June 2005 / Accepted: 3 March 2006 / Published online: April 2006
© Springer-Verlag London Limited 2006

Abstract The abrasion resistance of chenille yarn is particularly important because the effect sought is always the velvety feel of the pile. Various methods have therefore been developed to predict chenille yarn and fabric abrasion properties. Statistical models have yielded reasonably good abrasion resistance predictions; however, no study has yet examined predicting chenille yarn abrasion resistance with artificial neural network (ANN) models. This paper presents an intelligent modeling methodology based on ANNs for predicting the abrasion resistance of chenille yarns and fabrics. Constituent chenille yarn parameters, namely yarn count, pile length, twist level and pile yarn material type, are used as inputs to the model. The intelligent method is based on a special kind of ANN that uses radial basis functions as activation functions. The predictive power of the ANN model is compared with that of different statistical models, and it is shown that the intelligent model improves the prediction performance with respect to the statistical models.

Keywords Prediction · Artificial neural networks · Radial basis functions · Chenille yarn · Abrasion resistance

Abbreviations a_k: coefficient of the kth independent variable · n: number of independent variables · b: bias term of the multiple-regression models · b_i: bias term of the ith hidden neuron · y_1: fabric abrasion · y_2: yarn abrasion

E. K. Çeven · Ö. Özdemir
Department of Textile Engineering, Uludag University, Gorukle, 16059 Bursa, Turkey
rceven@uludag.edu.tr · ozdemir@uludag.edu.tr

S.
Tokat (✉)
Department of Computer Engineering, Pamukkale University, Bilgisayar Muhendisligi Bolumu, Morfoloji Binası, Kinikli, 20070 Denizli, Turkey
stokat@pamukkale.edu.tr

x_1: yarn count · x_2: pile length · x_3: twist level · x_4: pile yarn material type · c_ik: coefficient of the interaction term x_i x_k · a: column data vector · ā: normalized counterpart of the data vector a · m: number of elements of the data vector a · ones(m,1): m×1 column vector with all elements equal to one · max(a): maximum element of a · min(a): minimum element of a · d_i: input of the radial basis function R_i · v_i: center of the radial basis function R_i · σ_i: width of the radial basis function R_i · w_ji: weight of the ith hidden unit for the jth output node · w_ki: weight of the kth input for the ith hidden unit · μ: learning rate · J: cost function · ΔJ: change in the cost function · λ: adjusted parameter

1 Introduction

Chenille yarn is composed of two highly twisted core yarns and short lengths of a pile (effect) yarn which project out from the core to give a hairy effect [5]. The short lengths are called the pile, and the highly twisted yarns are called the core; the result is a yarn with a velvet-like pile surface [7, 17]. Chenille yarns are used to produce special knitted and woven fabrics with high added value. A major disadvantage of chenille yarn, however, is its poor inherent abrasion resistance: any removal of the effect yarn forming the pile, either during further processing or during the eventual end-use, will expose the ground yarns, which in turn results in a bare appearance []. Abrasion resistance, like other yarn properties, is chiefly influenced by fiber properties, yarn twist and yarn count [2, 3, 5, 8]. The literature survey shows that there are few studies on the fundamental parameters that characterize chenille yarns.
Statistical models (analysis of variance) have yielded reasonably good abrasion resistance predictions [17]. It would therefore be helpful if a prediction model could forecast chenille yarn and fabric abrasion resistance accurately.

A well-known approach is to use statistical models for function approximation. Function approximation generalizes the experience from a small subset of examples to an approximation over a larger set: an unknown function is estimated from the observed data. Traditionally, linear or polynomial regression models have been used. Artificial neural networks (ANNs), on the other hand, provide a non-parametric estimation technique for the same purpose. ANNs have a universal approximation property and make it possible to work without an explicitly formulated mathematical model []. Selecting a statistical or an intelligent method is a design decision, and a suitable selection follows from the performance goal to be achieved [4]. The application of ANN tools to the optimization of textile manufacturing is an active research area. In recent years, multi-layer ANN models with back-propagation learning have been widely used to predict various yarn properties. For instance, the breaking elongation of ring-spun cotton yarns and rotor-spun yarns has been predicted using statistical and ANN models in [13, 14], respectively. The relationships among fiber properties and their impact on spinning performance and textile product quality have been predicted using ANNs [9, 18]. Zhu and Ethridge [21] used ANN models for predicting ring and rotor yarn hairiness. Yarn strength has been predicted for different process conditions and material parameters in [6, 19, 20]. In general, error back-propagation multi-layer ANNs have been used in all of the above studies. Multi-layer ANN structures have two main disadvantages: slow learning speed and convergence to local minima. One way to accelerate the learning of the error back-propagation algorithm is to modify the error function [16].
Another alternative is to use radial basis function neural networks (RBFNNs), which have been widely used to represent nonlinear mappings between the inputs and outputs of nonlinear systems []. In this work, RBFNN models are used to predict the abrasion resistance of chenille yarns and of chenille fabrics produced from these yarns. The performance of the RBFNN model is compared with statistical models to quantify the improvement.

2 Gathering raw data

Nm 4 and Nm 6 count chenille yarns were produced using viscose, acrylic (with fiber fineness of 0.9 and 1.3 dtex), combed cotton, carded cotton and open-end cotton pile yarn materials, with 0.7 and 1.0 mm pile lengths and with 700 and 850 turns/m twist. Acrylic core yarns were used for the production of the chenille yarns. Upholstery fabrics were produced using these yarns as filling yarn; the same weave was chosen for all fabrics. Abrasion tests of the chenille fabric specimens were conducted on a Martindale wear and abrasion tester, and abrasion performance was determined in accordance with test method ASTM D []. Fabrics were abraded for a fixed number of cycles and mass loss ratios were determined. In order to observe the relationship between fabric and yarn abrasion resistance, we designed a chenille yarn abrasion testing device by making some modifications to the Crockmeter, as explained in an earlier paper [17]. The abrasion (% mass loss) values for the chenille fabrics and chenille yarns are given in Table 1.

3 Statistical methods

The most common type of multiple regression is linear multiple regression, where a simple model relates several independent variables to a dependent variable through a straight line:

y_k = a_1 x_1 + a_2 x_2 + … + a_n x_n + b,   (1)

where the a_i are the coefficients of the independent variables and b is the bias term. The dependent variable y_k is thus modeled by a linear relation.
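Models of the form (1), and the interaction model introduced next, can both be fitted by ordinary least squares. The following sketch shows one way to build the two design matrices and compare the fits; the data are synthetic placeholders standing in for the four yarn parameters and a measured abrasion value, not the measurements of Table 1:

```python
import numpy as np

# Synthetic stand-in data: 40 samples of x1..x4 (count, pile length, twist,
# material type, already normalized) and a response containing one interaction.
rng = np.random.default_rng(0)
X = rng.uniform(size=(40, 4))
y = 2.0 * X[:, 0] - 1.5 * X[:, 1] + 0.5 * X[:, 2] * X[:, 3] + 0.3

def design_linear(X):
    """Design matrix for model (1): columns [x1 .. xn, 1]; the 1-column absorbs b."""
    return np.hstack([X, np.ones((X.shape[0], 1))])

def design_interaction(X):
    """Design matrix for the interaction model: adds every product x_i * x_k (i < k)."""
    n = X.shape[1]
    cross = [X[:, [i]] * X[:, [k]] for i in range(n) for k in range(i + 1, n)]
    return np.hstack([X] + cross + [np.ones((X.shape[0], 1))])

rmse = {}
for name, A in (("linear", design_linear(X)), ("interaction", design_interaction(X))):
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)   # ordinary least-squares fit
    rmse[name] = float(np.sqrt(np.mean((A @ coef - y) ** 2)))
```

Because the interaction model nests the linear one, its training RMSE can never be worse; whether it also generalizes better is exactly what the test-set comparison in Sect. 5 examines.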
Nonlinear multiple regression is similar to linear regression, except that a curve rather than a straight line is fitted to the data set. Just as in the linear case, the sum of the squared vertical distances between the fitted curve and the data points is minimized. One nonlinear multiple-regression model is the polynomial regression model with interaction terms:

y_k = a_1 x_1 + … + a_n x_n + c_12 x_1 x_2 + … + c_1n x_1 x_n + … + c_{n-1,n} x_{n-1} x_n + b,   (2)

where the a_i are the coefficients of the independent variables, the c_ik are the coefficients of the interaction terms, b is the bias term, and y_k is the dependent variable. For data regression, both the linear model (1) and the nonlinear model with interaction terms (2) are used in this study. In these models n = 4, and the independent variables are yarn count (x_1), pile length (x_2), twist level (x_3) and pile yarn material type (x_4). The multiple-regression models (1), (2) are multi-input single-output systems; therefore, a separate multiple-regression model is used to obtain each dependent variable y_k. The dependent variables predicted with these models are fabric abrasion (mass loss, %) (y_1) and yarn abrasion (mass loss, %) (y_2). The coefficients of the linear and nonlinear multiple-regression models are given in Table 2.

4 Intelligent method

For the intelligent case, we aimed to obtain both dependent variables with a single ANN structure. The ANN design process involves four steps: normalizing the

gathered data, selecting the ANN architecture, training the network and testing the network.

Table 1 Abrasion (% mass loss) values for: (a) chenille fabrics, (b) chenille yarns [4]. Rows: pile yarn material type (viscose, acrylic 0.9 dtex, acrylic 1.3 dtex, combed cotton, carded cotton, open-end cotton); columns: combinations of yarn count (Nm 4, Nm 6), pile length (0.7 and 1.0 mm) and twist level (700 and 850 T/m)

4.1 Normalization

The training and test data are chosen from the gathered data given in Table 1. The inputs are yarn count, pile length, twist level and pile yarn material type, and the outputs are yarn abrasion and fabric abrasion. From Table 1, eight randomly selected samples (among them chenille yarn numbers 8, 36, 37 and 44) are taken as the test data, and the rest are taken as the training data. Normalization should be done such that larger-valued variables do not suppress the influence of smaller-valued ones and the symmetry of the activation function is retained. In this study a linear min-max normalization is used, so that all inputs and outputs are mapped to the same interval [0, 1]. For a column data vector a, the normalized vector ā with values in [0, 1] is obtained as

ā = (a − min(a)·ones(m,1)) / (max(a) − min(a)),   (3)

where ones(m,1) is a column vector of the same dimension as a with all elements equal to one. De-normalization is used to convert the network output back into the original units; inverting (3) gives

a = ā·(max(a) − min(a)) + min(a)·ones(m,1).   (4)
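Equations (3) and (4) amount to min-max scaling and its inverse. A short sketch, using the two twist levels as example input values:

```python
import numpy as np

# Min-max normalization (3) and de-normalization (4); column vectors are
# modeled as 1-D NumPy arrays, and the data below are just example values.
def normalize(a):
    """Map a onto [0, 1] as in (3): (a - min(a)) / (max(a) - min(a))."""
    a = np.asarray(a, dtype=float)
    return (a - a.min()) / (a.max() - a.min())

def denormalize(a_bar, a_min, a_max):
    """Invert (3) as in (4): a_bar * (max - min) + min."""
    return np.asarray(a_bar, dtype=float) * (a_max - a_min) + a_min

twist = np.array([700.0, 850.0, 700.0, 850.0])   # twist levels (T/m)
t_bar = normalize(twist)                          # -> array([0., 1., 0., 1.])
back = denormalize(t_bar, twist.min(), twist.max())   # recovers the T/m values
```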
4.2 Selecting the artificial neural network architecture

Artificial neural networks can learn in real time and can thus adapt flexibly to changing environments, making reasonable decisions from incomplete and ambiguous information. A large number of ANN structures have been proposed in the literature, available in various forms such as the multi-layer perceptron, the Hopfield net and the Kohonen

Table 2 Coefficients of the multiple-regression models for the outputs y_1 and y_2: (a) linear model (a_1, a_2, a_3, a_4, b), (b) interaction model (a_1, a_2, a_3, a_4, c_12, c_13, c_14, c_23, c_24, c_34, b)

self-organizing map, etc. []. The objective of all these structures is to model the input/output behavior of the system in question; ANNs are trained so that a particular input leads to a specific output. In this study, an RBFNN is chosen as the ANN structure. RBFNNs were first introduced for the solution of multi-variable interpolation problems and can be seen as a sensible alternative to the use of complex polynomials for function approximation. The schematic diagram of the RBFNN is given in Fig. 1, with four inputs, p hidden units and two outputs. There is only one hidden layer, which uses radial basis function (RBF) neurons. A function is an RBF if its output depends on the distance of the input from a given stored vector. Different functions such as multiquadrics, inverse multiquadrics and biharmonics can be used as an RBF. A typical selection is the Gaussian function, for which the output of the ith hidden unit is written as

R_i(d_i) = exp(−‖d_i − v_i‖² / σ_i²),   i = 1, 2, …, p,   (5)

where v_i and σ_i are the center and width of the RBF at the ith hidden neuron, p is the number of hidden units, R_i(·) is the ith hidden unit response with a single maximum at the origin, and d ∈ R^{p×1} is the vector of weighted inputs with

d_i = [w_1i w_2i w_3i w_4i][x_1 x_2 x_3 x_4]^T + b_i.   (6)

Here b_i is the bias term of the ith hidden neuron. Thus, the RBF response computed by the ith hidden unit is maximal when the weighted input d_i is near the center v_i of that unit. The output of an RBFNN is simply a weighted sum of the hidden-unit responses:

y_j = Σ_{i=1}^{p} w_ji R_i(d_i),   (7)

where w_ji is the weight of the ith hidden unit for the jth output node. An RBFNN is completely determined by choosing the dimension of the input/output data, the number of RBF neurons, and the values of v_i, σ_i, w_ki and w_ji. Function approximation with an RBFNN is achieved by adjusting these parameters. The dimension of the input/output data is problem dependent.
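To make (5)-(7) concrete, the forward pass of such a network can be sketched as follows; every parameter value here is a random or arbitrary placeholder rather than a trained value:

```python
import numpy as np

# Forward pass of the RBFNN of Fig. 1: 4 inputs, p Gaussian hidden units acting
# on the scalar weighted input d_i of (6), and 2 outputs as in (7).
rng = np.random.default_rng(1)
p = 12                                    # number of RBF neurons (arbitrary here)
W_in = rng.normal(size=(p, 4))            # w_ki: input-to-hidden weights
b = rng.normal(size=p)                    # b_i: hidden-neuron biases
v = rng.normal(size=p)                    # v_i: RBF centers
sigma = np.full(p, 1.0)                   # sigma_i: RBF widths
W_out = rng.normal(size=(2, p))           # w_ji: hidden-to-output weights

def rbfnn(x):
    """Compute (6) weighted inputs, (5) Gaussian responses, (7) outputs."""
    d = W_in @ x + b                              # d_i, eq. (6)
    R = np.exp(-((d - v) ** 2) / sigma ** 2)      # R_i(d_i), eq. (5)
    return W_out @ R                              # y_j, eq. (7)

x = np.array([0.5, 0.7, 1.0, 0.25])       # one normalized input pattern
y = rbfnn(x)                              # two outputs: fabric and yarn abrasion
```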
Fig. 1 The structure of the proposed single-hidden-layer RBFNN with four inputs, p hidden units and two outputs

The number of RBF neurons is a critical choice and, depending on the approach, can be determined a priori or adaptively. The values of v_i, σ_i, w_ki and w_ji can be adjusted adaptively using a learning algorithm. A basic learning algorithm is the gradient descent technique, where the learning rule takes the form

Δλ = −μ ∂J/∂λ,   (8)

where μ is the learning rate, λ denotes one of the adjusted parameters v_i, σ_i, w_ki, w_ji, and J is the cost function. In practice, a common choice for the cost function is the root mean square error (RMSE),

RMSE = √( (1/n) Σ_{j=1}^{n} (y_j − y_j^d)² ),   (9)

which is calculated for each output over the n data samples. In (9), y_j^d is the desired normalized output for the jth sample; the actual output y_j is obtained using the de-normalization equation (4). Training and testing the network on the gathered data, and the comparisons with the statistical methods, are given in the next section.

5 Simulation results

Choosing the number of RBF neurons is an important step of the algorithm. The network was therefore trained with different numbers of RBF neurons, and the RMSE performances are given in Fig. 2 for the training and test data separately. As can be seen from Fig. 2a, the training performance improves as the number of RBF neurons is increased. However, the same observation does not hold for the test data: the test performance first improves as RBF neurons are added, but later degrades visibly as their number keeps increasing. The reason is that, if an excessive number of neurons is used, the ANN learns the training data too closely and the smoothness of the output function disappears. This overfitting phenomenon is a

critical problem in almost all ANN architectures. It is seen in Fig. 2a that the training performance improvement becomes small beyond a certain number of RBF neurons, while the test performance in Fig. 2b is at its optimum over an intermediate range of neuron counts. Thus, in order to use a minimum number of neurons, the number of RBF neurons is chosen at the lower end of this optimal range.

Fig. 2 RBFNN performances for different numbers of RBF neurons: a training data, b test data

The performances of the models are presented graphically in Figs. 3 and 4 for the training and test data, respectively. It is seen that the RBFNN model fits both the training and the test data best. To show the improvement in more detail, a number of performance criteria are also investigated for comparison: the RMSE value, the correlation coefficient between the actual and predicted abrasion (R), and the mean absolute error (MAE). These values were used to judge the predictive power of the various methods. The RMSE values, correlation coefficients and MAEs of the methods for both dependent variables, yarn abrasion (y_2) and fabric abrasion (y_1), are given in Table 3. When the intelligent method and the multiple-regression methods are compared, it is seen from the table that the best RMSE performance on both the training and the test data is obtained with the intelligent method, i.e. the RBFNN. The higher the correlation coefficient R in magnitude, the better the predicted data fit the actual data; it is evident from Table 3 that the RBFNN model has the highest R values, which means that its output fits the actual data best. Mean absolute error is an important performance criterion.
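The three criteria can be computed directly from a pair of actual/predicted series; the values below are illustrative placeholders, not the paper's data:

```python
import numpy as np

# RMSE, correlation coefficient R, and MAE for one output variable.
actual = np.array([3.1, 4.0, 2.5, 5.2, 4.4])      # measured abrasion (mass loss, %)
predicted = np.array([3.0, 4.2, 2.7, 5.0, 4.5])   # model output after de-normalization

err = predicted - actual
rmse = np.sqrt(np.mean(err ** 2))                 # root mean square error, eq. (9)
mae = np.mean(np.abs(err))                        # mean absolute error
r = np.corrcoef(actual, predicted)[0, 1]          # correlation coefficient R
```

Note that RMSE is always at least as large as MAE, so the two criteria need not rank models identically; this is why all three are reported in Table 3.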
It is the sum of the absolute errors divided by the number of observations, where the absolute error is the absolute value of the difference between the predicted and the actual data. The closer the MAE is to zero, the more accurate the predictions. In Table 3, the RBFNN has the smallest MAE values for both fabric abrasion and yarn abrasion, on both the training and the test data. Comparing the results for the training and test data, it is observed that both the RMSE and MAE values of the three methods are smaller for yarn abrasion than for fabric abrasion, and that the R values of the three methods are correspondingly higher for yarn abrasion than for fabric abrasion.

Fig. 3 Root mean square error performances of the methods for the training data: a fabric abrasion, b yarn abrasion

6 Conclusions

In this study, the abrasion resistance of chenille yarns and fabrics has been predicted using statistical and ANN models. The function approximation property of the

RBFNN was made use of to analyze mass loss in the abrasion process. From the simulations, it has been found that the prediction performance of the intelligent model is better than that of the given statistical models. The performance criteria indicate that the prediction performance of the intelligent method is high despite the availability of only a relatively small data set for training, and in each case the performance criteria are better than those of the given statistical models. As a result, it has been shown that such a method can support the selection of the most appropriate machine settings and material type before the production process. As further research, new experimental work on predicting the abrasion properties of fabrics with different weave types could be suggested.

Table 3 RMSE, R and MAE performance of the given statistical and intelligent methods for fabric abrasion (y_1) and yarn abrasion (y_2): (a) training data, (b) test data

Fig. 4 Root mean square error performances of the methods for the test data: a fabric abrasion, b yarn abrasion

References

1. ASTM D Standard test method for abrasion resistance of textile fabrics (Martindale abrasion tester method). Annual Book of ASTM Standards, Vol 7
2. Brorens PH, Lappage J, Bedford J, Ranford SL (1990) Studies on the abrasion resistance of weaving yarns. J Text Inst 8
3. Bryan ED, Yu C, Oxenham W (1999) The abrasion characteristics of ring spun and open-end yarns. In: Proceedings of the EFS Research Forum, pp 3–4
4. Çeven EK An investigation about the effect of some production parameters on yarn properties at chenille spinning machines. MSc Thesis, The University of Uludag
5. Cheng L, Adams DL (1995) Yarn strength prediction using neural networks.
Text Res J 65(9)
6. Cheng KPS, Lam HLI (2003) Evaluating and comparing the physical properties of spliced yarns by regression and neural network techniques. Text Res J 73
7. Chenille background brochure. Chenille International Manufacturers' Association (CIMA), Italy, pp 3–8
8. Choi KF, Kim KL (2004) Fiber segment length distribution on the yarn surface in relation to yarn abrasion resistance. Text Res J 74(7)
9. Ethridge D, Zhu R (1996) Prediction of rotor spun cotton yarn quality: a comparison of neural network and regression algorithms. In: Proceedings of the Beltwide Cotton Conference
10. Gong RH, Wright RM Fancy yarns, their manufacture and application. Woodhead Publishing Limited, UK
11. Haykin S (1994) Neural networks: a comprehensive foundation. Macmillan Publishing Company, New York
12. Liu Y, Ying Y, Lu H (2004) Optical method for predicting total soluble solids in pears using radial basis function networks. In: Proceedings of SPIE, vol 5587
13. Majumdar PK, Majumdar A (2004) Predicting the breaking elongation of ring spun cotton yarns using mathematical, statistical, and artificial neural network models. Text Res J 74(7)
14. Majumdar A, Majumdar PK, Sarkar B (2005) Application of linear regression, artificial neural network and neuro-fuzzy algorithms to predict the breaking elongation of rotor-spun yarns. Indian J Fibre Text Res 3

15. McIntyre JE, Daniels PN (1995) Textile terms and definitions. The Textile Terms and Definitions Committee, Biddles Limited, UK
16. Oh S-H (1997) Improving the error backpropagation algorithm with a modified error function. IEEE Trans Neural Netw 8(3)
17. Özdemir Ö, Çeven EK (2004) Influence of chenille yarn manufacturing parameters on yarn and upholstery fabric abrasion resistance. Text Res J 74(6)
18. Pynckels F, Kiekens P, Sette S, Langenhove LV, Impe K (1997) The use of neural nets to simulate the spinning process. J Text Inst 88(4)
19. Rajamanickam R, Hansen SM, Jayaraman S (1997) Analysis of the modeling methodologies for predicting the strength of air-jet spun yarns. Text Res J 67
20. Ramesh MC, Rajamanickam R, Jayaraman S (1995) Prediction of yarn tensile properties using artificial neural networks. J Text Inst 86
21. Zhu R, Ethridge D (1997) Predicting hairiness for ring and rotor spun yarns and analyzing the impact of fiber properties. Text Res J 67


More information

Introduction to Natural Computation. Lecture 9. Multilayer Perceptrons and Backpropagation. Peter Lewis

Introduction to Natural Computation. Lecture 9. Multilayer Perceptrons and Backpropagation. Peter Lewis Introduction to Natural Computation Lecture 9 Multilayer Perceptrons and Backpropagation Peter Lewis 1 / 25 Overview of the Lecture Why multilayer perceptrons? Some applications of multilayer perceptrons.

More information

ARTIFICIAL NEURAL NETWORKS گروه مطالعاتي 17 بهار 92

ARTIFICIAL NEURAL NETWORKS گروه مطالعاتي 17 بهار 92 ARTIFICIAL NEURAL NETWORKS گروه مطالعاتي 17 بهار 92 BIOLOGICAL INSPIRATIONS Some numbers The human brain contains about 10 billion nerve cells (neurons) Each neuron is connected to the others through 10000

More information

Artificial Neural Networks" and Nonparametric Methods" CMPSCI 383 Nov 17, 2011!

Artificial Neural Networks and Nonparametric Methods CMPSCI 383 Nov 17, 2011! Artificial Neural Networks" and Nonparametric Methods" CMPSCI 383 Nov 17, 2011! 1 Todayʼs lecture" How the brain works (!)! Artificial neural networks! Perceptrons! Multilayer feed-forward networks! Error

More information

Neural Networks and the Back-propagation Algorithm

Neural Networks and the Back-propagation Algorithm Neural Networks and the Back-propagation Algorithm Francisco S. Melo In these notes, we provide a brief overview of the main concepts concerning neural networks and the back-propagation algorithm. We closely

More information

Introduction Neural Networks - Architecture Network Training Small Example - ZIP Codes Summary. Neural Networks - I. Henrik I Christensen

Introduction Neural Networks - Architecture Network Training Small Example - ZIP Codes Summary. Neural Networks - I. Henrik I Christensen Neural Networks - I Henrik I Christensen Robotics & Intelligent Machines @ GT Georgia Institute of Technology, Atlanta, GA 30332-0280 hic@cc.gatech.edu Henrik I Christensen (RIM@GT) Neural Networks 1 /

More information

CS:4420 Artificial Intelligence

CS:4420 Artificial Intelligence CS:4420 Artificial Intelligence Spring 2018 Neural Networks Cesare Tinelli The University of Iowa Copyright 2004 18, Cesare Tinelli and Stuart Russell a a These notes were originally developed by Stuart

More information

Artificial Neural Networks (ANN) Xiaogang Su, Ph.D. Department of Mathematical Science University of Texas at El Paso

Artificial Neural Networks (ANN) Xiaogang Su, Ph.D. Department of Mathematical Science University of Texas at El Paso Artificial Neural Networks (ANN) Xiaogang Su, Ph.D. Department of Mathematical Science University of Texas at El Paso xsu@utep.edu Fall, 2018 Outline Introduction A Brief History ANN Architecture Terminology

More information

Statistical Machine Learning from Data

Statistical Machine Learning from Data January 17, 2006 Samy Bengio Statistical Machine Learning from Data 1 Statistical Machine Learning from Data Multi-Layer Perceptrons Samy Bengio IDIAP Research Institute, Martigny, Switzerland, and Ecole

More information

Estimation of Inelastic Response Spectra Using Artificial Neural Networks

Estimation of Inelastic Response Spectra Using Artificial Neural Networks Estimation of Inelastic Response Spectra Using Artificial Neural Networks J. Bojórquez & S.E. Ruiz Universidad Nacional Autónoma de México, México E. Bojórquez Universidad Autónoma de Sinaloa, México SUMMARY:

More information

FAST METHODS FOR EVALUATING THE ELECTRIC FIELD LEVEL IN 2D-INDOOR ENVIRONMENTS

FAST METHODS FOR EVALUATING THE ELECTRIC FIELD LEVEL IN 2D-INDOOR ENVIRONMENTS Progress In Electromagnetics Research, PIER 69, 247 255, 2007 FAST METHODS FOR EVALUATING THE ELECTRIC FIELD LEVEL IN 2D-INDOOR ENVIRONMENTS D. Martinez, F. Las-Heras, and R. G. Ayestaran Department of

More information

Lab 5: 16 th April Exercises on Neural Networks

Lab 5: 16 th April Exercises on Neural Networks Lab 5: 16 th April 01 Exercises on Neural Networks 1. What are the values of weights w 0, w 1, and w for the perceptron whose decision surface is illustrated in the figure? Assume the surface crosses the

More information

Neural networks. Chapter 19, Sections 1 5 1

Neural networks. Chapter 19, Sections 1 5 1 Neural networks Chapter 19, Sections 1 5 Chapter 19, Sections 1 5 1 Outline Brains Neural networks Perceptrons Multilayer perceptrons Applications of neural networks Chapter 19, Sections 1 5 2 Brains 10

More information

In the Name of God. Lectures 15&16: Radial Basis Function Networks

In the Name of God. Lectures 15&16: Radial Basis Function Networks 1 In the Name of God Lectures 15&16: Radial Basis Function Networks Some Historical Notes Learning is equivalent to finding a surface in a multidimensional space that provides a best fit to the training

More information

An artificial neural networks (ANNs) model is a functional abstraction of the

An artificial neural networks (ANNs) model is a functional abstraction of the CHAPER 3 3. Introduction An artificial neural networs (ANNs) model is a functional abstraction of the biological neural structures of the central nervous system. hey are composed of many simple and highly

More information

Unit 8: Introduction to neural networks. Perceptrons

Unit 8: Introduction to neural networks. Perceptrons Unit 8: Introduction to neural networks. Perceptrons D. Balbontín Noval F. J. Martín Mateos J. L. Ruiz Reina A. Riscos Núñez Departamento de Ciencias de la Computación e Inteligencia Artificial Universidad

More information

Artificial Neural Networks

Artificial Neural Networks Artificial Neural Networks Oliver Schulte - CMPT 310 Neural Networks Neural networks arise from attempts to model human/animal brains Many models, many claims of biological plausibility We will focus on

More information

ARTIFICIAL NEURAL NETWORK WITH HYBRID TAGUCHI-GENETIC ALGORITHM FOR NONLINEAR MIMO MODEL OF MACHINING PROCESSES

ARTIFICIAL NEURAL NETWORK WITH HYBRID TAGUCHI-GENETIC ALGORITHM FOR NONLINEAR MIMO MODEL OF MACHINING PROCESSES International Journal of Innovative Computing, Information and Control ICIC International c 2013 ISSN 1349-4198 Volume 9, Number 4, April 2013 pp. 1455 1475 ARTIFICIAL NEURAL NETWORK WITH HYBRID TAGUCHI-GENETIC

More information

Comfort characteristics of textiles - Objective evaluation and prediction by soft computing techniques

Comfort characteristics of textiles - Objective evaluation and prediction by soft computing techniques Comfort characteristics of textiles - Objective evaluation and prediction by soft computing techniques Yamini Jhanji 1,Shelly Khanna 1 & Amandeep Manocha 1 1 Department of Fashion & Apparel Engineering,

More information

A Feature Based Neural Network Model for Weather Forecasting

A Feature Based Neural Network Model for Weather Forecasting World Academy of Science, Engineering and Technology 4 2 A Feature Based Neural Network Model for Weather Forecasting Paras, Sanjay Mathur, Avinash Kumar, and Mahesh Chandra Abstract Weather forecasting

More information

Artificial Neural Networks

Artificial Neural Networks Artificial Neural Networks 鮑興國 Ph.D. National Taiwan University of Science and Technology Outline Perceptrons Gradient descent Multi-layer networks Backpropagation Hidden layer representations Examples

More information

Cheng Soon Ong & Christian Walder. Canberra February June 2018

Cheng Soon Ong & Christian Walder. Canberra February June 2018 Cheng Soon Ong & Christian Walder Research Group and College of Engineering and Computer Science Canberra February June 2018 Outlines Overview Introduction Linear Algebra Probability Linear Regression

More information

Lecture 4: Perceptrons and Multilayer Perceptrons

Lecture 4: Perceptrons and Multilayer Perceptrons Lecture 4: Perceptrons and Multilayer Perceptrons Cognitive Systems II - Machine Learning SS 2005 Part I: Basic Approaches of Concept Learning Perceptrons, Artificial Neuronal Networks Lecture 4: Perceptrons

More information

Intelligent Modular Neural Network for Dynamic System Parameter Estimation

Intelligent Modular Neural Network for Dynamic System Parameter Estimation Intelligent Modular Neural Network for Dynamic System Parameter Estimation Andrzej Materka Technical University of Lodz, Institute of Electronics Stefanowskiego 18, 9-537 Lodz, Poland Abstract: A technique

More information

Radial-Basis Function Networks

Radial-Basis Function Networks Radial-Basis Function etworks A function is radial basis () if its output depends on (is a non-increasing function of) the distance of the input from a given stored vector. s represent local receptors,

More information

International Journal of Scientific Research and Reviews

International Journal of Scientific Research and Reviews Research article Available online www.ijsrr.org ISSN: 2279 0543 International Journal of Scientific Research and Reviews Prediction of Compressive Strength of Concrete using Artificial Neural Network ABSTRACT

More information

Advanced statistical methods for data analysis Lecture 2

Advanced statistical methods for data analysis Lecture 2 Advanced statistical methods for data analysis Lecture 2 RHUL Physics www.pp.rhul.ac.uk/~cowan Universität Mainz Klausurtagung des GK Eichtheorien exp. Tests... Bullay/Mosel 15 17 September, 2008 1 Outline

More information

USTER QUANTUM 3 APPLICATION REPORT. Comparison of the clearer generations THE YARN QUALITY ASSURANCE SYSTEM

USTER QUANTUM 3 APPLICATION REPORT. Comparison of the clearer generations THE YARN QUALITY ASSURANCE SYSTEM USTER QUANTUM 3 APPLICATION REPORT Comparison of the clearer generations THE YARN QUALITY ASSURANCE SYSTEM S. Dönmez Kretzschmar / U. Schneider September 2010 / Version 2 SE 655 Copyright 2010 by Uster

More information

Artificial Neural Networks (ANN)

Artificial Neural Networks (ANN) Artificial Neural Networks (ANN) Edmondo Trentin April 17, 2013 ANN: Definition The definition of ANN is given in 3.1 points. Indeed, an ANN is a machine that is completely specified once we define its:

More information

Midterm Review CS 7301: Advanced Machine Learning. Vibhav Gogate The University of Texas at Dallas

Midterm Review CS 7301: Advanced Machine Learning. Vibhav Gogate The University of Texas at Dallas Midterm Review CS 7301: Advanced Machine Learning Vibhav Gogate The University of Texas at Dallas Supervised Learning Issues in supervised learning What makes learning hard Point Estimation: MLE vs Bayesian

More information

Radial-Basis Function Networks

Radial-Basis Function Networks Radial-Basis Function etworks A function is radial () if its output depends on (is a nonincreasing function of) the distance of the input from a given stored vector. s represent local receptors, as illustrated

More information

Predicting Air Permeability of Nylon Parachute Fabrics

Predicting Air Permeability of Nylon Parachute Fabrics 235 El Shakankery et al Predicting Air Permeability of Nylon Parachute Fabrics Mahmoud H. El Shakankery Spinning and Weaving Engineering Dept., Textile Research Division, National Research Centre, Mohmed

More information

ANN Control of Non-Linear and Unstable System and its Implementation on Inverted Pendulum

ANN Control of Non-Linear and Unstable System and its Implementation on Inverted Pendulum Research Article International Journal of Current Engineering and Technology E-ISSN 2277 4106, P-ISSN 2347-5161 2014 INPRESSCO, All Rights Reserved Available at http://inpressco.com/category/ijcet ANN

More information

Explaining Results of Neural Networks by Contextual Importance and Utility

Explaining Results of Neural Networks by Contextual Importance and Utility Explaining Results of Neural Networks by Contextual Importance and Utility Kary FRÄMLING Dep. SIMADE, Ecole des Mines, 158 cours Fauriel, 42023 Saint-Etienne Cedex 2, FRANCE framling@emse.fr, tel.: +33-77.42.66.09

More information

Tutorial on Machine Learning for Advanced Electronics

Tutorial on Machine Learning for Advanced Electronics Tutorial on Machine Learning for Advanced Electronics Maxim Raginsky March 2017 Part I (Some) Theory and Principles Machine Learning: estimation of dependencies from empirical data (V. Vapnik) enabling

More information

Neural Networks DWML, /25

Neural Networks DWML, /25 DWML, 2007 /25 Neural networks: Biological and artificial Consider humans: Neuron switching time 0.00 second Number of neurons 0 0 Connections per neuron 0 4-0 5 Scene recognition time 0. sec 00 inference

More information

MODELLING OF TOOL LIFE, TORQUE AND THRUST FORCE IN DRILLING: A NEURO-FUZZY APPROACH

MODELLING OF TOOL LIFE, TORQUE AND THRUST FORCE IN DRILLING: A NEURO-FUZZY APPROACH ISSN 1726-4529 Int j simul model 9 (2010) 2, 74-85 Original scientific paper MODELLING OF TOOL LIFE, TORQUE AND THRUST FORCE IN DRILLING: A NEURO-FUZZY APPROACH Roy, S. S. Department of Mechanical Engineering,

More information

Learning with Ensembles: How. over-tting can be useful. Anders Krogh Copenhagen, Denmark. Abstract

Learning with Ensembles: How. over-tting can be useful. Anders Krogh Copenhagen, Denmark. Abstract Published in: Advances in Neural Information Processing Systems 8, D S Touretzky, M C Mozer, and M E Hasselmo (eds.), MIT Press, Cambridge, MA, pages 190-196, 1996. Learning with Ensembles: How over-tting

More information

Neural Networks and Deep Learning

Neural Networks and Deep Learning Neural Networks and Deep Learning Professor Ameet Talwalkar November 12, 2015 Professor Ameet Talwalkar Neural Networks and Deep Learning November 12, 2015 1 / 16 Outline 1 Review of last lecture AdaBoost

More information

ESTIMATION OF HOURLY MEAN AMBIENT TEMPERATURES WITH ARTIFICIAL NEURAL NETWORKS 1. INTRODUCTION

ESTIMATION OF HOURLY MEAN AMBIENT TEMPERATURES WITH ARTIFICIAL NEURAL NETWORKS 1. INTRODUCTION Mathematical and Computational Applications, Vol. 11, No. 3, pp. 215-224, 2006. Association for Scientific Research ESTIMATION OF HOURLY MEAN AMBIENT TEMPERATURES WITH ARTIFICIAL NEURAL NETWORKS Ömer Altan

More information

Mr. Harshit K. Dave 1, Dr. Keyur P. Desai 2, Dr. Harit K. Raval 3

Mr. Harshit K. Dave 1, Dr. Keyur P. Desai 2, Dr. Harit K. Raval 3 Investigations on Prediction of MRR and Surface Roughness on Electro Discharge Machine Using Regression Analysis and Artificial Neural Network Programming Mr. Harshit K. Dave 1, Dr. Keyur P. Desai 2, Dr.

More information

Portugaliae Electrochimica Acta 26/4 (2008)

Portugaliae Electrochimica Acta 26/4 (2008) Portugaliae Electrochimica Acta 6/4 (008) 6-68 PORTUGALIAE ELECTROCHIMICA ACTA Comparison of Regression Model and Artificial Neural Network Model for the Prediction of Volume Percent of Diamond Deposition

More information

USTER LABORATORY SYSTEMS

USTER LABORATORY SYSTEMS USTER LABORATORY SYSTEMS APPLICATION REPORT Measurement of slub yarns with the USTER TESTER 4 THE STANDARD FROM FIBER TO FABRIC Richard Furter October 2003 SE 580 Copyright 2007 by Uster Technologies AG

More information

Artificial Intelligence

Artificial Intelligence Artificial Intelligence Jeff Clune Assistant Professor Evolving Artificial Intelligence Laboratory Announcements Be making progress on your projects! Three Types of Learning Unsupervised Supervised Reinforcement

More information

Feed-forward Networks Network Training Error Backpropagation Applications. Neural Networks. Oliver Schulte - CMPT 726. Bishop PRML Ch.

Feed-forward Networks Network Training Error Backpropagation Applications. Neural Networks. Oliver Schulte - CMPT 726. Bishop PRML Ch. Neural Networks Oliver Schulte - CMPT 726 Bishop PRML Ch. 5 Neural Networks Neural networks arise from attempts to model human/animal brains Many models, many claims of biological plausibility We will

More information

Lecture 5: Logistic Regression. Neural Networks

Lecture 5: Logistic Regression. Neural Networks Lecture 5: Logistic Regression. Neural Networks Logistic regression Comparison with generative models Feed-forward neural networks Backpropagation Tricks for training neural networks COMP-652, Lecture

More information

Multilayer Perceptrons and Backpropagation

Multilayer Perceptrons and Backpropagation Multilayer Perceptrons and Backpropagation Informatics 1 CG: Lecture 7 Chris Lucas School of Informatics University of Edinburgh January 31, 2017 (Slides adapted from Mirella Lapata s.) 1 / 33 Reading:

More information

Introduction Biologically Motivated Crude Model Backpropagation

Introduction Biologically Motivated Crude Model Backpropagation Introduction Biologically Motivated Crude Model Backpropagation 1 McCulloch-Pitts Neurons In 1943 Warren S. McCulloch, a neuroscientist, and Walter Pitts, a logician, published A logical calculus of the

More information

Neural networks. Chapter 20. Chapter 20 1

Neural networks. Chapter 20. Chapter 20 1 Neural networks Chapter 20 Chapter 20 1 Outline Brains Neural networks Perceptrons Multilayer networks Applications of neural networks Chapter 20 2 Brains 10 11 neurons of > 20 types, 10 14 synapses, 1ms

More information

Need for Deep Networks Perceptron. Can only model linear functions. Kernel Machines. Non-linearity provided by kernels

Need for Deep Networks Perceptron. Can only model linear functions. Kernel Machines. Non-linearity provided by kernels Need for Deep Networks Perceptron Can only model linear functions Kernel Machines Non-linearity provided by kernels Need to design appropriate kernels (possibly selecting from a set, i.e. kernel learning)

More information

Investigation of Performance Properties of Graphene Coated Fabrics

Investigation of Performance Properties of Graphene Coated Fabrics Investigation of Performance Properties of Graphene Coated Fabrics Rumeysa Celen, Gizem Manasoglu, Mehmet Kanik, Yusuf Ulcay (Department of Textile Engineering, Bursa Uludag University, TURKEY) ABSTRACT:

More information

Feedforward Neural Nets and Backpropagation

Feedforward Neural Nets and Backpropagation Feedforward Neural Nets and Backpropagation Julie Nutini University of British Columbia MLRG September 28 th, 2016 1 / 23 Supervised Learning Roadmap Supervised Learning: Assume that we are given the features

More information

Simulating Neural Networks. Lawrence Ward P465A

Simulating Neural Networks. Lawrence Ward P465A Simulating Neural Networks Lawrence Ward P465A 1. Neural network (or Parallel Distributed Processing, PDP) models are used for a wide variety of roles, including recognizing patterns, implementing logic,

More information

RAINFALL RUNOFF MODELING USING SUPPORT VECTOR REGRESSION AND ARTIFICIAL NEURAL NETWORKS

RAINFALL RUNOFF MODELING USING SUPPORT VECTOR REGRESSION AND ARTIFICIAL NEURAL NETWORKS CEST2011 Rhodes, Greece Ref no: XXX RAINFALL RUNOFF MODELING USING SUPPORT VECTOR REGRESSION AND ARTIFICIAL NEURAL NETWORKS D. BOTSIS1 1, P. LATINOPOULOS 2 and K. DIAMANTARAS 3 1&2 Department of Civil

More information

This paper presents the

This paper presents the ISESCO JOURNAL of Science and Technology Volume 8 - Number 14 - November 2012 (2-8) A Novel Ensemble Neural Network based Short-term Wind Power Generation Forecasting in a Microgrid Aymen Chaouachi and

More information

Forecasting of Rain Fall in Mirzapur District, Uttar Pradesh, India Using Feed-Forward Artificial Neural Network

Forecasting of Rain Fall in Mirzapur District, Uttar Pradesh, India Using Feed-Forward Artificial Neural Network International Journal of Engineering Science Invention ISSN (Online): 2319 6734, ISSN (Print): 2319 6726 Volume 2 Issue 8ǁ August. 2013 ǁ PP.87-93 Forecasting of Rain Fall in Mirzapur District, Uttar Pradesh,

More information

Heterogeneous mixture-of-experts for fusion of locally valid knowledge-based submodels

Heterogeneous mixture-of-experts for fusion of locally valid knowledge-based submodels ESANN'29 proceedings, European Symposium on Artificial Neural Networks - Advances in Computational Intelligence and Learning. Bruges Belgium), 22-24 April 29, d-side publi., ISBN 2-9337-9-9. Heterogeneous

More information

COMPARING PERFORMANCE OF NEURAL NETWORKS RECOGNIZING MACHINE GENERATED CHARACTERS

COMPARING PERFORMANCE OF NEURAL NETWORKS RECOGNIZING MACHINE GENERATED CHARACTERS Proceedings of the First Southern Symposium on Computing The University of Southern Mississippi, December 4-5, 1998 COMPARING PERFORMANCE OF NEURAL NETWORKS RECOGNIZING MACHINE GENERATED CHARACTERS SEAN

More information

Neural Networks. Volker Tresp Summer 2015

Neural Networks. Volker Tresp Summer 2015 Neural Networks Volker Tresp Summer 2015 1 Introduction The performance of a classifier or a regression model critically depends on the choice of appropriate basis functions The problem with generic basis

More information

Christian Mohr

Christian Mohr Christian Mohr 20.12.2011 Recurrent Networks Networks in which units may have connections to units in the same or preceding layers Also connections to the unit itself possible Already covered: Hopfield

More information

COMP 551 Applied Machine Learning Lecture 14: Neural Networks

COMP 551 Applied Machine Learning Lecture 14: Neural Networks COMP 551 Applied Machine Learning Lecture 14: Neural Networks Instructor: Ryan Lowe (ryan.lowe@mail.mcgill.ca) Slides mostly by: Class web page: www.cs.mcgill.ca/~hvanho2/comp551 Unless otherwise noted,

More information

Machine Learning. Neural Networks

Machine Learning. Neural Networks Machine Learning Neural Networks Bryan Pardo, Northwestern University, Machine Learning EECS 349 Fall 2007 Biological Analogy Bryan Pardo, Northwestern University, Machine Learning EECS 349 Fall 2007 THE

More information

22c145-Fall 01: Neural Networks. Neural Networks. Readings: Chapter 19 of Russell & Norvig. Cesare Tinelli 1

22c145-Fall 01: Neural Networks. Neural Networks. Readings: Chapter 19 of Russell & Norvig. Cesare Tinelli 1 Neural Networks Readings: Chapter 19 of Russell & Norvig. Cesare Tinelli 1 Brains as Computational Devices Brains advantages with respect to digital computers: Massively parallel Fault-tolerant Reliable

More information

Part 8: Neural Networks

Part 8: Neural Networks METU Informatics Institute Min720 Pattern Classification ith Bio-Medical Applications Part 8: Neural Netors - INTRODUCTION: BIOLOGICAL VS. ARTIFICIAL Biological Neural Netors A Neuron: - A nerve cell as

More information

Supporting Information

Supporting Information Supporting Information Convolutional Embedding of Attributed Molecular Graphs for Physical Property Prediction Connor W. Coley a, Regina Barzilay b, William H. Green a, Tommi S. Jaakkola b, Klavs F. Jensen

More information

Old painting digital color restoration

Old painting digital color restoration Old painting digital color restoration Michail Pappas Ioannis Pitas Dept. of Informatics, Aristotle University of Thessaloniki GR-54643 Thessaloniki, Greece Abstract Many old paintings suffer from the

More information

Reading, UK 1 2 Abstract

Reading, UK 1 2 Abstract , pp.45-54 http://dx.doi.org/10.14257/ijseia.2013.7.5.05 A Case Study on the Application of Computational Intelligence to Identifying Relationships between Land use Characteristics and Damages caused by

More information

From perceptrons to word embeddings. Simon Šuster University of Groningen

From perceptrons to word embeddings. Simon Šuster University of Groningen From perceptrons to word embeddings Simon Šuster University of Groningen Outline A basic computational unit Weighting some input to produce an output: classification Perceptron Classify tweets Written

More information