PREDICTION OF THE JOMINY CURVES BY MEANS OF NEURAL NETWORKS


Tomislav Filetin, Dubravko Majetić, Irena Žmak
Faculty of Mechanical Engineering and Naval Architecture, University of Zagreb, Croatia

PREDICTION OF THE JOMINY CURVES BY MEANS OF NEURAL NETWORKS

ABSTRACT: Accurate prediction of hardenability from the chemical composition is very important for steel producers as well as for steel users. An attempt has been made to establish a non-linear static discrete-time neuron model, the so-called Static Elementary Processor (SEP). Based on the SEP neurons, a Static Multi Layer Perceptron Neural Network is proposed for predicting a Jominy hardness curve from the chemical composition. To accelerate the convergence of the proposed static error back-propagation learning algorithm, the momentum method is applied. The learning results are presented in terms that are insensitive to the range of the learning data and allow easy comparison with other learning algorithms, independent of machine architecture or simulator implementation. In the learning process a dataset of heats from 40 steel grades with different chemical compositions is used. The mean error between measured and predicted hardness data and the standard deviation for the testing dataset (60 heats out of the 03 heats in question) are comparable with other published methods of prediction. Additional testing on three smaller groups - Cr-steels, Cr-Ni-Mo (Ni-Cr-Mo) steels for hardening and tempering, and Cr-Mo, Cr-Ni (Ni-Cr), Cr-Ni-Mo (Ni-Cr-Mo) steels for carburizing - shows better accuracy than testing with the heterogeneous dataset.

KEYWORDS: Steels, Jominy curve, Prediction of properties, Artificial neural network

1. INTRODUCTION

In order to predict a Jominy hardness curve, many authors /1,2,3,4/ have established regression formulae or methods for calculating the hardness at different Jominy distances by means of statistical analysis of a great number of steel heats. The degree of agreement between a measured and a calculated or predicted hardness depends on the prediction method and on the source of the measured Jominy data. The use of computational neural networks (NN) as a method of artificial intelligence has increased rapidly over the past 7 years in different fields of science and technology: chemical science, design of molecular structures and prediction of polymer properties /5/, prediction of weld deposit structures and properties as a function of a very large number of variables /6/, process control, etc. After learning the basic relationships between the input factors and the output, the NN method is able to generate the output variables. The method is very suitable for predicting materials properties when some of the relevant influencing factors are unknown, and for many complex phenomena for which physical models do not exist. This contribution is based on the results of a first attempt at applying SEP neurones. A Static Multi Layer Perceptron Neural Network is proposed to predict the Jominy hardness curve from the chemical composition. In the learning and testing process, datasets of 00 heats of 40 different steel grades are used. The intention of developing this approach is to establish a unique method for predicting Jominy hardness values over a wide range of chemical compositions (steel grades in question).

2. DESCRIPTION OF THE NEURAL NETWORK

Since artificial neural networks can effectively represent complex non-linear functions, they have proved to be a very useful tool in the prediction and identification of highly non-linear systems. The neuron models most commonly applied are the Feed Forward Perceptron used in multi-layer networks and the Radial Basis Function (RBF) neurone. Both networks have been proved to be universal approximators of any static non-linear mapping: they are capable of identifying any unique non-linear static function to arbitrary accuracy. Several learning methods for feedforward neural networks have been proposed in the literature. Most of these methods rely on the gradient methodology and involve the computation of partial derivatives, or sensitivity functions. In this sense, the well-known error back-propagation algorithm for feedforward neural networks is used for the adaptation of the weights /8/, /9/. Theoretical works by several researchers, including /10/ and /11/, have proven that, even with one hidden layer, artificial neural networks can uniformly approximate any continuous function over a compact domain, provided the network has a sufficient number of units, or neurones. Thus the network proposed in this study, plotted in Fig. 1, has three layers /9/. Each i-th neurone in the first, input layer has a single input that represents the external input to the neural network. The second layer, which has no direct connections to the external world, is usually referred to as a hidden layer; it also consists of the static neurones shown in Fig. 2. Each j-th static neurone in the hidden layer has an input from every neurone in the first layer, and one additional input with a fixed value of unity, usually named the bias. Each k-th neurone in the third, output layer has an input from every neurone in the second layer and, like the second layer, one additional input with a fixed value of unity (bias). The output of the third layer is the external output of the neural network.

Fig. 1: Static neural network - input layer with I neurones (inputs U1 ... UI), hidden layer with J neurones, output layer with K neurones (outputs O1 ... OK)

The structure of the proposed static neuron model is plotted in Fig. 2. The non-linear activation function input and output at time instant (n) are given in (1) and (2) respectively:

    net(n) = \sum_{j=1}^{J} w_j u_j(n),    (1)

and the non-linear continuous bipolar activation function is described in (2):

    y(n) = \gamma(net(n)) = \frac{2}{1 + e^{-net(n)}} - 1,    (2)

where u_J = 1 represents a threshold unit, also called the bias.
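As a rough illustration of equations (1) and (2), the sketch below implements a single SEP-style static neurone in Python/NumPy. The bipolar activation is written in the 2/(1 + e^(-net)) - 1 form reconstructed above; the example input values and the random weight initialisation are assumptions, not values from the paper.

```python
import numpy as np

def bipolar_sigmoid(net):
    """Non-linear continuous bipolar activation, eq. (2): maps the net input into (-1, 1)."""
    return 2.0 / (1.0 + np.exp(-net)) - 1.0

def sep_neuron_output(u, w):
    """Static Elementary Processor (SEP) neurone: weighted sum of the inputs, eq. (1),
    passed through the bipolar activation, eq. (2).

    u -- neurone inputs; the last element is the fixed bias input u_J = 1
    w -- one weight per input, including the bias weight w_J
    """
    net = np.dot(w, u)           # eq. (1): net(n) = sum_j w_j * u_j(n)
    return bipolar_sigmoid(net)  # eq. (2): y(n) = gamma(net(n))

# Example: a hidden neurone fed by the 9 chemistry inputs plus the bias (values are made up).
u = np.array([0.40, 0.30, 0.70, 1.10, 0.50, 0.20, 0.20, 0.02, 0.10, 1.0])
w = np.random.uniform(-1.0, 1.0, size=u.shape)  # random start between -1 and +1
print(sep_neuron_output(u, w))
```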

Fig. 2: Static neuron model - inputs u1 ... uJ (uJ = 1 is the bias) with weights w1 ... wJ, summation to the net input, activation γ, output y

Learning algorithm for optimal parameters

The goal of the learning algorithm is to adjust the neural network parameters (the weights) based on a given set of input and desired output pairs (supervised learning) and to determine the optimal parameter set that minimises a performance index E:

    E = \frac{1}{2} \sum_{n=1}^{N} \left( O_d(n) - O(n) \right)^2,    (3)

where N is the training set size and the error signal is defined as the difference between the desired response O_d(n) and the actual neurone response O(n). This error, calculated at the output layer, is propagated back to the input layer through the static neurons of the hidden layer. The result is the well-known error back-propagation learning algorithm. The weights are adjusted for each input-output data pair (pattern, or stochastic, learning procedure). The linear activation function given in (4) is chosen as the transfer (activation) function for the static neurones in the output layer:

    O_k(n) = \gamma(net_k(n)) = net_k(n),    (4)

where k = 1, 2, ..., K and K is the number of neural network outputs. To determine the optimal network parameters that minimise the performance index E, a gradient method can be applied. The optimal parameters (the weight coefficients) are approximated iteratively by moving in the direction of steepest descent /8/, /9/:

    \vartheta_{new} = \vartheta_{old} + \Delta\vartheta,    (5)

    \Delta\vartheta = -\eta \nabla E = -\eta \frac{\partial E}{\partial \vartheta},    (6)

where η is a user-selected positive learning constant (learning rate). The choice of the learning constant depends strongly on the class of the learning problem and on the network architecture. Learning rate values ranging from 10^-3 to 10 have been reported throughout the technical literature as successful in many computational back-propagation experiments. For large constants the learning speed can be drastically increased; however, the learning may not be exact, with a tendency to overshoot, or it may never stabilise at any minimum.
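To make the learning procedure concrete, here is a minimal sketch of one stochastic back-propagation step with a momentum term for a network of the kind described (9 chemistry inputs, one hidden layer of bipolar-sigmoid neurones, 13 linear hardness outputs). The hidden-layer size and the values of η and α are assumptions, not the values used in the paper, and the gradient is taken for the per-pattern error ½||O_d - O||².

```python
import numpy as np

def bipolar_sigmoid(net):
    return 2.0 / (1.0 + np.exp(-net)) - 1.0

def bipolar_sigmoid_deriv(y):
    # derivative of the bipolar sigmoid, expressed through its own output y
    return 0.5 * (1.0 - y * y)

def train_pattern(x, o_d, W1, W2, dW1_prev, dW2_prev, eta=0.1, alpha=0.5):
    """One stochastic (per-pattern) error back-propagation step with momentum.

    x        -- input pattern (9 chemistry values); o_d -- desired outputs (13 hardnesses)
    W1       -- hidden-layer weights, shape (n_hidden, n_inputs + 1); last column = bias weight
    W2       -- output-layer weights, shape (n_outputs, n_hidden + 1)
    dW*_prev -- previous weight changes, reused by the momentum term
    """
    # forward pass
    u = np.append(x, 1.0)                    # inputs plus the bias input
    h = bipolar_sigmoid(W1 @ u)              # hidden layer, eqs. (1)-(2)
    hb = np.append(h, 1.0)                   # hidden outputs plus the bias input
    o = W2 @ hb                              # linear output layer, eq. (4)

    # backward pass for the per-pattern error E = 1/2 * ||o_d - o||^2
    delta_o = o_d - o
    delta_h = (W2[:, :-1].T @ delta_o) * bipolar_sigmoid_deriv(h)

    # steepest-descent step, eqs. (5)-(6), plus the momentum term
    dW2 = eta * np.outer(delta_o, hb) + alpha * dW2_prev
    dW1 = eta * np.outer(delta_h, u) + alpha * dW1_prev
    return W1 + dW1, W2 + dW2, dW1, dW2

# Example with assumed sizes: 9 inputs, 8 hidden neurones, 13 outputs.
rng = np.random.default_rng(0)
W1 = rng.uniform(-1, 1, (8, 10)); W2 = rng.uniform(-1, 1, (13, 9))
dW1, dW2 = np.zeros_like(W1), np.zeros_like(W2)
x, o_d = rng.random(9), rng.random(13)
W1, W2, dW1, dW2 = train_pattern(x, o_d, W1, W2, dW1, dW2)
```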

Finally, a measure of performance must be specified. All error measures are reported using the non-dimensional error index NRMS (Normalised Root Mean Square error) given in (7). Normalised means that the root mean square error is divided by the standard deviation of the target data (σ_dn) /12/:

    NRMS = \frac{\sqrt{\frac{1}{N}\sum_{n=1}^{N} \left( O_d(n) - O(n) \right)^2}}{\sigma_{dn}}.    (7)

The resulting error index, or index of accuracy, is therefore insensitive to the dynamic range of the learning data and allows easy comparison with other learning algorithms, independent of machine architecture or simulator implementation. Training started with random weight values between -1 and +1, and the networks were trained with a learning constant η and a user-selected positive momentum constant α.

3. RESULTS OF PREDICTION

The dataset for learning can be compiled from any source of chemical analysis data and measured Jominy hardness data; data from a single source should produce the best results. Our dataset for learning and testing is derived from a single source and contains very heterogeneous groups of non-boron constructional steel grades for hardening and tempering and for carburizing (40 steel grades, 03 heats):

- Unalloyed steels
- Cr
- Cr-Mo
- Cr-V
- Cr-Mo-V
- Cr-Ni (Ni-Cr)
- Cr-Ni-Mo (Ni-Cr-Mo)
- Mn-Cr
- Mo-Cr.

The aim was to find, as far as possible, a generally applicable approach for predicting the Jominy hardness over a large range of chemical compositions. The ranges of chemical composition of the heats in question are: 0,-0,70 %C; 0,-,4 %Si; 0,-, %Mn; 0,4-,96 %Cr; 0,4-,76 %Ni; 0,08-0,3 %V; 0,0-0,35 %Mo; 0,00-0,34 %Cu; 0,00-0,34 %Al. The main problem in this investigation was the selection of a representative dataset of samples for learning. The dataset used for training contains about 60 % of all steel heats in question. The input data are the results of the chemical analysis of the melt (wt. % of 9 chemical elements: C, Si, Mn, Cr, Ni, Mo, Cu, Al and V) and the output data are the hardnesses at the J-distances 1.5, 3, 5, 7, 9, 11, 13, 15, 20, 25, 30, 40 and 50 mm.
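Given measured and predicted hardness vectors of this kind, the NRMS index of equation (7) can be computed as in the sketch below, assuming the reconstructed form of (7) above (RMS of the prediction error divided by the standard deviation of the target data); the hardness values in the example are made up.

```python
import numpy as np

def nrms(o_desired, o_predicted):
    """Normalised Root Mean Square error index, eq. (7): the RMS prediction error
    divided by the standard deviation of the desired (target) data, so the result
    is insensitive to the range of the learning data."""
    o_desired = np.asarray(o_desired, dtype=float)
    o_predicted = np.asarray(o_predicted, dtype=float)
    rms = np.sqrt(np.mean((o_desired - o_predicted) ** 2))
    return rms / np.std(o_desired)

# Made-up hardness values (HRC) at a few J-distances.
measured  = [55.0, 54.0, 50.0, 43.0, 38.0]
predicted = [54.5, 53.0, 51.0, 44.5, 37.0]
print(f"NRMS = {nrms(measured, predicted):.3f}")
```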

The second open problem is the normalisation of the input and output data. For normalisation, the possible minimum and maximum (range) of the data in the testing dataset, as well as the predictive influence of each input on the outputs, has to be known.

The difference between the predicted and measured Jominy hardness was calculated for the different heats at each Jominy distance, and the standard deviation of the errors was determined for each heat and each J-distance. It is obvious that the accuracy of the results depends on the steel type and on the J-distance. The results of testing with the learning dataset on the learning dataset (Table 1) show that the mean differences between measured and learned data, as well as the standard deviations, are relatively small. This points to a good consistency of the learning dataset in relation to all expected heats for testing.

Table 1: Mean errors and standard deviations of errors for hardenability; testing with the learning data on the learning dataset

J-distance, mm:    1.5      3      5     7      9      11     13     15     20      25      30      40     50
Mean error, HRC:   0,0035   -0,03  0,0   0,037  0,033  0,034  0,047  0,073  0,03    -0,038  -0,074  -0,04  -0,037
σ, HRC:            ,060     0,999  ,94   ,98    ,74    ,57    ,474   ,43    ,5      ,73     ,808    ,785   ,956

Mean error for the entire dataset = 0,0045 HRC; σ for the entire dataset = ,63 HRC; NRMS = 0,3

The results of prediction for 60 samples with different chemical compositions, using the learning dataset of heats, show that the mean differences between measured and predicted hardness and the standard deviations are small and comparable with other methods /3,4,5/ (Table 2). The C- and Cr-steels have a greater mean difference and standard deviation than the steel grades with higher hardenability, particularly at the J-distances where the hardness falls steeply.

Table 2: Mean errors and standard deviations of errors for hardenability; prediction for 60 different heats

J-distance, mm:    1.5    3      5      7      9      11      13     15     20     25     30      40     50
Mean error, HRC:   0,8    0,7    0,7    -0,59  -0,87  -0,033  -0,05  0,06   -0,96  -0,9   -0,099  0,054  -0,0
σ, HRC:            0,97   0,879  ,50    ,96    ,470   ,60     ,706   ,539   ,384   ,485   ,848    ,950   3,084

Mean error for the entire dataset = -0,045 HRC; σ for the entire dataset = ,35 HRC; N = 60; NRMS = 0,7
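The per-J-distance statistics of the kind reported in Tables 1 and 2 can be computed as sketched below; the sign convention (predicted minus measured), the sample standard deviation estimator and the example data are assumptions.

```python
import numpy as np

# J-distances (mm) at which the hardness is measured and predicted (as listed above).
J_DISTANCES = [1.5, 3, 5, 7, 9, 11, 13, 15, 20, 25, 30, 40, 50]

def error_statistics(measured, predicted):
    """Mean error and standard deviation of the errors per J-distance and for the
    whole dataset, in the spirit of Tables 1 and 2.

    measured, predicted -- arrays of shape (n_heats, 13), hardness in HRC
    """
    err = np.asarray(predicted, float) - np.asarray(measured, float)
    mean_per_distance = err.mean(axis=0)           # one value per J-distance
    std_per_distance = err.std(axis=0, ddof=1)     # sample standard deviation
    return mean_per_distance, std_per_distance, err.mean(), err.std(ddof=1)

# Example with two fictitious heats.
measured = np.array([[55, 54, 52, 48, 44, 41, 38, 36, 33, 31, 30, 28, 27],
                     [50, 49, 45, 40, 36, 33, 31, 29, 27, 26, 25, 24, 23]], float)
predicted = measured + np.random.default_rng(1).normal(0.0, 1.0, measured.shape)
mean_d, std_d, mean_all, std_all = error_statistics(measured, predicted)
print(dict(zip(J_DISTANCES, np.round(std_d, 2))))
print(round(float(mean_all), 3), round(float(std_all), 3))
```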

Figure 3 illustrates the differences between the measured and predicted Jominy curves for two steels: 30 NiCrMo and 0 MnCr 5.

Fig. 3: Comparison between the measured and predicted (NN) Jominy curves of two steels, 30 NiCrMo and 0 MnCr 5 (hardness, HRC, versus Jominy distance, mm)

In addition, the accuracy of prediction based on datasets of relatively similar chemical composition within a defined steel group was estimated (Table 3). Hardenability data are predicted for three datasets, using the same learning dataset of heats:

1 - Cr-steels for hardening and tempering
2 - Cr-Ni-Mo (Ni-Cr-Mo) steels for hardening and tempering
3 - Cr-Mo, Cr-Ni (Ni-Cr) and Cr-Ni-Mo (Ni-Cr-Mo) steels for carburizing.

The results are shown in Tables 4, 5 and 6.

Table 3: The limits of chemical composition of the tested steel groups

Wt. %   Cr-steels     Cr-Ni-Mo for hardening   Cr-Mo, Cr-Ni, Cr-Ni-Mo
                      and tempering            for carburizing
C       0,33-0,45     0,7-0,43                 0,-0,8
Si      0,3-0,37      0,9-0,8                  0,7-0,3
Mn      0,60-0,85     0,36-0,8                 0,3-,09
Cr      0,87-,7       0,45-,96                 0,47-,9
Ni      0,09-0,5      0,46-,89                 0,09-,74
Mo      0,0-0,06      0,6-0,43                 0,0-0,8
Cu      0,4-0,34      0,08-0,9                 0,5-0,34
Al      0,004-0,043   0,008-0,03               0,003-0,03

Table 4: The results of prediction for the different heats of Cr-steels

J-distance, mm:    1.5     3      5      7       9      11      13     15     20     25    30     40    50
Mean error, HRC:   0,663   0,458  -0,4   -0,444  -0,76  -0,398  -0,9   0,036  0,574  ,     3,38   ,74   -,35
σ, HRC:            0,67    0,38   ,00    ,79     ,35    ,9      ,70    ,3     ,49    ,36   0,75   ,9    3,

Mean error for the entire dataset = 0,55 HRC; σ for the entire dataset = ,5 HRC; N = 6; NRMS = 0,06

The standard deviations of errors obtained with the NN method for steels with 0,-0,4 % C and 0,8-, % Cr at different J-distances are comparable with those from published regression equations /4,5/ and from the Database Method /6/ (Fig. 4). The greatest standard deviation of the NN method occurs near the inflection point of the Jominy curve.

Fig. 4: Comparison of the standard deviation of error (HRC) versus Jominy distance (mm) for hardenability predictions performed using the neural network (NN), the published Database Method /6/ and regression-derived equations /5/, for steels with 0,-0,4 % C and 0,8-, % Cr

Table 5: The results of prediction for the different heats of Cr-Ni-Mo (Ni-Cr-Mo) steels for hardening and tempering

J-distance, mm:    1.5     3      5      7      9      11     13     15     20     25    30      40    50
Mean error, HRC:   -0,08   -0,39  -0,08  -0,7   -,089  -,039  -,066  -0,95  -0,96  -,36  -,396   -,7   -,46
σ, HRC:            0,87    0,6    0,65   ,0     ,33    ,49    ,83    ,00    ,9     ,95   ,56     ,78   ,54

Mean error for the entire dataset = -0,859 HRC; σ for the entire dataset = ,5 HRC; N = 3; NRMS = 0,64

Table 6: The results of prediction for the different heats of Cr-Mo, Cr-Ni (Ni-Cr) and Cr-Ni-Mo (Ni-Cr-Mo) steels for carburizing

J-distance, mm:    1.5     3       5      7       9     11     13    15     20     25     30     40    50
Mean error, HRC:   -0,459  -0,388  -0,45  -0,375  0,38  0,404  0,47  0,765  0,75   0,377  0,346  0,09  0,05
σ, HRC:            0,83    0,87    ,06    ,77     ,     ,6     ,9    ,88    ,87    ,89    ,96    ,     ,0

Mean error for the entire dataset = 0,3 HRC; σ for the entire dataset = ,8 HRC; N = 9; NRMS = 0,6

The mean standard deviation of errors for each of the three groups is smaller than that obtained when predicting with the group of 60 different steels. Fig. 5 compares the standard deviations of errors at different distances from the quenched end of the Jominy specimen for the three tested steel groups.

Fig. 5: Comparison of the standard deviation of error (HRC) versus Jominy distance (mm) for hardenability predictions with the neural network for the three groups of steels: Cr-steels (N=6), Cr-Ni-Mo steels for hardening and tempering (N=3), and Cr-Mo, Cr-Ni, Cr-Ni-Mo steels for carburizing (N=9)

For the time being, the number of heats available for testing with the NN method is too small for a more accurate comparison of these three methods and for drawing final conclusions.

4. CONCLUSION

The application of the neural network method for predicting the Jominy hardenability curve, as shown by the presented testing, encourages further investigation. The accuracy of prediction depends on the accuracy (standard deviation) of the measured data; the measured data should reflect the real relations between chemical composition and Jominy hardness. This preliminary experience shows the following evident benefits of applying the NN method to predicting Jominy hardenability in steel production:

- accurate prediction of hardenability for each new heat in production, based on the producer's own learned dataset (its own metallurgical history),
- the possibility of optimizing the chemical composition of a steel for a required hardenability,
- avoiding Jominy testing after production.

To achieve better accuracy in the application of the NN, additional activities are needed in the following directions:

a) selecting the optimum size of the learning dataset, with representative data for the expected heats in question; the learning dataset has to be as wide as possible and should contain enough different shapes of Jominy curves;
b) testing other approaches to the normalisation of the data;
c) testing with a greater number of heats from different sources and comparing the results;
d) applying other NN algorithms (e.g. Radial Basis Function Neural Networks, RBF).

Acknowledgements

The authors wish to thank Dr. F. Grešovnik from Željezarna Ravne, Slovenia, for his help in collecting the hardenability data, and the Ministry of Science and Technology of the Republic of Croatia for the financial support of this research within the project "Computerised simulation and materials development".

5. REFERENCES

/1/ B. G. Sumpter, D. W. Noid, Proceedings of ANTEC 95
/2/ H. Bhadeshia, Materials World, Nov. 1996
/3/ E. Just, Met. Prog., Vol. 96, 1969, p. 87-
/4/ H. Gulden, K. Kriger, D. Lepper, A. Lubben, H. Rohloff, P. Schuler, V. Schuler, H. J. Wieland, Stahl und Eisen, Vol. 109, 1989, p. 3-7
/5/ H. Gulden, K. Kriger, D. Lepper, A. Lubben, H. Rohloff, P. Schuler, V. Schuler, H. J. Wieland, Stahl und Eisen, Vol. (No. 7), 99, p. 0-0
/6/ W. T. Cook, P. F. Morris, L. Woollard, J. of Materials Engineering and Performance, Vol. 6, 1997
/7/ J. S. Kirkaldy, S. E. Feldman, J. Heat Treat., Vol. 7, 1989
/8/ J. M. Zurada, Artificial Neural Systems, W. P. Company, USA, 1992
/9/ D. Majetić, V. Kecman, Technical Report TR93-YUSA-0, MIT, Cambridge, USA, 1993
/10/ G. Cybenko, Mathematics of Control, Signals, and Systems, Vol. 2, 1989
/11/ K. Funahashi, Neural Networks, Vol. 2, 1989
/12/ A. Lapedes, R. Farber, Technical Report, Los Alamos National Laboratory, Los Alamos, New Mexico, 1987
