Modeling Economic Time Series Using a Focused Time Lagged FeedForward Neural Network


Proceedings of Student Research Day, CSIS, Pace University, May 9th, 2003

Modeling Economic Time Series Using a Focused Time Lagged FeedForward Neural Network

N. Moseley

ABSTRACT

Artificial neural networks (ANNs) are simplified mathematical representations of some aspects of the functioning of the human brain. ANNs based on the Multilayer Perceptron (MLP), the base architecture of layered networks, have been shown to be a powerful tool for input-output mapping and have been used extensively in many disciplines. In this paper we demonstrate the use of a neural network to model univariate economic time series. For this thesis, an MLP-based network simulator was designed and implemented in the C programming language. Specifically, a focused time lagged feedforward layered network (TFLN) trained with the standard back propagation algorithm with momentum is the chosen architecture. Focused time lagged feedforward networks acquire temporal processing ability through the realization of short-term memory. The neural network generates estimates of the time series after training; additionally, the ability of the network to discover nonlinear relationships was used to investigate the interaction between two key economic indicators through their representation as total sales and total inventories time series. A model validation regime predicated on digital signal processing methodology was developed. The results of these studies demonstrate that the application of neural networks to time series data holds promise as an effective tool for analysis and forecasting.

1. INTRODUCTION

A discrete-time signal or time series x(n) is basically a sequence of real or complex number samples. The key characteristics of a time series are that the observations are ordered in time and that adjacent observations are dependent (related). When successive observations of the series are dependent, we may use past observations to predict future values. Modeling and predicting economic data using traditional statistical approaches has been only partially successful. Accordingly, researchers have in recent times turned to alternative approaches, most notably artificial neural networks (ANNs), which constitute a class of nonlinear models. Nonlinear models have broader applicability by definition, but they present an added difficulty: the supplementary degrees of freedom that allow a better fit of the model to the data may result in a reduction in generalization capability. This learning-generalization dilemma is the main limitation of ANNs. A set of input and target samples is presented to the learning system, which is used to discover the statistical behavior of the input environment. After training, the fitted model must be validated with a validation set: a set of data not contained in the training set that provides a way to measure the capacity of the model to generalize what it has learned to other data sets. Real-world economic data is often nonlinear, comprising high-frequency multipolynomial components, and is piecewise continuous, so modeling it presents difficulties. A number of issues arise when working with the traditional techniques of linear function approximation: the system of interest may be intrinsically nonlinear, or the wrong linear model may be selected; and as the number of input variables increases, the number of free parameters grows, which requires many more samples in order to prevent the model from specializing in the noise or other features of the training data.
Additionally, polynomials become less efficient predictors as the number of input variables increases. A study found that the sum-of-squares error falls off as O(1/M), where M is the number of hidden units in a neural network, regardless of the number of input variables, while for polynomials or any other series expansion the error decreases as O(1/M^(2/d)), where d is the number of input variables [2]. The motivation for analyzing time series with neural networks in this thesis is driven by the following features: (a) neural networks rely purely on the input observations (the data is allowed to speak); (b) multilayer feedforward networks with at least one hidden layer and a sufficient number of hidden units are capable of approximating any measurable function [2], which makes them versatile enough to represent any form of time series; (c) the capacity to generalize allows ANNs to extract statistical information even in the case of missing or noisy data; and (d) ANNs have the capacity to represent nonlinearities in time series [4].

2. NEURAL NETWORKS

2.1 Multilayer Perceptron

Neural networks generally consist of a number of interconnected nonlinear processing elements, or neurons, in which the nonlinearity is distributed throughout the network. The manner in which the interneuron connections are arranged and the nature of the connections determine the structure of the network. Fig. 1 is a representation of a feedforward multilayer perceptron.

Figure 1. Representation of a feedforward neural network, showing input source nodes and feedforward propagation through the processing elements of the hidden layer on to the processing elements of the output layer.

The learning algorithm of the MLP determines the degree to which the connections are adjusted during training in order to achieve a desired network behavior. In a feedforward multilayer perceptron the neurons are arranged in a feedforward mode, so that the outputs of nodes in one layer form the inputs to nodes in subsequent layers. Signals therefore flow unidirectionally from the input layer through each internal layer of the network to the output layer. Between the input and output layers are the hidden layers. The network is given nonlinear properties through the use of a nonlinear transfer function associated with each processing element. The hidden layer may be visualized as creating a map relating an input pattern to its desired response; this ability allows MLPs to discriminate between nonlinearly separable categories. Masters (1993) suggests that if a function consists of a finite collection of points, a three-layer network is capable of learning it. For an ANN to model its environment, the strengths (weights) of its interneuron connections must be adjusted according to the difference between the desired and actual outputs corresponding to a given input condition. The adjustments to the weights are effected under the influence of a learning algorithm with the following points in mind: the algorithm starts from an arbitrary setting of the neurons' synaptic weights; adjustments to the synaptic weights in response to statistical variations in the system's behavior are made on a continuous basis; and computations of the adjustments are completed inside a time interval that is one sampling period long.
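To make the layered signal flow of Fig. 1 concrete, the following is a minimal C sketch of one forward pass through a single-hidden-layer MLP of the kind used in this project, with a logistic hidden activation and a linear output node (the form the TFLN output of Section 2.3 takes). It is an illustrative reconstruction, not the author's simulator: the function names, the MAX_IN bound, and the unit slope a = 1 are assumptions.

#include <math.h>

#define MAX_IN 16                        /* assumed upper bound on input-vector length */

/* Logistic activation; 'a' is the slope parameter of Eq. (8) below
   (a = 1 is assumed throughout these sketches). */
static double logistic(double v, double a)
{
    return 1.0 / (1.0 + exp(-a * v));
}

/* One forward pass through a single-hidden-layer MLP with a linear output
   node.  x: input pattern (n_in values); w1[j][i] and b1[j]: hidden-layer
   weights and biases; w2[j] and b2: output-node weights and bias; h:
   caller-supplied buffer receiving the hidden activations (reused by the
   backward pass sketched in Section 2.2). */
double mlp_forward(const double *x, int n_in,
                   double w1[][MAX_IN], const double *b1, int n_hid,
                   const double *w2, double b2, double *h)
{
    double y = b2;
    for (int j = 0; j < n_hid; j++) {
        double v = b1[j];                /* induced local field of neuron j */
        for (int i = 0; i < n_in; i++)
            v += w1[j][i] * x[i];
        h[j] = logistic(v, 1.0);         /* function signal of neuron j */
        y += w2[j] * h[j];               /* linear output accumulation */
    }
    return y;
}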

2.2 BACKPROPAGATION ALGORITHM

The back-propagation (BP) learning algorithm has emerged as the standard for training MLPs. The partial derivatives of the cost function (performance measure) with respect to the free parameters (synaptic weights and biases) of the network are determined by back-propagating the error signals (computed by the output neurons) through the network, layer by layer. The application of the BP algorithm involves two distinct passes. In the forward pass the synaptic weights remain unaltered throughout the network, and the function signals of the network are computed on a neuron-by-neuron basis. The function signal appearing at the output of neuron j is computed as

y_j = φ(v_j)    (1)

where v_j is the induced local field of neuron j, defined by

v_j = Σ_{i=0}^{m} w_ji y_i    (2)

where m is the total number of inputs applied to neuron j and w_ji is the synaptic weight connecting neuron i to neuron j, y_i being the corresponding input signal. If neuron j is in the first hidden layer of the network, the index i refers to the ith input terminal of the network, for which

y_i = x_i    (3)

where x_i is the ith element of the input vector (pattern). If neuron j is in the output layer, the index j refers to the jth output terminal of the network, for which

y_j = o_j    (4)

where o_j is the jth element of the output vector (pattern). The output is compared with the desired response d_j, giving the error signal e_j = d_j - o_j for the jth output neuron. The backward pass starts at the output layer by passing the error signals leftward through the network, layer by layer, recursively computing the local gradient δ_j for each neuron. This recursive process permits the synaptic weights of the network to undergo changes in accordance with the delta rule

Δw_ji = η δ_j y_i    (5)

(weight correction = learning rate × local gradient × input signal of neuron j). The local gradient depends on the location of the neuron. For neurons located in the output layer,

δ_j = e_j φ'(v_j)    (6)

where φ'(v_j) is the derivative of the activation function evaluated at the induced local field. The activation function φ can be a simple threshold function, a sigmoid, or a hyperbolic tangent function. For neurons located in a hidden layer,

δ_j = φ'(v_j) Σ_k δ_k w_kj    (7)

where the sum is the weighted sum of the δ_k computed for the neurons in the next hidden or output layer that are connected to neuron j. For the presentation of each training example, the input pattern is held fixed throughout the round-trip process encompassing the forward pass followed by the backward pass.
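A matching backward pass might look as follows: a hedged C sketch of Eqs. (5) through (7) extended with the momentum term used in this project, not the project's actual code. The buffers dw1, db1, dw2, db2, which carry the previous updates so the momentum constant alpha can be applied, are an assumed implementation detail and must be zero-initialized before the first call.

/* Backward pass and delta-rule update for the network sketched in
   Section 2.1.  h and y must come from the immediately preceding
   mlp_forward() call on the same pattern x; d is the desired response. */
void mlp_backward(const double *x, int n_in,
                  double w1[][MAX_IN], double *b1, int n_hid,
                  double *w2, double *b2,
                  const double *h, double y, double d,
                  double dw1[][MAX_IN], double *db1,
                  double *dw2, double *db2,
                  double eta, double alpha)
{
    double e = d - y;          /* error signal at the linear output node */
    double delta_o = e;        /* Eq. (6), with phi'(v) = 1 for a linear output */

    for (int j = 0; j < n_hid; j++) {
        /* Eq. (7): local gradient of hidden neuron j; the logistic
           derivative is phi'(v) = a h (1 - h), with a = 1 assumed. */
        double delta_j = h[j] * (1.0 - h[j]) * delta_o * w2[j];

        for (int i = 0; i < n_in; i++) {
            dw1[j][i] = eta * delta_j * x[i] + alpha * dw1[j][i];  /* Eq. (5) + momentum */
            w1[j][i] += dw1[j][i];
        }
        db1[j] = eta * delta_j + alpha * db1[j];
        b1[j] += db1[j];

        dw2[j] = eta * delta_o * h[j] + alpha * dw2[j];
        w2[j] += dw2[j];
    }
    *db2 = eta * delta_o + alpha * *db2;
    *b2 += *db2;
}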

2.3 A FOCUSED TIME LAGGED FEEDFORWARD NETWORK

In the focused time lagged feedforward network (TFLN), a static MLP acquires temporal processing capability: it sees the time series x(1), ..., x(n) in the form of many mappings of an input vector to an output value. This technique was presented by Haykin [2]. The TFLN is a nonlinear filter consisting of a tapped-delay-line memory of order p and a multilayer perceptron. The TFLN used in the project had a sigmoid activation function, the logistic function

φ(v_j) = 1 / (1 + exp(-a v_j)),  a > 0,  -∞ < v_j < ∞    (8)

where v_j is the induced local field of neuron j. The input vector for each iteration of the algorithm is represented as

x(n) = [x(n), x(n-1), ..., x(n-p)]^T    (9)

and the network output as

y(n) = Σ_{j=1}^{m} w_j φ( Σ_{l=0}^{p} w_j(l) x(n-l) + b_j ) + b_o    (10)

Figures 2a and 2b show sample sets of the data utilized, showing the time variations over the period of interest. The sizes of the layers in the network were determined using the constructive method. Constructive methods determine the topology of the network during training, as an integral part of the learning algorithm. The approach is to begin with a small network, train it until the performance criterion has been reached, and continue adding nodes and training until a global performance has been reached in terms of an acceptable error criterion.

Figure 2. Sales (a) and inventory (b) time series after transformation to remove seasonality and trend (scaled amplitude versus months).
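To make Eqs. (9) and (10) concrete, a minimal C sketch of the tapped-delay-line front end follows; it reuses the hypothetical mlp_forward() routine from Section 2.1. The memory order P = 2, chosen here because it yields the three input nodes reported in the results, is an assumption.

#define P 2    /* assumed tapped-delay-line memory order; p + 1 = 3 inputs */

/* Evaluate the TFLN at time n: form the delayed input vector of Eq. (9)
   from the series and push it through the static MLP of Eq. (10).
   Requires n >= P so that all delayed samples exist. */
double tfln_predict(const double *series, int n,
                    double w1[][MAX_IN], const double *b1, int n_hid,
                    const double *w2, double b2, double *h)
{
    double x_vec[P + 1];
    for (int l = 0; l <= P; l++)
        x_vec[l] = series[n - l];        /* short-term memory: x(n - l) */
    return mlp_forward(x_vec, P + 1, w1, b1, n_hid, w2, b2, h);
}

For one-step-ahead prediction during training, the desired response paired with this input vector would be the next sample, x(n+1).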

3. MODELING THE DATA

Four sets of time series from the Federal Reserve archives were employed in this investigation: monthly total inventories and sales (billion $) from January 1970 to December 2001, and monthly total inventories and sales (billion $) from January 1967 to December 2001. Two sets are shown above (see Figures 2a and 2b). All data sets were preprocessed in order to eliminate trend and seasonality influences from the data. The resulting data was partitioned into estimation and validation sets, and the estimation set was further partitioned into a training and a test subset. The training and test sets were used for modeling, and the validation set was used for extrapolation over unseen data points for validation of the model.

It is necessary to investigate how well the obtained model captures the key features of the data, as demonstrated by the agreement between the model output and the observed data in a least-squares-error statistical sense. The existence of any structure in the residual (prediction error) signal indicates a misfit between the model and the data. Hence, a key validation technique is to check whether the residual process is a realization of white noise.

Autocorrelation test (ACT). The autocorrelation sequence of a stationary random signal is given [1] by

r_x(l) = lim_{N→∞} 1/(2N+1) Σ_{n=-N}^{N} x(n) x(n-l)    (11)

It was shown (Kendall and Stuart 1983) that when N is sufficiently large, the distribution of the estimated autocorrelation coefficients ρ(l) = r(l)/r(0) is approximately Gaussian with zero mean and variance 1/N. The approximate 95 percent confidence limits are ±1.96/√N. Any estimated values of ρ(l) that fall outside these limits are significantly different from zero with 95 percent confidence, and values well beyond these limits indicate nonwhiteness of the residual signal.

Power spectrum density test (PSDT). Given a data set {x(n), n = 0, ..., N-1}, the standardized cumulative periodogram is defined by

I(k) = Σ_{i=1}^{k} R(2πi/N) / Σ_{i=1}^{K} R(2πi/N)    (12)

where R(ω) is the periodogram of the residuals and K is the integer part of N/2. If the process x(n) is white Gaussian noise (WGN), the random variables I(k), k = 1, 2, ..., K, are independently and uniformly distributed in the interval (0, 1), and the plot of I(k) should be approximately linear with respect to k (Jenkins and Watts 1968). The hypothesis is rejected at level 0.05 if I(k) exits the boundaries specified by

I_b(k) = (k-1)/(K-1) ± 1.36/√(K-1)    (13)

Figure 5c is a plot of the standardized cumulative periodogram for the residuals under consideration. The plot shows a linear relationship in the least-squares sense and approaches a monotonically increasing function lying within the limits.

Partial autocorrelation test (PACT). Given the residual process x(n), it was shown (Kendall and Stuart 1983) that when N is sufficiently large, the partial autocorrelation sequence (PACS) values {k_l} for lag l are approximately independent with distribution WN(0, 1/N). This means that roughly 95 percent of the PACS values fall within the bounds ±1.96/√N. If values consistently well beyond this range are observed for sufficiently large N, this indicates nonwhiteness of the signal.
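As a hedged sketch of how the autocorrelation test might be automated, the following C routine computes the normalized residual autocorrelation of Eq. (11) over a finite record and counts how many lags respect the ±1.96/√N band. The function name and the use of the biased estimator are assumptions; this is not the validation code used in the study.

#include <math.h>

/* Autocorrelation whiteness check on a residual record e[0..N-1].
   Returns the fraction of lags 1..max_lag whose normalized ACF
   rho(l) = r(l)/r(0) stays inside the 95% confidence band; a value
   near 0.95 or above is consistent with white noise. */
double acf_whiteness(const double *e, int N, int max_lag)
{
    double r0 = 0.0;
    for (int n = 0; n < N; n++)
        r0 += e[n] * e[n];                 /* r(0): residual energy */

    double bound = 1.96 / sqrt((double)N);
    int inside = 0;
    for (int l = 1; l <= max_lag; l++) {
        double rl = 0.0;
        for (int n = l; n < N; n++)
            rl += e[n] * e[n - l];         /* biased estimate of r(l) */
        if (fabs(rl / r0) <= bound)
            inside++;
    }
    return (double)inside / (double)max_lag;
}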

3.1 RESULTS

Figure 3. Inventory training data and modeled output, representing the network prediction.

The performance of an MLP for predictive modeling (function approximation) was observed using two sets of economic data. The neural network was a feedforward multilayer perceptron with sigmoid activation function. The supervised training of this artificial neural network was conducted using a set of 270 data points randomly chosen from a distribution of 400 points, with a network architecture consisting of an input layer of 3 nodes, a single hidden layer with 2 nodes, and a single output node. Standard back propagation with a combination of learning rate η = 0.65 and momentum constant α = 0.9 was found to give the best results. Figures 3 and 4 display the network's modeling of the two sets of time series, and Figure 5 displays the results of the model validation tests.
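A hypothetical epoch loop tying the earlier sketches together with the settings reported above (η = 0.65, α = 0.9) might look as follows. For simplicity it presents the training patterns sequentially, whereas the study drew its training points randomly; MAX_HID and all routine names remain illustrative assumptions.

#define MAX_HID 32   /* assumed upper bound on hidden-layer size */

/* Train over the preprocessed, scaled series of length len.  Each pattern
   maps the delayed inputs [x(n), ..., x(n-P)] to the one-step-ahead target
   x(n+1).  All delta buffers must be zero-initialized by the caller. */
void train_epochs(const double *series, int len, int epochs,
                  double w1[][MAX_IN], double *b1, int n_hid,
                  double *w2, double *b2,
                  double dw1[][MAX_IN], double *db1,
                  double *dw2, double *db2)
{
    const double eta = 0.65, alpha = 0.9;  /* settings reported in the text */
    double h[MAX_HID], x_vec[P + 1];

    for (int ep = 0; ep < epochs; ep++) {
        for (int n = P; n < len - 1; n++) {
            double y = tfln_predict(series, n, w1, b1, n_hid, w2, *b2, h);
            for (int l = 0; l <= P; l++)
                x_vec[l] = series[n - l];
            mlp_backward(x_vec, P + 1, w1, b1, n_hid, w2, b2,
                         h, y, series[n + 1],          /* desired response */
                         dw1, db1, dw2, db2, eta, alpha);
        }
    }
}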

Figure 4. Sales training data and modeled output, representing the network prediction.

4. DISCUSSION

In order to make definitive statements about the model's performance capabilities, the model validation tests presented earlier were used as the criteria. The results of these statistical tests, due to Brockwell and Davis (1991) and Bendat and Piersol (1986), presented the following picture. Figure 5 shows the model validation results for inventory: both the autocorrelation sequence and PACS tests for lags 1 to 20 showed over 90% of values falling within the confidence limits, discounting the unity value at lag 0, and the PSD test showed approximately linear behavior, with the cumulative values occurring within the confidence limits. The sales plot presented similar results for the time series with 400 data points. The plots for ACS, PACS, and PSD are all in compliance with the predefined validation criteria for the residuals to be characterized as instances of white Gaussian noise (WGN). On the basis of these tests, there is a high probability that the residual generated by the NN model when modeling sales and inventory, having been trained on these individual sets, was a white noise source. The residual arising from the difference between the prediction of inventory from sales and the original data showed no significant structure, which indicates that the NN model parameterization adequately encoded the statistical information contained in the sales-inventory training sample. The model output is therefore a reasonable representation of the original signal.

5. CONCLUSIONS

Forecasts are a prerequisite for most decisions that are based on planning; consequently, the quality of a forecast must be evaluated considering its possible impact on the decision. Invariably, the quality of decision making is measured in monetary cost.

Costs from overproduction and underproduction can be significant, with far-reaching consequences. In medical inventory management, for example, the cost of erroneously predicting the number of units of blood of a specific blood group that will be needed can be devastating: overprediction may result in inventory holding costs, while underprediction may be fatal.

A time lagged feedforward neural network has been presented as being capable of predictively modeling univariate economic time series. The neural network simulator proved viable and adequate to learn the statistics of a nonlinear environment and to act as a predictor. The spectral analysis was able to identify similarities between the modeled output and the source data. It was observed that, on average, the neural network exhibited excellent generalization and function approximation capabilities.

Figure 5. Model validation test results: (a) residuals, the difference between actual and network output (amplitude versus samples); (b) autocorrelation and (d) partial autocorrelation coefficients versus lag; (c) power spectral density test, standardized cumulative periodogram I(k) versus frequency (cycles/sample).

REFERENCES

1. Manolakis, D., Ingle, V., and Kogon, S. Statistical and Adaptive Signal Processing. McGraw-Hill, 2000.
2. Haykin, S. Neural Networks: A Comprehensive Foundation. 1999.
3. Haykin, S. Adaptive Filter Theory, third edition. 1996.

4. Puskorius, G. and Feldkamp, L. Neurocontrol of Nonlinear Dynamical Systems with Kalman Filter Trained Recurrent Networks. IEEE Transactions on Neural Networks, vol. 5, no. 2, March 1994.
5. Puskorius, G. and Feldkamp, L. Decoupled Extended Kalman Filter Training of Feedforward Layered Networks. IEEE, 1991.
6. Koopmans, L. The Spectral Analysis of Time Series. Academic Press, 1974; reissued 1995.
7. Fourier Analysis of Time Series: An Introduction. John Wiley & Sons, 1996.
8. Grover Brown, R. and Hwang, Y.C. Introduction to Random Signals and Applied Kalman Filtering. John Wiley & Sons, 1997.
9. Elman, J.L. Finding Structure in Time. Cognitive Science, 14 (1990), 179-211.
10. Spatial Predictive Modeling: A Neural Network Approach.
11. Zhang, M. and Xu, S. Neuron-Adaptive Higher Order Neural Network Models for Automated Financial Data Modeling. IEEE Transactions on Neural Networks, vol. 13, no. 1, January 2002.
12. Feldkamp, L. and Puskorius, G. A Signal Processing Framework Based on Dynamic Neural Networks with Application to Problems in Adaptation, Filtering and Classification. Proceedings of the IEEE, vol. 86.
13. Proakis, J. and Manolakis, D. Digital Signal Processing.
14. Chakraborty, K., Mehrotra, K., Mohan, C., and Ranka, S. Forecasting the Behaviour of Multivariate Time Series Using Neural Networks.
