Examining the gold coin market of Iran in the rational expectations framework using the artificial neural network


Mohammad Kavoosi-Kalashami, Assistant Professor, Department of Agricultural Economics, Faculty of Agricultural Sciences, University of Guilan, Iran.
Mehdi Shabanzadeh, PhD Student, Faculty of Agricultural Economics and Development, University of Tehran, Iran.
Mohammad Reza Pakravan, PhD Student, Faculty of Agricultural Economics and Development, University of Tehran, Iran.
Hamid Reza Alipour, Department of Business Management, Islamic Azad University, Rasht Branch, Rasht, Iran.

Abstract

In this paper, we use an MLP neural network model to examine the gold coin market of Iran in the framework of rational expectations. The lagged gold coin price, the exchange rate, the world gold price and the oil price are considered the factors affecting this function. All data are daily observations from 9 February 2009 to 9 February 2010, collected from the Central Bank of Iran and the Organization of Petroleum Exporting Countries (OPEC). The results show that although the gold coin market cannot be examined in a fully rational expectations framework, it can be examined in a boundedly rational expectations framework. Agents in the gold coin market can therefore approximate a rational expectations function. Moreover, sensitivity analysis can be a powerful tool for helping agents understand the factors affecting the gold coin market.

Keywords: Bounded rational expectations; Gold coin market; MLP neural networks

Introduction

Modern economic theory recognizes that the central difference between economics and the natural sciences lies in the forward-looking decisions made by economic agents. In every segment of macroeconomics, expectations play a key role. In consumption theory, the paradigm life-cycle and permanent-income approaches stress the role of expected future incomes. In investment decisions, present-value calculations are conditional on expected future prices and sales. Asset prices (equity prices, interest rates, and exchange rates) clearly depend on expected future prices. Many other examples can be given. A fundamental aspect is that expectations influence the time path of the economy, and one might reasonably hypothesize that the time path of the economy influences expectations. The current standard methodology for modeling expectations is to assume rational expectations (RE), which is in fact an equilibrium in this two-sided relationship. Formally, in dynamic stochastic models, RE is usually defined as the mathematical conditional expectation of the relevant variables (Evans and Honkapohja, 2001).

Lucas (1973) states that agents typically obtain information freely as a by-product of their normal economic activity. Long after Lucas, Thomas Sargent (1993) concludes that rational expectations models assign much more knowledge to the agents within the model than is possessed by an econometrician, who faces estimation and inference problems that the agents in the models have somehow solved. This means that the agents' knowledge of their economic environment is such that the objective distributions of the relevant variables are known to them, and their expectations are based upon these objective distributions. In this extreme form, the hypothesis of rational expectations is obviously subject to criticism. One possible approach is to assume that the agents do not have any prior knowledge about the objective distributions of the relevant variables, but instead are equipped with an auxiliary model describing the perceived relationship between these variables. In this case, agents are in a similar position to the econometrician in the above quotation: they have to estimate the unknown parameters of their auxiliary model in order to form the relevant expectations based on this model. This is but one possible notion of bounded rationality discussed in the economics literature.

In the boundedly rational learning framework, the auxiliary model of the agents is, at best, correctly specified in the sense that it correctly depicts the relationship between the relevant variables within the rational expectations equilibrium, but the model is misspecified during the learning process. This means that during the learning process the relationship between the variables observed by the agents will change as long as the agents change their expectations scheme. In the case of linear models, i.e. models where the rational expectations equilibrium is a linear function of exogenous and lagged endogenous variables, recursive least squares can be used to estimate the parameters of the auxiliary model. For this case, there are a number of contributions in which conditions for convergence of the learning process towards rational expectations are derived (Bray and Savin, 1986; Fourgeaud et al., 1986; Marcet and Sargent, 1989).
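For the linear case, the flavor of such a learning rule can be illustrated with a short sketch. The following Python/NumPy fragment implements the standard decreasing-gain recursive least squares recursion from the adaptive-learning literature; the variable names are illustrative, and this is not a formula taken from the paper. Agents regress the observed price on the observed exogenous variables and update their coefficient estimates each period:

    import numpy as np

    def rls_learning(X, p):
        # Decreasing-gain recursive least squares: agents update the
        # coefficients theta of the auxiliary model p_t = x_t' theta
        # each period as new observations arrive.
        n, k = X.shape
        theta, R = np.zeros(k), np.eye(k)
        for t in range(n):
            x = X[t]
            R += (np.outer(x, x) - R) / (t + 1)  # running moment matrix
            theta += np.linalg.solve(R, x) * (p[t] - x @ theta) / (t + 1)
        return theta  # converges to the RE coefficients under known conditions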
In the nonlinear case, when the rational expectations equilibrium is not represented by a linear function of exogenous and lagged endogenous variables, the analysis of adaptive learning procedures is more complicated. The difficulty is that the boundedly rational learning approach requires assumptions regarding the auxiliary model of the agents. Obviously, the assumption of a correctly specified auxiliary model is a very strong one, because it presupposes extraordinary a priori knowledge of the agents regarding their economic environment. But eliminating this assumption requires that the auxiliary model of the agents be flexible enough to represent various kinds of possible relationships between the relevant variables. One way to obtain this flexibility is to assume that the auxiliary model of the agents is a neural network. Neural networks are well suited for this task because, as will be shown below, they are able to approximate a wide class of functions to any desired degree of accuracy. Thus, if the auxiliary model of the agents is given by a neural network, they may be able to learn the shape of rational expectations without requiring knowledge of the specific relationship between the relevant variables (Heinemann, 2000).

As mentioned in the introduction, price expectations play an important role in asset markets. The gold market, or more broadly the precious metals market, as part of the asset market, is no exception to this rule. Investment in this market is undertaken with different objectives: some gold market investors consider it a profitable investment for the future (in other words, an anti-inflation commodity), others treat it as a safe asset (namely, money), and another group regards it as protection against financial crises. Whatever the goal, agents' expectations of future market conditions play an important role. Various studies have examined the precious metals market, and especially the gold market, comparing the forecasting ability of linear and nonlinear models for the prices of this metal. On the whole, the neural network models in these studies performed better than the alternative models. These studies include Baker and Van Tassel (1985), Ntungo and Boyd (1998), Agnon et al. (1999), Smith (2002), Parisi et al. (2003), Dunis et al. (2005), Sarfaraz and Afsar (2007), Khashei, Hejazi and Bijari (2008) and Matroushi and Samarasinghe (2009). None of these studies, however, has been carried out within the framework of rational expectations. Using lagged variables for prediction, we study the following question: can artificial neural network models be used as auxiliary models with which agents approximate the rational expectations function of the gold coin market? In other words, we investigate the gold coin market in the framework of boundedly rational expectations. To achieve this goal, we examine different structures of the neural network. In this paper, we use the MLP neural network model, since it has a high ability to approximate various functions. The rational expectations function for the gold coin market, considering the economic conditions of Iran, is specified as a function of the lagged gold coin price, the exchange rate, the oil price and the world gold price. The remainder of this paper is organized as follows: the boundedly rational expectations model is reviewed in Section 2; basic concepts of the MLP neural network model are explained in Section 3; results and discussion are considered in Section 4; and conclusions are provided in Section 5.

Boundedly rational expectation model

Zenner (1996) gives a survey of boundedly rational learning models. Following his classification, the model considered here is the simplest form of a static model, because the exogenous variables are assumed to be stationary and serially uncorrelated:

$p_t = a p_t^e + g(x_t) + \varepsilon_t$   (1)

Here $p_t^e$ denotes the agents' expectation of the endogenous variable $p_t$ in period $t$, and $p_t$ denotes the actual value the endogenous variable takes in period $t$. $x_t$ is a $k$-dimensional vector of exogenous variables, which can be observed before the expectation $p_t^e$ is formed. It is assumed implicitly that $x_t$ is for all $t$ a vector of independent and identically distributed random variables. Additionally, it is assumed that $x_t$ is bounded for all $t$, i.e. $x_t$ takes only values in a set $\Omega_x \subset \mathbb{R}^k$. Like the elements of $x_t$, $\varepsilon_t$ is for all $t$ an independent and identically distributed random variable that satisfies $E[\varepsilon_t] = 0$, $E[\varepsilon_t^2] = \sigma_\varepsilon^2$ and $E(\varepsilon_t x_t) = 0$. Like $x_t$, $\varepsilon_t$ is bounded for all $t$ and takes values in a set $\Omega_\varepsilon \subset \mathbb{R}$.
Unlike the elements of $x_t$, $\varepsilon_t$ is not observable by the agents, so that expectations regarding the endogenous variable cannot be conditioned on this variable. Finally, $g(x_t)$ is a continuous function for all $x \in \Omega_x$. Reduced form (1) may be viewed as a special case of a more general class of nonlinear models, where the value of the endogenous variable $p$ in period $t$ is given by $p_t = G(z_t^e, y_t, \varepsilon_t)$, where $z_t^e$ is a vector of expectations regarding future values of the endogenous variable, $y_t$ is a vector of variables predetermined in period $t$, $\varepsilon_t$ is an unobservable error, and $G$ may be any continuous function. Note that $y_t$ may contain exogenous as well as lagged endogenous variables. As shown by Kuan and White (1994a), the methodology used here to analyze learning processes based on the reduced form (1) can also be used to analyze this more general case. But contrary to the simple model (1), the required restrictions are quite abstract and it is not possible to derive conditions for convergence that can be interpreted from an economic viewpoint. A more general reduced form than (1) will be considered below. Given the reduced form (1), and because $E(\varepsilon_t \mid x_t) = 0$, rational expectations are given by:

$p_t^e = E(p_t \mid x_t) = \frac{1}{1-a} E\!\left(g(x_t) + \varepsilon_t \mid x_t\right) = \frac{1}{1-a}\, g(x_t) = \varphi(x_t)$   (2)

If $a \neq 1$, there exists a unique rational expectation of $p_t$ whatever value is taken by the exogenous variables $x_t$. Accordingly, $\varphi(x_t)$ denotes the rational expectations function that gives this unique rational expectation of the endogenous variable for all $x \in \Omega_x$. It is obvious that the agents may not be able to form rational expectations if they do not know the reduced form of the model, and especially the form of the function $g(x_t)$. The question is whether the agents can learn to form rational expectations using observations of the variables $p_{t-1}, p_{t-2}, \ldots$ and $x_{t-1}, x_{t-2}, \ldots$. This means that the agents are at least aware of the relevant variables that determine the endogenous variable $p_t$ in the rational expectations equilibrium. It is assumed that the agents have an auxiliary model at their disposal representing the relationship between the exogenous variables and the endogenous variable. As the function $g(x_t)$ in (1) need not be linear in $x_t$, the rational expectations function $\varphi(x_t)$ need not be a linear function either. Thus, if one assumes that agents use parametrically specified auxiliary models and have no prior knowledge regarding the functional form of $\varphi(x_t)$, it is suitable to take an auxiliary model that is flexible enough to approximate various possible functional forms at least sufficiently well. The next subsection establishes that neural networks may be auxiliary models with the desired property (Heinemann, 2000).

Artificial Neural Network

An artificial neural network (ANN) is a suitable prediction paradigm based on the behavior of neurons. It has many applications, especially in learning a linear or nonlinear mapping. As a simple intelligence model, it has several attributes including learning ability, generalization, parallel processing and robustness to noise (Mohebbi, 2007). Considering its structure, an ANN can be categorized as either feed-forward or recurrent. Multilayer feed-forward networks are an important class of neural networks, consisting of a set of units that constitute the input layer, one or more hidden layers and an output layer, each composed of one or more computation nodes. The perceptron, a mathematical model of neurons in the brain, is a very simple neural net that accepts only binary inputs and outputs. One of the most commonly used feed-forward ANN architectures is the multilayer perceptron (MLP) network. The main advantages of the MLP compared to other architectures are its easy implementation and its ability to approximate any input/output mapping (Menhaj, 1998). In a common three-layer perceptron, the neurons are grouped in sequentially connected layers: the input, hidden and output layers.
Each neuron in the hidden and output layers is activated by an activation function applied to the weighted sum of its inputs:

$Y = h\!\left[\sum_{j=1}^{m} w_j \, g\!\left(\sum_{i=1}^{n} w_{ij}\, x_i + b_j\right)\right] + b_0$   (3)

where $g(\cdot)$ and $h(\cdot)$ are the hidden- and output-layer activation functions, $n$ is the number of inputs, $m$ is the number of hidden neurons, and $b_j$, $b_0$ are bias terms.

The connection strengths $w_j$ link the hidden neurons to the output $Y$; the connection strengths $w_{ij}$ link the input neurons to the hidden neurons. The weighted sums of the hidden-layer outputs are thus summed and filtered through another activation function $h(\cdot)$. Some commonly used nonlinear activation functions and their equations are depicted in Figure 1.

Figure 1. Some commonly used nonlinearities (Source: NeuroSolutions help, 2010)
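To make Eq. (3) concrete, here is a minimal forward-pass sketch in Python with NumPy. The tanh hidden activation and identity output are illustrative assumptions rather than the exact choices of this study:

    import numpy as np

    def mlp_forward(x, W_hid, b_hid, w_out, b_out):
        # Eq. (3): Y = h[ sum_j w_j * g(sum_i w_ij * x_i + b_j) ] + b_0,
        # with g(.) = tanh on the hidden layer and h(.) = identity output.
        hidden = np.tanh(W_hid @ x + b_hid)    # hidden activations, shape (m,)
        return float(w_out @ hidden) + b_out   # scalar network output Y

    # Example: the four inputs used in this study (lagged coin price, exchange
    # rate, world gold price, oil price), 12 hidden neurons, random weights.
    rng = np.random.default_rng(0)
    y = mlp_forward(rng.normal(size=4), rng.normal(size=(12, 4)),
                    rng.normal(size=12), rng.normal(size=12), 0.0)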

Multilayer perceptrons (MLPs) have been applied successfully to many difficult problems in a supervised manner with the highly popular algorithm known as back-propagation (BP), which is based on the error-correction learning rule. BP is a supervised learning algorithm that computes the output error and modifies the weights of the nodes in a backward direction. It is a gradient descent procedure that uses only local information and may be caught in local minima. Additionally, BP is an inherently noisy procedure with a poor estimate of the gradient and, therefore, slow convergence. To speed up and stabilize convergence, a memory term is used in momentum learning. In plain gradient descent learning, each weight is updated as follows:

$w_{ij}(n+1) = w_{ij}(n) + \eta\, \delta_i(n)\, x_j(n)$   (4)

where the local error $\delta_i(n)$ can be computed directly from the error at the output PE, or as a weighted sum of the errors at the internal PEs. The constant $\eta$ is called the step size. In momentum learning, the equation for updating the weights becomes:

$w_{ij}(n+1) = w_{ij}(n) + \eta\, \delta_i(n)\, x_j(n) + \alpha \left(w_{ij}(n) - w_{ij}(n-1)\right)$   (5)

where $\alpha$ is the momentum, which should normally be set between 0.1 and 0.9 (NeuroSolutions help, 2010).
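As a compact sketch of the momentum update of Eq. (5) (Python/NumPy; the matrix form over a whole layer is an assumption of the sketch):

    import numpy as np

    def momentum_step(w, w_prev, delta, x, eta=0.1, alpha=0.7):
        # Eq. (5): w_ij(n+1) = w_ij(n) + eta * delta_i(n) * x_j(n)
        #                      + alpha * (w_ij(n) - w_ij(n-1))
        # delta: local errors of the receiving layer; x: feeding activations.
        w_new = w + eta * np.outer(delta, x) + alpha * (w - w_prev)
        return w_new, w  # updated weights and the new "previous" weights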

Determining the optimum number of neurons in each hidden layer is necessary for successful modeling and can be carried out by a trial-and-error procedure based on minimizing the difference between the estimated ANN outputs and the observed values. In this research, an MLP was used to develop a model predicting the gold coin price from the lagged gold coin price, the exchange rate, the world gold price and the oil price as inputs. The data set, with 366 observations collected from the Central Bank of Iran and the Organization of Petroleum Exporting Countries (OPEC), was used in this work. The database introduced to the neural network was broken into three groups, as shown in Table 1: 40%, 30% and 30% of the data set were used, respectively, for training the network (training data), evaluating the prediction quality of the network during training (cross-validation data), and estimating the performance of the trained network on new data never seen by the network (test data).

Table 1. Data division for the MLP model

Subset             Period                   Observations
Test               05/07/ to /10/
Cross-Validation   23/10/ to /09/
Training           09/02/2009 to 4/07/

The training process was carried out for 1,000 epochs, and testing was carried out with the best weights stored during training. The mean squared error (MSE), normalized mean squared error (NMSE), mean absolute error (MAE) and correlation coefficient (R) were calculated using the following equations:

$\mathrm{MSE} = \frac{1}{N} \sum_{i=1}^{N} (T_i - P_i)^2$   (6)

$\mathrm{NMSE} = \frac{1}{\sigma^2} \cdot \frac{1}{N} \sum_{i=1}^{N} (T_i - P_i)^2$   (7)

$\mathrm{MAE} = \frac{1}{N} \sum_{i=1}^{N} \left|T_i - P_i\right|$   (8)

$R = 1 - \frac{\sum_{i=1}^{N} (T_i - P_i)^2}{\sum_{i=1}^{N} (T_i - P_m)^2}$   (9)

where $T_i$ is the actual value, $P_i$ the network prediction, $P_m$ the mean of the actual values, and $\sigma^2$ the variance of the actual series. The complexity of the network depends on the number of layers and the number of neurons in each layer. A three-layer neural network was applied for the modeling in the current work, and to find the best topology, 19 networks with different numbers of hidden neurons, from 2 to 20, were built and evaluated. During training, the momentum value was fixed at 0.7, and the learning rate was set to 1 on the hidden layer and 0.1 at the output layer. Training ran for 1,000 epochs, or stopped when the cross-validation mean squared error (MSE), calculated by Eq. (6), did not improve for 100 epochs, to avoid over-fitting of the network. Finally, a sensitivity analysis was carried out to examine the contribution of each input, namely the lagged gold coin price, the exchange rate, the world gold price and the oil price, to the gold coin price. In this study, the ANN models were constructed with NeuroSolutions for Excel, release 5, produced by NeuroDimension, Inc.
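The protocol above can be sketched as follows, using scikit-learn's MLPRegressor as a stand-in for NeuroSolutions; the data-array names and fixed random seed are illustrative assumptions:

    import numpy as np
    from sklearn.neural_network import MLPRegressor

    def performance(T, P):
        # Error measures of Eqs. (6)-(9); T: actual series, P: predictions.
        err = T - P
        mse = np.mean(err ** 2)                              # Eq. (6)
        nmse = mse / np.var(T)                               # Eq. (7)
        mae = np.mean(np.abs(err))                           # Eq. (8)
        r = 1.0 - err @ err / np.sum((T - np.mean(T)) ** 2)  # Eq. (9)
        return mse, nmse, mae, r

    def best_topology(X_train, y_train, X_cv, y_cv):
        # Trial-and-error search: one single-hidden-layer MLP per candidate
        # size (2 to 20 neurons), keeping the lowest cross-validation MSE.
        best, best_mse = None, np.inf
        for n_hidden in range(2, 21):
            net = MLPRegressor(hidden_layer_sizes=(n_hidden,), solver="sgd",
                               momentum=0.7, max_iter=1000, random_state=0)
            net.fit(X_train, y_train)
            mse = np.mean((net.predict(X_cv) - y_cv) ** 2)
            if mse < best_mse:
                best, best_mse = n_hidden, mse
        return best, best_mse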

Results and discussion

The optimal number of neurons in the single hidden layer, found by the trial-and-error method, is shown in Table 2. The training results for the ANN showed the lowest MSE on the cross-validation data when the number of hidden neurons was 12. The 4-12-1 architecture was therefore the best model in terms of MSE, i.e. 4, 12 and 1 neurons in the input, hidden and output layers, respectively.

[Table 2. Number of neurons of a single hidden layer network: number of PEs in the hidden layer against the cross-validation error (MSE)]

The performance of the network was evaluated using the observed values and the ANN estimates. The ANN performance in terms of mean squared error (MSE), normalized mean squared error (NMSE), mean absolute error (MAE), minimum and maximum absolute error, and the linear correlation coefficient (R) between the coin price and the neural network output is depicted in Table 3.

[Table 3. Performance results from the best architecture: MSE, NMSE, MAE, minimum absolute error, maximum absolute error and R for the coin price]

Figure 2 shows the actual gold coin values in comparison with the network's estimates for each sample. The test results presented in Table 3 show a close correspondence between the actual gold coin price and the ANN output; the network is therefore able to predict gold coin values comparable to the actual series. This also establishes the approximation capability of neural networks in pattern recognition.

Figure 2. Desired output and actual network output

Figure 3 shows the sensitivity analysis used for extracting the cause-and-effect relationship between the network inputs and output. As this figure shows, the world gold price and the oil price have the highest and lowest effect on the network output, respectively.

[Figure 3. Sensitivity analysis between network inputs and output: sensitivity about the mean of the coin price with respect to the lagged coin price, exchange rate, gold price and oil price]
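A sensitivity-about-the-mean measure of this kind can be sketched as follows (Python/NumPy; predict stands in for the trained network, e.g. the mlp_forward sketch above with trained weights, and is an assumption of this sketch):

    import numpy as np

    def sensitivity_about_mean(predict, X, n_steps=50):
        # Vary one input at a time around its mean (+/- one standard
        # deviation, other inputs held at their means) and report the
        # standard deviation of the network output for each input.
        mu, sd = X.mean(axis=0), X.std(axis=0)
        out = []
        for j in range(X.shape[1]):
            grid = np.tile(mu, (n_steps, 1))
            grid[:, j] = np.linspace(mu[j] - sd[j], mu[j] + sd[j], n_steps)
            out.append(np.std([predict(row) for row in grid]))
        return np.array(out)  # one sensitivity value per input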

Conclusions

In this paper, an artificial neural network was used as the agents' auxiliary model to examine the gold coin market within the rational expectations framework. Neural networks are well suited for this task because they are able to approximate a wide class of functions to any desired degree of accuracy. Thus, if the auxiliary model of the agents is given by a neural network, agents may be able to learn the formation of rational expectations in the gold coin market without requiring knowledge of the specific relationship between the relevant variables. As shown in the previous sections, although a perfect approximation of the rational expectations function is not possible, agents can expect convergence towards approximate rational expectations, i.e. expectations that improve according to the criteria discussed above. The results of Figure 3 also show that sensitivity analysis can be a powerful tool to help agents understand the factors affecting the gold coin market.

References

Agnon, Y., Golan, A., and Shearer, M. (1999). Nonparametric, nonlinear, short-term forecasting: theory and evidence for nonlinearities in the commodity markets. Economics Letters 65.
Baker, S.A., and van Tassel, R.C. (1985). Forecasting the price of gold: a fundamentalist approach. Atlantic Economic Journal 13.
Bray, M., and Savin, N. (1986). Rational expectations equilibria, learning and model specification. Econometrica 54.
Central Bank of Iran (CBI).
Dunis, C.L., Laws, J., and Evans, B. (2005). Modeling with recurrent and higher order networks: a comparative analysis. Neural Network World 6(5).
Evans, G.W., and Honkapohja, S. (2001). Learning and Expectations in Macroeconomics. Princeton, N.J.: Princeton University Press.

Fourgeaud, C., Gourieroux, C., and Pradel, J. (1986). Learning procedures and convergence to rationality. Econometrica 54.
Heinemann, M. (2000). Adaptive learning of rational expectations using neural networks. Journal of Economic Dynamics & Control 24.
Khashei, M., Hejazi, S.R., and Bijari, M. (2008). A new hybrid artificial neural networks and fuzzy regression model for time series forecasting. Fuzzy Sets and Systems 159(7).
Kuan, C.M., and White, H. (1994a). Adaptive learning with nonlinear dynamics driven by dependent processes. Econometrica 62.
Lucas, R.E. (1973). Some international evidence on output-inflation tradeoffs. American Economic Review 63.
Marcet, A., and Sargent, T. (1989). Convergence of least squares learning mechanisms in self-referential linear stochastic models. Journal of Economic Theory 48.
Matroushi, M.S., and Samarasinghe, S. (2009). Building a hybrid neural network model for gold price forecasting. 18th World IMACS/MODSIM Congress, Cairns, Australia.
Menhaj, M.B. (1998). Fundamentals of Neural Networks. Tehran: Professor Hesabi.
Mohebbi, M., Akbarzadeh Totonchi, M.R., Shahidi, F., and Poorshehabi, M.R. (2007). Possibility evaluation of machine vision and artificial neural network application to estimate dried shrimp moisture. In: 4th Iranian Conference on Machine Vision, Image Processing and Applications, February 2007, Mashhad, Iran.
NeuroSolutions help (2010). Produced by NeuroDimension, Inc.
Ntungo, C., and Boyd, M. (1998). Commodity futures trading performance using neural network models versus ARMA models. The Journal of Futures Markets 18.
Organization of Petroleum Exporting Countries (OPEC).
Parisi, F., Parisi, A., and Guerrero, J.L. (2003). Rolling and recursive neural network models: the gold price. Working Paper, Universidad de Chile.
Sarfaraz, L., and Afsar, A. (2007). A study on the factors affecting gold price and a neuro-fuzzy model of forecast. Online at MPRA.
Sargent, T. (1993). Bounded Rationality in Macroeconomics. Oxford: Oxford University Press.
Smith, G. (2002). Tests of the random walk hypothesis for London gold prices. Applied Economics Letters 9.
Zenner, M. (1996). Learning to Become Rational: The Case of Self-Referential and Non-Stationary Models. Berlin: Springer.
