Examining the gold coin market of Iran in the rational expectations framework using the artificial neural network


Mohammad Kavoosi-Kalashami, Assistant Professor, Department of Agricultural Economics, Faculty of Agricultural Sciences, University of Guilan, Iran. Email: mkavoosi@guilan.ac.ir
Mehdi Shabanzadeh, PhD Student, Faculty of Agricultural Economics and Development, University of Tehran, Iran. Email: mshabanzadeh@ut.ac.ir
Mohammad Reza Pakravan, PhD Student, Faculty of Agricultural Economics and Development, University of Tehran, Iran. Email: mpakravan@ut.ac.ir
Hamid Reza Alipour, Department of Business Management, Islamic Azad University, Rasht Branch, Rasht, Iran. Email: drbehdad_66@yahoo.com

Abstract

In this paper, we use an MLP neural network model to examine the gold coin market of Iran in the framework of rational expectations. The lagged gold coin price, the exchange rate, the world gold price and the oil price are considered as the factors affecting this function. All data are daily, covering 9 February 2009 to 9 February 2010, and were collected from the Central Bank of Iran and the Organization of the Petroleum Exporting Countries (OPEC). The results show that although the gold coin market cannot be examined in the fully rational expectations framework, it can be examined in the boundedly rational expectations framework: agents in the gold coin market can approximate a rational expectations function. Moreover, sensitivity analysis proves a powerful tool for helping agents understand the factors affecting the gold coin market.

Keywords: Bounded rational expectations; Gold coin market; MLP neural networks

Introduction

Modern economic theory recognizes that the central difference between economics and the natural sciences lies in the forward-looking decisions made by economic agents. In every segment of macroeconomics, expectations play a key role. In consumption theory, the paradigm life-cycle and permanent-income approaches stress the role of expected future incomes. In investment decisions, present-value calculations are conditional on expected future prices and sales. Asset prices (equity prices, interest rates, and exchange rates) clearly depend on expected future prices. Many other examples can be given. A fundamental aspect is that expectations influence the time path of the economy, and one might reasonably hypothesize that the time path of the economy influences expectations. The current standard methodology for modeling expectations is to assume rational expectations (RE), which is in fact an equilibrium in this two-sided relationship. Formally, in dynamic stochastic models, RE is usually defined as the mathematical conditional expectation of the relevant variables (Evans and Honkapohja, 2001). Lucas (1973) states that agents typically obtain information freely as a by-product of their normal economic activity. Long after Lucas, Thomas Sargent (1993) concluded that rational expectations models assign much more knowledge to the agents within the model than is possessed by an econometrician, who faces estimation and inference problems that the agents in the models have somehow solved. This means that the agents' knowledge of their economic environment is such that the objective distributions of the related variables are known to them. Thus, their expectations are based upon these objective distributions. In this extreme form, the hypothesis of rational expectations is obviously open to criticism.
One possible approach is to assume that the agents do not have any prior knowledge about the objective distributions of the relevant variables, but instead are equipped with an auxiliary model describing the perceived relationship between these variables. In this case, agents are in a similar position to the econometrician in the above quotation: they have to estimate the unknown parameters of their auxiliary model in order to form the relevant expectations based on this model. This is but one possible notion of bounded rationality discussed in the economics literature. In the boundedly rational learning framework, the auxiliary model of the agents is at best correctly specified in the sense that it correctly depicts the relationship between the relevant variables within the rational expectations equilibrium, but the model is misspecified during the learning process. This means that during the learning process the relationship between the variables observed by the agents will change as the agents change their expectations scheme. In the case of linear models, i.e. models where the rational expectations equilibrium is a linear function of exogenous and lagged endogenous variables, recursive least squares can be used to estimate the parameters of the auxiliary model. For this case, there are a number of contributions in which conditions under which the learning process converges towards rational expectations are derived (Bray and Savin, 1986; Fourgeaud et al., 1986; Marcet and Sargent, 1989). In the nonlinear case, when the rational expectations equilibrium is not represented by a linear function of exogenous and lagged endogenous variables, the analysis of adaptive learning procedures is more complicated. The difficulty is that the boundedly rational learning approach requires assumptions regarding the auxiliary model of the agents.
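For the linear case just described, the following minimal sketch (with an assumed scalar model p_t = a·p_t^e + b·x_t + ε_t and invented parameter values, not taken from the paper) illustrates recursive least squares learning of the perceived law of motion converging to the rational expectations coefficient b/(1−a):

```python
import numpy as np

# Hypothetical linear version of the model: p_t = a*p_e + b*x_t + eps_t.
# Under rational expectations, p_e = b/(1 - a) * x_t, so the agents'
# RLS estimate theta should approach b / (1 - a) = 4.0 here.
rng = np.random.default_rng(0)
a, b, sigma = 0.5, 2.0, 0.1
theta, R = 0.0, 1.0            # belief p_e = theta * x, and the RLS moment term

for t in range(1, 20001):
    x = rng.uniform(0.5, 1.5)                          # bounded i.i.d. exogenous variable
    p_e = theta * x                                    # expectation from the auxiliary model
    p = a * p_e + b * x + sigma * rng.standard_normal()  # realized endogenous variable
    # Recursive least squares update with decreasing gain 1/t
    R += (1.0 / t) * (x * x - R)
    theta += (1.0 / t) * (x / R) * (p - theta * x)

print(theta, b / (1 - a))      # theta ends up close to 4.0
```

Since a < 1, the self-referential learning dynamics are stable and the estimate converges towards the rational expectations equilibrium, as in the references cited above.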
It is obvious that the assumption of a correctly specified auxiliary model is a very strong one, because such an assumption presupposes extraordinary a priori knowledge of the agents regarding their economic environment. But eliminating this assumption requires that the auxiliary model of the agents be flexible enough to represent various kinds of possible relationships between the relevant variables. One way to obtain this flexibility is to assume that the auxiliary model of the agents is a neural network. Neural networks might be well suited for this task because they are, as will be shown below, able to approximate a wide class of functions to any desired degree of accuracy. Thus, if the auxiliary model of the agents is given by a neural network, they may be able to learn the shape of rational expectations without requiring knowledge of the specific relationship of the relevant

variables (Heinemann, 2000). As mentioned above, price expectations play an important role in asset markets. The gold market, or more broadly the precious metals market, as part of the property market, is no exception to this rule. Investment in this market is undertaken with different objectives: some gold market investors regard gold as a profitable investment for the future (in other words, an anti-inflationary commodity), some as a safe asset (namely, money), and others as protection against financial crises. Whatever the goal, agents' expectations of future market conditions play an important role. Various studies have examined the precious metals market, and especially the gold market, comparing the forecasting ability of linear and nonlinear models for the prices of this metal. On the whole, neural network models performed better than the alternative models in these studies, which include Baker and van Tassel (1985), Ntungo and Boyd (1998), Agnon et al. (1999), Smith (2002), Parisi et al. (2003), Dunis et al. (2005), Sarfaraz and Afsar (2007), Khashei, Hejazi and Bijari (2008) and Matroushi and Samarasinghe (2009). None of these studies, however, has been conducted within the framework of rational expectations. Using lagged variables for prediction, we ask: can artificial neural network models serve as auxiliary models with which agents approximate the rational expectations function of the gold coin market? In other words, we investigate the gold coin market in the framework of boundedly rational expectations. To achieve this goal, we examine different structures of the neural network. In this paper, we use the MLP neural network model, since it has a high ability to approximate various functions.
Also, the rational expectations function for the gold coin market, considering the economic conditions of Iran, is specified as a function of the lagged gold coin price, the exchange rate, the oil price and the world gold price. The remainder of this paper is organized as follows: the boundedly rational expectations model is reviewed in Section 2. Basic concepts of the MLP neural network model are explained in Section 3. Results and discussion are presented in Section 4, and conclusions are provided in Section 5.

Boundedly rational expectations model

Zenner (1996) gives a survey of boundedly rational learning models. Following his classification, the model considered here is the simplest form of a static model, because the exogenous variables are assumed to be stationary and serially uncorrelated:

p_t = a p_t^e + g(x_t) + ε_t   (1)

Here p_t^e denotes the agents' expectation of the endogenous variable p_t in period t, and p_t denotes the actual value the endogenous variable takes in period t. x_t is a k-dimensional vector of exogenous variables, which can be observed before the expectation p_t^e is formed. It is assumed implicitly that x_t is, for all t, a vector of independent and identically distributed random variables. Additionally, x_t is assumed to be bounded for all t, i.e. x_t takes values only in a set Ω_x ⊂ R^k. ε_t is, like the elements of x_t, for all t an independent and identically distributed random variable that satisfies E[ε_t] = 0, E[ε_t²] = σ_ε² and E(ε_t | x_t) = 0. Like x_t, ε_t is bounded for all t and takes values in a set Ω_ε ⊂ R. Unlike the elements of x_t, ε_t is not observable by the agents, so expectations regarding the endogenous variable cannot be conditioned on this variable. Finally, g(x_t) is a continuous function for all x ∈ Ω_x.
Reduced form (1) may be viewed as a special case of a more general class of nonlinear models, where the value of the endogenous variable p in period t is given by p_t = G(z_t^e, y_t, ε_t), where z_t^e is a vector of expectations regarding future values of the endogenous variable, y_t is a vector of variables

predetermined in period t, ε_t is an unobservable error and G may be any continuous function. Note that y_t may contain exogenous as well as lagged endogenous variables. As shown by Kuan and White (1994a), the methodology used here to analyze learning processes based on the reduced form (1) can also be used to analyze this more general case. But contrary to the simple model (1), the required restrictions are quite abstract and it is not possible to derive conditions for convergence that may be interpreted from an economic viewpoint. A more general reduced form than (1) will be considered below. Given the reduced form (1), and because E(ε_t | x_t) = 0, rational expectations are given by:

p_t^e = E(p_t | x_t) = (1/(1−a)) E(g(x_t) + ε_t | x_t) = g(x_t)/(1−a) = φ(x_t)   (2)

If a ≠ 1, there exists a unique rational expectation of p_t whatever value is taken by the exogenous variables x_t. Accordingly, φ(x_t) denotes the rational expectations function that gives this unique rational expectation of the endogenous variable for all x ∈ Ω_x. It is obvious that the agents may not be able to form rational expectations if they do not know the reduced form of the model, and especially the form of the function g(x_t). The question is whether the agents can learn to form rational expectations using observations of the variables p_{t−1}, p_{t−2}, … and x_{t−1}, x_{t−2}, …. This means that the agents are at least aware of the relevant variables that determine the endogenous variable p_t in the rational expectations equilibrium. It is assumed that the agents have an auxiliary model at their disposal representing the relationship between the exogenous variables and the endogenous variable. As the function g(x_t) in (1) need not be linear in x_t, the rational expectations function φ(x_t) need not be a linear function either.
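The step from reduced form (1) to the expectations function in (2) can be made explicit: taking conditional expectations of (1) given x_t and then imposing rational expectations, p_t^e = E(p_t | x_t), gives

```latex
\mathbb{E}(p_t \mid x_t) = a\,p_t^{e} + g(x_t)
  \qquad \text{since } \mathbb{E}(\varepsilon_t \mid x_t) = 0,
% imposing p_t^e = E(p_t | x_t):
p_t^{e} = a\,p_t^{e} + g(x_t)
  \;\Longrightarrow\;
(1-a)\,p_t^{e} = g(x_t)
  \;\Longrightarrow\;
p_t^{e} = \frac{g(x_t)}{1-a} \equiv \varphi(x_t), \qquad a \neq 1 .
```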
Thus, if one assumes that agents use parametrically specified auxiliary models and have no prior knowledge regarding the functional form of φ(x_t), it is suitable to take an auxiliary model that is flexible enough to approximate various possible functional forms at least sufficiently well. The next subsection establishes that neural networks may be auxiliary models having this desired property (Heinemann, 2000).

Artificial Neural Network

An artificial neural network (ANN) is a suitable prediction paradigm based on the behavior of neurons. It has many applications, especially in learning a linear or nonlinear mapping. As a simple intelligence model, it has several attractive attributes, including learning ability, generalization, parallel processing and robustness to noise (Mohebbi et al., 2007). In terms of structure, an ANN can be categorized as either feed-forward or recurrent. Multilayer feed-forward networks are an important class of neural networks, consisting of an input layer, one or more hidden layers and an output layer, each composed of one or more computation nodes. The perceptron, a mathematical model of neurons in the brain, is a very simple neural net that accepts only binary inputs and outputs. One of the most commonly used feed-forward ANN architectures is the multilayer perceptron (MLP) network. The main advantages of the MLP compared to other architectures are its easy implementation and its ability to approximate any input/output mapping (Menhaj, 1998). In a common three-layer perceptron, the neurons are grouped in sequentially connected layers: the input, hidden and output layers. Each neuron in the hidden and output layers is activated by an activation function applied to the weighted sum of its inputs:

Y = h[ Σ_{j=1}^{m} w_j · g( Σ_{i=1}^{n} w_{ij} · x_i + b_j ) ] + b_0   (3)
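As a concrete illustration, the following sketch computes the forward pass of Eq. (3) for the paper's 4-12-1 topology; the weight values here are random placeholders, not the trained network:

```python
import numpy as np

# Minimal sketch of the three-layer perceptron of Eq. (3):
#   Y = h[ sum_j w_j * g( sum_i w_ij * x_i + b_j ) ] + b_0
rng = np.random.default_rng(1)
n, m = 4, 12                       # n inputs (i = 1..n), m hidden neurons (j = 1..m)

W = rng.standard_normal((m, n))    # w_ij: input -> hidden weights
b = rng.standard_normal(m)         # b_j: hidden biases
w = rng.standard_normal(m)         # w_j: hidden -> output weights
b0 = rng.standard_normal()         # output bias

g = np.tanh                        # hidden activation (a common sigmoid choice)
h = lambda z: z                    # output activation (identity, common for regression)

def mlp(x):
    hidden = g(W @ x + b)          # inner sum of Eq. (3)
    return h(w @ hidden) + b0      # outer sum, then output bias

x = np.array([1.0, 0.5, -0.3, 2.0])  # illustrative inputs: lagged coin price, exchange rate, gold, oil
print(mlp(x))
```

The choice of tanh for g and identity for h is one common configuration; the paper's Figure 1 lists other admissible nonlinearities.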

Here the connection strengths w_j link the hidden neurons to the output Y, and the connection strengths w_ij link the input neurons to the hidden neurons; the weighted hidden-layer outputs are summed and filtered through another activation function h(·). Some commonly used nonlinear activation functions and their equations are depicted in Figure 1.

Figure 1. Some commonly used nonlinearities

Source: NeuroSolutions help, 2010.

Multilayer perceptrons (MLPs) have been applied successfully to solve many difficult problems in a supervised manner with the highly popular algorithm known as back-propagation (BP), based on the error-correction learning rule. BP is a supervised learning algorithm which computes the output error and modifies the weights of the nodes in a backward direction. It is a gradient descent procedure that uses only local information and may be caught in local minima. Additionally, BP is an inherently noisy procedure with a poor estimate of the gradient and, therefore, slow convergence. To speed up and stabilize convergence, a memory term is used in momentum learning. In plain gradient descent learning, each weight is updated as follows:

w_ij(n + 1) = w_ij(n) + η δ_i(n) x_j(n)   (4)

where the local error δ_i(n) can be computed directly from the error at an output PE, or as a weighted sum of the errors at the internal PEs. The constant η is called the step size. In momentum learning, the equation for updating the weights becomes:

w_ij(n + 1) = w_ij(n) + η δ_i(n) x_j(n) + α (w_ij(n) − w_ij(n − 1))   (5)

where α is the momentum, normally set between 0.1 and 0.9 (NeuroSolutions help, 2010). Determining the optimum number of neurons in each hidden layer is a necessity for successful modeling
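The two update rules above can be sketched for a single weight as follows; the numeric values (η = 0.1, α = 0.7, and the sample weight and error values) are purely illustrative:

```python
# Minimal sketch of the weight updates in Eqs. (4) and (5) for one weight
# w_ij; eta is the step size, alpha the momentum term.
def gradient_step(w, delta_i, x_j, eta):
    """Plain gradient-descent update, Eq. (4)."""
    return w + eta * delta_i * x_j

def momentum_step(w, w_prev, delta_i, x_j, eta, alpha):
    """Momentum update, Eq. (5): adds a memory of the last weight change."""
    return w + eta * delta_i * x_j + alpha * (w - w_prev)

w_prev, w = 0.10, 0.12   # illustrative weight values at steps n-1 and n
w_next = momentum_step(w, w_prev, delta_i=0.5, x_j=1.0, eta=0.1, alpha=0.7)
print(w_next)            # 0.12 + 0.1*0.5 + 0.7*(0.12 - 0.10) = 0.184
```

The momentum term α(w_ij(n) − w_ij(n−1)) keeps the weight moving in its recent direction, which damps the gradient noise mentioned above.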

and can be carried out by a trial-and-error procedure based on minimizing the difference between the estimated ANN outputs and the experimental values. In this research, an MLP was engaged to develop a model for predicting the gold coin price from the lagged gold coin price, the exchange rate, the world gold price and the oil price as inputs, as depicted in Figure 2. The data set, with 366 observations collected from the Central Bank of Iran and the Organization of the Petroleum Exporting Countries (OPEC), was applied in this work. The database introduced to the neural network was broken into three groups, as depicted in Table 1. Forty percent, 30% and 30% of the data set were used for training the network (training data), evaluating the prediction quality of the network during training (cross-validation data), and estimating the performance of the trained network on new data never seen by the network (test data), respectively.

Table 1. Data division for the MLP model

Subset            Period                    Observations
Test              05/07/2009 - 22/10/2009   110
Cross-Validation  23/10/2009 - 02/09/2010   110
Training          09/02/2009 - 04/07/2009   146

The training process was carried out for 1,000 epochs. Testing was carried out with the best weights stored during training. The mean squared error (MSE), normalized mean squared error (NMSE), mean absolute error (MAE) and correlation coefficient (R) were calculated using the following equations:

MSE = (1/N) Σ_{i=1}^{N} (T_i − P_i)²   (6)

NMSE = (1/σ²) · (1/N) Σ_{i=1}^{N} (T_i − P_i)²   (7)

MAE = (1/N) Σ_{i=1}^{N} |T_i − P_i|   (8)

R = 1 − Σ_{i=1}^{N} (T_i − P_i)² / Σ_{i=1}^{N} (T_i − T̄)²   (9)

where T_i is the actual value, P_i the network prediction, T̄ the mean of the actual values and σ² their variance. The complexity of the network depends on the number of layers and the number of neurons in each layer. A three-layer neural network was applied for modeling in the current work; to find the best topology, 19 networks with different numbers of hidden neurons, from 2 to 20, were built and evaluated. During training, the momentum value was fixed at 0.7, and the learning rate was set to 1.0 for the hidden layer and 0.1 for the output layer.
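The four measures can be computed as in the sketch below; the target/prediction values are invented for illustration, and Eq. (9) is implemented as reconstructed above (a coefficient-of-determination form), which should be checked against the software's own definition of R:

```python
import numpy as np

# Minimal sketch of the error measures in Eqs. (6)-(9).
# T: actual (target) values, P: network predictions.
def metrics(T, P):
    T, P = np.asarray(T, float), np.asarray(P, float)
    N = len(T)
    mse = np.sum((T - P) ** 2) / N                               # Eq. (6)
    nmse = mse / np.var(T)                                       # Eq. (7): MSE scaled by target variance
    mae = np.sum(np.abs(T - P)) / N                              # Eq. (8)
    r = 1 - np.sum((T - P) ** 2) / np.sum((T - T.mean()) ** 2)   # Eq. (9)
    return mse, nmse, mae, r

T = [100.0, 102.0, 105.0, 103.0]   # illustrative actual values
P = [101.0, 101.5, 104.0, 103.5]   # illustrative predictions
mse, nmse, mae, r = metrics(T, P)
print(mse, nmse, mae, r)           # 0.625, ~0.1923, 0.75, ~0.8077
```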
The training process was carried out for 1,000 epochs, or until the mean squared error (MSE, Eq. 6) on the cross-validation data had not improved for 100 epochs, in order to avoid over-fitting the network. Finally, a sensitivity analysis was carried out to examine the contribution of each input, namely the lagged gold coin price, the exchange rate, the world gold price and the oil price, to the gold coin price. In this study, the ANN models were constructed with NeuroSolutions for Excel software, release 5, produced by NeuroDimension, Inc.

Results and discussion
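The stopping rule just described can be sketched generically as follows; `train_one_epoch` and `cv_mse` are hypothetical stand-ins for the actual network routines (in the toy demonstration the "weights" are simply the epoch index):

```python
# Schematic early stopping: train up to 1,000 epochs, keep the best
# weights seen so far, stop when the cross-validation MSE has not
# improved for 100 consecutive epochs.
def fit(train_one_epoch, cv_mse, max_epochs=1000, patience=100):
    best_mse, best_weights, stale = float("inf"), None, 0
    for _ in range(max_epochs):
        weights = train_one_epoch()
        mse = cv_mse(weights)
        if mse < best_mse:
            best_mse, best_weights, stale = mse, weights, 0
        else:
            stale += 1
            if stale >= patience:          # over-fitting guard
                break
    return best_weights, best_mse

# Toy demonstration: CV error improves up to "epoch" 9, then worsens.
curve = [1.0 / (e + 1) + max(0, e - 5) * 0.01 for e in range(1000)]
state = {"e": -1}
def train_one_epoch():
    state["e"] += 1
    return state["e"]          # the "weights" are just the epoch index
best_w, best_mse = fit(train_one_epoch, lambda w: curve[w])
print(best_w, best_mse)        # best epoch 9, CV MSE ~0.14
```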

The optimal number of neurons in the single hidden layer, found by trial and error, is shown in Table 2. The training results showed the lowest cross-validation MSE when the number of hidden neurons was 12. A 4-12-1 architecture was therefore the best model in terms of MSE, meaning 4, 12 and 1 neurons in the input, hidden and output layers, respectively.

Table 2. Number of the neurons of a single hidden layer network

Number of PEs in hidden layer   Cross-validation error (MSE)
 2    0.000571
 3    0.000764
 4    0.00169
 5    0.0017
 6    0.000688
 7    0.0012
 8    0.000531
 9    0.00298
10    0.000731
11    0.000529
12    0.000481
13    0.000574
14    0.000763
15    0.00107
16    0.000649
17    0.001
18    0.000677
19    0.000582
20    0.000596

The performance of the network was evaluated using the experimental and ANN-estimated values. The ANN performance in terms of mean squared error (MSE), normalized mean squared error (NMSE), mean absolute error (MAE), minimum and maximum absolute error, and the linear correlation coefficient (R) between the coin price and the network output is depicted in Table 3.

Table 3. Performance results of the best architecture

Performance measure   Coin Price
MSE                   3109070.282

NMSE                  0.006073391
MAE                   1237.317535
Min Abs Error         15.19364783
Max Abs Error         7934.333503
R                     0.997044554

Figure 2 shows the actual gold coin values in comparison with the network's estimates for each sample. The results presented in Table 3 reveal that the correlation coefficient between the actual gold coin price and the ANN output is 0.997; the network is therefore able to predict gold coin values close to the actual ones. These results also confirm the approximation and pattern-recognition capabilities of neural networks.

Figure 2. Desired output and actual network output

Figure 3 shows the sensitivity analysis used to extract the cause-and-effect relationship between the network inputs and output. As the figure shows, the world gold price and the oil price have the highest and lowest effect on the network output, respectively.
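A sensitivity-about-the-mean analysis of the kind shown in Figure 3 can be sketched as follows: each input is swept over a band around its mean while the other inputs are held at their means, and the spread of the model output is recorded. `predict` is a hypothetical stand-in for the trained 4-12-1 network, checked here against a known linear function:

```python
import numpy as np

# Minimal sketch of "sensitivity about the mean".
def sensitivity_about_mean(predict, X, steps=50, band=1.0):
    X = np.asarray(X, float)
    mu, sd = X.mean(axis=0), X.std(axis=0)
    sens = []
    for k in range(X.shape[1]):
        probe = np.tile(mu, (steps, 1))   # all inputs fixed at their means
        probe[:, k] = np.linspace(mu[k] - band * sd[k],
                                  mu[k] + band * sd[k], steps)
        y = np.array([predict(row) for row in probe])
        sens.append(y.max() - y.min())    # output range attributable to input k
    return np.array(sens)

# Toy check with a known linear "network": the output reacts three times
# more strongly to the second input than to the first.
rng = np.random.default_rng(2)
X = rng.normal(size=(200, 2))
predict = lambda x: 1.0 * x[0] + 3.0 * x[1]
s = sensitivity_about_mean(predict, X)
print(s)
```

Ranking the resulting sensitivities reproduces the kind of comparison made in Figure 3 for the four inputs.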

Figure 3. Sensitivity analysis between network inputs and output (sensitivity about the mean; inputs: coin price with a lag, exchange rate, gold price, oil price)

Conclusions

In this paper, an artificial neural network model was used as the agents' auxiliary model to examine the gold coin market within the rational expectations framework. Neural networks are well suited for this task because they are able to approximate a wide class of functions to any desired degree of accuracy. Thus, if the auxiliary model of the agents is given by a neural network, agents may be able to learn to form rational expectations in the gold coin market without knowledge of the specific relationship between the relevant variables. As shown in the previous sections, although a perfect approximation of the rational expectations function is not possible, agents can expect convergence towards approximate rational expectations, i.e. expectations that improve on the criteria mentioned above. The results of Figure 3 also show that sensitivity analysis can be a powerful tool to help agents understand the factors affecting the gold coin market.

References

Agnon, Y., Golan, A., and Shearer, M., 1999. Nonparametric, nonlinear, short-term forecasting: theory and evidence for nonlinearities in the commodity markets. Economics Letters 65, 293-299.
Baker, S.A., and van Tassel, R.C., 1985. Forecasting the price of gold: a fundamentalist approach. Atlantic Economic Journal 13, 43-51.
Bray, M., Savin, N., 1986. Rational expectations equilibria, learning and model specification. Econometrica 54, 1129-1160.
Central Bank of Iran (CBI), 2010. http://www.cbi.ir/.
Dunis, C.L., Laws, J. and Evans, B., 2005. Modeling with recurrent and higher order networks: a comparative analysis. Neural Network World 6 (5), 509-523.
Evans, G.W., and Honkapohja, S., 2001.
Learning and expectations in macroeconomics. Princeton, N.J.: Princeton University Press.

Fourgeaud, C., Gourieroux, C., Pradel, J., 1986. Learning procedures and convergence to rationality. Econometrica 54, 845-868.
Heinemann, M., 2000. Adaptive learning of rational expectations using neural networks. Journal of Economic Dynamics & Control 24, 1007-1026.
Khashei, M., Hejazi, S.R., Bijari, M., 2008. A new hybrid artificial neural networks and fuzzy regression model for time series forecasting. Fuzzy Sets and Systems 159 (7), 769-786.
Kuan, C.M., White, H., 1994a. Adaptive learning with nonlinear dynamics driven by dependent processes. Econometrica 62, 1087-1114.
Lucas, R.E., 1973. Some international evidence on output-inflation and unemployment tradeoffs. American Economic Review 63, 326-334.
Marcet, A., Sargent, T., 1989. Convergence of least squares learning mechanisms in self-referential linear stochastic models. Journal of Economic Theory 48, 337-368.
Matroushi, M.S., and Samarasinghe, S., 2009. Building a hybrid neural network model for gold price forecasting. 18th World IMACS / MODSIM Congress, Cairns, Australia. http://mssanz.org.au/modsim09.
Menhaj, M.B., 1998. Fundamentals of neural networks. Tehran: Professor Hesabi.
Mohebbi, M., Akbarzadeh Totonchi, M.R., Shahidi, F. and Poorshehabi, M.R., 2007. Possibility evaluation of machine vision and artificial neural network application to estimate dried shrimp moisture. In: 4th Iranian Conference on Machine Vision, Image Processing and Application, 14-15 February 2007, Mashhad, Iran.
NeuroSolutions help, 2010. Produced by NeuroDimension, Inc. http://www.neurosolutions.com/downloads/documentation.html.
Ntungo, C. and Boyd, M., 1998. Commodity futures trading performance using neural network models versus ARMA models. The Journal of Futures Markets 18, 965-983.
Organization of the Petroleum Exporting Countries (OPEC), 2010. http://www.opec.org/.
Parisi, F., Parisi, A. and Guerrero, J.L.,
2003. Rolling and recursive neural network models: the gold price. Working Paper, Universidad de Chile.
Sarfaraz, L. and Afsar, A., 2007. A study on the factors affecting gold price and a neuro-fuzzy model of forecast. MPRA, online at http://mpra.ub.uni-muenchen.de/2855/.
Sargent, T., 1993. Bounded Rationality in Macroeconomics. Oxford: Oxford University Press.
Smith, G., 2002. Tests of the random walk hypothesis for London gold prices. Applied Economics Letters 9, 671-674.
Zenner, M., 1996. Learning to Become Rational: The Case of Self-Referential and Non-Stationary Models. Berlin: Springer.