Chaotic analysis of predictability versus knowledge discovery techniques: case study of the Polish stock market


Se-Hak Chun,(1) Kyoung-Jae Kim(2) and Steven H. Kim(3)

(1) Hallym University, 1 Okchon-Dong, Chunchon, Kangwon-Do, Korea; shchun@hallym.ac.kr
(2) Kyung-Hee Cyber University, 1 Hoegi-Dong, Dongdaemun-Gu, Seoul, Korea
(3) Sookmyung Women's University, Chungpa-dong 2 Ka, Yongsan-Gu, Seoul, Korea

Abstract: Increasing evidence over the past decade indicates that financial markets exhibit nonlinear dynamics in the form of chaotic behavior. Traditionally, the prediction of stock markets has relied on statistical methods including multivariate statistical methods, autoregressive integrated moving average models and autoregressive conditional heteroskedasticity models. In recent years, neural networks and other knowledge techniques have been applied extensively to the task of predicting financial variables. This paper examines the relationship between chaotic models and learning techniques. In particular, chaotic analysis indicates the upper limits of predictability for a time series. The learning techniques involve neural networks and case-based reasoning. The chaotic models take the form of R/S analysis to measure persistence in a time series, the correlation dimension to encapsulate system complexity, and Lyapunov exponents to indicate predictive horizons. The concepts are illustrated in the context of a major emerging market, namely the Polish stock market.

Keywords: chaotic models, knowledge discovery, backpropagation neural network, case-based reasoning

1. Introduction

Increasing evidence over the past decade indicates that financial markets exhibit chaotic behavior. In general, the geometry of a chaotic process in a suitable state space exhibits a dimension of fractional rather than integer value. Consequently, such a structure is said to exhibit a fractal dimension.
Traditionally, the prediction of stock markets has relied on statistical methods including multivariate statistical methods, autoregressive integrated moving average models and autoregressive conditional heteroskedasticity models. In recent years, neural networks and other knowledge techniques have been applied extensively to the task of predicting financial variables. This paper examines the relationship between chaotic models and learning techniques. In particular, chaotic analysis indicates the upper limits of predictability for a time series. The learning techniques involve neural networks and case-based reasoning. The chaotic models take the form of R/S analysis to measure persistence in a time series, the correlation dimension to encapsulate system complexity, and Lyapunov exponents to indicate predictive horizons. The concepts are illustrated in the context of a major emerging market, namely the Polish stock market.

2. Methodology

The level of chaos in a data stream can be characterized by a number of methods. Two of the most popular parameters are the correlation dimension and the Lyapunov exponent.

2.1. Correlation dimension

The correlation dimension is an estimate of the fractal dimension. More specifically, this metric considers the probability that two points chosen at random will lie within a certain distance of each other, and determines how this probability changes as the distance is increased. As an estimate of the fractal dimension, the correlation dimension is a measure of the complexity of a process (Farmer, 1982; Grassberger & Procaccia, 1983).

264 Expert Systems, November 2002, Vol. 19, No. 5
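As a rough illustration, the correlation dimension can be estimated as the slope of log C(r) against log r, where C(r) is the fraction of pairs of delay vectors lying within distance r of each other. The embedding dimension, delay and radii below are illustrative choices, not values taken from the paper.

```python
# Sketch of a Grassberger-Procaccia style correlation-dimension estimate.
import numpy as np

def correlation_sum(series, m, tau, r):
    """C(r): fraction of delay-vector pairs closer than r (Chebyshev distance)."""
    n = len(series) - (m - 1) * tau
    vectors = np.array([series[i:i + (m - 1) * tau + 1:tau] for i in range(n)])
    count = 0
    for i in range(n - 1):
        dists = np.max(np.abs(vectors[i + 1:] - vectors[i]), axis=1)
        count += np.sum(dists < r)
    return 2.0 * count / (n * (n - 1))

def correlation_dimension(series, m=3, tau=1, radii=(0.05, 0.1, 0.2, 0.4)):
    """Slope of log C(r) versus log r over the chosen radii."""
    pts = []
    for r in radii:
        c = correlation_sum(series, m, tau, r)
        if c > 0:                       # skip radii too small to capture any pairs
            pts.append((np.log(r), np.log(c)))
    x, y = zip(*pts)
    slope, _ = np.polyfit(x, y, 1)
    return slope
```

The radii should be scaled to the spread of the data; for a low-dimensional process the estimated slope saturates near the true fractal dimension as the embedding dimension m grows.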

2.2. Lyapunov exponent

The Lyapunov exponent characterizes the dynamics of a complex process. Each dimension of the process is associated with a Lyapunov exponent. A positive exponent indicates the sensitivity to initial conditions, i.e. how much a forecast diverges based on approximately similar starting conditions. From a slightly different perspective, a Lyapunov exponent indicates the loss of predictive ability as one looks forward in time. The largest positive exponent, λ_max, determines the maximal rate of stretching. For this reason, the value of λ_max is often used to characterize the predictability of a chaotic process. On the other hand, a negative exponent indicates the degree to which points converge toward one another. For instance, a point attractor is characterized by negative values for each exponent (Peters, 1991, 1994; Pesaran & Potter, 1993; Ott et al., 1994).

2.3. Hurst exponent

The Hurst exponent H is a measure of the bias in random motion. For Brownian motion, the Hurst exponent has value 0.5. For a persistent or trend-reinforcing series, 0.5 < H < 1.0. On the other hand, 0 < H < 0.5 for an antipersistent or mean-reverting system. The calculation of the Hurst exponent involves a preliminary step known as rescaled range (R/S) analysis. The procedure for computing the Hurst exponent is presented in detail in Figure 1.

Figure 1: Calculation of the Hurst exponent through rescaled range (R/S) analysis.

In Step 2, a starting value for n of no less than 10 is recommended (Peters, 1994, p. 63).

Figure 2: Predictive procedure through case reasoning using composite neighbors.

2.4. Learning techniques

The learning techniques employed in this study relate to neural networks and case-based reasoning. Neural nets have been used extensively over the past decade for predicting financial markets.
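The R/S procedure behind the Hurst exponent of Section 2.3 can be sketched as follows: compute the rescaled range over subperiods of increasing length n and regress log(R/S) on log(n). This is a minimal sketch of the standard procedure, not the paper's exact implementation.

```python
# Minimal rescaled-range (R/S) estimate of the Hurst exponent.
import numpy as np

def rescaled_range(x):
    """R/S statistic for one subperiod: range of cumulative deviations
    from the mean, rescaled by the standard deviation."""
    dev = np.cumsum(x - x.mean())
    return (dev.max() - dev.min()) / x.std()

def hurst_exponent(series, min_n=10):
    """Slope of log(R/S) versus log(n) over doubling subperiod lengths;
    Peters (1994) recommends a starting value of n no less than 10."""
    series = np.asarray(series, dtype=float)
    log_n, log_rs = [], []
    n = min_n
    while n <= len(series) // 2:
        chunks = [series[i:i + n] for i in range(0, len(series) - n + 1, n)]
        log_n.append(np.log(n))
        log_rs.append(np.log(np.mean([rescaled_range(c) for c in chunks])))
        n *= 2
    slope, _ = np.polyfit(log_n, log_rs, 1)
    return slope
```

For uncorrelated noise the estimate is near 0.5; persistent series push it above 0.5 and mean-reverting series below.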
The application of case-based reasoning to forecasting, however, is an area with little prior history.

2.4.1. Neural network

The most common neural network methodology employs the backpropagation (BPN) algorithm. This approach involves a layered, feedforward network structure with fully interconnected nodes from one layer to the next. The learning technique involves the backward propagation of errors to aid in updating internode weights. As a model of biological systems, artificial neural networks have learning capabilities which can be applied to the task of prediction.

Unfortunately, BPN models suffer from protracted training periods. Thousands of trials are usually required for satisfactory performance in various tasks. The time and effort required for training have hindered their widespread application to practical domains. Fortunately, certain other learning techniques such as probabilistic neural networks and case-based reasoning offer much swifter response.

2.4.2. Case-based reasoning and composite neighbors

Conventional methods of prediction based on discrete logic usually seek the single best instance, or a weighted combination of a small number of neighbors in the observational space. For instance, the rule of thumb in case-based reasoning is to seek the nearest neighbor to a target case. In an analogous way, certain algorithms in neural networks seek a fixed number of the closest neighbors; this approach is illustrated by the use of self-organizing maps for pattern recognition tasks (Kohonen, 1984).

An intelligent learning algorithm should therefore take account of a virtual or composite neighbor whose parameters are defined by some weighted combination of actual neighbors in the case base. In this way, the algorithm can utilize the knowledge reflected in a larger subset of the case base than the immediate collection of proximal neighbors. The procedure for case reasoning using composite neighbors is presented in Figure 2.

The key to the composite approach lies in the determination of the most effective set of weights to use in order to construct the virtual neighbor. Learning the optimal set of weights is the primary challenge, and the particular values of the weights may well evolve over time as the experience base expands.
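As an illustrative sketch (not the paper's tuned procedure), a composite neighbor can be formed as a distance-weighted blend of the k nearest cases. The inverse-distance weights below merely stand in for the learned weights discussed above.

```python
# Forecasting with a composite (virtual) neighbor: the prediction is a
# weighted blend of the outcomes of the k nearest historical cases.
import numpy as np

def composite_neighbor_forecast(cases, outcomes, target, k=5, eps=1e-9):
    """cases: (N, d) past feature vectors; outcomes: (N,) realized values;
    target: (d,) current feature vector. Returns the virtual neighbor's outcome."""
    cases = np.asarray(cases, dtype=float)
    outcomes = np.asarray(outcomes, dtype=float)
    dists = np.linalg.norm(cases - target, axis=1)
    nearest = np.argsort(dists)[:k]            # indices of the k closest cases
    weights = 1.0 / (dists[nearest] + eps)     # assumption: closer cases weigh more
    weights /= weights.sum()
    return float(weights @ outcomes[nearest])
```

In the paper's framing the weights themselves are the object of learning (e.g. by simulated annealing), so the fixed inverse-distance rule here is only a starting point.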
A promising way to address this task lies in simulated annealing: the weights for constructing the composite neighbor may be perturbed randomly and the advantageous trends pursued, as in the quest for effective parameters in a neural network algorithm.

3. Application to stock market data

3.1. The data

The case study examined the relationship between chaotic models and learning techniques. The application involved the prediction of the Polish stock price index, for which the input variables were as follows.

- Stock price index (SI): Polish stock price index with a baseline of 100 for 1994
- Total return index (RI): cumulative stock index return, including dividends
- Dividend yield (DY): dividend yield of the Polish stock market
- Turnover volume (VO): trading volume of the Polish stock market
- Price/earnings ratio (PE): price/earnings ratio for the Polish stock price index
- Interest rate (INT): interest rate for Poland

The learning phase consisted of 569 observations from 1 March 1994 to 30 April 1996, while the testing phase consisted of 100 observations from 1 May 1996 to 15 September 1996.

3.2. Model construction

An exploratory plot of the Polish stock price index is presented in Figure 3. During the period of the study, the stock market first declined over the course of a year, and then began to recover slowly after March 1995. Other exploratory plots for the raw data series are shown in Figures 4-8. Figure 4 depicts the trajectory of the total return index for the stock market. Figure 5 plots the interest rate for Poland, while Figure 6 displays the turnover by volume for the exchange. Figure 7 depicts the price/earnings ratio for the Polish stock price index. Figure 8 plots the time series of the dividend yield for the bourse.

For the BPN architectures, the transfer function took the form of a sigmoidal curve. The learning rate and momentum were both set to 0.1. The training proceeded until there was no perceptible change in the error (at a threshold of 10^-8).
The network architecture was determined in part by the domain variables. Since six variables were selected as input data streams to provide a multivariate forecast for the stock index, the architecture would be 6*h*1, where h denotes the number of neurons in the hidden layer.(1) In this paper, the value of 4 was chosen for h.

In constructing the predictive model for Poland, the input variables were first transformed. The modifications involved a logarithmic transformation (L), a differencing operation (D) and a standardization operation (Z) as appropriate. The correlation coefficients between the transformed variables are shown in Table 1 along with their significance levels. The input variables for predicting the Polish stock market were as follows:

(1) A common difficulty in applying neural networks lies in overfitting the data. A rule of thumb in the field of statistical modeling specifies that, for a case base of N observations, the degrees of freedom in the model should not exceed N^0.5. For the Polish model using neural networks, the training set contained 569 observations; consequently an upper bound would be about 24 weights, each corresponding to a degree of freedom in the neural network to be trained. The neural network model with a 6*4*1 configuration (corresponding to 6*4 + 4*1 = 28 weights) was selected and evaluated.
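The L, D and Z operations can be sketched as below; the composition ZDL mirrors inputs such as ZDLSI. The function names are illustrative.

```python
# The L (log), D (difference) and Z (standardize) transformations used to
# build inputs such as ZDLSI from a raw series.
import numpy as np

def log_t(x):
    """L: logarithmic transformation."""
    return np.log(x)

def diff_t(x):
    """D: difference over the previous period (drops one observation)."""
    return np.diff(x)

def standardize(x):
    """Z: rescale to zero mean and unit variance."""
    return (x - x.mean()) / x.std()

def zdl(x):
    """ZDL: standardized difference of the logarithm, e.g. ZDLSI for the
    stock price index."""
    return standardize(diff_t(log_t(np.asarray(x, dtype=float))))
```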

Figure 3: Polish stock price index (SI).

Figure 4: Total return index (RI) for the Polish stock price index.

Figure 5: Polish interest rate (INT).

Figure 6: Turnover by volume (VO) for the Polish exchange.

Figure 7: Price/earnings ratio (PE) for the Polish stock price index.

Figure 8: Dividend yield (DY) for the Polish stock price index.

Table 1: Kendall correlation coefficients with their significance levels in parentheses for the transformed input variables for the Polish stock market

        ZDY      ZPE      ZDLRI    ZDLSI    ZDLVO
ZPE     (0.000)
ZDLRI   (0.000)  (0.000)
ZDLSI   (0.000)  (0.000)  (0.000)
ZDLVO   (0.000)  (0.290)  (0.000)  (0.000)
ZINT    (0.000)  (0.000)  (0.000)  (0.000)  (0.892)

The notation LA denotes the logarithm of variable A, while ZB denotes the standardized version of variable B. Moreover, the notation DC denotes the difference of variable C over the previous period. For instance, ZDLSI is the standardized form of the difference of the logarithm of variable SI, while ZDY is the standardized form of variable DY.

- ZDLSI: the standardized form of the difference of the logarithm of the stock price index
- ZDLVO: the standardized form of the difference of the logarithm of the trading volume
- ZDLRI: the standardized form of the difference of the logarithm of the cumulative stock index return
- ZDY: the standardized form of the dividend yield
- ZPE: the standardized form of the price/earnings ratio
- ZINT: the standardized form of the interest rate

4. Results of study

A rescaled range (R/S) analysis for the raw Polish index is shown in Figure 9. An estimate of the Hurst exponent is given by the slope of the dotted line in the figure. The slope for Poland tends to be higher than 0.5; hence the data series is persistent. However, the estimate of the slope varies as a function of the predictive horizon T. For the observations in Figure 9, a series of incremental estimates was obtained for the Hurst exponent. Table 2 presents the estimates of H as a function of the predictive horizon T_i. The estimates begin at H(T_1) and tend to decline as the horizon T_i increases.

A Hurst exponent larger than 0.5 indicates persistence, while the converse indicates reversion. Hence the deviation from 0.5 is a measure of predictability:

G(T_i) = H(T_i) - 0.5

The predictability G(T_i) is also shown in Table 2.
Although the pattern is not clear-cut, the predictability tends to fall as a function of the horizon T_i.

Table 3 presents the rates of change in the value of R/S as a function of the predictive horizon, as well as corresponding changes in accuracy due to forecasts from the knowledge systems. The change in accuracy was traced by the mean absolute percentage error (MAPE) for each predictive technique, namely BPN and case-based reasoning. As expected, the estimated values of the slope of R/S tended to fall with T_i, while the errors in both predictive techniques rose.

The correlation coefficients between the estimated Hurst exponent and the forecast error are presented in Table 4. The correlation with BPN is weakly significant, while that with case-based reasoning is indeterminate. The next table lists the correlations between the predictability G and the forecast errors. As anticipated, a rise

Table 2: The error MAPE_i and incremental Hurst exponent H(T_i) as a function of the horizon T_i for the ith observation (columns: i, T_i, log T_i, log(R/S), H(T_i), G(T_i)). The incremental Hurst exponent is calculated as H(T_i) = [log(R/S)_i - log(R/S)_{i-1}] / (log T_i - log T_{i-1}).

Figure 9: R/S analysis to compute the Hurst exponent versus a random walk (H_random = 0.5) as a function of predictive horizon T.
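The incremental estimates in Table 2 follow directly from the formulas above; a minimal sketch with illustrative inputs:

```python
# Incremental Hurst exponent and predictability measure, as defined in the text.
import numpy as np

def incremental_hurst(log_T, log_RS):
    """H(T_i) = [log(R/S)_i - log(R/S)_{i-1}] / (log T_i - log T_{i-1})."""
    return np.diff(log_RS) / np.diff(log_T)

def predictability(H):
    """G(T_i) = H(T_i) - 0.5; the deviation from the random-walk value 0.5."""
    return H - 0.5
```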

in G was correlated with a fall in error for both BPN and case-based reasoning. However, the pattern is not statistically significant.

Table 6 exhibits the correlation coefficients between the incremental slope of log(R/S) and the corresponding forecast errors at various horizons T_i. For BPN, the fall in error with a rise in log(R/S) is statistically significant, but the pattern for case-based reasoning is inconclusive.

Figure 10 compares the performance of the two learning techniques as a function of the horizon T. As expected, the errors rose with an increase in T. The neural and case reasoning methods performed approximately equally up to a period of T = 16, after which the errors due to BPN rose faster than those in case-based reasoning. The shortest horizon for which the differential performance appears to be substantive was T = 20 days. A pairwise t test shown in Table 7 indicates that the difference was statistically significant. Table 8 presents a comparative set of nonlinear characteristics for the stock indexes of several emerging markets.

Table 3: The slopes of MAPE_i and of log(R/S)_i as a function of the horizon T_i (columns: i, T_i, log(R/S)_i, slog(R/S)_i, MAPE and sMAPE for BPN, MAPE and sMAPE for CBR). The slope of MAPE_i was calculated as sMAPE(T_i) = (MAPE_i - MAPE_{i-1}) / (T_i - T_{i-1}), while that of log(R/S)_i was slog(R/S)_i = [log(R/S)_i - log(R/S)_{i-1}] / (T_i - T_{i-1}). CBR, case-based reasoning.

Table 4: Correlation coefficients between the estimated Hurst exponent and forecast error (MAPE) for each technique (BPN, CBR).

Table 5: Correlation coefficients between the predictability G and forecast error (MAPE) for each technique (BPN, CBR). Here, G = H - 0.5.

Table 6: Correlation coefficients between the slope of log(R/S) and the slope of the forecast error (MAPE) for each technique (BPN, CBR).

Figure 10: Error as a function of the horizon T for each learning method.
The Hurst exponent is highest for the Philippines, but its correlation dimension and Lyapunov exponent are only moderate. With the highest correlation dimension of 5.358, Singapore's stock market appears to be the most complex. The market exhibits weakly reverting behavior, as indicated

by a Hurst exponent below 0.5. Moreover, it has the highest Lyapunov exponent, indicating a rapid loss of information over time. Poland has a relatively complex market. However, it has the second largest Hurst exponent, thereby indicating a moderate level of predictability. Moreover, Poland has the smallest Lyapunov exponent: the inverse value 1/0.053 = 18.9 indicates that the predictive horizon for Poland is about 19 days.

Table 7: Pairwise t tests of the predictive models for predictive horizon T = 20. The comparison is based on the MAPE of the residuals. The 95% CI of the paired difference was (2.831, 3.808).

Table 8: Hurst exponents, correlation dimension and Lyapunov exponents for each stock market (Singapore, Malaysia, Philippines, China, Hong Kong, Poland, Korea).

5. Concluding remarks and future work

In this paper we investigated the predictability of a complex process based on chaotic analysis and knowledge-based tools. Case-based reasoning outperformed neural nets, but the performance of both techniques declined with an increase in predictive horizon, as expected from informal considerations as well as chaotic analyses.

The literature contains numerous varieties of neural networks as well as other learning techniques such as induction and genetic algorithms. The systematic evaluation of a larger collection of learning techniques represents a rich area for future investigation.

Acknowledgements

This research was supported by the Hallym Academy of Science at Hallym University, Korea.

References

Farmer, J.D. (1982) Chaotic attractors of an infinite-dimensional dynamical system, Physica, 4D (3).
Grassberger, P. and I. Procaccia (1983) Measuring the strangeness of strange attractors, Physica, 9D.
Kohonen, T.
(1984) Self-Organization and Associative Memory, New York: Springer.
Ott, E., T. Sauer and J.A. Yorke (1994) Coping with Chaos, New York: Wiley.
Pesaran, M.H. and S.M. Potter (1993) Nonlinear Dynamics, Chaos and Econometrics, New York: Wiley.
Peters, E.E. (1991) Chaos and Order in the Capital Markets, New York: Wiley.
Peters, E.E. (1994) Fractal Market Analysis, New York: Wiley.

The authors

Se-Hak Chun

Se-Hak Chun is a Professor at Hallym University. His research interests include financial forecasting, complexity theory, data mining and electronic commerce. He received a BS degree in economics at Korea University (1994), an ME degree in management engineering at the Korea Advanced Institute of Science and Technology (1997) and a PhD degree in management engineering at the Korea Advanced Institute of Science and Technology (2002).

Kyoung-Jae Kim

Kyoung-Jae Kim is a full-time lecturer in the Department of Management Information Systems at Kyung Hee Cyber University. He received his MS and PhD degrees in management information systems from the Graduate School of Management of the Korea Advanced Institute of Science and Technology and his BA degree from Chung-Ang University. He has published in Applied Intelligence, Expert Systems, Expert Systems with Applications, Intelligent Data Analysis, etc. His research interests include data mining, knowledge management and intelligent agents.

Steven Kim

Steven Kim is Distinguished Professor at Sookmyung Women's University. His research interests include knowledge discovery, online multimedia and technological innovation. From 1995 to February 2001, he taught at the Graduate School of Management at the Korea Advanced Institute of Science and Technology. Between 1993 and 1995, he was President of Lightwell Inc., a company dedicated to high-performance devices for intelligent automation and photonic computing. Prior to a one-year position as a Visiting Fellow at Cornell University, he served as an Assistant Professor of Mechanical Engineering at the Massachusetts Institute of Technology beginning in 1985. In 1989 he received the Presidential Young Investigator award from the US National Science Foundation. He earned a BS degree in mechanical engineering at Columbia University (1977) and a PhD in management and mechanical engineering at MIT (1985). In between, he received master's degrees in mechanical engineering (1978), operations research (1980), and management (1983) at the two universities. Dr Kim has written over 100 technical articles.


More information

CSC 578 Neural Networks and Deep Learning

CSC 578 Neural Networks and Deep Learning CSC 578 Neural Networks and Deep Learning Fall 2018/19 3. Improving Neural Networks (Some figures adapted from NNDL book) 1 Various Approaches to Improve Neural Networks 1. Cost functions Quadratic Cross

More information

1 Random walks and data

1 Random walks and data Inference, Models and Simulation for Complex Systems CSCI 7-1 Lecture 7 15 September 11 Prof. Aaron Clauset 1 Random walks and data Supposeyou have some time-series data x 1,x,x 3,...,x T and you want

More information

Application of Physics Model in prediction of the Hellas Euro election results

Application of Physics Model in prediction of the Hellas Euro election results Journal of Engineering Science and Technology Review 2 (1) (2009) 104-111 Research Article JOURNAL OF Engineering Science and Technology Review www.jestr.org Application of Physics Model in prediction

More information

APPLICATION OF RADIAL BASIS FUNCTION NEURAL NETWORK, TO ESTIMATE THE STATE OF HEALTH FOR LFP BATTERY

APPLICATION OF RADIAL BASIS FUNCTION NEURAL NETWORK, TO ESTIMATE THE STATE OF HEALTH FOR LFP BATTERY International Journal of Electrical and Electronics Engineering (IJEEE) ISSN(P): 2278-9944; ISSN(E): 2278-9952 Vol. 7, Issue 1, Dec - Jan 2018, 1-6 IASET APPLICATION OF RADIAL BASIS FUNCTION NEURAL NETWORK,

More information

EPL442: Computational

EPL442: Computational EPL442: Computational Learning Systems Lab 2 Vassilis Vassiliades Department of Computer Science University of Cyprus Outline Artificial Neuron Feedforward Neural Network Back-propagation Algorithm Notes

More information

COGS Q250 Fall Homework 7: Learning in Neural Networks Due: 9:00am, Friday 2nd November.

COGS Q250 Fall Homework 7: Learning in Neural Networks Due: 9:00am, Friday 2nd November. COGS Q250 Fall 2012 Homework 7: Learning in Neural Networks Due: 9:00am, Friday 2nd November. For the first two questions of the homework you will need to understand the learning algorithm using the delta

More information

Selection of the Appropriate Lag Structure of Foreign Exchange Rates Forecasting Based on Autocorrelation Coefficient

Selection of the Appropriate Lag Structure of Foreign Exchange Rates Forecasting Based on Autocorrelation Coefficient Selection of the Appropriate Lag Structure of Foreign Exchange Rates Forecasting Based on Autocorrelation Coefficient Wei Huang 1,2, Shouyang Wang 2, Hui Zhang 3,4, and Renbin Xiao 1 1 School of Management,

More information

International Journal of Emerging Technologies in Computational and Applied Sciences (IJETCAS)

International Journal of Emerging Technologies in Computational and Applied Sciences (IJETCAS) International Association of Scientific Innovation and Research (IASIR) (An Association Unifying the Sciences, Engineering, and Applied Research) International Journal of Emerging Technologies in Computational

More information

Neural Networks DWML, /25

Neural Networks DWML, /25 DWML, 2007 /25 Neural networks: Biological and artificial Consider humans: Neuron switching time 0.00 second Number of neurons 0 0 Connections per neuron 0 4-0 5 Scene recognition time 0. sec 00 inference

More information

Introduction to Artificial Neural Networks

Introduction to Artificial Neural Networks Facultés Universitaires Notre-Dame de la Paix 27 March 2007 Outline 1 Introduction 2 Fundamentals Biological neuron Artificial neuron Artificial Neural Network Outline 3 Single-layer ANN Perceptron Adaline

More information

Reading, UK 1 2 Abstract

Reading, UK 1 2 Abstract , pp.45-54 http://dx.doi.org/10.14257/ijseia.2013.7.5.05 A Case Study on the Application of Computational Intelligence to Identifying Relationships between Land use Characteristics and Damages caused by

More information

arxiv: v1 [q-fin.st] 1 Feb 2016

arxiv: v1 [q-fin.st] 1 Feb 2016 How to improve accuracy for DFA technique Alessandro Stringhi, Silvia Figini Department of Physics, University of Pavia, Italy Department of Statistics and Applied Economics, University of Pavia, Italy

More information

22c145-Fall 01: Neural Networks. Neural Networks. Readings: Chapter 19 of Russell & Norvig. Cesare Tinelli 1

22c145-Fall 01: Neural Networks. Neural Networks. Readings: Chapter 19 of Russell & Norvig. Cesare Tinelli 1 Neural Networks Readings: Chapter 19 of Russell & Norvig. Cesare Tinelli 1 Brains as Computational Devices Brains advantages with respect to digital computers: Massively parallel Fault-tolerant Reliable

More information

A Wavelet Neural Network Forecasting Model Based On ARIMA

A Wavelet Neural Network Forecasting Model Based On ARIMA A Wavelet Neural Network Forecasting Model Based On ARIMA Wang Bin*, Hao Wen-ning, Chen Gang, He Deng-chao, Feng Bo PLA University of Science &Technology Nanjing 210007, China e-mail:lgdwangbin@163.com

More information

Artificial Intelligence

Artificial Intelligence Artificial Intelligence Jeff Clune Assistant Professor Evolving Artificial Intelligence Laboratory Announcements Be making progress on your projects! Three Types of Learning Unsupervised Supervised Reinforcement

More information

Serious limitations of (single-layer) perceptrons: Cannot learn non-linearly separable tasks. Cannot approximate (learn) non-linear functions

Serious limitations of (single-layer) perceptrons: Cannot learn non-linearly separable tasks. Cannot approximate (learn) non-linear functions BACK-PROPAGATION NETWORKS Serious limitations of (single-layer) perceptrons: Cannot learn non-linearly separable tasks Cannot approximate (learn) non-linear functions Difficult (if not impossible) to design

More information

Lecture 7 Artificial neural networks: Supervised learning

Lecture 7 Artificial neural networks: Supervised learning Lecture 7 Artificial neural networks: Supervised learning Introduction, or how the brain works The neuron as a simple computing element The perceptron Multilayer neural networks Accelerated learning in

More information

Application of Artificial Neural Networks in Evaluation and Identification of Electrical Loss in Transformers According to the Energy Consumption

Application of Artificial Neural Networks in Evaluation and Identification of Electrical Loss in Transformers According to the Energy Consumption Application of Artificial Neural Networks in Evaluation and Identification of Electrical Loss in Transformers According to the Energy Consumption ANDRÉ NUNES DE SOUZA, JOSÉ ALFREDO C. ULSON, IVAN NUNES

More information

Effect of number of hidden neurons on learning in large-scale layered neural networks

Effect of number of hidden neurons on learning in large-scale layered neural networks ICROS-SICE International Joint Conference 009 August 18-1, 009, Fukuoka International Congress Center, Japan Effect of on learning in large-scale layered neural networks Katsunari Shibata (Oita Univ.;

More information

Forecasting River Flow in the USA: A Comparison between Auto-Regression and Neural Network Non-Parametric Models

Forecasting River Flow in the USA: A Comparison between Auto-Regression and Neural Network Non-Parametric Models Journal of Computer Science 2 (10): 775-780, 2006 ISSN 1549-3644 2006 Science Publications Forecasting River Flow in the USA: A Comparison between Auto-Regression and Neural Network Non-Parametric Models

More information

Comparison Forecasting with Double Exponential Smoothing and Artificial Neural Network to Predict the Price of Sugar

Comparison Forecasting with Double Exponential Smoothing and Artificial Neural Network to Predict the Price of Sugar Comparison Forecasting with Double Exponential Smoothing and Artificial Neural Network to Predict the Price of Sugar Fauziah Nasir Fauziah *, Aris Gunaryati Universitas Nasional Sawo Manila, South Jakarta.

More information

Using SDM to Train Neural Networks for Solving Modal Sensitivity Problems

Using SDM to Train Neural Networks for Solving Modal Sensitivity Problems Using SDM to Train Neural Networks for Solving Modal Sensitivity Problems Brian J. Schwarz, Patrick L. McHargue, & Mark H. Richardson Vibrant Technology, Inc. 18141 Main Street Jamestown, California 95327

More information

A New Look at Nonlinear Time Series Prediction with NARX Recurrent Neural Network. José Maria P. Menezes Jr. and Guilherme A.

A New Look at Nonlinear Time Series Prediction with NARX Recurrent Neural Network. José Maria P. Menezes Jr. and Guilherme A. A New Look at Nonlinear Time Series Prediction with NARX Recurrent Neural Network José Maria P. Menezes Jr. and Guilherme A. Barreto Department of Teleinformatics Engineering Federal University of Ceará,

More information

Temperature Prediction based on Artificial Neural Network and its Impact on Rice Production, Case Study: Bangladesh

Temperature Prediction based on Artificial Neural Network and its Impact on Rice Production, Case Study: Bangladesh erature Prediction based on Artificial Neural Network and its Impact on Rice Production, Case Study: Bangladesh Tushar Kanti Routh Lecturer, Department of Electronics & Telecommunication Engineering, South

More information

Abstract. 1 Introduction

Abstract. 1 Introduction Time Series Analysis: Mandelbrot Theory at work in Economics M. F. Guiducci and M. I. Loflredo Dipartimento di Matematica, Universita di Siena, Siena, Italy Abstract The consequences of the Gaussian hypothesis,

More information

FORECASTING SAVING DEPOSIT IN MALAYSIAN ISLAMIC BANKING: COMPARISON BETWEEN ARTIFICIAL NEURAL NETWORK AND ARIMA

FORECASTING SAVING DEPOSIT IN MALAYSIAN ISLAMIC BANKING: COMPARISON BETWEEN ARTIFICIAL NEURAL NETWORK AND ARIMA Jurnal Ekonomi dan Studi Pembangunan Volume 8, Nomor 2, Oktober 2007: 154-161 FORECASTING SAVING DEPOSIT IN MALAYSIAN ISLAMIC BANKING: COMPARISON BETWEEN ARTIFICIAL NEURAL NETWORK AND ARIMA Raditya Sukmana

More information

Artificial Neural Networks" and Nonparametric Methods" CMPSCI 383 Nov 17, 2011!

Artificial Neural Networks and Nonparametric Methods CMPSCI 383 Nov 17, 2011! Artificial Neural Networks" and Nonparametric Methods" CMPSCI 383 Nov 17, 2011! 1 Todayʼs lecture" How the brain works (!)! Artificial neural networks! Perceptrons! Multilayer feed-forward networks! Error

More information

Application of Chaotic Number Generators in Econophysics

Application of Chaotic Number Generators in Econophysics 1 Application of Chaotic Number Generators in Econophysics Carmen Pellicer-Lostao 1, Ricardo López-Ruiz 2 Department of Computer Science and BIFI, Universidad de Zaragoza, 50009 - Zaragoza, Spain. e-mail

More information

Analysis of Interest Rate Curves Clustering Using Self-Organising Maps

Analysis of Interest Rate Curves Clustering Using Self-Organising Maps Analysis of Interest Rate Curves Clustering Using Self-Organising Maps M. Kanevski (1), V. Timonin (1), A. Pozdnoukhov(1), M. Maignan (1,2) (1) Institute of Geomatics and Analysis of Risk (IGAR), University

More information

Artifical Neural Networks

Artifical Neural Networks Neural Networks Artifical Neural Networks Neural Networks Biological Neural Networks.................................. Artificial Neural Networks................................... 3 ANN Structure...........................................

More information

CMSC 421: Neural Computation. Applications of Neural Networks

CMSC 421: Neural Computation. Applications of Neural Networks CMSC 42: Neural Computation definition synonyms neural networks artificial neural networks neural modeling connectionist models parallel distributed processing AI perspective Applications of Neural Networks

More information

Time Series Models for Measuring Market Risk

Time Series Models for Measuring Market Risk Time Series Models for Measuring Market Risk José Miguel Hernández Lobato Universidad Autónoma de Madrid, Computer Science Department June 28, 2007 1/ 32 Outline 1 Introduction 2 Competitive and collaborative

More information

Comparison of Predictive Accuracy of Neural Network Methods and Cox Regression for Censored Survival Data

Comparison of Predictive Accuracy of Neural Network Methods and Cox Regression for Censored Survival Data Comparison of Predictive Accuracy of Neural Network Methods and Cox Regression for Censored Survival Data Stanley Azen Ph.D. 1, Annie Xiang Ph.D. 1, Pablo Lapuerta, M.D. 1, Alex Ryutov MS 2, Jonathan Buckley

More information

Learning Cellular Automaton Dynamics with Neural Networks

Learning Cellular Automaton Dynamics with Neural Networks Learning Cellular Automaton Dynamics with Neural Networks N H Wulff* and J A Hertz t CONNECT, the Niels Bohr Institute and Nordita Blegdamsvej 17, DK-2100 Copenhagen 0, Denmark Abstract We have trained

More information

Multifractal Analysis and Local Hoelder Exponents Approach to Detecting Stock Markets Crashes

Multifractal Analysis and Local Hoelder Exponents Approach to Detecting Stock Markets Crashes Multifractal Analysis and Local Hoelder Exponents Approach to Detecting Stock Markets Crashes I. A. Agaev 1, Yu. A. Kuperin 2 1 Division of Computational Physics, Saint-Petersburg State University 198504,Ulyanovskaya

More information

Analysis of Fast Input Selection: Application in Time Series Prediction

Analysis of Fast Input Selection: Application in Time Series Prediction Analysis of Fast Input Selection: Application in Time Series Prediction Jarkko Tikka, Amaury Lendasse, and Jaakko Hollmén Helsinki University of Technology, Laboratory of Computer and Information Science,

More information

ABOUT UNIVERSAL BASINS OF ATTRACTION IN HIGH-DIMENSIONAL SYSTEMS

ABOUT UNIVERSAL BASINS OF ATTRACTION IN HIGH-DIMENSIONAL SYSTEMS International Journal of Bifurcation and Chaos, Vol. 23, No. 12 (2013) 1350197 (7 pages) c World Scientific Publishing Company DOI: 10.1142/S0218127413501976 ABOUT UNIVERSAL BASINS OF ATTRACTION IN HIGH-DIMENSIONAL

More information

epochs epochs

epochs epochs Neural Network Experiments To illustrate practical techniques, I chose to use the glass dataset. This dataset has 214 examples and 6 classes. Here are 4 examples from the original dataset. The last values

More information

Cascade Neural Networks with Node-Decoupled Extended Kalman Filtering

Cascade Neural Networks with Node-Decoupled Extended Kalman Filtering Cascade Neural Networks with Node-Decoupled Extended Kalman Filtering Michael C. Nechyba and Yangsheng Xu The Robotics Institute Carnegie Mellon University Pittsburgh, PA 15213 Abstract Most neural networks

More information

Financial Risk and Returns Prediction with Modular Networked Learning

Financial Risk and Returns Prediction with Modular Networked Learning arxiv:1806.05876v1 [cs.lg] 15 Jun 2018 Financial Risk and Returns Prediction with Modular Networked Learning Carlos Pedro Gonçalves June 18, 2018 University of Lisbon, Instituto Superior de Ciências Sociais

More information

( t) Identification and Control of a Nonlinear Bioreactor Plant Using Classical and Dynamical Neural Networks

( t) Identification and Control of a Nonlinear Bioreactor Plant Using Classical and Dynamical Neural Networks Identification and Control of a Nonlinear Bioreactor Plant Using Classical and Dynamical Neural Networks Mehmet Önder Efe Electrical and Electronics Engineering Boðaziçi University, Bebek 80815, Istanbul,

More information

Machine Learning. Neural Networks. (slides from Domingos, Pardo, others)

Machine Learning. Neural Networks. (slides from Domingos, Pardo, others) Machine Learning Neural Networks (slides from Domingos, Pardo, others) Human Brain Neurons Input-Output Transformation Input Spikes Output Spike Spike (= a brief pulse) (Excitatory Post-Synaptic Potential)

More information

Short Term Load Forecasting Based Artificial Neural Network

Short Term Load Forecasting Based Artificial Neural Network Short Term Load Forecasting Based Artificial Neural Network Dr. Adel M. Dakhil Department of Electrical Engineering Misan University Iraq- Misan Dr.adelmanaa@gmail.com Abstract Present study develops short

More information

Effects of Interactive Function Forms in a Self-Organized Critical Model Based on Neural Networks

Effects of Interactive Function Forms in a Self-Organized Critical Model Based on Neural Networks Commun. Theor. Phys. (Beijing, China) 40 (2003) pp. 607 613 c International Academic Publishers Vol. 40, No. 5, November 15, 2003 Effects of Interactive Function Forms in a Self-Organized Critical Model

More information

ARTIFICIAL NEURAL NETWORKS گروه مطالعاتي 17 بهار 92

ARTIFICIAL NEURAL NETWORKS گروه مطالعاتي 17 بهار 92 ARTIFICIAL NEURAL NETWORKS گروه مطالعاتي 17 بهار 92 BIOLOGICAL INSPIRATIONS Some numbers The human brain contains about 10 billion nerve cells (neurons) Each neuron is connected to the others through 10000

More information

Forecasting Chaotic time series by a Neural Network

Forecasting Chaotic time series by a Neural Network Forecasting Chaotic time series by a Neural Network Dr. ATSALAKIS George Technical University of Crete, Greece atsalak@otenet.gr Dr. SKIADAS Christos Technical University of Crete, Greece atsalak@otenet.gr

More information

ARTIFICIAL INTELLIGENCE. Artificial Neural Networks

ARTIFICIAL INTELLIGENCE. Artificial Neural Networks INFOB2KI 2017-2018 Utrecht University The Netherlands ARTIFICIAL INTELLIGENCE Artificial Neural Networks Lecturer: Silja Renooij These slides are part of the INFOB2KI Course Notes available from www.cs.uu.nl/docs/vakken/b2ki/schema.html

More information

arxiv: v1 [cond-mat.stat-mech] 6 Mar 2008

arxiv: v1 [cond-mat.stat-mech] 6 Mar 2008 CD2dBS-v2 Convergence dynamics of 2-dimensional isotropic and anisotropic Bak-Sneppen models Burhan Bakar and Ugur Tirnakli Department of Physics, Faculty of Science, Ege University, 35100 Izmir, Turkey

More information

COMS 4771 Introduction to Machine Learning. Nakul Verma

COMS 4771 Introduction to Machine Learning. Nakul Verma COMS 4771 Introduction to Machine Learning Nakul Verma Announcements HW1 due next lecture Project details are available decide on the group and topic by Thursday Last time Generative vs. Discriminative

More information

Forecasting & Futurism

Forecasting & Futurism Article from: Forecasting & Futurism December 2013 Issue 8 A NEAT Approach to Neural Network Structure By Jeff Heaton Jeff Heaton Neural networks are a mainstay of artificial intelligence. These machine-learning

More information

Intelligent Decision Support for New Product Development: A Consumer-Oriented Approach

Intelligent Decision Support for New Product Development: A Consumer-Oriented Approach Appl. Math. Inf. Sci. 8, No. 6, 2761-2768 (2014) 2761 Applied Mathematics & Information Sciences An International Journal http://dx.doi.org/10.12785/amis/080611 Intelligent Decision Support for New Product

More information

Nonlinear Characterization of Activity Dynamics in Online Collaboration Websites

Nonlinear Characterization of Activity Dynamics in Online Collaboration Websites Nonlinear Characterization of Activity Dynamics in Online Collaboration Websites Tiago Santos 1 Simon Walk 2 Denis Helic 3 1 Know-Center, Graz, Austria 2 Stanford University 3 Graz University of Technology

More information

Introduction to Machine Learning

Introduction to Machine Learning Introduction to Machine Learning Neural Networks Varun Chandola x x 5 Input Outline Contents February 2, 207 Extending Perceptrons 2 Multi Layered Perceptrons 2 2. Generalizing to Multiple Labels.................

More information

Logic Learning in Hopfield Networks

Logic Learning in Hopfield Networks Logic Learning in Hopfield Networks Saratha Sathasivam (Corresponding author) School of Mathematical Sciences, University of Science Malaysia, Penang, Malaysia E-mail: saratha@cs.usm.my Wan Ahmad Tajuddin

More information

Keywords- Source coding, Huffman encoding, Artificial neural network, Multilayer perceptron, Backpropagation algorithm

Keywords- Source coding, Huffman encoding, Artificial neural network, Multilayer perceptron, Backpropagation algorithm Volume 4, Issue 5, May 2014 ISSN: 2277 128X International Journal of Advanced Research in Computer Science and Software Engineering Research Paper Available online at: www.ijarcsse.com Huffman Encoding

More information

CS:4420 Artificial Intelligence

CS:4420 Artificial Intelligence CS:4420 Artificial Intelligence Spring 2018 Neural Networks Cesare Tinelli The University of Iowa Copyright 2004 18, Cesare Tinelli and Stuart Russell a a These notes were originally developed by Stuart

More information

Lecture 5: Logistic Regression. Neural Networks

Lecture 5: Logistic Regression. Neural Networks Lecture 5: Logistic Regression. Neural Networks Logistic regression Comparison with generative models Feed-forward neural networks Backpropagation Tricks for training neural networks COMP-652, Lecture

More information

A new method for short-term load forecasting based on chaotic time series and neural network

A new method for short-term load forecasting based on chaotic time series and neural network A new method for short-term load forecasting based on chaotic time series and neural network Sajjad Kouhi*, Navid Taghizadegan Electrical Engineering Department, Azarbaijan Shahid Madani University, Tabriz,

More information

An Evaluation of Errors in Energy Forecasts by the SARFIMA Model

An Evaluation of Errors in Energy Forecasts by the SARFIMA Model American Review of Mathematics and Statistics, Vol. 1 No. 1, December 13 17 An Evaluation of Errors in Energy Forecasts by the SARFIMA Model Leila Sakhabakhsh 1 Abstract Forecasting is tricky business.

More information

Information Dynamics Foundations and Applications

Information Dynamics Foundations and Applications Gustavo Deco Bernd Schürmann Information Dynamics Foundations and Applications With 89 Illustrations Springer PREFACE vii CHAPTER 1 Introduction 1 CHAPTER 2 Dynamical Systems: An Overview 7 2.1 Deterministic

More information

Machine Learning. Neural Networks. (slides from Domingos, Pardo, others)

Machine Learning. Neural Networks. (slides from Domingos, Pardo, others) Machine Learning Neural Networks (slides from Domingos, Pardo, others) For this week, Reading Chapter 4: Neural Networks (Mitchell, 1997) See Canvas For subsequent weeks: Scaling Learning Algorithms toward

More information

GDP growth and inflation forecasting performance of Asian Development Outlook

GDP growth and inflation forecasting performance of Asian Development Outlook and inflation forecasting performance of Asian Development Outlook Asian Development Outlook (ADO) has been the flagship publication of the Asian Development Bank (ADB) since 1989. Issued twice a year

More information

Neural Network Approach to Estimating Conditional Quantile Polynomial Distributed Lag (QPDL) Model with an Application to Rubber Price Returns

Neural Network Approach to Estimating Conditional Quantile Polynomial Distributed Lag (QPDL) Model with an Application to Rubber Price Returns American Journal of Business, Economics and Management 2015; 3(3): 162-170 Published online June 10, 2015 (http://www.openscienceonline.com/journal/ajbem) Neural Network Approach to Estimating Conditional

More information

Studies on the trend and chaotic behaviour of Tamil Nadu rainfall

Studies on the trend and chaotic behaviour of Tamil Nadu rainfall J. Ind. Geophys. Union ( October 2013 ) v.17,no.4,pp:335-339 Studies on the trend and chaotic behaviour of Tamil Nadu rainfall P. Indira 1, S. Stephen Rajkumar Inbanathan 2 1 Research and Development Centre,

More information

A LONG MEMORY PATTERN MODELLING AND RECOGNITION SYSTEM FOR FINANCIAL TIME-SERIES FORECASTING

A LONG MEMORY PATTERN MODELLING AND RECOGNITION SYSTEM FOR FINANCIAL TIME-SERIES FORECASTING A LONG MEMORY PATTERN MODELLING AND RECOGNITION SYSTEM FOR FINANCIAL TIME-SERIES FORECASTING Sameer Singh {s.singh@exeter.ac.uk} University of Exeter Department of Computer Science Prince of Wales Road

More information

The U.S. Congress established the East-West Center in 1960 to foster mutual understanding and cooperation among the governments and peoples of the

The U.S. Congress established the East-West Center in 1960 to foster mutual understanding and cooperation among the governments and peoples of the The U.S. Congress established the East-West Center in 1960 to foster mutual understanding and cooperation among the governments and peoples of the Asia Pacific region including the United States. Funding

More information