D-Facto public., ISBN , pp


Chaotic Time Series Prediction using the Kohonen Algorithm

Luis Monzón Benítez*,**,†, Ademar Ferreira* and Diana I. Pedreira Iparraguirre**

* Departamento de Engenharia de Telecomunicações e Controle, Escola Politécnica da Universidade de São Paulo, CEP 05508-900, São Paulo SP, Brazil. lmonzon@lac.usp.br, ademar@lac.usp.br
** Universidade Médica Carlos J. Finlay, Departamento de Bioestatística e Computação, Camagüey, Cuba. ivette@finlay.cmw.sld.cu

Abstract. Deterministic nonlinear prediction is a powerful technique for the analysis and prediction of time series generated by nonlinear dynamical systems. In this paper the use of a Kohonen network as a component of a deterministic nonlinear prediction algorithm is proposed. To evaluate the performance of the proposed algorithm, it was applied to the prediction of time series generated by two well-known chaotic dynamical systems, and the results were compared with those obtained using the Modified Method of Analogues on the same time series. The generated time series were corrupted by superimposed observational noise. The experimental results show that the Kohonen network can learn the neighborhood relations present in the reconstructed attractor of the time series and that good predictions can be obtained with the proposed algorithm.

1. Introduction

In the last decades many different techniques have been developed for the analysis and prediction of time series, ranging from the well-known Box and Jenkins models [1] to the latest neural network based ones [3, 4]. The most common approach consists in considering the time series as a realization of a stochastic process and in applying the different methods devised in the framework of

The authors would like to thank Professor Manuel Lazo of the Institute of Cybernetics, Mathematics and Physics, Cuba, for his useful comments.
† Partially supported by the National Council for Scientific and Technological Development (CNPq), Brazil.

statistics to obtain a model for the process [1, 3]. Today, in the light of the latest results concerning deterministic chaos, even if the data look random it is important to consider the possibility that the time series was generated by a low-order nonlinear deterministic dynamical system [4, 5]. One of the techniques developed for the analysis of chaotic time series is deterministic nonlinear prediction [5]. In this paper the use of a Kohonen neural network as a component of one deterministic nonlinear prediction algorithm is proposed. The algorithm was originally proposed by Lorenz [7], was later modified by Ikeguchi and Aihara [5], and is known as the Modified Method of Analogues (MMA). The paper is organized as follows: in section 2 the MMA is explained, and in the next section the proposed prediction algorithm is introduced. In section 4 some experiments are conducted to evaluate the performance of the algorithm and the results are discussed. Finally, the main conclusions are highlighted.

2. The Modified Method of Analogues

This method was introduced by Lorenz in 1969 [7] and was named "The Method of Analogues" because the prediction for a point on the attractor is obtained by analogy with the movement of its nearest neighbor. In 1995, Ikeguchi and Aihara [5] proposed to use the M nearest neighbors, instead of only one, to find the prediction of the point. They named this algorithm "The Modified Method of Analogues". Let v_T be the point of the state space whose future behavior will be predicted. First, the M nearest neighbors of v_T are searched among all the points in the attractor and designated v_{k_i} (i = 1, 2, ..., M), with v_{k_1} being the nearest to v_T and the rest following in ascending order of distance. After p time steps, v_{k_i} is mapped to v_{k_i+p}.
The prediction \hat{v}_{T+p} of v_T is given by:

    \hat{v}_{T+p} = \frac{\sum_{i=1}^{M} v_{k_i+p} \, |v_{k_i} - v_T|^{-1}}{\sum_{i=1}^{M} |v_{k_i} - v_T|^{-1}}    (1)

that is, a weighted average of the images of the neighbors, with weights inversely proportional to the distance from each neighbor to v_T. Our aim in this work is to use a Kohonen network that, properly trained with the points of the attractor, can be used to obtain the neighbors v_{k_i} of the point v_T. This implies that the complete prediction process can be done extremely fast (once the network has learned the neighborhood relations of the attractor), because of the high degree of parallelism associated with the Kohonen neural network.

3. The proposed prediction algorithm

As mentioned in the previous section, we expect that the Kohonen algorithm can be used to learn the metric relationships between the points in the reconstructed attractor, and that it can then be included as a part of the MMA,
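As an illustration, the MMA prediction of equation (1) can be sketched in a few lines of NumPy. This is our own minimal sketch, not the authors' implementation; the small guard against zero distances is an added assumption.

```python
import numpy as np

def mma_predict(attractor, v_T, M=4, p=1):
    """Modified Method of Analogues: predict the state p steps after v_T.

    attractor : (N, d) array of reconstructed state vectors, ordered in time.
    v_T       : (d,) query point whose future is to be predicted.
    M         : number of nearest neighbors to use.
    p         : prediction step.
    """
    # Only points whose p-step successors exist can serve as neighbors.
    candidates = attractor[:-p]
    dists = np.linalg.norm(candidates - v_T, axis=1)
    idx = np.argsort(dists)[:M]        # indices of the M nearest neighbors v_{k_i}
    eps = 1e-12                        # guard against a zero distance (our addition)
    w = 1.0 / (dists[idx] + eps)       # inverse-distance weights, as in equation (1)
    successors = attractor[idx + p]    # v_{k_i+p}: where each neighbor was mapped
    return (w[:, None] * successors).sum(axis=0) / w.sum()
```

For example, on the trivial "attractor" 0, 1, 2, ..., 9 (where every point's successor is the next value), querying at 3.0 yields a prediction close to 4.0, since the nearest neighbor dominates the weighted average.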

without significant loss in the accuracy of the predictions. The training algorithm and the prediction phase are described in the next subsections.

3.1. The training process

The well-known Kohonen algorithm was introduced by Teuvo Kohonen in 1982 as a model of the self-organization of neural connections [6]. We give a brief overview of this artificial neural network and its main features, including one added statement; the interested reader can find more comprehensive descriptions in [2, 6]. The network has n units distributed in a one- or two-dimensional array. There exists a neighborhood function Λ defined on I × I (I being the set of units) that depends only on the distance between two units of I (Λ(i, j) decreases with increasing distance between i and j). The input space Ω is a subset of R^d endowed with a distance (in this paper the Euclidean distance is used). Unit i, represented by the weight vector

    X_i = (X_{i1}, X_{i2}, ..., X_{id}),    (2)

is fully connected to the d inputs. A vector NDX_i, containing the references to the points in the attractor which are nearest to neuron i, was also included. The state of the network at time t is then completely defined by:

    X(t) = (X_1(t), X_2(t), ..., X_n(t), NDX_1(t), NDX_2(t), ..., NDX_n(t))    (3)

For a given state X, the network response to an input v is the best matching unit (BMU) i_0, the neuron whose weight vector X_{i_0} is closest to v. This network implements what is called a "topology preserving map", in the sense that, as far as possible, neighbors in the input space are mapped onto neighboring neurons [2, 6]. This property of the network is the most important one in our attempt to use it as an ordered representation of the attractor generated by the time series we want to predict. In most real applications only one variable can be measured from the chaotic dynamical system: a scalar time series that can be denoted y(t) (t = 1, 2, ..., N).
The attractor can be reconstructed from the time series using the method of time-delayed vectors proposed by Takens [8] in the following form:

    v(t) = (y(t), y(t - τ), ..., y(t - (d_e - 1)τ))    (4)

where d_e is the embedding dimension of the reconstructed attractor and τ is the time delay. The only statement that needs to be added to the training algorithm of the Kohonen network is the following: update the vectors NDX_i(t + 1), ∀i ∈ I, with the indexes of the M nearest points (in the attractor) to X_i(t + 1).
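For concreteness, the delay reconstruction of equation (4) can be implemented as follows (a minimal sketch; the function name delay_embed is ours):

```python
import numpy as np

def delay_embed(y, d_e, tau):
    """Build delay vectors v(t) = (y(t), y(t - tau), ..., y(t - (d_e - 1) * tau)).

    y   : 1-D scalar time series.
    d_e : embedding dimension.
    tau : time delay.
    Returns an (N - (d_e - 1) * tau, d_e) array, one state vector per valid t.
    """
    y = np.asarray(y)
    start = (d_e - 1) * tau          # first t with a full history available
    return np.column_stack([y[start - k * tau : len(y) - k * tau]
                            for k in range(d_e)])
```

With d_e = 3 and τ = 1 (the values used in section 4), the first delay vector of the series 0, 1, 2, ... is (2, 1, 0).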

The Kohonen network can be trained by taking the points in the reconstructed attractor as the inputs (we used the points from the first half of the time series). Once the learning process has ended, the network can be incorporated as a part of the prediction algorithm.

3.2. The prediction phase

The idea is to substitute the search for the nearest neighbors of the point v_T by the BMU i_0 (taking v_T as the input to the network) and its closest neighbors (as indicated by the references in the vector NDX_{i_0}). These points are taken as the v_{k_i}, and the prediction is then found as in the MMA. The prediction algorithm can be written explicitly as follows. Let v_T be the point on the attractor whose future evolution will be predicted.

1. Present the input v_T to the network (previously trained with the points of the reconstructed attractor) and choose the BMU i_0.
2. Find the v_{k_i} in the original attractor using the indexes in the vector NDX_{i_0} of the winning unit i_0:

    v_{k_i} = v(NDX_{i_0}(i)),  i = 1, ..., M    (5)

3. Transform the v_{k_i} into v_{k_i+p} (p is the prediction step) and obtain the prediction \hat{v}_{T+p} using equation (1), as in the MMA.

4. Application of the prediction algorithm to computer-generated time series

In order to evaluate the performance of the proposed algorithm, it was applied to the prediction of time series generated by the Hénon and the Ikeda maps (two well-known chaotic dynamical systems), and the results were compared with those obtained using the MMA on the same time series. The generated time series were corrupted by superimposed observational noise [5]. For the systems previously mentioned, time series of different lengths N (128, 256, 512, 1024, 2048) were generated, with the first 10000 points discarded as transient. After that, the attractors were reconstructed using the method of delays [8] with delay time τ = 1, prediction step p = 1 and embedding dimension (reconstruction dimension) d_e = 3 in all cases.
Next, different Kohonen networks were trained with the points of the attractors, and the prediction algorithm was applied to predict the rest of the points of each time series not used during training. It is important to mention that the number of neurons of the trained networks varied depending on the size of the attractor, ranging from networks with 7×7 neurons to networks with 33×33 neurons.
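The whole pipeline of sections 3.1-3.2 can be sketched end to end. The authors used the SOM Toolbox under MATLAB; the sketch below is an independent minimal Python version under stated assumptions: all function names, the linear learning-rate and neighborhood-width schedules, and the Hénon initial condition are our illustrative choices, not details from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def henon(n, a=1.4, b=0.3, discard=1000):
    """Generate n values of the Henon map x-coordinate after a transient."""
    x, y = 0.1, 0.1
    out = []
    for i in range(n + discard):
        x, y = 1.0 - a * x * x + y, b * x
        if i >= discard:
            out.append(x)
    return np.array(out)

def train_som(data, grid=7, iters=5000, lr0=0.5, sigma0=3.0):
    """Train a grid x grid Kohonen map on the attractor points."""
    n_units = grid * grid
    coords = np.array([(i, j) for i in range(grid) for j in range(grid)], float)
    W = data[rng.integers(0, len(data), n_units)].astype(float)  # init from data
    for t in range(iters):
        v = data[rng.integers(len(data))]
        bmu = np.argmin(np.linalg.norm(W - v, axis=1))
        lr = lr0 * (1 - t / iters)                      # decaying learning rate
        sigma = sigma0 * (1 - t / iters) + 0.5          # shrinking neighborhood
        h = np.exp(-np.sum((coords - coords[bmu]) ** 2, axis=1) / (2 * sigma ** 2))
        W += lr * h[:, None] * (v - W)
    return W

def build_ndx(W, attractor, M=4, p=1):
    """The added statement: for each unit, store the indexes of the M attractor
    points nearest to its weight vector (only points with p-step successors)."""
    usable = attractor[:-p]
    return [np.argsort(np.linalg.norm(usable - w, axis=1))[:M] for w in W]

def som_predict(v_T, W, NDX, attractor, p=1):
    """Steps 1-3: find the BMU, take its stored neighbors as the v_{k_i},
    and combine their p-step successors with inverse-distance weights."""
    bmu = np.argmin(np.linalg.norm(W - v_T, axis=1))
    idx = NDX[bmu]
    d = np.linalg.norm(attractor[idx] - v_T, axis=1) + 1e-12
    w = 1.0 / d
    return (w[:, None] * attractor[idx + p]).sum(axis=0) / w.sum()
```

A typical use trains on the first half of the embedded series and predicts points from the second half, mirroring the experimental setup; the careful study in the paper (network sizes from 7×7 to 33×33, several noise levels, averaging over five series per system) is not reproduced by this sketch.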

The proposed prediction algorithm was implemented using the SOM Toolbox developed by the Neural Networks Research Centre of the Helsinki University of Technology, and the experiments were carried out with MatLab 5.2.1 on a SUN Sparc Ultra 10. To quantify the performance of the algorithm, the Relative Root Mean Square Error (RRMSE) between the real and the predicted time series was computed [5]. The results of the application of the proposed prediction method are shown in figure 1. In all cases the plotted values are the averages over the five time series generated from each system. As can be seen from the figure, the values of the RRMSE clearly demonstrate the ability of the proposed algorithm to make successful predictions of chaotic time series. As could be expected (considering that the proposed method is only a modification of the MMA), the RRMSE behaves similarly for both prediction algorithms, with a slightly better performance of the proposed method. It is important to notice that our aim in this work was to propose a modification of the MMA that could take advantage of the high degree of parallelism, and the possibility of a VLSI implementation, of the Kohonen self-organizing map, to improve the prediction speed of the MMA without significant loss in the accuracy of the predictions. The experimental results have shown that this objective was accomplished.

5. Conclusions

In this paper the use of a Kohonen network as part of a deterministic nonlinear prediction method has been proposed, and short-term predictions of different computer-generated time series were analyzed. The experimental results have shown that the Kohonen network can effectively learn the neighborhood relations present in the reconstructed attractor of the time series, and that the proposed algorithm can achieve good prediction performance. The MMA can thus be implemented using a Kohonen network as one of its components.
The benefits of the high parallelism of this neural network can thus be obtained without loss in the accuracy of the predictions. As future work we are interested in using the proposed algorithm to develop a test to distinguish between chaos and noise. The performance of the algorithm with dynamic selection of the number of neighbors can also be studied.

References

[1] G. E. P. Box and G. M. Jenkins. Time Series Analysis: Forecasting and Control. Holden-Day, San Francisco, CA, 1976.
[2] M. Cottrell, J. C. Fort, and G. Pagès. Theoretical aspects of the SOM algorithm. Neurocomputing, 21:119-138, 1998.

Figure 1: Averaged values of the RRMSE versus signal-to-noise ratio (dB) for the time series generated from the Hénon Map (a) and the Ikeda Map (b), for the proposed method and the Modified Method of Analogues.

[3] M. Cottrell, B. Girard, Y. Girard, M. Mangeas, and C. Muller. Neural modelling for time series: a statistical stepwise method for weight elimination. IEEE Transactions on Neural Networks, 6(6), November 1995.
[4] J. D. Farmer and J. J. Sidorowich. Predicting chaotic time series. Physical Review Letters, 59(8):845-848, 1987.
[5] T. Ikeguchi and K. Aihara. Prediction of chaotic time series with noise. IEICE Transactions on Fundamentals, E78-A(10):1291-1298, October 1995.
[6] T. Kohonen. Analysis of a simple self-organizing process. Biological Cybernetics, 44:135-140, 1982.
[7] E. N. Lorenz. Atmospheric predictability as revealed by naturally occurring analogues. Journal of the Atmospheric Sciences, 26:636-646, 1969.
[8] F. Takens. Detecting strange attractors in turbulence. In D. A. Rand and L.-S. Young, editors, Lecture Notes in Mathematics, volume 898. Springer-Verlag, Berlin, 1981.