Analyzing the State Space Property of Echo

Proceedings of the International Joint Conference on Neural Networks, Montreal, Canada, July 31 - August 4, 2005

Analyzing the State Space Property of Echo State Networks for Chaotic System Prediction

Jianhui Xi, Zhiwei Shi, Min Han, Member, IEEE
School of Electronic and Information Engineering, Dalian University of Technology, Liaoning, 116023, China
minhan@dlut.edu.cn

Abstract - For chaotic system prediction, ESNs (Echo State Networks) are a realization of neural state reconstruction, in which the reconstructed state variable comes from the internal neurons' activations rather than from the delay vector obtained by delay coordinate reconstruction. Within the framework of neural state reconstruction, further quantitative analyses can be made on issues such as network structure configuration and initial state determination. Based on a simulation study on chaotic data from Chua's circuit, it is shown that the ESN is a non-minimum state space realization of the target time series, that the initial state can be freely chosen in the training process, and that in the prediction phase the ESN must be told where the prediction begins by being set to a proper initial state through a process of teacher forcing.

Index Terms - Echo State Networks, Chaotic Time Series, State Space

I. INTRODUCTION

In real-world systems, many time series arising from practical problems are nonlinear and chaotic. It has been shown in practice that linear models cannot describe nonlinear chaotic time series, so much effort has gone into nonlinear modeling of measured time series. In practice, however, the nature and structure of the state space are obscure, and the actual variables that contribute to the state vector are unknown or debatable. Given measurements of one component of the state vector of a dynamic system, Takens' theorem implies that for a wide class of deterministic systems there exists a diffeomorphism (a one-to-one differentiable mapping) between a finite window of the time series and the underlying state of the dynamic system that gives rise to the time series [1]. This implies that there exists, in theory, a nonlinear autoregression function which models the series exactly. Since neural networks are universal approximators, they can be used for the modeling and prediction of chaotic systems. There are dozens of papers based on this idea of combining delay coordinate embedding with neural networks, for example using feedforward neural networks [2][3][4], recurrent neural networks [2][5][6], and support vector machines [7][8]. A common characteristic of these methods is that the structure of the neural network is closely tied to the embedding parameters, such as the embedding dimension and the delay time.

As an alternative to Takens' embedding theorem, some efforts have been made to model a chaotic system without relying on the embedding parameters [9]-[13]. These methods try to reconstruct the time series directly into a neural state space rather than learning from a trajectory reconstructed by Takens' theorem. The technique used at present is to build a recurrent neural network that produces the target time series. The ESN (Echo State Network) [13] is such a recurrent neural network: once trained on an observed time series, it reproduces the target series. Echo state networks provide a new training approach for recurrent neural networks: in the training process of an ESN, determining the optimal output weights becomes a linear, uniquely solvable task of output error minimization. A similar learning technique is used by "Liquid State Machines" [14], in which target dynamics of interest are also read out by trainable mechanisms.

Firstly, this paper highlights the concept of neural state reconstruction as an alternative to delay coordinate reconstruction. We address the difference between the two types of reconstructed state variables: the state using delay coordinates comes from "time", whereas the state in the neural state space comes from "space". Secondly, we use an ESN to identify the chaotic system from data measured on a real realization of Chua's circuit. The rich dynamical behavior of Chua's circuit has been confirmed by computer simulation [15] and by experiment [16]. The complex time series produced by Chua's circuit is a benchmark problem for system identification [17][18] and time series prediction [19], and remains a challenge for the neural network community [20]. The differences from existing methods are: first, the data are measured data rather than simulated data [9][11][19][20]; second, only one variable is available from the measurement, instead of all the state information of the circuit being known [9][11]; third, the initial state of the circuit is also unknown. Lastly, we view the trained ESN within a state space framework. Compared with the state variables of the circuit, the state variables of the trained ESN are rather numerous and

redundant; we will show that any three state variables in the well-trained ESN have a space structure similar to the phase space structure obtained from the circuit or from delay coordinate embedding. Another interest is the analysis of the initial state of the well-trained ESN. Because a chaotic system is sensitive to its initial state, the focus is on the proper initial state of the ESN when it is used to simulate and predict a chaotic time series.

The organization of this paper is as follows: Section II highlights the concept of neural state reconstruction as an alternative to delay coordinate reconstruction. In Section III, an ESN is used to identify the chaotic data from Chua's circuit, and the prediction result is given by the detection of the two transitions between the two scrolls. Section IV is our further analysis of the trained ESN, including the non-minimum realization property of the ESN and the initial state determination. Section V gives the conclusions.

II. DELAY COORDINATE RECONSTRUCTION AND NEURAL STATE SPACE RECONSTRUCTION

According to Takens' theorem, the geometric structure of the multivariable dynamics of the system can be unfolded from the observable yd(k) in a D-dimensional space constructed from the new vector:

    SR(k) = [yd(k), yd(k - tau), ..., yd(k - (D - 1)tau)]^T    (1)

where tau is a positive integer called the normalized embedding delay. That is, given the observable yd(k) for varying discrete time k, which pertains to a single component of an unknown dynamical system, dynamic reconstruction is possible using the D-dimensional vector SR(k), provided that D >= 2n + 1, where n is the state dimension of the system. The procedure for finding a suitable D is called embedding, and the minimum integer D that achieves dynamic reconstruction is called the embedding dimension.
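As a concrete illustration, the delay-coordinate vector of equation (1) can be built in a few lines of NumPy. This is our own sketch; the function name `delay_embed` and the toy series are illustrative, not part of the paper's experiments.

```python
import numpy as np

def delay_embed(y, D, tau):
    """Row i is S_R(k) = [y(k), y(k - tau), ..., y(k - (D-1)*tau)]
    with k = i + (D - 1) * tau."""
    y = np.asarray(y, dtype=float)
    n = len(y) - (D - 1) * tau
    if n <= 0:
        raise ValueError("series too short for the chosen (D, tau)")
    # Column j holds y(k - j*tau), i.e. the slice starting at (D-1-j)*tau.
    cols = [y[(D - 1 - j) * tau : (D - 1 - j) * tau + n] for j in range(D)]
    return np.column_stack(cols)

# Toy example with D = 3, tau = 2 on y(k) = k.
y = np.arange(10.0)
S = delay_embed(y, D=3, tau=2)
print(S[0])   # S_R(4) = [y(4), y(2), y(0)]
```

Each row of the returned array is one reconstructed state vector, so a one-step predictor can be trained on pairs (S[i], y[i + (D-1)*tau + 1]).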
Based on Takens' theorem, a neural network can be trained as a one-step predictor to identify the unknown mapping f: R^D -> R^1, defined by

    yd(k + 1) = f(SR(k))    (2)

Once f is determined, the evolution SR(k) -> yd(k + 1) becomes known.

The intention of neural state space reconstruction is to build a dynamic neural system that simulates the system output yd(k); the neural system can be written generally as:

    X(k + 1) = NNX(X(k), X(0))
    y(k) = NNY(X(k))    (3)

where X(k) denotes the internal state variables of the neural system. It is well known that a dynamic system can be described fundamentally by its state [21]. The state of a dynamic system is defined as a set of quantities that summarizes all the information about the past behavior of the system that is needed to uniquely describe its future behavior, except for the purely external effects arising from the applied input (excitation). To properly represent a dynamic system, one has to determine how the internal state transits from the previous one to the current one, and how the current system output is measured from the internal state. From equation (3) we see that NNX defines the state equation and NNY defines the measurement equation. If y(k) of the neural system (3) can simulate yd(k), then X(k) is called a reconstructed neural state variable.

The differences between delay coordinate reconstruction and neural state space reconstruction are as follows: 1) The reconstructed state variables have different time or space components: the state variable SR(k) is composed of the time components from yd(k - (D - 1)tau) to yd(k), whereas the neural state variable X(k) has only the space components at time k.
2) The reconstructed state variables have different physical meanings: the state variable SR(k) is composed of the history values of the observed time series, whereas the state variables in the neural state space are the activation values of the internal neurons, which have no direct relationship with the observed output.

III. USING ESN TO LEARN CHAOTIC DATA FROM CHUA'S CIRCUIT

Chua's circuit is a famous device with nonlinear character. The nonlinear element in this circuit is the two-terminal piecewise-linear resistor known as "Chua's diode". The equations governing the circuit dynamics are:

    C1 dv1/dt = (v2 - v1)/R - id(v1)
    C2 dv2/dt = (v1 - v2)/R + iL    (4)
    L diL/dt = -v2

where v1 is the voltage across capacitor C1, iL is the current through the inductor, and the current through Chua's diode is given by

    id(v1) = m0 v1 + Bp(m0 - m1),  v1 < -Bp
             m1 v1,                |v1| <= Bp    (5)
             m0 v1 + Bp(m1 - m0),  v1 > Bp

The data used in the simulation were collected using a digital oscilloscope [18]. Of particular interest in this paper is the double-scroll attractor generated by the circuit. The data in Fig. 1 are the voltage across capacitor C1, sampled at Ts = 12 us; 5000 samples were recorded

with a resolution of 13 bits. For details about the data, refer to [18].

Fig. 1. The measured data set.

In time series prediction, the prediction origin is the time from which the prediction is generated. The time between the prediction origin and the predicted data point is the prediction horizon. The training sequence consists of samples 1-2300, so the prediction origin is 2300. Our interest is focused on the two transitions before sample 2400 and around sample 2500 (the two transitions between the two scrolls after 2300, as shown in Fig. 2), and we check whether the ESN can detect the transitions and make a proper prediction of the future.

Fig. 2. The prediction origin and the transitions between the two scrolls after the prediction origin.

A network of 500 units was created, with spectral radius 0.93, connectivity 1/20, and output feedback connection weights sampled from the uniform distribution over (-4, 4). The refined version of the learning method was used to learn the output connection weights W_out; for details see [13]. In our simulation, the refined version of the learning method is more stable than the method with the trick of "wobbling".

After the ESN is disconnected from the teacher yd(k) at the prediction origin (sample 2300), the iterated ESN structure is as shown in Fig. 3, and the network starts from the initial state X(2300).

Fig. 3. The structure of the ESN for iterative prediction.

Fig. 4 shows the continued prediction after the prediction origin (2300). As shown in the figure, the trained ESN detects the transitions between the two scrolls well; accurate prediction is available until about 200 steps after the prediction origin (until step 2500 of the original sequence). It is also interesting to give the corresponding log10 of the absolute prediction error in Fig. 5, where yd(k) is the target sequence and y(k) is the sequence generated by the ESN after the prediction origin.
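The network construction and output-weight learning described above can be sketched as follows. This is a simplified stand-in, not the authors' exact procedure: a sine wave replaces the measured v1 data, and plain least squares replaces the refined learning method of [13]; the sizes (500 units, spectral radius 0.93, feedback weights from (-4, 4)) follow the text.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 500                                   # reservoir units, as in the paper
yd = np.sin(0.05 * np.arange(3000))       # stand-in for the measured v1 series

# Sparse internal weight matrix, rescaled to spectral radius 0.93.
W = rng.uniform(-1.0, 1.0, (N, N)) * (rng.random((N, N)) < 0.05)
W *= 0.93 / np.max(np.abs(np.linalg.eigvals(W)))
W_fb = rng.uniform(-4.0, 4.0, N)          # output-feedback weights

# Teacher forcing: drive the reservoir with the target series yd.
X = np.zeros((len(yd), N))
x = np.zeros(N)
for k in range(len(yd) - 1):
    x = np.tanh(W @ x + W_fb * yd[k])
    X[k + 1] = x

# Discard the initial transient, then solve the linear readout
# y(k) = W_out . X(k) by least squares.
washout = 200
W_out, *_ = np.linalg.lstsq(X[washout:], yd[washout:], rcond=None)

rmse = np.sqrt(np.mean((X[washout:] @ W_out - yd[washout:]) ** 2))
print(f"training RMSE: {rmse:.4f}")
```

Because the readout is linear, training reduces to a single regression; this is the "linear, uniquely solvable task of output error minimization" mentioned in the introduction.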
For the prediction task, the training sequence is teacher-forced on the trained network up to the prediction origin; then the network is left running freely, and the network's continuation is compared with the true continuation.

Fig. 4. Comparison of the observed output and the predicted output after the prediction origin.

Fig. 5. The prediction error log10(|yd(k) - y(k)|) versus the prediction horizon after the prediction origin.

IV. THE ANALYSIS OF THE STATE SPACE IN THE TRAINED ESN

For each internal unit xi (X = [x1, x2, ..., xN]^T) there exists an "echo function" ei [22] such that, if the network has been driven by a signal y(k) for a long time, the current state can be written as xi(k) = ei(y(k), y(k - 1), y(k - 2), ...). Considering the NAR (Nonlinear AutoRegressive) system, the output of the ESN can be written as:

    y(k) = f(y(k - 1), y(k - 2), ...) ~ sum_i wi ei(y(k - 1), y(k - 2), ...)    (6)

From equation (6) we see that the learning approach of the ESN is to approximate the nonlinear system function f by a linear combination of the echo functions, where the linear combination weights are the trained output connection weights.

A NAR system can be converted into a state space model. For Chua's circuit, we know that the state dimension is three, and there is a discrete map of the form (the discrete form of the system given by equation (4)):

    X(k + 1) = F(X(k))
    yd(k) = h(X(k))    (7)

where X(k) = [v1(k); v2(k); iL(k)] and h(X(k)) is the observation function; in our simulation, yd(k) = h(X(k)) = v1(k). The data are stationary in the sense that, while the measurements were taken, there is no reason to believe that the circuit dynamics and parameters changed in any significant way, so F(.) and h(.) are assumed not to change with time.

When the ESN is used for prediction (as shown in Fig. 3), the equations governing the ESN can be written as:

    X(k + 1) = sigma((W + W_fb W_out) X(k))
    y(k) = W_out X(k)    (8)

where X(k) is the internal state variable of the ESN at time k, y(k) is the network output at time k, and sigma(.) is a sigmoid function.

A. The redundancy of the state variables in the ESN

By Takens' theorem, a phase space can be reconstructed from the observed v1, as shown in Fig. 6. We know that it is similar to the portrait of x(k) (= [v1(k); v2(k); iL(k)]).
Fig. 6. The state space reconstructed by delay embedding.

The modeling process is carried out by letting system (8) simulate system (7); that is to say, system (8) is a realization of system (7). However, the state dimension of the chaotic system (7) is 3, while the ESN in our simulation has dimension 500. In other words, the ESN realization is a non-minimum realization.

Fig. 7. The evolution of x5-x145-x255 in the neural network.

The evidence for the non-minimum realization can be checked from the portrait of any three of the 500 internal state variables of the ESN. Fig. 7, Fig. 8 and Fig. 9 show the state evolution of x5-x145-x255, x9-x187-x418 and x9-x29-x495 (xi denotes the i-th internal neuron of the trained ESN), respectively. Each of these portraits has a structure similar to the portrait of Fig. 6, which comes from delay embedding.

Fig. 8. The evolution of x9-x187-x418 in the neural network.

Fig. 9. The evolution of x9-x29-x495 in the neural network.

Further evidence concerning non-minimum realization can be found in [9], where the neural state space is a minimum realization of the identified system. The neural network in [9] is given as:

    X(k + 1) = W sigma(V X(k) + beta)    (9)

where the dimension of X(k) is three, matching Chua's circuit. This kind of recurrent neural network can learn to behave like Chua's double scroll, though gradient-based learning of recurrent networks is difficult.

B. The initial state determination of the ESN

When the network is used for prediction from the prediction origin, it runs as an autonomous dynamic system, and there is plenty of evidence that the dynamics of the prediction system are chaotic. An important characteristic of a chaotic system is sensitivity to its initial state; that is to say, a small difference in the initial state will eventually lead to a large difference between states. For the ESN, then, what is the proper initial condition when it is used for iterative prediction? To address this problem, first consider the minimum realization in [9]. That state space realization, however, aims to model the three state variables of Chua's circuit, so the initial state of the neural network must be identical to the initial state of the original system.
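For the non-minimum ESN realization, a proper initial state is instead instilled by teacher forcing up to the prediction origin before the free run, as described in Section III. The following self-contained sketch illustrates this; the specifics (a sine wave standing in for the measured data, a 300-unit reservoir, plain least squares instead of the refined learning method) are ours, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 300                                  # toy reservoir size (not the paper's 500)
yd = np.sin(0.1 * np.arange(2000))       # stand-in for the measured series

# Sparse internal weights, rescaled to spectral radius 0.93; feedback weights.
W = rng.uniform(-1.0, 1.0, (N, N)) * (rng.random((N, N)) < 0.1)
W *= 0.93 / np.max(np.abs(np.linalg.eigvals(W)))
W_fb = rng.uniform(-1.0, 1.0, N)

def drive(x, y):
    # Teacher-forced state update: X(k+1) = sigma(W X(k) + W_fb y(k))
    return np.tanh(W @ x + W_fb * y)

# Train the linear readout on teacher-forced states.
X, x = np.zeros((len(yd), N)), np.zeros(N)
for k in range(len(yd) - 1):
    x = drive(x, yd[k])
    X[k + 1] = x
W_out, *_ = np.linalg.lstsq(X[200:], yd[200:], rcond=None)

# Prediction: teacher-force up to the origin to set a proper initial state,
# then run autonomously -- equivalent to X(k+1) = sigma((W + W_fb W_out) X(k)).
origin = 1500
x = np.zeros(N)
for k in range(origin):
    x = drive(x, yd[k])                  # warm-up tells the network where it is
pred = []
for k in range(100):
    y = W_out @ x                        # y(k) = W_out X(k)
    pred.append(y)
    x = drive(x, y)                      # feed the network's own output back
```

Starting the free run from a zero or random state instead of the teacher-forced one generally produces an immediate large error, which is the point of the initial-state analysis in this subsection.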
It is the usual case that only a scalar variable is available, so we are interested in adding a measurement equation to the minimum realization in [9], giving the neural system:

    X*(k + 1) = W sigma(V X*(k) + beta)
    y(k) = C sigma(D X*(k) + delta)    (10)

Now introduce a transform Psi(.) of the state variables of system (7):

    X~(k) = Psi(X(k))    (11)

so that system (7) can be rewritten as:

    X~(k + 1) = Psi(F(Psi^-1(X~(k))))
    y(k) = h(Psi^-1(X~(k)))    (12)

That system (10) can approximate system (12) is obvious, provided Psi(F(Psi^-1(.))) and h(Psi^-1(.)) are continuous functions defined on a compact set. The initial state of system (10) can thus be chosen freely to model system (7), if we can find a proper transform Psi(.) such that X*(0) = X~(0) = Psi(X(0)). This result can easily be extended to a non-minimum realization of the neural system. In the training of the ESN, therefore, there is no need to worry about choosing the initial state; in fact, one can train the ESN from a random initial state.

Another question arises: as shown in equation (8), there is no hidden layer between X(k + 1) and X(k), so how can the ESN approximate a general state equation? First, when the output layer is a sigmoid function, the transition between X(k + 1) and X(k) can be universal. Second, it is not necessary to add such a layer, because there are very many combinations of the state variables for the output dynamics to choose from; in fact, the ESN learns well using linear regression and obtains a low error on the training sequence.

Once the network training is completed, the ESN is ready for the prediction task. For a given prediction, the trained ESN needs to know where the prediction begins. To make a proper prediction, it is necessary to tell the network the prediction origin (sample 2300 in our simulation), because the prediction starts from there. The

evolution of the ESN network is based on its state variables, so telling the network the prediction origin amounts to setting a proper initial state for the network. In other words, once the initial state of system (8) is proper for the given prediction task, a proper prediction can be made. In our analysis, the initial state is set by a phase of teacher forcing. With the trained connection weights in place, the network is teacher-forced by the target series up to the prediction origin:

    X(k + 1) = sigma(W X(k) + W_fb yd(k)),  k < 2300    (13)

and the ESN is then disconnected from the teacher yd(k) at the prediction origin. In this process a proper initial state is set gradually, and then the prediction begins: the output signal y(k) after the origin is created from the internal state X(k) through the weights W_out, by y(k) = W_out X(k); conversely, the internal signals are "echoed" from that output signal through the fixed output feedback connections.

V. CONCLUSIONS

When an ESN is used to model a nonlinear chaotic time series, it uses a linear combination of the echo functions to approximate the nonlinear system function. This paper views the ESN from a state space perspective, so that chaotic time series modeling becomes a problem of neural state space reconstruction. We apply the ESN to the identification and prediction of data observed from a real implementation of Chua's double scroll; the ESN learns the data well, achieves the desired prediction performance, and detects the two transitions between the two scrolls. Based on the simulation study on the chaotic data from Chua's circuit, we further analyze the trained ESN in the state space framework. It is shown that the ESN is a non-minimum state space realization of the target time series, that the initial state can be freely chosen in the training process, and that in the prediction phase the ESN needs to know where the prediction begins by being set to a proper initial state through a process of teacher forcing.
ACKNOWLEDGMENT

This research is supported by the project (637464) of the National Natural Science Foundation of China. It is also supported by the project (51392) of the National Natural Science Foundation of China. All of this support is appreciated.

REFERENCES

[1] F. Takens, "Detecting strange attractors in fluid turbulence," in Dynamical Systems and Turbulence, D. Rand and L. S. Young, Eds. Berlin: Springer, 1981.
[2] S. Haykin and J. Principe, "Making sense of a complex world," IEEE Signal Processing Magazine, Vol. 15, No. 3, pp. 66-81, 1998.
[3] E. A. Wan, "Time series prediction by using a connectionist network with internal delay lines (data set A)," in Proceedings of the NATO Advanced Research Workshop on Comparative Time Series Analysis, 1993.
[4] M. R. Cowper, B. Mulgrew, and C. P. Unsworth, "Nonlinear prediction of chaotic signals using a normalised radial basis function network," Signal Processing, Vol. 82, No. 5, 2002.
[5] J. C. Principe, A. Rathie, and J. M. Kuo, "Prediction of chaotic time series with neural networks and the issue of dynamic modeling," Int. J. Bifurcation and Chaos, Vol. 2, 1992.
[6] M. Han, J. H. Xi, S. G. Xu, and F. L. Yin, "Prediction of chaotic time series based on the recurrent predictor neural network," IEEE Transactions on Signal Processing, Vol. 52, No. 12, 2004.
[7] S. Mukherjee, E. Osuna, and F. Girosi, "Nonlinear prediction of chaotic time series using support vector machines," in Neural Networks for Signal Processing VII: Proceedings of the 1997 IEEE Workshop, 1997.
[8] L. Cao, "Support vector machines experts for time series forecasting," Neurocomputing, Vol. 51, 2003.
[9] J. A. K. Suykens and J. Vandewalle, "Learning a simple recurrent neural state space model to behave like Chua's double scroll," IEEE Transactions on Circuits and Systems I: Fundamental Theory and Applications, Vol. 42, No. 8, 1995.
[10] M. Han, Z. W. Shi, and W. Wang, "Modeling dynamic system by recurrent neural network with state variables," in Advances in Neural Networks - ISNN 2004, Pt 2, Vol. 3174, Lecture Notes in Computer Science. Berlin: Springer-Verlag, 2004.
[11] B. Cannas, S. Cincotti, M. Marchesi, and F. Pilo, "Learning of Chua's circuit attractors by locally recurrent neural networks," Chaos, Solitons & Fractals, Vol. 12, No. 11, 2001.
[12] B. Cannas and S. Cincotti, "Neural reconstruction of Lorenz attractors by an observable," Chaos, Solitons & Fractals, Vol. 14, No. 1, pp. 81-86, 2002.
[13] H. Jaeger and H. Haas, "Harnessing nonlinearity: predicting chaotic systems and saving energy in wireless communication," Science, Vol. 304, No. 5667, pp. 78-80, 2004.
[14] W. Maass, T. Natschlager, and H. Markram, "Real-time computing without stable states: a new framework for neural computation based on perturbations," Neural Computation, Vol. 14, No. 11, pp. 2531-2560, 2002.
[15] T. Matsumoto, "A chaotic attractor from Chua's circuit," IEEE Transactions on Circuits and Systems, Vol. 31, No. 12, 1984.
[16] G. Q. Zhong and F. Ayrom, "Experimental confirmation of chaos from Chua's circuit," Int. J. Circuit Theory Appl., Vol. 13, pp. 93-98, 1985.
[17] M. H. Petrick and B. Wigdorowitz, "A priori nonlinear model structure selection for system identification," Control Engineering Practice, Vol. 5, No. 8, 1997.
[18] L. A. Aguirre, G. G. Rodrigues, and E. Mendes, "Nonlinear identification and cluster analysis of chaotic attractors from a real implementation of Chua's circuit," International Journal of Bifurcation and Chaos, Vol. 7, No. 6, 1997.
[19] J. McNames, J. A. K. Suykens, and J. Vandewalle, "Winning entry of the K.U. Leuven time-series prediction competition," International Journal of Bifurcation and Chaos, Vol. 9, No. 8, 1999.
[20] J. A. K. Suykens and J. Vandewalle, "The K.U. Leuven competition data: a challenge for advanced neural network techniques," in Proceedings of the 8th European Symposium on Artificial Neural Networks (ESANN 2000), 2000.
[21] R. E. Kalman, "A new approach to linear filtering and prediction problems," Transactions of the ASME - Journal of Basic Engineering, Vol. 82, pp. 35-45, 1960.
[22] H. Jaeger, "The 'echo state' approach to analysing and training recurrent neural networks," GMD Report 148, German National Research Institute for Computer Science, 2001.


More information

Introducing chaotic circuits in an undergraduate electronic course. Abstract. Introduction

Introducing chaotic circuits in an undergraduate electronic course. Abstract. Introduction Introducing chaotic circuits in an undergraduate electronic course Cherif Aissi 1 University of ouisiana at afayette, College of Engineering afayette, A 70504, USA Session II.A3 Abstract For decades, the

More information

A new method for short-term load forecasting based on chaotic time series and neural network

A new method for short-term load forecasting based on chaotic time series and neural network A new method for short-term load forecasting based on chaotic time series and neural network Sajjad Kouhi*, Navid Taghizadegan Electrical Engineering Department, Azarbaijan Shahid Madani University, Tabriz,

More information

A simple electronic circuit to demonstrate bifurcation and chaos

A simple electronic circuit to demonstrate bifurcation and chaos A simple electronic circuit to demonstrate bifurcation and chaos P R Hobson and A N Lansbury Brunel University, Middlesex Chaos has generated much interest recently, and many of the important features

More information

Introducing Chaotic Circuits in Analog Systems Course

Introducing Chaotic Circuits in Analog Systems Course Friday Afternoon Session - Faculty Introducing Chaotic Circuits in Analog Systems Course Cherif Aissi Department of Industrial Technology University of Louisiana at Lafayette Mohammed Zubair Department

More information

HIGHER-ORDER SPECTRA OF NONLINEAR POLYNOMIAL MODELS FOR CHUA S CIRCUIT

HIGHER-ORDER SPECTRA OF NONLINEAR POLYNOMIAL MODELS FOR CHUA S CIRCUIT Letters International Journal of Bifurcation and Chaos, Vol. 8, No. 12 (1998) 2425 2431 c World Scientific Publishing Company HIGHER-ORDER SPECTRA OF NONLINEAR POLYNOMIAL MODELS FOR CHUA S CIRCUIT STEVE

More information

AN ELECTRIC circuit containing a switch controlled by

AN ELECTRIC circuit containing a switch controlled by 878 IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS II: ANALOG AND DIGITAL SIGNAL PROCESSING, VOL. 46, NO. 7, JULY 1999 Bifurcation of Switched Nonlinear Dynamical Systems Takuji Kousaka, Member, IEEE, Tetsushi

More information

A New Circuit for Generating Chaos and Complexity: Analysis of the Beats Phenomenon

A New Circuit for Generating Chaos and Complexity: Analysis of the Beats Phenomenon A New Circuit for Generating Chaos and Complexity: Analysis of the Beats Phenomenon DONATO CAFAGNA, GIUSEPPE GRASSI Diparnto Ingegneria Innovazione Università di Lecce via Monteroni, 73 Lecce ITALY Abstract:

More information

Using reservoir computing in a decomposition approach for time series prediction.

Using reservoir computing in a decomposition approach for time series prediction. Using reservoir computing in a decomposition approach for time series prediction. Francis wyffels, Benjamin Schrauwen and Dirk Stroobandt Ghent University - Electronics and Information Systems Department

More information

Finite-time hybrid synchronization of time-delay hyperchaotic Lorenz system

Finite-time hybrid synchronization of time-delay hyperchaotic Lorenz system ISSN 1746-7659 England UK Journal of Information and Computing Science Vol. 10 No. 4 2015 pp. 265-270 Finite-time hybrid synchronization of time-delay hyperchaotic Lorenz system Haijuan Chen 1 * Rui Chen

More information

Experimental Characterization of Nonlinear Dynamics from Chua s Circuit

Experimental Characterization of Nonlinear Dynamics from Chua s Circuit Experimental Characterization of Nonlinear Dynamics from Chua s Circuit Patrick Chang, Edward Coyle, John Parker, Majid Sodagar NLD class final presentation 12/04/2012 Outline Introduction Experiment setup

More information

The Research of Railway Coal Dispatched Volume Prediction Based on Chaos Theory

The Research of Railway Coal Dispatched Volume Prediction Based on Chaos Theory The Research of Railway Coal Dispatched Volume Prediction Based on Chaos Theory Hua-Wen Wu Fu-Zhang Wang Institute of Computing Technology, China Academy of Railway Sciences Beijing 00044, China, P.R.

More information

Short Term Memory Quantifications in Input-Driven Linear Dynamical Systems

Short Term Memory Quantifications in Input-Driven Linear Dynamical Systems Short Term Memory Quantifications in Input-Driven Linear Dynamical Systems Peter Tiňo and Ali Rodan School of Computer Science, The University of Birmingham Birmingham B15 2TT, United Kingdom E-mail: {P.Tino,

More information

Visual Selection and Attention Shifting Based on FitzHugh-Nagumo Equations

Visual Selection and Attention Shifting Based on FitzHugh-Nagumo Equations Visual Selection and Attention Shifting Based on FitzHugh-Nagumo Equations Haili Wang, Yuanhua Qiao, Lijuan Duan, Faming Fang, Jun Miao 3, and Bingpeng Ma 3 College of Applied Science, Beijing University

More information

Research Article Adaptive Control of Chaos in Chua s Circuit

Research Article Adaptive Control of Chaos in Chua s Circuit Mathematical Problems in Engineering Volume 2011, Article ID 620946, 14 pages doi:10.1155/2011/620946 Research Article Adaptive Control of Chaos in Chua s Circuit Weiping Guo and Diantong Liu Institute

More information

Echo State Networks with Filter Neurons and a Delay&Sum Readout

Echo State Networks with Filter Neurons and a Delay&Sum Readout Echo State Networks with Filter Neurons and a Delay&Sum Readout Georg Holzmann 2,1 (Corresponding Author) http://grh.mur.at grh@mur.at Helmut Hauser 1 helmut.hauser@igi.tugraz.at 1 Institute for Theoretical

More information

Revista Economica 65:6 (2013)

Revista Economica 65:6 (2013) INDICATIONS OF CHAOTIC BEHAVIOUR IN USD/EUR EXCHANGE RATE CIOBANU Dumitru 1, VASILESCU Maria 2 1 Faculty of Economics and Business Administration, University of Craiova, Craiova, Romania 2 Faculty of Economics

More information

Memory Capacity of Input-Driven Echo State NetworksattheEdgeofChaos

Memory Capacity of Input-Driven Echo State NetworksattheEdgeofChaos Memory Capacity of Input-Driven Echo State NetworksattheEdgeofChaos Peter Barančok and Igor Farkaš Faculty of Mathematics, Physics and Informatics Comenius University in Bratislava, Slovakia farkas@fmph.uniba.sk

More information

USING DYNAMIC NEURAL NETWORKS TO GENERATE CHAOS: AN INVERSE OPTIMAL CONTROL APPROACH

USING DYNAMIC NEURAL NETWORKS TO GENERATE CHAOS: AN INVERSE OPTIMAL CONTROL APPROACH International Journal of Bifurcation and Chaos, Vol. 11, No. 3 (2001) 857 863 c World Scientific Publishing Company USING DYNAMIC NEURAL NETWORKS TO GENERATE CHAOS: AN INVERSE OPTIMAL CONTROL APPROACH

More information

Short Term Memory and Pattern Matching with Simple Echo State Networks

Short Term Memory and Pattern Matching with Simple Echo State Networks Short Term Memory and Pattern Matching with Simple Echo State Networks Georg Fette (fette@in.tum.de), Julian Eggert (julian.eggert@honda-ri.de) Technische Universität München; Boltzmannstr. 3, 85748 Garching/München,

More information

Advanced Methods for Recurrent Neural Networks Design

Advanced Methods for Recurrent Neural Networks Design Universidad Autónoma de Madrid Escuela Politécnica Superior Departamento de Ingeniería Informática Advanced Methods for Recurrent Neural Networks Design Master s thesis presented to apply for the Master

More information

MODULAR ECHO STATE NEURAL NETWORKS IN TIME SERIES PREDICTION

MODULAR ECHO STATE NEURAL NETWORKS IN TIME SERIES PREDICTION Computing and Informatics, Vol. 30, 2011, 321 334 MODULAR ECHO STATE NEURAL NETWORKS IN TIME SERIES PREDICTION Štefan Babinec, Jiří Pospíchal Department of Mathematics Faculty of Chemical and Food Technology

More information

Nonchaotic random behaviour in the second order autonomous system

Nonchaotic random behaviour in the second order autonomous system Vol 16 No 8, August 2007 c 2007 Chin. Phys. Soc. 1009-1963/2007/1608)/2285-06 Chinese Physics and IOP Publishing Ltd Nonchaotic random behaviour in the second order autonomous system Xu Yun ) a), Zhang

More information

A Novel Three Dimension Autonomous Chaotic System with a Quadratic Exponential Nonlinear Term

A Novel Three Dimension Autonomous Chaotic System with a Quadratic Exponential Nonlinear Term ETASR - Engineering, Technology & Applied Science Research Vol., o.,, 9-5 9 A Novel Three Dimension Autonomous Chaotic System with a Quadratic Exponential Nonlinear Term Fei Yu College of Information Science

More information

Time-delay feedback control in a delayed dynamical chaos system and its applications

Time-delay feedback control in a delayed dynamical chaos system and its applications Time-delay feedback control in a delayed dynamical chaos system and its applications Ye Zhi-Yong( ), Yang Guang( ), and Deng Cun-Bing( ) School of Mathematics and Physics, Chongqing University of Technology,

More information

Complex Dynamics of a Memristor Based Chua s Canonical Circuit

Complex Dynamics of a Memristor Based Chua s Canonical Circuit Complex Dynamics of a Memristor Based Chua s Canonical Circuit CHRISTOS K. VOLOS Department of Mathematics and Engineering Sciences Univ. of Military Education - Hellenic Army Academy Athens, GR6673 GREECE

More information

AN ALGORITHM FOR ESTIMATING FIXED POINTS OF DYNAMICAL SYSTEMS FROM TIME SERIES

AN ALGORITHM FOR ESTIMATING FIXED POINTS OF DYNAMICAL SYSTEMS FROM TIME SERIES Letters International Journal of Bifurcation and Chaos, Vol 8, No 11 (1998) 2203 2213 c World Scientific Publishing Company AN ALGORITHM FOR ESTIMATING FIXED POINTS OF DYNAMICAL SYSTEMS FROM TIME SERIES

More information

Acceleration of Levenberg-Marquardt method training of chaotic systems fuzzy modeling

Acceleration of Levenberg-Marquardt method training of chaotic systems fuzzy modeling ISSN 746-7233, England, UK World Journal of Modelling and Simulation Vol. 3 (2007) No. 4, pp. 289-298 Acceleration of Levenberg-Marquardt method training of chaotic systems fuzzy modeling Yuhui Wang, Qingxian

More information

Experiments with a Hybrid-Complex Neural Networks for Long Term Prediction of Electrocardiograms

Experiments with a Hybrid-Complex Neural Networks for Long Term Prediction of Electrocardiograms IEEE. ransactions of the 6 International World Congress of Computational Intelligence, IJCNN 6 Experiments with a Hybrid-Complex Neural Networks for Long erm Prediction of Electrocardiograms Pilar Gómez-Gil,

More information

A quick introduction to reservoir computing

A quick introduction to reservoir computing A quick introduction to reservoir computing Herbert Jaeger Jacobs University Bremen 1 Recurrent neural networks Feedforward and recurrent ANNs A. feedforward B. recurrent Characteristics: Has at least

More information

y(n) Time Series Data

y(n) Time Series Data Recurrent SOM with Local Linear Models in Time Series Prediction Timo Koskela, Markus Varsta, Jukka Heikkonen, and Kimmo Kaski Helsinki University of Technology Laboratory of Computational Engineering

More information

Available online at AASRI Procedia 1 (2012 ) AASRI Conference on Computational Intelligence and Bioinformatics

Available online at  AASRI Procedia 1 (2012 ) AASRI Conference on Computational Intelligence and Bioinformatics Available online at www.sciencedirect.com AASRI Procedia ( ) 377 383 AASRI Procedia www.elsevier.com/locate/procedia AASRI Conference on Computational Intelligence and Bioinformatics Chaotic Time Series

More information

Neural Networks in Structured Prediction. November 17, 2015

Neural Networks in Structured Prediction. November 17, 2015 Neural Networks in Structured Prediction November 17, 2015 HWs and Paper Last homework is going to be posted soon Neural net NER tagging model This is a new structured model Paper - Thursday after Thanksgiving

More information

NONLINEAR AND ADAPTIVE (INTELLIGENT) SYSTEMS MODELING, DESIGN, & CONTROL A Building Block Approach

NONLINEAR AND ADAPTIVE (INTELLIGENT) SYSTEMS MODELING, DESIGN, & CONTROL A Building Block Approach NONLINEAR AND ADAPTIVE (INTELLIGENT) SYSTEMS MODELING, DESIGN, & CONTROL A Building Block Approach P.A. (Rama) Ramamoorthy Electrical & Computer Engineering and Comp. Science Dept., M.L. 30, University

More information

Reservoir Computing with Stochastic Bitstream Neurons

Reservoir Computing with Stochastic Bitstream Neurons Reservoir Computing with Stochastic Bitstream Neurons David Verstraeten, Benjamin Schrauwen and Dirk Stroobandt Department of Electronics and Information Systems (ELIS), Ugent {david.verstraeten, benjamin.schrauwen,

More information

A SYSTEMATIC PROCEDURE FOR SYNCHRONIZING HYPERCHAOS VIA OBSERVER DESIGN

A SYSTEMATIC PROCEDURE FOR SYNCHRONIZING HYPERCHAOS VIA OBSERVER DESIGN Journal of Circuits, Systems, and Computers, Vol. 11, No. 1 (22) 1 16 c World Scientific Publishing Company A SYSTEMATIC PROCEDURE FOR SYNCHRONIZING HYPERCHAOS VIA OBSERVER DESIGN GIUSEPPE GRASSI Dipartimento

More information

Discussion About Nonlinear Time Series Prediction Using Least Squares Support Vector Machine

Discussion About Nonlinear Time Series Prediction Using Least Squares Support Vector Machine Commun. Theor. Phys. (Beijing, China) 43 (2005) pp. 1056 1060 c International Academic Publishers Vol. 43, No. 6, June 15, 2005 Discussion About Nonlinear Time Series Prediction Using Least Squares Support

More information

A Novel Chaotic Neural Network Architecture

A Novel Chaotic Neural Network Architecture ESANN' proceedings - European Symposium on Artificial Neural Networks Bruges (Belgium), - April, D-Facto public., ISBN ---, pp. - A Novel Neural Network Architecture Nigel Crook and Tjeerd olde Scheper

More information

Nonlinear Dynamics of Chaotic Attractor of Chua Circuit and Its Application for Secure Communication

Nonlinear Dynamics of Chaotic Attractor of Chua Circuit and Its Application for Secure Communication Nonlinear Dynamics of Chaotic Attractor of Chua Circuit and Its Application for Secure Communication 1,M. Sanjaya WS, 1 D.S Maulana, M. Mamat & Z. Salleh 1Computation and Instrumentation Division, Department

More information

3. Controlling the time delay hyper chaotic Lorenz system via back stepping control

3. Controlling the time delay hyper chaotic Lorenz system via back stepping control ISSN 1746-7659, England, UK Journal of Information and Computing Science Vol 10, No 2, 2015, pp 148-153 Chaos control of hyper chaotic delay Lorenz system via back stepping method Hanping Chen 1 Xuerong

More information

Chapter 15. Dynamically Driven Recurrent Networks

Chapter 15. Dynamically Driven Recurrent Networks Chapter 15. Dynamically Driven Recurrent Networks Neural Networks and Learning Machines (Haykin) Lecture Notes on Self-learning Neural Algorithms Byoung-Tak Zhang School of Computer Science and Engineering

More information

Handout 2: Invariant Sets and Stability

Handout 2: Invariant Sets and Stability Engineering Tripos Part IIB Nonlinear Systems and Control Module 4F2 1 Invariant Sets Handout 2: Invariant Sets and Stability Consider again the autonomous dynamical system ẋ = f(x), x() = x (1) with state

More information

ADAPTIVE FILTER THEORY

ADAPTIVE FILTER THEORY ADAPTIVE FILTER THEORY Fourth Edition Simon Haykin Communications Research Laboratory McMaster University Hamilton, Ontario, Canada Front ice Hall PRENTICE HALL Upper Saddle River, New Jersey 07458 Preface

More information

Temporal Backpropagation for FIR Neural Networks

Temporal Backpropagation for FIR Neural Networks Temporal Backpropagation for FIR Neural Networks Eric A. Wan Stanford University Department of Electrical Engineering, Stanford, CA 94305-4055 Abstract The traditional feedforward neural network is a static

More information

Lyapunov Exponents Analysis and Phase Space Reconstruction to Chua s Circuit

Lyapunov Exponents Analysis and Phase Space Reconstruction to Chua s Circuit Contemporary Engineering Sciences, Vol. 11, 2018, no. 50, 2465-2473 HIKARI Ltd, www.m-hikari.com https://doi.org/10.12988/ces.2018.84159 Lyapunov Exponents Analysis and Phase Space Reconstruction to Chua

More information

A DELAY-DEPENDENT APPROACH TO DESIGN STATE ESTIMATOR FOR DISCRETE STOCHASTIC RECURRENT NEURAL NETWORK WITH INTERVAL TIME-VARYING DELAYS

A DELAY-DEPENDENT APPROACH TO DESIGN STATE ESTIMATOR FOR DISCRETE STOCHASTIC RECURRENT NEURAL NETWORK WITH INTERVAL TIME-VARYING DELAYS ICIC Express Letters ICIC International c 2009 ISSN 1881-80X Volume, Number (A), September 2009 pp. 5 70 A DELAY-DEPENDENT APPROACH TO DESIGN STATE ESTIMATOR FOR DISCRETE STOCHASTIC RECURRENT NEURAL NETWORK

More information

Research Article Application of Chaos and Neural Network in Power Load Forecasting

Research Article Application of Chaos and Neural Network in Power Load Forecasting Discrete Dynamics in Nature and Society Volume 2011, Article ID 597634, 12 pages doi:10.1155/2011/597634 Research Article Application of Chaos and Neural Network in Power Load Forecasting Li Li and Liu

More information

Research Article Chaotic Attractor Generation via a Simple Linear Time-Varying System

Research Article Chaotic Attractor Generation via a Simple Linear Time-Varying System Discrete Dnamics in Nature and Societ Volume, Article ID 836, 8 pages doi:.//836 Research Article Chaotic Attractor Generation via a Simple Linear Time-Varing Sstem Baiu Ou and Desheng Liu Department of

More information

arxiv: v1 [cs.lg] 2 Feb 2018

arxiv: v1 [cs.lg] 2 Feb 2018 Short-term Memory of Deep RNN Claudio Gallicchio arxiv:1802.00748v1 [cs.lg] 2 Feb 2018 Department of Computer Science, University of Pisa Largo Bruno Pontecorvo 3-56127 Pisa, Italy Abstract. The extension

More information

Maps and differential equations

Maps and differential equations Maps and differential equations Marc R. Roussel November 8, 2005 Maps are algebraic rules for computing the next state of dynamical systems in discrete time. Differential equations and maps have a number

More information

Adaptive Inverse Control

Adaptive Inverse Control TA1-8:30 Adaptive nverse Control Bernard Widrow Michel Bilello Stanford University Department of Electrical Engineering, Stanford, CA 94305-4055 Abstract A plant can track an input command signal if it

More information

Chaos, Complexity, and Inference (36-462)

Chaos, Complexity, and Inference (36-462) Chaos, Complexity, and Inference (36-462) Lecture 4 Cosma Shalizi 22 January 2009 Reconstruction Inferring the attractor from a time series; powerful in a weird way Using the reconstructed attractor to

More information

Intelligent Modular Neural Network for Dynamic System Parameter Estimation

Intelligent Modular Neural Network for Dynamic System Parameter Estimation Intelligent Modular Neural Network for Dynamic System Parameter Estimation Andrzej Materka Technical University of Lodz, Institute of Electronics Stefanowskiego 18, 9-537 Lodz, Poland Abstract: A technique

More information

Delay Coordinate Embedding

Delay Coordinate Embedding Chapter 7 Delay Coordinate Embedding Up to this point, we have known our state space explicitly. But what if we do not know it? How can we then study the dynamics is phase space? A typical case is when

More information

Chaos and R-L diode Circuit

Chaos and R-L diode Circuit Chaos and R-L diode Circuit Rabia Aslam Chaudary Roll no: 2012-10-0011 LUMS School of Science and Engineering Thursday, December 20, 2010 1 Abstract In this experiment, we will use an R-L diode circuit

More information

Analysis of Interest Rate Curves Clustering Using Self-Organising Maps

Analysis of Interest Rate Curves Clustering Using Self-Organising Maps Analysis of Interest Rate Curves Clustering Using Self-Organising Maps M. Kanevski (1), V. Timonin (1), A. Pozdnoukhov(1), M. Maignan (1,2) (1) Institute of Geomatics and Analysis of Risk (IGAR), University

More information

Controlling a Novel Chaotic Attractor using Linear Feedback

Controlling a Novel Chaotic Attractor using Linear Feedback ISSN 746-7659, England, UK Journal of Information and Computing Science Vol 5, No,, pp 7-4 Controlling a Novel Chaotic Attractor using Linear Feedback Lin Pan,, Daoyun Xu 3, and Wuneng Zhou College of

More information

A First Attempt of Reservoir Pruning for Classification Problems

A First Attempt of Reservoir Pruning for Classification Problems A First Attempt of Reservoir Pruning for Classification Problems Xavier Dutoit, Hendrik Van Brussel, Marnix Nuttin Katholieke Universiteit Leuven - P.M.A. Celestijnenlaan 300b, 3000 Leuven - Belgium Abstract.

More information

International Journal of PharmTech Research CODEN (USA): IJPRIF, ISSN: Vol.8, No.3, pp , 2015

International Journal of PharmTech Research CODEN (USA): IJPRIF, ISSN: Vol.8, No.3, pp , 2015 International Journal of PharmTech Research CODEN (USA): IJPRIF, ISSN: 0974-4304 Vol.8, No.3, pp 377-382, 2015 Adaptive Control of a Chemical Chaotic Reactor Sundarapandian Vaidyanathan* R & D Centre,Vel

More information

Modelling of Pehlivan-Uyaroglu_2010 Chaotic System via Feed Forward Neural Network and Recurrent Neural Networks

Modelling of Pehlivan-Uyaroglu_2010 Chaotic System via Feed Forward Neural Network and Recurrent Neural Networks Modelling of Pehlivan-Uyaroglu_2010 Chaotic System via Feed Forward Neural Network and Recurrent Neural Networks 1 Murat ALÇIN, 2 İhsan PEHLİVAN and 3 İsmail KOYUNCU 1 Department of Electric -Energy, Porsuk

More information

Bidirectional Partial Generalized Synchronization in Chaotic and Hyperchaotic Systems via a New Scheme

Bidirectional Partial Generalized Synchronization in Chaotic and Hyperchaotic Systems via a New Scheme Commun. Theor. Phys. (Beijing, China) 45 (2006) pp. 1049 1056 c International Academic Publishers Vol. 45, No. 6, June 15, 2006 Bidirectional Partial Generalized Synchronization in Chaotic and Hyperchaotic

More information

Genesis and Catastrophe of the Chaotic Double-Bell Attractor

Genesis and Catastrophe of the Chaotic Double-Bell Attractor Proceedings of the 7th WSEAS International Conference on Systems Theory and Scientific Computation, Athens, Greece, August 24-26, 2007 39 Genesis and Catastrophe of the Chaotic Double-Bell Attractor I.N.

More information

Electric Load Forecasting Using Wavelet Transform and Extreme Learning Machine

Electric Load Forecasting Using Wavelet Transform and Extreme Learning Machine Electric Load Forecasting Using Wavelet Transform and Extreme Learning Machine Song Li 1, Peng Wang 1 and Lalit Goel 1 1 School of Electrical and Electronic Engineering Nanyang Technological University

More information

Introduction to Neural Networks: Structure and Training

Introduction to Neural Networks: Structure and Training Introduction to Neural Networks: Structure and Training Professor Q.J. Zhang Department of Electronics Carleton University, Ottawa, Canada www.doe.carleton.ca/~qjz, qjz@doe.carleton.ca A Quick Illustration

More information

Parameter Matching Using Adaptive Synchronization of Two Chua s Oscillators: MATLAB and SPICE Simulations

Parameter Matching Using Adaptive Synchronization of Two Chua s Oscillators: MATLAB and SPICE Simulations Parameter Matching Using Adaptive Synchronization of Two Chua s Oscillators: MATLAB and SPICE Simulations Valentin Siderskiy and Vikram Kapila NYU Polytechnic School of Engineering, 6 MetroTech Center,

More information

898 IEEE TRANSACTIONS ON ROBOTICS AND AUTOMATION, VOL. 17, NO. 6, DECEMBER X/01$ IEEE

898 IEEE TRANSACTIONS ON ROBOTICS AND AUTOMATION, VOL. 17, NO. 6, DECEMBER X/01$ IEEE 898 IEEE TRANSACTIONS ON ROBOTICS AND AUTOMATION, VOL. 17, NO. 6, DECEMBER 2001 Short Papers The Chaotic Mobile Robot Yoshihiko Nakamura and Akinori Sekiguchi Abstract In this paper, we develop a method

More information

Approximation Properties of Positive Boolean Functions

Approximation Properties of Positive Boolean Functions Approximation Properties of Positive Boolean Functions Marco Muselli Istituto di Elettronica e di Ingegneria dell Informazione e delle Telecomunicazioni, Consiglio Nazionale delle Ricerche, via De Marini,

More information

Short Term Load Forecasting by Using ESN Neural Network Hamedan Province Case Study

Short Term Load Forecasting by Using ESN Neural Network Hamedan Province Case Study 119 International Journal of Smart Electrical Engineering, Vol.5, No.2,Spring 216 ISSN: 2251-9246 pp. 119:123 Short Term Load Forecasting by Using ESN Neural Network Hamedan Province Case Study Milad Sasani

More information

Dual Estimation and the Unscented Transformation

Dual Estimation and the Unscented Transformation Dual Estimation and the Unscented Transformation Eric A. Wan ericwan@ece.ogi.edu Rudolph van der Merwe rudmerwe@ece.ogi.edu Alex T. Nelson atnelson@ece.ogi.edu Oregon Graduate Institute of Science & Technology

More information

Identification of two-mass system parameters using neural networks

Identification of two-mass system parameters using neural networks 3ème conférence Internationale des énergies renouvelables CIER-2015 Proceedings of Engineering and Technology - PET Identification of two-mass system parameters using neural networks GHOZZI Dorsaf 1,NOURI

More information

Artificial Neural Networks Examination, June 2004

Artificial Neural Networks Examination, June 2004 Artificial Neural Networks Examination, June 2004 Instructions There are SIXTY questions (worth up to 60 marks). The exam mark (maximum 60) will be added to the mark obtained in the laborations (maximum

More information

MULTISTABILITY IN A BUTTERFLY FLOW

MULTISTABILITY IN A BUTTERFLY FLOW International Journal of Bifurcation and Chaos, Vol. 23, No. 12 (2013) 1350199 (10 pages) c World Scientific Publishing Company DOI: 10.1142/S021812741350199X MULTISTABILITY IN A BUTTERFLY FLOW CHUNBIAO

More information

Chua s Oscillator Using CCTA

Chua s Oscillator Using CCTA Chua s Oscillator Using CCTA Chandan Kumar Choubey 1, Arun Pandey 2, Akanksha Sahani 3, Pooja Kadam 4, Nishikant Surwade 5 1,2,3,4,5 Department of Electronics and Telecommunication, Dr. D. Y. Patil School

More information

Modelling Time Series with Neural Networks. Volker Tresp Summer 2017

Modelling Time Series with Neural Networks. Volker Tresp Summer 2017 Modelling Time Series with Neural Networks Volker Tresp Summer 2017 1 Modelling of Time Series The next figure shows a time series (DAX) Other interesting time-series: energy prize, energy consumption,

More information