Elektrotehniški vestnik 70(1-2): 46-51, 2003
Electrotechnical Review, Ljubljana, Slovenija

Latched recurrent neural network

Branko Šter
University of Ljubljana, Faculty of Computer and Information Science, Laboratory of Adaptive Systems and Parallel Processing, Tržaška 25, Ljubljana, Slovenia
E-mail: branko.ster@fri.uni-lj.si

Abstract. An extended architecture of recurrent neural networks is proposed. It is based on ignoring unimportant input information using a register of latches as the input layer of the network. The latch is implemented with a 2/1 multiplexer whose output is differentiable with respect to all of its inputs, thus enabling the derivatives to be propagated through the network. The relevance of input vectors is learned together with the weights of the network using a gradient-based algorithm.

Key words: recurrent neural networks, finite state automata, temporal processing, long-term dependencies, latch

Rekurentna nevronska mreža z zapahi

Povzetek. The paper presents an extended architecture of recurrent neural networks. It is based on withholding unimportant input information by means of a register of latches in the input layer of the network. The latch is implemented with a 2/1 multiplexer whose output is differentiable with respect to all of its inputs, which allows the derivatives to be propagated through the network in time. Gradient-based learning is applied, with which the network learns the relevance of the input vectors in addition to its weights.

Ključne besede: recurrent neural networks, finite automata, sequence processing, latch

Received May 2002; accepted 26 November 2002.

1 Introduction

Neural networks with feedback connections are called recurrent. A recurrent neural network (RNN) can be trained to model dynamical systems. We are especially interested in discrete-time RNNs, which are applicable to sequence-processing problems such as sequence recognition and time-series prediction. RNNs have been shown to be capable of behaving like finite state automata, thus providing a model of finite state computation by means of continuous dynamical systems [1]. Automata can also be successfully extracted from RNNs [2]. Recurrent neural networks are able to store and process context or state information, which is required in temporal processing tasks.

It is known that recurrent neural networks are harder to train than feedforward networks. A particularly difficult problem is learning long-term dependencies, since the network must be able to store relevant information over long periods of time. It was shown in [3] that in learning by gradient descent, the propagation of derivatives through the network decays in time. Alternative learning paradigms which do not face this problem are non-gradient methods and evolutionary algorithms.

In the following, some of the existing solutions to the problem of learning long-term dependencies are presented. In [4], a hierarchy of neural networks that recursively decomposes sequences was proposed. When a lower-level network cannot satisfactorily predict a subsequent output, a higher-level network becomes responsible. The system detects causal dependencies in the sequence. The approach is limited to sequence learning, since no external inputs are assumed. The LSTM (Long Short-Term Memory) algorithm [5] was designed to overcome these problems by enforcing constant error flow. In [6], wavelet-based smoothing was integrated into a feedforward neural network and treated as a uniform trainable model for continuous time-series prediction.
By filtering out high-frequency noise so as to decrease the prediction error, the network was able to learn smoothed time-series more efficiently.

Our idea is to exclude irrelevant input vectors from RNN processing and thereby facilitate the learning of the RNN. The relevance of the input vectors is established through the learning process. We introduce a modification of the 2-layer recurrent neural network in which input vectors to the network are selectively latched in order to suppress irrelevant ones, and the error signals at the RNN's outputs are weighted correspondingly. When an input vector is considered unimportant, the output error of the network is considered unimportant, too. For example, in a symbol stream where only the symbols a and b matter, the input register would ideally update only when a or b arrives and hold its contents otherwise.
2 Latched recurrent neural network

First the architecture of the network is described, and then the steepest-descent learning rule is derived.

2.1 Architecture

The latched recurrent neural network (LRNN) has two feedforward processing layers and global feedback connections, as shown in Fig. 1. In addition, a register of latches in the input layer suppresses presumably irrelevant input vectors by latching the previous data and context information.

Figure 1. Latched recurrent neural network. The dotted lines show optional feedback (necessary in multi-step prediction). We used the teacher-forcing technique, which applies the correct previous outputs.

The latch (Fig. 2a) is controlled through the enable input, propagating the input data while active (high) and retaining the previous data while low. This is in fact a D-type flip-flop with clock enable. A flip-flop is an edge-triggered (synchronous) device, and the RNN also operates in a synchronous manner, i.e. we can imagine a clock signal driving the network.

Figure 2. (a) D flip-flop with clock enable and (b) differentiable neural-based 2/1 multiplexer.

To be able to propagate derivatives through time, we made the multiplexer's output differentiable with respect to its inputs. We applied a simple neural net with two sigmoid neurons with fixed weights and a summator, see Fig. 2b. The symbol $L$ stands for a large value. The output of the multiplexer is governed by

$$z = \mathrm{mux}(d_0, d_1, enable) = \sum_{i=0}^{1} \sigma(w_i d_i + w_{ei}\, enable - \Theta_i) = \xi_0 + \xi_1,$$

with $\sigma(x) = 1/(1 + e^{-x})$ being the sigmoid activation function. When $enable = 1$, $z = \sigma(L d_0 - L/2) + \sigma(L d_1 - 3L/2)$ and $d_0$ is propagated, due to the large threshold for $d_1$. Similarly, when $enable = 0$, $z = \sigma(L d_0 - 3L/2) + \sigma(L d_1 - L/2)$ and $d_1$ is propagated. When $0 < enable < 1$, a mixture of both $d_i$ is propagated.

There are $U_o$ output units and $U - U_o$ context units in the output layer, $H$ units in the hidden layer, $U + I$ latches in the input layer, and $I$ external inputs (Fig. 1). Only the context units provide feedback. As previous outputs the desired outputs are used, i.e. the teacher-forcing technique is applied. The outputs of the output units and of the context units are

$$y_k(t) = \sigma\Big(\sum_{l=1}^{H+1} v_{kl}\, u_l(t)\Big), \quad k = 1, \ldots, U. \tag{1}$$

The weights of the output layer are denoted by $v_{kl}$. Note that $v_{k,H+1}$ is the bias of the $k$-th neuron in the output layer; therefore $u_{H+1}(t)$ is a constant virtual input of unity. The outputs of the hidden units are

$$u_l(t) = \sigma\Big(\sum_{m=1}^{U+I+1} w_{lm}\, z_m(t)\Big), \quad l = 1, \ldots, H. \tag{2}$$

The $w_{l,U+I+1}$ is the bias of the $l$-th hidden neuron. The outputs of the input layer at time $t$ are

$$z_m(t) = \mathrm{mux}(\tilde{y}_m(t-1),\, z_m(t-1),\, enable) \tag{3}$$

for $m = 1, \ldots, U_o$ (previous desired outputs $\tilde{y}_m$),

$$z_m(t) = \mathrm{mux}(y_m(t-1),\, z_m(t-1),\, enable) \tag{4}$$

for $m = U_o+1, \ldots, U$ (context units), and

$$z_m(t) = \mathrm{mux}(x_m(t),\, z_m(t-1),\, enable) \tag{5}$$

for $m = U+1, \ldots, U+I$ (external inputs $x_m$). The inputs to the input (register) layer are thus: the external inputs $x_m(t)$, the delayed feedback outputs (previous desired outputs $\tilde{y}_m(t-1)$ and context values $y_m(t-1)$), and the local feedback of the latched data $z_m(t-1)$.

The derivatives of the multiplexer's output $z$ with respect to its inputs will be required further on. They are

$$\frac{\partial z}{\partial\, enable} = \xi_0(1-\xi_0)\, w_{e0} + \xi_1(1-\xi_1)\, w_{e1}, \qquad \frac{\partial z}{\partial d_i} = \xi_i(1-\xi_i)\, w_i, \quad i = 0, 1.$$
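As a concrete illustration, the following minimal Python sketch implements this differentiable multiplexer with the fixed weights read off from the expressions above ($w_0 = w_1 = L$, $w_{e0} = L$, $w_{e1} = -L$, $\Theta_0 = 3L/2$, $\Theta_1 = L/2$). The particular value L = 20 is an assumption; the text only requires L to be large.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def mux(d0, d1, enable, L=20.0):
    """Differentiable 2/1 multiplexer (Fig. 2b): z = xi_0 + xi_1.
    enable ~ 1 propagates d0 (new data); enable ~ 0 propagates d1 (held data).
    L = 20 is an assumed 'large value'."""
    xi0 = sigmoid(L * d0 + L * enable - 1.5 * L)  # sigma(L d0 - L/2) when enable = 1
    xi1 = sigmoid(L * d1 - L * enable - 0.5 * L)  # sigma(L d1 - L/2) when enable = 0
    return xi0 + xi1

print(mux(1.0, 0.0, 1.0))  # ~1: d0 is selected
print(mux(1.0, 0.0, 0.0))  # ~0: d1 (held value) is selected
```

For binary inputs the output closely approximates the hard latch, while for intermediate enable values the two branches mix; this soft mixing is exactly what makes gradient propagation through the latch possible.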
The derivatives of the $z_m$ of the context units (Eq. 4) with respect to the output weights $v_{ij}$ are

$$\frac{\partial z_m(t)}{\partial v_{ij}} = \frac{\partial\,\mathrm{mux}}{\partial y_m^-}\frac{\partial y_m^-}{\partial v_{ij}} + \frac{\partial\,\mathrm{mux}}{\partial z_m^-}\frac{\partial z_m^-}{\partial v_{ij}} = \xi_0^m(1-\xi_0^m)\, w_0 \frac{\partial y_m^-}{\partial v_{ij}} + \xi_1^m(1-\xi_1^m)\, w_1 \frac{\partial z_m^-}{\partial v_{ij}},$$

where $y_m(t-1)$ and $z_m(t-1)$ are abbreviated as $y_m^-$ and $z_m^-$, and similarly with respect to the hidden weights. The other $z_m$ (Eq. 3 and Eq. 5) are treated analogously.

2.2 Derivation of the learning rule

In this section we derive the gradient learning rule for the LRNN. It is customary to unfold the structure of a recurrent neural network in time, so that the learning rules can be derived more easily. The unfolded LRNN is shown in Fig. 3.

Figure 3. Unfolded LRNN (2 time-steps shown). The feedback connections of the latches are also unfolded in time. Teacher-forcing (TF) correction means only that the correct outputs are applied in the next step, rather than the predicted ones.

As a measure of performance, the mean-squared error (MSE) at the network's outputs is usually considered. We weighted the MSE(t) by the enable signal of the latch register at time $t$, which will be denoted in the derivations as $f(t)$, i.e. $enable(t) \equiv f(t)$.

Let $R(\mathbf{x}) \in \mathbb{R}$ be a function called the relevance of the input vector $\mathbf{x} = (x_1, x_2, \ldots, x_I)^T$. It should be large for relevant input vectors $\mathbf{x}$ and small otherwise, such as

$$R(\mathbf{x}) = \sum_{i=1}^{I} \beta_i x_i,$$

where $\beta_i$ will be called the relevance weight, denoting the relevance of the $i$-th input (or input line). If the step function is applied to $R(\mathbf{x})$, then input vectors with $R$ larger than some threshold $\Theta$ are enabled, $f(t) = 1$, and the others are ignored, $f(t) = 0$. Also, the corresponding outputs either contribute or do not contribute to the error. Instead of the step function we may utilize a similar but differentiable step-like function, such as the sigmoid

$$f(t) = \sigma\big(R(\mathbf{x}(t)) - \Theta\big) = \frac{1}{1 + e^{-a(R(\mathbf{x}(t)) - \Theta)}},$$

in order to allow gradient-descent methods to be applied. The effect is that the enable signal becomes continuous or soft, i.e. an input vector is enabled with a degree between 0 and 1, $0 < f(t) < 1$, and the outputs contribute to the error with the same degree. Relevant input lines $x_i$ should have large relevance weights $\beta_i$. When the input symbols are 1-of-n encoded, this is equal to the relevance of a symbol or of the corresponding input vector. There are two cases:

1. The relevances $\beta_i$ of the input signals $x_i$ are known or guessed (prior knowledge or a hint) and held constant, and the threshold $\Theta$ is optimized. By adjusting $\Theta$, the enabling of input vectors and the contribution of the corresponding outputs to the error is optimized. When $\Theta$ is significantly below zero and the inputs are binary, $f(t)$ is always close to 1 and no latching occurs. When $\Theta$ increases, the ratio $\sigma(R_1 - \Theta)/\sigma(R_2 - \Theta)$, where $R_1 > R_2$, also increases. This is an indication that the $\beta_i$ are correct. By ignoring irrelevant inputs, the training of the RNN is facilitated.

2. We have no clue as to what $R(\mathbf{x})$ might be. Therefore, the $\beta_i$ are learned and $\Theta$ is held constant. The structure may be viewed as an additional neuron with the relevance weights $\beta_i$, see Fig. 1.

The error at the output of the RNN is weighted by the continuous enable signal $f(t)$ and may be written as

$$Err(t) = \|\mathbf{y}(t) - \tilde{\mathbf{y}}(t)\|^2\, \frac{f(t)}{E f(t)}, \tag{6}$$

where $E[\cdot]$ denotes the expectation operator and $\tilde{\mathbf{y}}(t)$ denotes the desired output at time $t$. The weights are optimized by the steepest-descent learning algorithm:

$$v_{kl} \leftarrow v_{kl} - \eta \frac{\partial Err}{\partial v_{kl}}, \qquad w_{lm} \leftarrow w_{lm} - \eta \frac{\partial Err}{\partial w_{lm}}, \tag{7}$$

with $\eta$ being the learning step. The same holds for the threshold $\Theta$ and for the relevance weights $\beta_i$ in case they are optimized.
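To make the gating concrete, here is a minimal Python sketch of the soft enable signal and of the weighted error of Eq. (6). The slope a = 10 is an assumption, as is the estimation of $E f$ by a running mean over the sequence; the paper does not specify how the expectation is computed.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def soft_enable(x, beta, theta, a=10.0):
    """f(t) = sigma(a (R(x) - Theta)), with relevance R(x) = sum_i beta_i x_i.
    a = 10 is an assumed slope of the step-like sigmoid."""
    return sigmoid(a * (np.dot(beta, x) - theta))

def weighted_error(y, y_desired, f_t, f_mean):
    """Eq. (6): Err(t) = ||y(t) - y~(t)||^2 f(t) / E[f],
    with E[f] approximated by a running mean f_mean (an assumption)."""
    return np.sum((y - y_desired) ** 2) * f_t / f_mean
```

An input vector with low relevance yields $f(t) \approx 0$, so it is neither propagated through the latch register nor allowed to contribute to the error.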
The derivative of $Err(t)$ with respect to the output weights is

$$\frac{\partial Err(t)}{\partial v_{ij}} = \frac{\partial}{\partial v_{ij}} \Big[ \|\mathbf{y}(t) - \tilde{\mathbf{y}}(t)\|^2 \frac{f(t)}{E f(t)} \Big] = \sum_{k=1}^{U_o} 2\,(y_k(t) - \tilde{y}_k(t))\, \frac{\partial y_k(t)}{\partial v_{ij}}\, \frac{f(t)}{E f}, \tag{8}$$

and similarly for the hidden weights. The derivatives of $y_k$, $k = 1, \ldots, U$, with respect to all the weights are required. For the output weights $v$:

$$\frac{\partial y_k}{\partial v_{ij}} = y_k' \frac{\partial}{\partial v_{ij}} \sum_l v_{kl} u_l = y_k' \sum_l \Big( \frac{\partial v_{kl}}{\partial v_{ij}}\, u_l + v_{kl}\, \frac{\partial u_l}{\partial v_{ij}} \Big)$$
$$= y_k' \Big\{ \delta_{ki}\, u_j + \sum_l v_{kl}\, u_l' \sum_{m=U_o+1}^{U} w_{lm} \Big[ \xi_0^m(1-\xi_0^m)\, w_0 \frac{\partial y_m^-}{\partial v_{ij}} + \xi_1^m(1-\xi_1^m)\, w_1 \frac{\partial z_m^-}{\partial v_{ij}} \Big] \Big\},$$

where all quantities $y$, $u$, and $z$ are at time $t$, except $y^-$ denoting $y(t-1)$ and $z^-$ denoting $z(t-1)$. The $y_k'$ denotes the derivative of the sigmoid activation function and is equal to $y_k(1-y_k)$; similarly $u_l'$. For the hidden weights:

$$\frac{\partial y_k}{\partial w_{ij}} = y_k(1-y_k) \sum_l v_{kl} \frac{\partial u_l}{\partial w_{ij}}.$$

Since

$$\frac{\partial u_l}{\partial w_{ij}} = u_l' \frac{\partial}{\partial w_{ij}} \sum_{m=1}^{U+I+1} w_{lm} z_m = u_l' \Big( \delta_{il}\, z_j + \sum_{m=U_o+1}^{U} w_{lm} \frac{\partial z_m}{\partial w_{ij}} \Big),$$

we have

$$\frac{\partial y_k}{\partial w_{ij}} = y_k(1-y_k) \Big\{ v_{ki}\, u_i'\, z_j + \sum_l v_{kl}\, u_l' \sum_{m=U_o+1}^{U} w_{lm} \Big[ \xi_0^m(1-\xi_0^m)\, w_0 \frac{\partial y_m^-}{\partial w_{ij}} + \xi_1^m(1-\xi_1^m)\, w_1 \frac{\partial z_m^-}{\partial w_{ij}} \Big] \Big\}.$$

The derivative with respect to the threshold $\Theta$ is

$$\frac{\partial Err(t)}{\partial \Theta} = \frac{\partial \|\mathbf{y}(t) - \tilde{\mathbf{y}}(t)\|^2}{\partial \Theta}\, \frac{f(t)}{E f} + \|\mathbf{y}(t) - \tilde{\mathbf{y}}(t)\|^2\, \frac{\partial}{\partial \Theta}\Big(\frac{f(t)}{E f}\Big). \tag{9}$$

The last term may be written as

$$\frac{\partial}{\partial \Theta}\Big(\frac{f(t)}{E f}\Big) = \frac{\frac{\partial f(t)}{\partial \Theta}\, E f - f(t)\, \frac{\partial E f}{\partial \Theta}}{(E f)^2} = \frac{-a f(t)(1-f(t))\, E f + f(t)\, E[a f(1-f)]}{(E f)^2} = \frac{a f(t)}{E f} \Big( \frac{E[f(1-f)]}{E f} - (1 - f(t)) \Big),$$

using $\partial f/\partial \Theta = -a f(1-f)$. Therefore

$$\frac{\partial Err(t)}{\partial \Theta} = \sum_{k=1}^{U_o} 2\,(y_k - \tilde{y}_k)\, \frac{\partial y_k}{\partial \Theta}\, \frac{f(t)}{E f} + a\, Err(t) \Big( \frac{E[f(1-f)]}{E f} - (1 - f(t)) \Big).$$

From Eq. (1) we also require

$$\frac{\partial y_k}{\partial \Theta} = y_k(1-y_k) \sum_l v_{kl} \frac{\partial u_l}{\partial \Theta}, \qquad \frac{\partial u_l}{\partial \Theta} = u_l(1-u_l) \sum_{m=1}^{U+I} w_{lm} \frac{\partial z_m}{\partial \Theta}, \tag{10}$$

and from Eq. (10) the $\partial z_m/\partial \Theta$ is required, where the $z_m$ are the outputs of the multiplexers:

$$\frac{\partial z_m}{\partial \Theta} = \xi_0^m(1-\xi_0^m)\Big( w_0 \frac{\partial y_m^-}{\partial \Theta} + w_{e0} \frac{\partial f}{\partial \Theta} \Big) + \xi_1^m(1-\xi_1^m)\Big( w_1 \frac{\partial z_m^-}{\partial \Theta} + w_{e1} \frac{\partial f}{\partial \Theta} \Big)$$
$$= \xi_0^m(1-\xi_0^m)\Big( w_0 \frac{\partial y_m^-}{\partial \Theta} - w_{e0}\, a f(1-f) \Big) + \xi_1^m(1-\xi_1^m)\Big( w_1 \frac{\partial z_m^-}{\partial \Theta} - w_{e1}\, a f(1-f) \Big). \tag{11}$$

The derivatives with respect to the relevance weights $\beta_i$ are

$$\frac{\partial Err(t)}{\partial \beta_i} = \sum_{k=1}^{U_o} 2\,(y_k - \tilde{y}_k)\, \frac{\partial y_k}{\partial \beta_i}\, \frac{f(t)}{E f} + a \sum_{k=1}^{U_o} (y_k - \tilde{y}_k)^2\, \frac{f(t)}{E f} \Big( (1-f(t))\, x_i(t) - \frac{E[f(1-f) x_i]}{E f} \Big).$$

The $\partial z_m/\partial \beta_i$ is calculated as in Eq. (11), but with $\partial f/\partial \beta_i = a f(1-f)\, x_i$:

$$\frac{\partial z_m}{\partial \beta_i} = \xi_0^m(1-\xi_0^m)\Big( w_0 \frac{\partial y_m^-}{\partial \beta_i} + w_{e0}\, a f(1-f)\, x_i \Big) + \xi_1^m(1-\xi_1^m)\Big( w_1 \frac{\partial z_m^-}{\partial \beta_i} + w_{e1}\, a f(1-f)\, x_i \Big).$$
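All of these recursions share one pattern: the derivative of each latched signal at time $t$ mixes, through the multiplexer gains $\xi_i(1-\xi_i) w_i$, the derivative of its fresh input and its own derivative at time $t-1$, so a trace per latch and per parameter is carried forward in the spirit of real-time recurrent learning. A minimal Python sketch of one such trace update follows; the mux weights $w_0 = w_1 = L$ are taken from Section 2.1, L = 20 is assumed, and the function names are ours:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def latch_trace_step(dy_prev, dz_prev, d0, d1, enable, L=20.0):
    """One step of  dz(t)/dtheta = xi0(1-xi0) w0 dy(t-1)/dtheta
                                 + xi1(1-xi1) w1 dz(t-1)/dtheta,
    valid for any parameter theta on which the enable signal does not depend."""
    xi0 = sigmoid(L * d0 + L * enable - 1.5 * L)
    xi1 = sigmoid(L * d1 - L * enable - 0.5 * L)
    return xi0 * (1 - xi0) * L * dy_prev + xi1 * (1 - xi1) * L * dz_prev
```

When $enable \approx 0$, the fresh-input gain vanishes and only the held-value branch propagates the trace. For $\Theta$ and $\beta_i$, the additional terms with $w_{ei}\, \partial f/\partial\theta$ from Eq. (11) would be added.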
To summarize, the basic procedure is the calculation of the derivatives of the network's outputs $y_k(t)$ with respect to all adjustable weights $v_{kl}$, $w_{lm}$, $\Theta$, and $\beta_i$, using also the derivatives from the past. At each time step the weighted error $Err(t)$ is calculated and the weights are updated accordingly.

3 Experiments

The proposed method was tested on two tasks with long time lags.

3.1 Task 1

In the first problem [7], there are $n$ input symbols $a$, $b$, $c$, etc. with the 1-of-n encoding, i.e. only one of the $n$ input lines is 1 and all the others are 0 at any moment. The task is to output a 1 immediately following the first occurrence of $b$ after $a$ has already appeared, no matter how long ago. All other signals have no influence and serve merely as distractors. After the occurrence of $b$, the $a$ is used up, and the next time the output should be 1 is when a new $a$ has been followed by its first matching $b$. This corresponds to a small finite automaton. The solution is simple once the relevant signals are found; the hard part of the task is to find the distractors. Caution is necessary here: namely, if the task were off-line, and therefore with a limited number of examples, other meaningful automata might be induced. (A data-generation sketch covering both tasks is given below, after the description of Task 2.)

The RNN had 2 output units (1 output and 1 context unit), 8 hidden units, and $n$ external inputs. The RNN without latches fails on the full problem, while on the reduced problem with $n = 4$ it succeeds.

In this task we chose only to test the predefined criterion $R(\mathbf{x}) = x_a + x_b$, i.e. $\beta_a = \beta_b = 1$ and the other $\beta_i$ are zero. This is of course the correct criterion, and it remains only to observe the course of $\Theta$. Fig. 4a shows an increase of $\Theta$, which is understandable, since the ratio between $f(t)$ at $R = 1$ and $f(t)$ at $R = 0$ also increases. The weighted error is small, contrary to the ordinary error, which is large because of the errors made when unimportant inputs appear.

It is interesting to observe the behavior of the outputs $y$ in Fig. 4c. After an occurrence of symbol $b$, e.g. at 995 (the occurrences of $a$ and $b$ are also indicated in Fig. 4b), the output is 1, as desired, but $y$ continues to output a 1 until the next $a$ occurs some steps later. Since the error is weighted by $f(t)$, the errors at irrelevant symbols are not considered. The behavior of the context unit is also interesting: a single $a$ activates this unit only partially (to about 0.45), while eventual subsequent occurrences of $a$ activate it completely.

Figure 4. LRNN on Task 1. (a) MSE, weighted MSE, and $\Theta$; (b) enable $f(t)$, mean enable $E f$, and MSE during the final steps; (c) desired output $\tilde{y}$, actual output $y$, and the context value.

3.2 Task 2

In the second task [5], there are $p$ input symbols $a_1 = x$, $a_2 = y$, $a_3, \ldots, a_p$. Each symbol is 1-of-n encoded with $p$ binary input signals. The training sequence consists of random occurrences of only two similar subsequences: $(x, a_3, a_4, \ldots, a_p, x)$ and $(y, a_3, a_4, \ldots, a_p, y)$. After a subsequence is finished, the next one is selected with probability 0.5 for each, and so on. The sequence is fed to the network, which always has to predict the next symbol. The prediction of the first symbol of any subsequence is meaningless, since $x$ and $y$ occur with the same probability. The hard part is to predict the last symbol of a subsequence, since the first symbol has to be remembered over a long time lag (depending on $p$).
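For concreteness, the two benchmark streams might be generated as follows. This is a minimal sketch; the symbol indices, the placement of the target 1 at the step of $b$ itself, and the default sizes n = p = 10 are our assumptions based on the descriptions above:

```python
import numpy as np

def task1_sequence(T, n=10, seed=0):
    """Task 1 [7]: 1-of-n symbol stream; the target is 1 immediately after
    the first b that follows an a (the a is then 'used up')."""
    rng = np.random.default_rng(seed)
    A, B = 0, 1                       # assumed input-line indices of a and b
    X = np.zeros((T, n))
    target = np.zeros(T)
    armed = False                     # an a has appeared and awaits its b
    for t in range(T):
        s = rng.integers(n)
        X[t, s] = 1.0
        if s == A:
            armed = True
        elif s == B and armed:
            target[t] = 1.0
            armed = False
    return X, target

def task2_symbols(num_subseqs, p=10, seed=0):
    """Task 2 [5]: random concatenation of (x, a3, ..., ap, x) and
    (y, a3, ..., ap, y); the network predicts the next symbol at every step."""
    rng = np.random.default_rng(seed)
    seq = []
    for _ in range(num_subseqs):
        first = rng.integers(2)       # 0 -> x, 1 -> y, probability 0.5 each
        seq += [first] + list(range(2, p)) + [first]
    return seq
```

In Task 1 everything except $a$ and $b$ is a distractor, while in Task 2 the only informative symbol of each subsequence is its first one.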
We required the network to predict the last symbol correctly. The RNN had $p + 2$ output units ($p$ outputs and 2 context units), a layer of hidden units, and $p$ external inputs. The RNN without latches failed to predict the last symbol of the subsequence correctly; the error was large, about 0.4, and there was no improvement even after many steps.
Figure 5. LRNN on Task 2. (a) weighted MSE, $\Theta$, and the relevance weights $\beta_1$ ($\beta_x$), $\beta_2$ ($\beta_y$) and the others; (b) desired outputs $\tilde{y}_x$, $\tilde{y}_y$, the actual outputs, and the context values during the final steps.

When explicitly demanded, the RNN without latches did master the easy part of the task, i.e. predicting $a_i$, $i = 3, \ldots, p$.

Using the LRNN, this time we decided to learn the relevance weights $\beta_i$, which signify the relevance of the $i$-th input signal. $\Theta$ was set to 0.9 and all the $\beta_i$ started at 0.5. It is clear from Fig. 5a that only $\beta_1$ ($\beta_x$) and $\beta_2$ ($\beta_y$) rise above $\Theta$, while the others slightly decrease. This enabled the LRNN to learn the task quickly.

From Fig. 5b it is evident that the two context units remember or encode symbol $x$ with a slightly larger activation than symbol $y$. Since they encode the symbols in the same way, one of them is clearly redundant, as expected. An interesting fact is that they employ an (attenuated) binary encoding, and not 1-of-n, for example. The outputs are set to their final target values immediately after the first symbol of a subsequence, which explains why the context values are not very pronounced: the output neurons themselves remember the first symbol, which is actually the simplest way to solve the task.

4 Conclusion

An extended recurrent neural network architecture for temporal processing with long-term dependencies was proposed. It is based on ignoring presumably irrelevant inputs using a register of latches in the input layer of the network. The latches are differentiable, so that gradient-descent learning can be applied. The method yielded good results on two standard sequence-processing tasks with long time lags, in which the irrelevant inputs were distracting symbols rather than repeated symbols. We intend to test the method on the latter type of problems, i.e. on large finite automata with low-frequency properties, in the near future.

5 References

[1] A. Cleeremans, D. Servan-Schreiber, J. L. McClelland, Finite State Automata and Simple Recurrent Networks, Neural Computation, vol. 1, no. 3, pp. 372-381, 1989.
[2] I. Gabrijel, A. Dobnikar, On-line Identification and Reconstruction of Finite Automata with Generalized Recurrent Neural Networks, to appear in Neural Networks.
[3] Y. Bengio, P. Simard, P. Frasconi, Learning Long-Term Dependencies with Gradient Descent is Difficult, IEEE Transactions on Neural Networks, vol. 5, no. 2, pp. 157-166, 1994.
[4] J. Schmidhuber, Learning complex, extended sequences using the principle of history compression, Neural Computation, vol. 4, no. 2, pp. 234-242, 1992.
[5] S. Hochreiter, J. Schmidhuber, Long short-term memory, Neural Computation, vol. 9, no. 8, pp. 1735-1780, 1997.
[6] U. Lotrič, Wavelet Based Denoising Integrated into Multilayered Perceptron, submitted to Neurocomputing, 2002.
[7] R. J. Williams, D. Zipser, A Learning Algorithm for Continually Running Fully Recurrent Neural Networks, Neural Computation, vol. 1, no. 2, pp. 270-280, 1989.

Branko Šter received his Ph.D. degree in Computer and Information Science from the University of Ljubljana in 1999. He works at the Faculty of Computer and Information Science in Ljubljana. His research interests include neural networks, reinforcement learning, mobile robotics, and dynamical systems.