
NL_q theory: checking and imposing stability of recurrent neural networks for nonlinear modelling

J.A.K. Suykens, J. Vandewalle, B. De Moor

Katholieke Universiteit Leuven, Dept. of Electr. Eng., ESAT-SISTA
Kardinaal Mercierlaan 94, B-3001 Leuven (Heverlee), Belgium
Tel: 32/16/ Fax: 32/16/
E-mail: johan.suykens,joos.vandewalle,bart.demoor@esat.kuleuven.ac.be
(Author for correspondence: J. Suykens)

to appear in IEEE-SP (special issue: NNs for SP), SP EDICS

Abstract

It is known that many discrete time recurrent neural networks, such as e.g. neural state space models, multilayer Hopfield networks and locally recurrent globally feedforward neural networks, can be represented as NL_q systems. Sufficient conditions for global asymptotic stability and input/output stability of NL_q systems are available, including three types of criteria: diagonal scaling and criteria depending on diagonal dominance and condition number factors of certain matrices. In this paper, it is discussed how Narendra's dynamic backpropagation procedure, used for identifying recurrent neural networks from I/O measurements, can be modified with an NL_q stability constraint in order to ensure globally asymptotically stable identified models. An example illustrates how system identification of an internally stable model, corrupted by process noise, may lead to unwanted limit cycle behaviour and how this problem can be avoided by adding the stability constraint.

Keywords. multilayer recurrent neural networks, NL_q systems, global asymptotic stability, LMIs, dynamic backpropagation

This research work was carried out at the ESAT laboratory and the Interdisciplinary Center of Neural Networks ICNN of the Katholieke Universiteit Leuven, in the framework of the Belgian Programme on Interuniversity Poles of Attraction, initiated by the Belgian State, Prime Minister's Office for Science, Technology and Culture (IUAP-17, IUAP-50), and in the framework of a Concerted Action Project MIPS (Modelbased Information Processing Systems) of the Flemish Community, ICCoS (Identification and Control of Complex Systems), the Human Capital and Mobility Network (SIMONET) and SCIENCE-ERNSI (SC1-CT92-0779).

1 Introduction

Recently, NL_q theory has been introduced as a modelbased neural control framework with global asymptotic stability criteria [20, 24]. It consists of recurrent neural network models and controllers in state space form, for which the closed-loop system can be represented in so-called NL_q system form. NL_q systems are discrete time nonlinear state space models with q layers of alternating linear and static nonlinear operators that satisfy a sector condition. It has been shown then how Narendra's dynamic backpropagation, classically used to learn a controller to track a set of specific reference inputs, can be modified with NL_q stability constraints. Furthermore, several types of nonlinear behaviour, including systems with a unique equilibrium, multiple equilibria, (quasi)-periodic behaviour and chaos, have been stabilized and controlled using the stability criteria [20].

In this paper we focus on nonlinear modelling applications of NL_q theory, instead of control applications. Like for tracking problems, where Narendra's dynamic backpropagation [9, 10] has been modified with a closed-loop stability constraint [20], we will modify dynamic backpropagation for system identification with a stability constraint in order to obtain identified recurrent neural networks that are globally asymptotically stable. Also for linear filters (e.g. IIR filters [16, 17]) this has been an important issue. We will consider the class of discrete time recurrent neural networks which is representable as NL_q systems. Examples are e.g. neural state space models and locally recurrent globally feedforward neural networks, which are models consisting of global and local feedback respectively. In order to check stability of identified models, sufficient conditions for global asymptotic stability of NL_q's are applied [20]. A first condition is called diagonal scaling, which is closely related to diagonal scaling criteria in robust control theory [3, 13]. Checking stability can then be formulated as an LMI (Linear Matrix Inequality) problem, which corresponds to solving a convex optimization problem. A second condition is based on diagonal dominance of certain matrices. Certain results on digital filters with overflow characteristic [8] can be considered as a special case for q = 1 (one layer NL_q). Finally, we demonstrate how global asymptotic stability can be imposed on the identified models. This is done by modifying dynamic backpropagation with stability constraints. Besides the diagonal scaling condition, criteria based on condition numbers of certain matrices [20] are proposed for this purpose. In many applications one has indeed the a priori knowledge that the true system is globally asymptotically stable, or one is interested in a stable approximator. It is illustrated with an example that process noise can indeed cause identified models that show limit cycle behaviour, instead of the global asymptotic stability of the true system.

We show how this problem can be avoided by applying the modified dynamic backpropagation algorithm.

This paper is organized as follows. In Section 2 we present two examples of discrete time recurrent neural networks that are representable as NL_q systems: neural state space models and locally recurrent globally feedforward neural networks. In Section 3 we review the classical dynamic backpropagation paradigm. In Section 4 we discuss sufficient conditions for global asymptotic stability of identified models. In Section 5 Narendra's dynamic backpropagation is modified with NL_q stability constraints. In Section 6 an example is given for a system corrupted by process noise.

2 NL_q systems

The following discrete time nonlinear state space model is called an NL_q system [20]:

    p_{k+1} = Γ_1( V_1 Γ_2( V_2 ... Γ_q( V_q p_k + B_q w_k ) ... + B_2 w_k ) + B_1 w_k )
    e_k     = Λ_1( W_1 Λ_2( W_2 ... Λ_q( W_q p_k + D_q w_k ) ... + D_2 w_k ) + D_1 w_k )      (1)

with state vector p_k ∈ R^{n_p}, input vector w_k ∈ R^{n_w} and output vector e_k ∈ R^{n_e}. The matrices V_i, B_i, W_i, D_i (i = 1, ..., q) are constant with compatible dimensions; the matrices Γ_i = diag{γ_i}, Λ_i = diag{λ_i} (i = 1, ..., q) are diagonal with diagonal elements γ_i(p_k, w_k), λ_i(p_k, w_k) ∈ [0, 1] for all p_k, w_k. The term 'NL_q' stands for the alternating sequence of linear and nonlinear operations in the q layered system description. In this Section we first explain the link between NL_q's and multilayer Hopfield networks and then discuss two examples: neural state space models and locally recurrent globally feedforward networks, which are models with global feedback and local feedback respectively. Other examples of NL_q's are e.g. (generalized) cellular neural networks, the discrete time Lur'e problem and linear fractional transformations with real diagonal uncertainty block [20, 23].

2.1 Multilayer Hopfield networks

NL_q systems are closely related to multilayer recurrent neural networks of the form:

    p_{k+1} = σ_1( V_1 σ_2( V_2 ... σ_q( V_q p_k + B_q w_k ) ... + B_2 w_k ) + B_1 w_k )
    e_k     = τ_1( W_1 τ_2( W_2 ... τ_q( W_q p_k + D_q w_k ) ... + D_2 w_k ) + D_1 w_k )      (2)

with σ_i(·), τ_i(·) vector valued nonlinear functions that belong to sector [0, 1] [28].
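To make the layered recursion in (1) concrete, here is a minimal NumPy sketch of an autonomous NL_q forward map for q = 2, with tanh playing the role of the sector [0, 1] nonlinearities; the layer sizes and random weights are illustrative placeholders, not values from the paper.

    import numpy as np

    def nlq_step(p, layers):
        # One step of an autonomous NL_q system: the innermost layer (i = q)
        # is applied first, so p_{k+1} = sigma_1(V_1 sigma_2(... sigma_q(V_q p))).
        z = p
        for V, sigma in reversed(layers):
            z = sigma(V @ z)
        return z

    rng = np.random.default_rng(0)
    V1, V2 = 0.5 * rng.standard_normal((4, 3)), 0.5 * rng.standard_normal((3, 4))
    layers = [(V1, np.tanh), (V2, np.tanh)]   # q = 2; tanh lies in sector [0, 1]

    p = rng.standard_normal(4)
    for k in range(50):
        p = nlq_step(p, layers)
    print(p)   # decays toward the origin when the stability criteria below hold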

Let us illustrate the link between (1) and (2) for the autonomous NL_q system (zero external input)

    p_{k+1} = ( ∏_{i=1}^{q} Γ_i(p_k) V_i ) p_k

by means of the following autonomous Hopfield network with synchronous updating:

    x_{k+1} = tanh(W x_k).      (3)

This can be written as

    x_{k+1} = Γ(x_k) W x_k      (4)

with Γ = diag{γ_i} and γ_i = tanh(w_i^T x)/(w_i^T x), which follows from the elementwise notation

    x^i := tanh( Σ_j w^i_j x^j ) := [ tanh( Σ_j w^i_j x^j ) / Σ_j w^i_j x^j ] Σ_j w^i_j x^j := γ_i Σ_j w^i_j x^j.      (5)

The time index k is omitted here because of the assignment operator ':='. The notation γ_i means that this corresponds to the diagonal matrix Γ(x_k). In case w_i^T x = 0, de l'Hospital's rule can be applied or a Taylor expansion of tanh(·) can be taken, leading to γ_i = 1. In a similar way the multilayer Hopfield neural network

    x_{k+1} = tanh( V tanh( W x_k ) )      (6)

can be written as

    x_{k+1} = Γ_1(x_k) V Γ_2(x_k) W x_k      (7)

because

    x^i := tanh( Σ_j v^i_j tanh( Σ_l w^j_l x^l ) ) := γ_{1i} Σ_j v^i_j γ_{2j} Σ_l w^j_l x^l.      (8)

2.2 Neural state space models

Neural state space models (Fig.1) for nonlinear system identification have been introduced in [19] and are of the form:

    x̂_{k+1} = W_{AB} tanh( V_A x̂_k + V_B u_k + β_{AB} ) + K ε_k
    y_k     = W_{CD} tanh( V_C x̂_k + V_D u_k + β_{CD} ) + ε_k      (9)

with estimated state vector x̂_k ∈ R^n, input vector u_k ∈ R^m, output vector y_k ∈ R^l and zero mean white Gaussian noise input ε_k (corresponding to the prediction error ε_k = y_k − ŷ_k). The W and V matrices are the interconnection matrices with compatible dimensions, β_{AB} and β_{CD} are bias vectors and K is a steady state Kalman gain.
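The equivalence between (3) and (4)-(5) is easy to check numerically. A small sketch (with made-up weights) computes the diagonal of Γ(x_k), including the limit value γ_i = 1 at w_i^T x = 0:

    import numpy as np

    def gamma_diag(W, x):
        # Diagonal of Gamma(x) in x_{k+1} = Gamma(x_k) W x_k, i.e. tanh(a_i)/a_i
        # with a = W x; each element lies in [0, 1] and equals 1 at a_i = 0
        # (the limit obtained by de l'Hospital's rule).
        a = W @ x
        g = np.ones_like(a)
        nz = np.abs(a) > 1e-12
        g[nz] = np.tanh(a[nz]) / a[nz]
        return g

    W = np.array([[0.3, -0.8],
                  [0.5,  0.2]])
    x = np.array([1.0, -2.0])
    # the two descriptions of the Hopfield step coincide:
    assert np.allclose(np.tanh(W @ x), gamma_diag(W, x) * (W @ x))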

If the multilayer perceptrons in the state equation and output equation are replaced by linear mappings, the neural state space model corresponds to a Kalman filter. Defining p_k = x̂_k, w_k = [u_k; ε_k; 1] and e_k = y_k in (1), the neural state space model is an NL_2 system with Γ_1 = I, V_1 = W_{AB}, V_2 = V_A, B_2 = [V_B 0 β_{AB}], B_1 = [0 K 0], Λ_1 = I, W_1 = W_{CD}, W_2 = V_C, D_2 = [V_D 0 β_{CD}], D_1 = [0 I 0]. For the autonomous case and zero bias terms, the neural state space model becomes:

    x̂_{k+1} = W_{AB} tanh( V_A x̂_k ).      (10)

By introducing a new state variable ξ_k = tanh( V_A x̂_k ), this can be written as the NL_1 system:

    x̂_{k+1} = W_{AB} ξ_k
    ξ_{k+1} = tanh( V_A W_{AB} ξ_k ).      (11)

2.3 Locally recurrent globally feedforward networks

In [26] the LRGF network (Locally Recurrent Globally Feedforward network) has been proposed, which aims at unifying several existing recurrent neural network models. Starting from the McCulloch-Pitts model, many other architectures have been proposed in the past with local synapse feedback, local activation feedback and local output feedback, time delayed neural networks etc., e.g. by Frasconi-Gori-Soda, De Vries-Principe and Poddar-Unnikrishnan. The architecture of Tsoi & Back (Fig.2) includes most of the latter architectures. Assuming on Fig.2 that the transfer functions G_i(z) (i = 1, ..., n) (which may have both poles and zeros) have a state space realization (A^{(i)}, B^{(i)}, C^{(i)}), an LRGF network can be described in state space form as

    α^{(i)}_{k+1} = A^{(i)} α^{(i)}_k + B^{(i)} u^{(i)}_k,   z^{(i)}_k = C^{(i)} α^{(i)}_k,   i = 1, ..., n−1
    α^{(n)}_{k+1} = A^{(n)} α^{(n)}_k + B^{(n)} f( Σ_{j=1}^{n} z^{(j)}_k )
    z^{(n)}_k = C^{(n)} α^{(n)}_k
    y_k = f( Σ_{j=1}^{n} z^{(j)}_k ).      (12)

Here u^{(i)}_k, z^{(i)}_k ∈ R (i = 1, ..., n−1) are the inputs and local outputs of the network, y_k ∈ R is the output of the network and z^{(n)}_k ∈ R is the filtered output of the network. f(·) is a static nonlinearity belonging to sector [0, 1].
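The state augmentation that turns (10) into the NL_1 form (11) can be verified numerically. In this sketch the weights are random placeholders:

    import numpy as np

    rng = np.random.default_rng(1)
    W_AB = 0.4 * rng.standard_normal((4, 3))
    V_A = 0.4 * rng.standard_normal((3, 4))

    def step_original(x):
        return W_AB @ np.tanh(V_A @ x)            # one step of (10)

    def step_augmented(x, xi):
        # xi_k = tanh(V_A x_k); one NL_1 step updates both coordinates as in (11)
        return W_AB @ xi, np.tanh(V_A @ W_AB @ xi)

    x = rng.standard_normal(4)
    xi = np.tanh(V_A @ x)
    for _ in range(5):
        x_ref = step_original(x)
        x, xi = step_augmented(x, xi)
        assert np.allclose(x, x_ref)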

Applying the state augmentation ξ_k = f( Σ_{j=1}^{n} z^{(j)}_k ), an NL_1 system is obtained with state vector p_k = [α^{(1)}_k; α^{(2)}_k; ...; α^{(n−1)}_k; α^{(n)}_k; ξ_k], input vector w_k = [u^{(1)}_k; ...; u^{(n−1)}_k] and matrices

    V_1 = [ blockdiag{A^{(1)}, A^{(2)}, ..., A^{(n−1)}}               0                0        ;
            0                                                         A^{(n)}          B^{(n)}  ;
            C^{(1)}A^{(1)}  C^{(2)}A^{(2)}  ...  C^{(n−1)}A^{(n−1)}   C^{(n)}A^{(n)}   C^{(n)}B^{(n)} ],

    B_1 = [ blockdiag{B^{(1)}, B^{(2)}, ..., B^{(n−1)}} ;
            0  0  ...  0 ;
            C^{(1)}B^{(1)}  C^{(2)}B^{(2)}  ...  C^{(n−1)}B^{(n−1)} ].

3 Classical dynamic backpropagation

Dynamic backpropagation, according to Narendra & Parthasarathy [9, 10], is a well-known method for training recurrent neural networks. In this Section we shortly review this method for models in state space form, because this fits into the framework of NL_q representations. Let us consider discrete time recurrent neural networks that can be written in the form:

    x̂_{k+1} = Φ( x̂_k, u_k, ε_k; α ),   x̂_0 = x_0 given
    ŷ_k = Ψ( x̂_k, u_k; β )      (13)

where Φ(·), Ψ(·) are twice continuously differentiable nonlinear mappings and the weights α, β are elements of the parameter vector θ, to be identified from a number of N input/output data Z^N = {u_k, y_k}_{k=1}^{N}:

    min_θ J_N(θ, Z^N) = (1/N) Σ_{k=1}^{N} l( ε_k(θ) ).      (14)

A typical choice for l(ε_k) in this prediction error algorithm is (1/2) ε_k^T ε_k with prediction error ε_k = y_k − ŷ_k. For gradient based optimization algorithms one computes the gradient

    ∂J_N/∂θ = (1/N) Σ_{k=1}^{N} (∂ŷ_k/∂θ)^T ( −ε_k ).      (15)
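As a small illustration of the prediction error framework (13)-(15), the sketch below evaluates the cost J_N and a central-difference gradient for a scalar toy predictor; the Section 6 experiments likewise used numerically computed gradients. The one-state model and its parametrization are invented for illustration:

    import numpy as np

    def predict(theta, u):
        # Toy predictor of the form (13): x_{k+1} = tanh(a x_k + b u_k),
        # yhat_k = c x_k, with theta = (a, b, c).
        a, b, c = theta
        x, yhat = 0.0, np.zeros_like(u)
        for k in range(len(u)):
            yhat[k] = c * x
            x = np.tanh(a * x + b * u[k])
        return yhat

    def cost(theta, u, y):
        eps = y - predict(theta, u)       # prediction errors eps_k
        return 0.5 * np.mean(eps ** 2)    # J_N with l(eps) = eps^2 / 2

    def num_grad(theta, u, y, h=1e-6):
        # Central-difference approximation of dJ_N / dtheta.
        g = np.zeros_like(theta)
        for i in range(len(theta)):
            e = np.zeros_like(theta)
            e[i] = h
            g[i] = (cost(theta + e, u, y) - cost(theta - e, u, y)) / (2 * h)
        return g

    rng = np.random.default_rng(0)
    u = rng.standard_normal(200)
    y = predict(np.array([0.4, 0.5, 0.8]), u)     # synthetic "true" data
    theta = np.array([0.5, 0.3, 1.0])
    print(cost(theta, u, y), num_grad(theta, u, y))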

Dynamic backpropagation [9, 10] then makes use of a sensitivity model

    ∂x̂_{k+1}/∂θ = (∂Φ/∂x̂_k) (∂x̂_k/∂θ) + ∂Φ/∂θ
    ∂ŷ_k/∂θ = (∂Ψ/∂x̂_k) (∂x̂_k/∂θ) + ∂Ψ/∂θ      (16)

in order to generate the gradient of the cost function. The sensitivity model is a dynamical system with state ∂x̂_k/∂θ, driven by the partial derivatives of Φ and Ψ, which are evaluated around the nominal trajectory (x̂_k ∈ R^n, ŷ_k ∈ R^l), and by which the gradients ∂ŷ_k/∂θ are generated. Examples and applications of dynamic backpropagation, applied to neural state space models, are discussed in [19, 20, 21]. For aspects of model validation, regularization and pruning of neural network models see e.g. [1, 15, 18].

4 Checking global asymptotic stability of identified models

In many applications, recurrent neural networks have been used in order to model systems with a unique or multiple equilibria, (quasi)-periodic behaviour or chaos. On the other hand one is often interested in obtaining models that are globally asymptotically stable, e.g. in case one has this a priori knowledge about the true system or one is interested in such an approximator. In this Section, we present two criteria that are sufficient in order to check global asymptotic stability of the identified model, represented as an NL_q system.

Theorem 1 [Diagonal scaling] [20]. Consider the autonomous NL_q system

    p_{k+1} = ( ∏_{i=1}^{q} Γ_i(p_k) V_i ) p_k      (17)

and let

    V_tot = [ 0    V_2   0    ...  0   ;
              0    0     V_3  ...  0   ;
              ...                 V_q  ;
              V_1  0     0    ...  0   ],    V_i ∈ R^{n_{h_i} × n_{h_{i+1}}},  n_{h_1} = n_{h_{q+1}} = n_p.

A sufficient condition for global asymptotic stability of (17) is to find a diagonal matrix D_tot such that

    ‖ D_tot V_tot D_tot^{−1} ‖_2^q < 1,      (18)

where D_tot = diag{D_2, D_3, ..., D_q, D_1} and the D_i ∈ R^{n_{h_i} × n_{h_i}} are diagonal matrices with nonzero diagonal elements.
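Theorem 1 is straightforward to evaluate in code: build the block-cyclic matrix V_tot of (17) from given layer matrices and compute ‖D_tot V_tot D_tot^{−1}‖_2 for a candidate diagonal scaling. In practice D_tot would come from solving the LMI (19); the helper below only checks a given scaling (function names are ours):

    import numpy as np

    def build_vtot(V_list):
        # Block-cyclic V_tot of (17): V_2, ..., V_q on the (cyclic) superdiagonal
        # and V_1 in the lower-left corner, matching D_tot = diag{D_2,...,D_q,D_1}.
        q = len(V_list)
        order = list(range(1, q)) + [0]                 # V_2, ..., V_q, V_1
        sizes = [V_list[i].shape[0] for i in order]
        off = np.cumsum([0] + sizes)
        Vtot = np.zeros((off[-1], off[-1]))
        for blk, i in enumerate(order):
            col = (blk + 1) % q
            Vi = V_list[i]
            Vtot[off[blk]:off[blk] + Vi.shape[0],
                 off[col]:off[col] + Vi.shape[1]] = Vi
        return Vtot

    def diag_scaling_norm(Vtot, d):
        # || D Vtot D^{-1} ||_2 for D = diag(d), d > 0; Theorem 1 proves
        # stability when some d pushes this below 1.
        return np.linalg.norm((d[:, None] * Vtot) / d[None, :], 2)

    V1, V2 = 0.5 * np.eye(3), 0.4 * np.ones((3, 3))
    print(diag_scaling_norm(build_vtot([V1, V2]), np.ones(6)))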

Proof: See [20]. The condition is based on the Lyapunov function V(p) = ‖D_1 p‖_2, which is radially unbounded in order to prove that the origin is a unique equilibrium point.

Finding such a diagonal matrix D_tot for a given matrix V_tot can be formulated as the following LMI (Linear Matrix Inequality) in D_tot^2:

    V_tot^T D_tot^2 V_tot < D_tot^2.      (19)

It is well-known that this corresponds to solving a convex optimization problem [3, 11, 27]. Similar criteria are known in the field of control theory as 'diagonal scaling' [3, 7, 13]. However, it is also known that such criteria can be conservative. Sharper criteria are obtained by considering a Lyapunov function V(p) = ‖P_1 p‖_2 with a non-diagonal matrix P_1 instead of D_1. The next Theorem is then expressed in terms of diagonal dominance of certain matrices. Let us recall that a matrix Q ∈ R^{n×n} is called diagonally dominant of level δ_Q ≥ 1 if the following property holds [20]:

    q_{ii} > δ_Q Σ_{j=1, j≠i}^{n} |q_{ij}|,   ∀ i = 1, ..., n.      (20)

The following Theorem holds then.

Theorem 2 [Diagonal dominance] [20]. A sufficient condition for global asymptotic stability of the autonomous NL_q system (17) is to find matrices P_i, N_i such that

    ∏_{i=1}^{q} ( δ_{Q_i} / (δ_{Q_i} − 1) )^{1/2}  ‖ P_tot V_tot P_tot^{−1} ‖_2^q < 1      (21)

with P_tot = blockdiag{P_2, P_3, ..., P_q, P_1} and P_i ∈ R^{n_{h_i} × n_{h_i}} full rank matrices. The matrices Q_i = P_i^T P_i N_i are diagonally dominant with δ_{Q_i} > 1, and the N_i are diagonal matrices with positive diagonal elements.

Proof: See [20].
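The diagonal dominance level δ_Q of (20) is easy to compute for a given matrix; a minimal sketch:

    import numpy as np

    def dominance_level(Q):
        # Largest delta such that q_ii >= delta * sum_{j != i} |q_ij| for all i,
        # i.e. the diagonal dominance level used in Theorem 2 (> 1 required).
        off = np.sum(np.abs(Q), axis=1) - np.abs(np.diag(Q))
        return np.min(np.diag(Q) / off)

    Q = np.array([[3.0, 0.5, 0.5],
                  [0.2, 2.0, 0.6],
                  [0.1, 0.4, 1.5]])
    print(dominance_level(Q))   # 2.5, so this Q is diagonally dominant of level 2.5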

In order to check stability of an identified model, i.e. for a given matrix V_tot, one might formulate an optimization problem in P_i and N_i such that (21) is satisfied. LMI conditions that correspond to (21) are derived in [20]. For the case of neural networks with a sat(·) activation function (linear characteristic with saturation), Theorem 2 can be formulated in a sharper way [20]. In that case it is sufficient to find matrices P_i, N_i with

    ‖ P_tot V_tot P_tot^{−1} ‖_2 < 1  such that  δ_{Q_i} = 1  (i = 1, ..., q).      (22)

The latter follows from a result on stability of digital filters with overflow characteristic by Liu & Michel [8], which corresponds to the NL_1 system:

    x_{k+1} = sat( V x_k ).      (23)

A sufficient condition for global asymptotic stability is then to find a matrix Q = P^T P with

    ‖ P V P^{−1} ‖_2 < 1  such that  δ_Q = 1.      (24)

Also remark that for a linear system

    x_{k+1} = A x_k      (25)

one obtains the condition

    ‖ P A P^{−1} ‖_2 < 1      (26)

from the Lyapunov function V(x) = ‖P x‖_2 with P a full rank matrix. The spectral radius of A corresponds to

    ρ(A) = min_{P ∈ R^{n×n}} ‖ P A P^{−1} ‖_2.      (27)

5 Modified dynamic backpropagation: imposing stability

In this Section we discuss modified dynamic backpropagation algorithms, obtained by adding an NL_q stability constraint to the cost function (14). In this way it will be possible to identify recurrent neural network models that are guaranteed to be globally asymptotically stable. Based on the condition of Theorem 1, one may solve the following constrained nonlinear optimization problem:

    min_{θ, D_tot} J_N(θ, Z^N) = (1/N) Σ_{k=1}^{N} l( ε_k(θ) )  such that  V_tot(θ)^T D_tot^2 V_tot(θ) < D_tot^2.      (28)
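Condition (26) can be illustrated with a discrete Lyapunov equation: for a stable A, solving A^T M A − M = −I and factoring M = P^T P yields a full rank P with ‖P A P^{−1}‖_2 < 1, even when ‖A‖_2 > 1. A sketch using SciPy (our own construction, consistent with (26)-(27)):

    import numpy as np
    from scipy.linalg import solve_discrete_lyapunov

    A = np.array([[0.5, 0.8],
                  [0.0, 0.7]])        # rho(A) = 0.7 < 1, yet ||A||_2 > 1
    M = solve_discrete_lyapunov(A.T, np.eye(2))   # solves A^T M A - M + I = 0
    P = np.linalg.cholesky(M).T                   # M = P^T P
    print(np.linalg.norm(A, 2))                           # about 1.13, > 1
    print(np.linalg.norm(P @ A @ np.linalg.inv(P), 2))    # < 1, as in (26)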

The cost function is differentiable, but the constraint becomes non-differentiable when the two largest eigenvalues of the matrix V_tot(θ)^T D_tot^2 V_tot(θ) − D_tot^2 coincide [14]. Convergent algorithms for such non-convex, non-differentiable optimization problems have been described e.g. by Polak & Wardi [14]. The gradient based optimization method makes use of the concept of a generalized gradient [2] for the constraint. An alternative formulation of the problem is:

    min_θ J_N(θ, Z^N) = (1/N) Σ_{k=1}^{N} l( ε_k(θ) )  such that  min_{D_tot} ‖ D_tot V_tot(θ) D_tot^{−1} ‖_2 < 1.      (29)

The evaluation of the constraint then corresponds to solving a convex optimization problem.

Though LMI conditions have been formulated for the condition of diagonal dominance of Theorem 2, the use of these LMI conditions is rather unpractical for a modified dynamic backpropagation algorithm. Therefore we will make use of another Theorem in order to impose stability on the identified models.

Theorem 3 [Condition number factor] [20]. A sufficient condition for global asymptotic stability of the autonomous NL_q system (17) is to find matrices P_i such that

    ∏_{i=1}^{q} κ(P_i)  ‖ P_tot V_tot P_tot^{−1} ‖_2^q < 1      (30)

where P_tot = blockdiag{P_2, P_3, ..., P_q, P_1} and P_i ∈ R^{n_{h_i} × n_{h_i}} are full rank matrices. The condition numbers κ(P_i) are by definition equal to ‖P_i‖_2 ‖P_i^{−1}‖_2.

Proof: See [20].

In practice, this Theorem is used as:

    min_{P_tot} max_i { κ(P_i) }  such that  ‖ P_tot V_tot P_tot^{−1} ‖_2 < 1.      (31)

The constraint in (31) imposes local stability at the origin. The basin of attraction of the origin is enlarged by minimizing the condition numbers. Even when condition (30) is not satisfied, the basin of attraction can often be made very large (or probably infinitely large, as simulation results suggest). This principle has been demonstrated on stabilizing and controlling systems with one or multiple equilibria, periodic and quasi-periodic behaviour and chaos [20].
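A sketch evaluating the left-hand side of the condition number criterion (30) for given full rank blocks P_i, supplied in the same order as in P_tot = blockdiag{P_2, ..., P_q, P_1}; the helper name is ours:

    import numpy as np

    def condition_number_criterion(P_list, Vtot):
        # prod_i kappa(P_i) * || P_tot Vtot P_tot^{-1} ||_2^q; global asymptotic
        # stability of the NL_q follows when the returned value is < 1.
        q = len(P_list)
        n = Vtot.shape[0]
        Ptot = np.zeros((n, n))
        off = 0
        for P in P_list:                       # assemble block diagonal P_tot
            m = P.shape[0]
            Ptot[off:off + m, off:off + m] = P
            off += m
        kappas = [np.linalg.cond(P, 2) for P in P_list]
        norm = np.linalg.norm(Ptot @ Vtot @ np.linalg.inv(Ptot), 2)
        return np.prod(kappas) * norm ** q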

Based on (31), dynamic backpropagation is then modified as follows:

    min_{θ, Q_tot} J_N(θ, Z^N) = (1/N) Σ_{k=1}^{N} l( ε_k(θ) )  such that  { V_tot(θ)^T Q_tot V_tot(θ) < Q_tot ;  I < Q_i < γ_i I }.      (32)

The latter LMI corresponds to κ(P_i)^2 < γ_i, with Q_tot = P_tot^T P_tot and Q_i = P_i^T P_i. An alternative formulation to (32) is:

    min_θ J_N(θ, Z^N) = (1/N) Σ_{k=1}^{N} l( ε_k(θ) )  such that  min_{P_tot} ‖ P_tot V_tot(θ) P_tot^{−1} ‖_2 < 1  with  max_i { κ(P_i) } < c      (33)

with c a user-defined upper bound on the condition numbers. For (29)-(33) the difference in computational complexity between classical and modified dynamic backpropagation lies in the LMI constraint, which has to be solved at each iteration step and is O(m^4 L^{1.5}) for L inequalities of size m [27]. One can avoid solving LMIs at each iteration step at the expense of introducing additional unknown parameters to the optimization problem, such as Q_tot in (32). Finally, for the case of NL_1 systems, the problem can be formulated by solving a Lyapunov equation:

    min_{θ, 0<γ<1} J_N(θ, Z^N) = (1/N) Σ_{k=1}^{N} l( ε_k(θ) )  such that  { V_tot(θ)^T Q V_tot(θ) + γ I = Q ;  min β ≥ 0 s.t. R = Q + β I > 0 ;  κ(P) < c,  R = P^T P }.      (34)

6 Example: a system corrupted by process noise

Problem statement

In this example we consider system identification with the following neural state space model as true system:

    x_{k+1} = W_{AB} tanh( V_A x_k + V_B u_k ) + φ_k
    y_k = W_{CD} tanh( V_C x_k + V_D u_k ).      (35)

It is corrupted by the zero mean white Gaussian process noise φ_k.
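A hedged sketch of the modified dynamic backpropagation loop in the spirit of (29)/(33), using SciPy's SLSQP in place of Matlab's constr, and a cheap Osborne-style balancing heuristic in place of the full convex minimization over D_tot (so the constraint evaluation is only approximate):

    import numpy as np
    from scipy.optimize import minimize

    def scaled_norm(Vtot, iters=100):
        # Heuristic stand-in for min_D ||D Vtot D^{-1}||_2 in (29): balance
        # |Vtot| so that row and column sums match, then take the 2-norm.
        d = np.ones(Vtot.shape[0])
        for _ in range(iters):
            M = np.abs(d[:, None] * Vtot / d[None, :])
            r, c = M.sum(axis=1) + 1e-12, M.sum(axis=0) + 1e-12
            d *= np.sqrt(c / r)
        return np.linalg.norm(d[:, None] * Vtot / d[None, :], 2)

    def train_with_stability(theta0, cost, vtot_of_theta, margin=1e-3):
        # min J_N(theta) s.t. scaled_norm(V_tot(theta)) < 1, cf. (29)/(33);
        # cost and vtot_of_theta are user-supplied callables.
        cons = {"type": "ineq",
                "fun": lambda th: 1.0 - margin - scaled_norm(vtot_of_theta(th))}
        return minimize(cost, theta0, method="SLSQP", constraints=cons)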

I/O data were generated by taking fixed numerical values for the interconnection matrices W_{AB}, V_A, V_B, W_{CD}, V_C, V_D, with zero initial state. The input u_k is zero mean white Gaussian noise with standard deviation 5. The standard deviation of the noise φ_k was chosen equal to 0.1. A set of 1000 data points was generated in this way, with the first 500 data for the training set and the next 500 data for the test set. Some properties of the autonomous system are: ρ(W_{AB} V_A) = 0.98 < 1 and min_{D_tot} ‖D_tot V_tot D_tot^{−1}‖_2 = 1.66 > 1, which means that the origin is locally stable, but global asymptotic stability is not proven by the diagonal scaling condition. However, simulation of the autonomous system for several initial conditions suggests that the system is globally asymptotically stable (Fig.4). Because the state space representation of a neural state space model is only unique up to a similarity transformation and sign reversal of the hidden nodes [20], we are interested in identifying the true system, not in the sense of finding back the original matrices, but in order to obtain the same qualitative behaviour for the autonomous system of the identified model as for the true system.

Application of classical versus modified dynamic backpropagation

System identification by using classical dynamic backpropagation and starting from randomly chosen parameter vectors (deterministic neural state space model with the same number of hidden neurons as (35)) yields identified models with limit cycle behaviour for the autonomous system (Fig.5). This effect is due to the process noise φ_k. No problems of this kind were met for the system (35) in the purely deterministic case or in the case of observation noise. In order to impose global asymptotic stability on the identified models, the modified dynamic backpropagation procedure was applied with the diagonal scaling constraint (28) and with the condition number constraint (34). The autonomous system of (35) can be represented as the NL_1 system (11). Fig.3 shows a comparison between classical and modified dynamic backpropagation for the error on the training set and test set, starting from 20 random parameter vectors for the 3 methods. Besides the fact that the identified models appear to be globally asymptotically stable for modified dynamic backpropagation (Fig.6-7), the performance on the test data is often better than with classical dynamic backpropagation.
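For reference, a data generation sketch in the style of the example (35); the weight matrices below are random placeholders, not the fixed numerical values used in the paper:

    import numpy as np

    rng = np.random.default_rng(42)
    n, m, l, N = 4, 2, 2, 1000
    W_AB, V_A = 0.3 * rng.standard_normal((n, n)), 0.3 * rng.standard_normal((n, n))
    V_B = 0.3 * rng.standard_normal((n, m))
    W_CD, V_C = 0.3 * rng.standard_normal((l, n)), 0.3 * rng.standard_normal((n, n))
    V_D = 0.3 * rng.standard_normal((n, m))

    u = 5.0 * rng.standard_normal((N, m))     # input standard deviation 5
    phi = 0.1 * rng.standard_normal((N, n))   # process noise standard deviation 0.1
    x = np.zeros(n)                           # zero initial state
    Y = np.zeros((N, l))
    for k in range(N):
        Y[k] = W_CD @ np.tanh(V_C @ x + V_D @ u[k])
        x = W_AB @ np.tanh(V_A @ x + V_B @ u[k]) + phi[k]

    # first 500 samples for training, next 500 for testing
    u_train, u_test, y_train, y_test = u[:500], u[500:], Y[:500], Y[500:]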

One might argue that by taking a Kalman gain into the predictor the limit cycle problem might be avoided as well. However, often the deterministic part is identified first and the Kalman gain secondly, while keeping the other parameters constant. Moreover, the stability constraint can also be taken into account for stochastic models.

Software

In the experiments, a quasi-Newton optimization method with BFGS updating of the Hessian [4, 6] was used for classical dynamic backpropagation (function fminu in Matlab [25]). For modified dynamic backpropagation, sequential quadratic programming [4, 6] was used (function constr in Matlab). The gradients were calculated numerically. In the experiments, 70 iteration steps were taken for the optimization (Fig.3). For the case of diagonal scaling, the Matlab function psv was used in order to evaluate the constraint in (28). Other software for solving LMI problems is e.g. LMI lab [5]. For the case of condition numbers, c = 100 was chosen as upper bound in (34).

7 Conclusion

In this paper we discussed checking and imposing stability of discrete time recurrent neural networks for nonlinear modelling applications. The class of recurrent neural networks that is representable as NL_q systems has been considered. Sufficient conditions for global asymptotic stability, based on diagonal scaling and diagonal dominance, have been proposed for checking stability. Dynamic backpropagation has been modified with NL_q stability constraints in order to obtain models that are globally asymptotically stable. For this purpose, criteria of diagonal scaling and condition number factors have been used. It has been illustrated on an example how a globally asymptotically stable system, corrupted by process noise, may lead to unwanted limit cycle behaviour for the autonomous behaviour of the identified model if one applies classical dynamic backpropagation. The modified dynamic backpropagation algorithm overcomes this problem.

References

[1] Billings S., Jamaluddin H., Chen S., "Properties of neural networks with applications to modelling non-linear dynamical systems," International Journal of Control, Vol.55, No.1, pp.193-224, 1992.

[2] Boyd S., Barratt C., Linear controller design, limits of performance, Prentice-Hall, 1991.

[3] Boyd S., El Ghaoui L., Feron E., Balakrishnan V., Linear matrix inequalities in system and control theory, SIAM (Studies in Applied Mathematics), Vol.15, 1994.

[4] Fletcher R., Practical methods of optimization, second edition, Chichester and New York: John Wiley and Sons, 1987.

[5] Gahinet P., Nemirovskii A., "General-purpose LMI solvers with benchmarks," Proceedings Conference on Decision and Control, 1993.

[6] Gill P.E., Murray W., Wright M.H., Practical optimization, London: Academic Press, 1981.

[7] Kaszkurewicz E., Bhaya A., "Robust stability and diagonal Liapunov functions," SIAM Journal on Matrix Analysis and Applications, Vol.14, No.2, 1993.

[8] Liu D., Michel A.N., "Asymptotic stability of discrete time systems with saturation nonlinearities with applications to digital filters," IEEE Transactions on Circuits and Systems-I, Vol.39, No.10, 1992.

[9] Narendra K.S., Parthasarathy K., "Identification and control of dynamical systems using neural networks," IEEE Transactions on Neural Networks, Vol.1, No.1, pp.4-27, 1990.

[10] Narendra K.S., Parthasarathy K., "Gradient methods for the optimization of dynamical systems containing neural networks," IEEE Transactions on Neural Networks, Vol.2, No.2, pp.252-262, 1991.

[11] Nesterov Y., Nemirovskii A., Interior point polynomial algorithms in convex programming, SIAM (Studies in Applied Mathematics), Vol.13, 1994.

[12] Overton M.L., "On minimizing the maximum eigenvalue of a symmetric matrix," SIAM Journal on Matrix Analysis and Applications, Vol.9, No.2, pp.256-268, 1988.

[13] Packard A., Doyle J., "The complex structured singular value," Automatica, Vol.29, No.1, 1993.

[14] Polak E., Wardi Y., "Nondifferentiable optimization algorithm for designing control systems having singular value inequalities," Automatica, Vol.18, No.3, pp.267-283, 1982.

[15] Reed R., "Pruning algorithms - a survey," IEEE Transactions on Neural Networks, Vol.4, No.5, 1993.

[16] Regalia Ph., Adaptive IIR filtering in signal processing and control, New York: Marcel Dekker Inc., 1995.

[17] Shynk J., "Adaptive IIR filtering," IEEE ASSP Magazine, pp.4-21, 1989.

[18] Sjoberg J., Zhang Q., Ljung L., Benveniste A., Delyon B., Glorennec P., Hjalmarsson H., Juditsky A., "Nonlinear black-box modeling in system identification: a unified overview," Automatica, Vol.31, No.12, 1995.

[19] Suykens J.A.K., De Moor B., Vandewalle J., "Nonlinear system identification using neural state space models, applicable to robust control design," International Journal of Control, Vol.62, No.1, pp.129-152, 1995.

[20] Suykens J.A.K., Vandewalle J.P.L., De Moor B.L.R., Artificial neural networks for modelling and control of non-linear systems, Kluwer Academic Publishers, Boston, 1996.

[21] Suykens J.A.K., Vandewalle J., "Learning a simple recurrent neural state space model to behave like Chua's double scroll," IEEE Transactions on Circuits and Systems-I, Vol.42, No.8, 1995.

[22] Suykens J.A.K., Vandewalle J., "Control of a recurrent neural network emulator for the double scroll," IEEE Transactions on Circuits and Systems-I, Vol.43, No.6, 1996.

[23] Suykens J.A.K., Vandewalle J., "Discrete time interconnected cellular neural networks within NL_q theory," International Journal of Circuit Theory and Applications (Special Issue on Cellular Neural Networks), Vol.24, pp.25-36, 1996.

[24] Suykens J.A.K., De Moor B., Vandewalle J., "NL_q theory: a neural control framework with global asymptotic stability criteria," to appear in Neural Networks.

[25] The MathWorks Inc., "Optimization Toolbox (Version 4.1), User's guide" & "Robust Control Toolbox (Version 4.1), User's guide," Matlab.

[26] Tsoi A.C., Back A.D., "Locally recurrent globally feedforward networks: a critical review of architectures," IEEE Transactions on Neural Networks, Vol.5, No.2, pp.229-239, 1994.

[27] Vandenberghe L., Boyd S., "A primal-dual potential reduction method for problems involving matrix inequalities," Mathematical Programming, Vol.69, pp.205-236, 1995.

[28] Vidyasagar M., Nonlinear systems analysis, Prentice-Hall, 1993.

Captions of Figures

Figure 1. Neural state space model, which is a discrete time recurrent neural network with multilayer perceptrons for the state and output equation and a Kalman gain for taking into account process noise.

Figure 2. Locally recurrent globally feedforward network of Tsoi & Back, consisting of linear filters G_i(z) (i = 1, ..., n-1) for local synapse feedback and G_n(z) for local output feedback.

Figure 3. Comparison between dynamic backpropagation (-), modified dynamic backpropagation with diagonal scaling (- -) and condition number constraint (..) for the error on the training data (*) and test data (o). The errors are plotted with respect to the experiment number. 20 random initial parameter vectors were chosen. The experiments were sorted with respect to the error on the training set.

Figure 4. Behaviour of the true system to be identified. Shown are the state variables of the autonomous system for some randomly chosen initial state and zero noise.

Figure 5. Unwanted limit cycle behaviour of the identified model, after applying classical dynamic backpropagation. The I/O data were generated by a globally asymptotically stable system, corrupted by process noise. Shown are the state variables of the identified model for the autonomous case.

Figure 6. Model identified by applying modified dynamic backpropagation with diagonal scaling constraint. The model is guaranteed to be globally asymptotically stable, as is illustrated for some randomly chosen initial state on this Figure. Shown are the state variables of the identified model for the autonomous case.

Figure 7. Model identified by applying modified dynamic backpropagation with condition number constraint. Local stability at the origin is imposed. Its region of attraction is determined by the condition number. Shown are the state variables of the identified model for the autonomous case.

Figure 1

Figure 2

Figure 3

Figure 4

Figure 5

Figure 6

Figure 7


More information

A new robust delay-dependent stability criterion for a class of uncertain systems with delay

A new robust delay-dependent stability criterion for a class of uncertain systems with delay A new robust delay-dependent stability criterion for a class of uncertain systems with delay Fei Hao Long Wang and Tianguang Chu Abstract A new robust delay-dependent stability criterion for a class of

More information

Porcupine Neural Networks: (Almost) All Local Optima are Global

Porcupine Neural Networks: (Almost) All Local Optima are Global Porcupine Neural Networs: (Almost) All Local Optima are Global Soheil Feizi, Hamid Javadi, Jesse Zhang and David Tse arxiv:1710.0196v1 [stat.ml] 5 Oct 017 Stanford University Abstract Neural networs have

More information

Automatic Structure and Parameter Training Methods for Modeling of Mechanical System by Recurrent Neural Networks

Automatic Structure and Parameter Training Methods for Modeling of Mechanical System by Recurrent Neural Networks Automatic Structure and Parameter Training Methods for Modeling of Mechanical System by Recurrent Neural Networks C. James Li and Tung-Yung Huang Department of Mechanical Engineering, Aeronautical Engineering

More information

Nonlinear System Analysis

Nonlinear System Analysis Nonlinear System Analysis Lyapunov Based Approach Lecture 4 Module 1 Dr. Laxmidhar Behera Department of Electrical Engineering, Indian Institute of Technology, Kanpur. January 4, 2003 Intelligent Control

More information

System Identification by Nuclear Norm Minimization

System Identification by Nuclear Norm Minimization Dept. of Information Engineering University of Pisa (Italy) System Identification by Nuclear Norm Minimization eng. Sergio Grammatico grammatico.sergio@gmail.com Class of Identification of Uncertain Systems

More information

Direct Method for Training Feed-forward Neural Networks using Batch Extended Kalman Filter for Multi- Step-Ahead Predictions

Direct Method for Training Feed-forward Neural Networks using Batch Extended Kalman Filter for Multi- Step-Ahead Predictions Direct Method for Training Feed-forward Neural Networks using Batch Extended Kalman Filter for Multi- Step-Ahead Predictions Artem Chernodub, Institute of Mathematical Machines and Systems NASU, Neurotechnologies

More information

A Distributed Newton Method for Network Utility Maximization

A Distributed Newton Method for Network Utility Maximization A Distributed Newton Method for Networ Utility Maximization Ermin Wei, Asuman Ozdaglar, and Ali Jadbabaie Abstract Most existing wor uses dual decomposition and subgradient methods to solve Networ Utility

More information

MODELING NONLINEAR DYNAMICS WITH NEURAL. Eric A. Wan. Stanford University, Department of Electrical Engineering, Stanford, CA

MODELING NONLINEAR DYNAMICS WITH NEURAL. Eric A. Wan. Stanford University, Department of Electrical Engineering, Stanford, CA MODELING NONLINEAR DYNAMICS WITH NEURAL NETWORKS: EXAMPLES IN TIME SERIES PREDICTION Eric A Wan Stanford University, Department of Electrical Engineering, Stanford, CA 9435-455 Abstract A neural networ

More information

Eects of small delays on stability of singularly perturbed systems

Eects of small delays on stability of singularly perturbed systems Automatica 38 (2002) 897 902 www.elsevier.com/locate/automatica Technical Communique Eects of small delays on stability of singularly perturbed systems Emilia Fridman Department of Electrical Engineering

More information

arxiv: v1 [math.oc] 23 Oct 2017

arxiv: v1 [math.oc] 23 Oct 2017 Stability Analysis of Optimal Adaptive Control using Value Iteration Approximation Errors Ali Heydari arxiv:1710.08530v1 [math.oc] 23 Oct 2017 Abstract Adaptive optimal control using value iteration initiated

More information

/97/$10.00 (c) 1997 AACC

/97/$10.00 (c) 1997 AACC Optimal Random Perturbations for Stochastic Approximation using a Simultaneous Perturbation Gradient Approximation 1 PAYMAN SADEGH, and JAMES C. SPALL y y Dept. of Mathematical Modeling, Technical University

More information

A note on the characteristic equation in the max-plus algebra

A note on the characteristic equation in the max-plus algebra K.U.Leuven Department of Electrical Engineering (ESAT) SISTA Technical report 96-30 A note on the characteristic equation in the max-plus algebra B. De Schutter and B. De Moor If you want to cite this

More information

SVD-based optimal ltering with applications to noise reduction in speech signals Simon Doclo ESAT - SISTA, Katholieke Universiteit Leuven Kardinaal Me

SVD-based optimal ltering with applications to noise reduction in speech signals Simon Doclo ESAT - SISTA, Katholieke Universiteit Leuven Kardinaal Me Departement Elektrotechniek ESAT-SISTA/TR 999- SVD-based Optimal Filtering with Applications to Noise Reduction in Speech Signals Simon Doclo, Marc Moonen April, 999 Internal report This report is available

More information

Embedding Recurrent Neural Networks into Predator-Prey Models Yves Moreau, St phane Louies 2, Joos Vandewalle, L on renig 2 Katholieke Universiteit Le

Embedding Recurrent Neural Networks into Predator-Prey Models Yves Moreau, St phane Louies 2, Joos Vandewalle, L on renig 2 Katholieke Universiteit Le Katholieke Universiteit Leuven Departement Elektrotechniek ESAT-SISTA/TR 998-2 Embedding Recurrent Neural Networks into Predator-Prey Models Yves Moreau 2, Stephane Louies 3, Joos Vandewalle 2, Leon renig

More information

arxiv: v5 [cs.ne] 3 Feb 2015

arxiv: v5 [cs.ne] 3 Feb 2015 Riemannian metrics for neural networs I: Feedforward networs Yann Ollivier arxiv:1303.0818v5 [cs.ne] 3 Feb 2015 February 4, 2015 Abstract We describe four algorithms for neural networ training, each adapted

More information

Singular perturbation analysis of an additive increase multiplicative decrease control algorithm under time-varying buffering delays.

Singular perturbation analysis of an additive increase multiplicative decrease control algorithm under time-varying buffering delays. Singular perturbation analysis of an additive increase multiplicative decrease control algorithm under time-varying buffering delays. V. Guffens 1 and G. Bastin 2 Intelligent Systems and Networks Research

More information

Stability Analysis for Linear Systems under State Constraints

Stability Analysis for Linear Systems under State Constraints Stabilit Analsis for Linear Sstems under State Constraints Haijun Fang Abstract This paper revisits the problem of stabilit analsis for linear sstems under state constraints New and less conservative sufficient

More information

Nonlinear Model Predictive Control for Periodic Systems using LMIs

Nonlinear Model Predictive Control for Periodic Systems using LMIs Marcus Reble Christoph Böhm Fran Allgöwer Nonlinear Model Predictive Control for Periodic Systems using LMIs Stuttgart, June 29 Institute for Systems Theory and Automatic Control (IST), University of Stuttgart,

More information

Linear Matrix Inequality (LMI)

Linear Matrix Inequality (LMI) Linear Matrix Inequality (LMI) A linear matrix inequality is an expression of the form where F (x) F 0 + x 1 F 1 + + x m F m > 0 (1) x = (x 1,, x m ) R m, F 0,, F m are real symmetric matrices, and the

More information

ARTIFICIAL NEURAL NETWORK WITH HYBRID TAGUCHI-GENETIC ALGORITHM FOR NONLINEAR MIMO MODEL OF MACHINING PROCESSES

ARTIFICIAL NEURAL NETWORK WITH HYBRID TAGUCHI-GENETIC ALGORITHM FOR NONLINEAR MIMO MODEL OF MACHINING PROCESSES International Journal of Innovative Computing, Information and Control ICIC International c 2013 ISSN 1349-4198 Volume 9, Number 4, April 2013 pp. 1455 1475 ARTIFICIAL NEURAL NETWORK WITH HYBRID TAGUCHI-GENETIC

More information

Linear Systems with Saturating Controls: An LMI Approach. subject to control saturation. No assumption is made concerning open-loop stability and no

Linear Systems with Saturating Controls: An LMI Approach. subject to control saturation. No assumption is made concerning open-loop stability and no Output Feedback Robust Stabilization of Uncertain Linear Systems with Saturating Controls: An LMI Approach Didier Henrion 1 Sophie Tarbouriech 1; Germain Garcia 1; Abstract : The problem of robust controller

More information

Novel determination of dierential-equation solutions: universal approximation method

Novel determination of dierential-equation solutions: universal approximation method Journal of Computational and Applied Mathematics 146 (2002) 443 457 www.elsevier.com/locate/cam Novel determination of dierential-equation solutions: universal approximation method Thananchai Leephakpreeda

More information

Correspondence should be addressed to Chien-Yu Lu,

Correspondence should be addressed to Chien-Yu Lu, Hindawi Publishing Corporation Discrete Dynamics in Nature and Society Volume 2009, Article ID 43015, 14 pages doi:10.1155/2009/43015 Research Article Delay-Range-Dependent Global Robust Passivity Analysis

More information

Krylov Techniques for Model Reduction of Second-Order Systems

Krylov Techniques for Model Reduction of Second-Order Systems Krylov Techniques for Model Reduction of Second-Order Systems A Vandendorpe and P Van Dooren February 4, 2004 Abstract The purpose of this paper is to present a Krylov technique for model reduction of

More information

N.G.Bean, D.A.Green and P.G.Taylor. University of Adelaide. Adelaide. Abstract. process of an MMPP/M/1 queue is not a MAP unless the queue is a

N.G.Bean, D.A.Green and P.G.Taylor. University of Adelaide. Adelaide. Abstract. process of an MMPP/M/1 queue is not a MAP unless the queue is a WHEN IS A MAP POISSON N.G.Bean, D.A.Green and P.G.Taylor Department of Applied Mathematics University of Adelaide Adelaide 55 Abstract In a recent paper, Olivier and Walrand (994) claimed that the departure

More information