Input-Output Stability of Recurrent Neural Networks with Time-Varying Parameters


Jochen J. Steil, August 3, 2000

Abstract — We provide input-output stability conditions for additive recurrent neural networks, regarding them as dynamical operators between their input and output function spaces. The stability analysis is based on methods from non-linear feedback system theory and includes the case of time-varying weights, as introduced for instance by on-line adaptation. The results assure that there are regions in weight space in which a network operates stably regardless of changes of the weights within the respective region. Bounds on the allowed weight deviations are obtained in a computationally efficient way in the framework of interior point optimization methods for linear matrix inequalities and, under certain conditions, are also valid for convergence of the corresponding state-space solutions. We apply the methodology to a non-trivial trajectory learning task, where we obtain stability regions large enough to cope with parameter drift in the reference model by means of provably stable on-line adaptation.

Keywords — recurrent neural networks, input-output behavior, stability, on-line adaptation, non-linear feedback systems

I. INTRODUCTION

One of the most interesting properties of recurrent neural networks (RNN) is their capability to approximate the time-development of arbitrary dynamical systems [1], [2]. While applications of recurrent networks for content addressable memory [3] and quadratic optimization [4], [5], [6] require convergence to stable fixed points, there has recently been growing interest in using the network's dynamical behavior for processing of time-varying input. In this area there are mainly two important types of network dynamics: on the one hand periodic, quasiperiodic, or even chaotic motions [7], [8], which are used for instance as pattern generators for periodic tasks [9].
On the other hand, recurrent networks can approximate a non-linear mapping between their input and output function spaces for given time-varying input and output reference trajectories. With a number of algorithms available to incrementally adapt a network using time-dependent error signals [9], [10], [11], such networks can solve for instance identification and adaptive control tasks [12], [13], [14], [15] and are widely used in time-series prediction [16], [17], [18]. In control applications the proper functioning of the system crucially depends on the stability behavior of the network, which is hard to analyze due to its non-linear nature. This problem becomes even harder if one of the strongest arguments in favor of neural networks, on-line learning, is used. From the viewpoint of dynamical systems, on-line learning introduces uncertain, time-varying parameters and therefore affects all basic properties of the system, including its stability. A basic requirement for obtaining a proper mapping from input to output functions is then to avoid unbounded responses; more intuitively, this means for instance that the development of internally driven periodic or chaotic motions, which persist if the input vanishes, must be prohibited. In this paper we approach this problem in the following way. We regard a recurrent network as a dynamical operator implementing a mapping between input-output pairs of functions and investigate its input-output stability. This means bounding the function norm of the output relative to the input norm, where input and output functions usually are taken from the space L2 of square integrable functions. (The author is with the Neuroinformatics Group at the Faculty of Technology, University of Bielefeld, Germany. jochen@jsteil.de)
Thus the concept of input-output stability is related to the whole time-development of the inputs and outputs and not to singular internal states of the system, and we can transfer methods of the classical non-linear feedback system theory [19], [20], [21], [22] to this case. We further consider the case of on-line adaptation. In general it is of great interest to know weight ranges in which networks operate stably despite weight changes within that range. Our key idea to approach this problem is a reparametrisation of the time-variance in the weights as additional time-varying feedback in an equivalent auxiliary system with time-invariant linear part. This allows us to treat time-invariant and time-varying recurrent networks in a common framework and avoids employing advanced and complicated methods like the theory of differential inclusions and interval systems, which have been proposed in this context before [23], [24]. In summary, we try to solve the following absolute stability problem for recurrent networks: find a class of non-linear transfer functions and a set of weight matrices such that for all choices of the transfer function within the class and all weight matrices within the set the resulting network is input-output stable. To regard a recurrent network as an input-output system is natural from the point of view of applications, where it is designed to approximate a map from its inputs to reference outputs. On the other hand it is artificial, because neural network models are by definition given in state space, with states corresponding to the activity of the formal neurons and dynamics given by a set of differential equations. Thus, despite the input-output focus of our work, we use the state space formulation as starting point to define the involved operators and then analyze their stability.
A more indirect approach to input-output stability could also be the usage of Lyapunov methods to find a globally asymptotically stable (GAS) equilibrium in the state space [5], [25], [26], [27], which then, under the additional assumption of global Lipschitz continuity or controllability of the network, can imply input-output results. However, these assumptions may not hold, especially when time-varying weights are involved, and therefore we prefer the direct input-output approach and rather show that under certain conditions input-output stability results have state-space equivalents without additional assumptions. This leads in particular to new state space results for networks with time-varying weights. The paper is organized as follows. In Section II we derive the input-output framework and in Section III we introduce a new reparametrisation scheme to include time-varying weights as additional feedbacks in that framework. In Section IV we

derive the main stability results in form of frequency domain conditions, and Section V shows how to evaluate these numerically in the framework of interior-point optimization. Section VI discusses the connections to state space stability and known results about interval systems. In Section VII we present numerical results and apply the proposed methods to an example of recurrent on-line learning in the presence of a drifting model parameter.

Fig. 1. The recurrent network as feedback system with linear forward operators C and G, feedback operator F, input u, and output x. In the box for F we illustrate graphically the sector conditions of inequality (4): the graph of φ_i(x_i) must lie between 0 and k_i x_i.

II. THE INPUT-OUTPUT FRAMEWORK

Most classical stability results for non-linear feedback systems are based on a decomposition principle: the system is subdivided into a feedforward path, described by a linear operator which we below denote by G, and a feedback path, given by a non-linear operator which we denote by F. Then conditions on G and F, treated as independent from each other, can be evaluated and are combined by an inequality to prove stability of the closed loop (G, F). To apply this approach we reformulate the RNN as an input-output feedback system in terms of operators and develop the stability theory in the operator space.

A. The RNN as non-linear feedback system

We consider recurrent networks with state space equations

  x' = -x + W f(x(t), t) + u(t)   (1)

where x ∈ R^n is the state vector, W ∈ R^(n×n) is the weight matrix, and f(x(t), t) = (φ_1(x_1(t), t), ..., φ_n(x_n(t), t))^T is the vector of non-linear, possibly time-varying activation functions φ_i. Application of the Laplace transform to (1) yields

  s x(s) = -x(s) + W L[f(x(t), t)] + u(s),
  x(s) = (sI + I)^(-1) W L[f(x(t), t)] + (sI + I)^(-1) u(s)   (2)

In (2) we identify two linear operators and denote by C (without argument) the operator defined by the complex transfer function C(s) = (sI + I)^(-1), and by G the operator given by C(s)W respectively¹.
In a similar way, F denotes the non-linear operator defined in the time domain by f(x(t), t) or in the complex domain by L[f(x(t), t)]. In operator notation, (2) then becomes

  x = C(u + W F(x)) = C u + G F(x)   (3)

The resulting feedback system is shown as a block diagram in Fig. 1. In the following we stick to the operator notation to derive stability conditions and return to the definitions in the time or complex domain only when concretely evaluating the respective conditions.

Remark 1: Note the difference of the loop with positive feedback in Fig. 1 to the standard approach x = u - G F(x) with negative feedback used in the non-linear feedback theory ([20], [21], [22]). By redefining the input and the forward operator using W we can transform (3) to the conventional form, however at the cost of introducing the inverse W^(-1), which leads to worse stability conditions [29].

B. Sector conditions and parametrisation of the feedback

In the feedback path we assume that the non-linear activation functions are unbiased, i.e. φ_i(0, t) = 0 for all t, and that the so-called sector conditions hold:

  0 ≤ φ_i(x_i, t) x_i ≤ k_i x_i^2   (4)

As illustrated in Fig. 1, this means geometrically that the graph of φ_i(x_i) lies between the lines y(x_i) = 0 and y(x_i) = k_i x_i for all t. In the following we use for (4) the shorthand notation φ_i ∈ [0, k_i], or F ∈ [0, k] if all components have a uniform sector bound. The conditions (4) bound the amplification of x_i caused by application of φ_i but imply neither monotony, continuity, or differentiability, nor boundedness or bounded slope of φ_i. Therefore they allow for a large class of activations, including (unbounded) linear threshold functions and the usual sigmoid neural network activations, e.g. φ_i(x_i) = tanh(x_i) ∈ [0, 1]. The sector conditions allow to parametrise the activation: in the case of time-invariant feedback as φ_i(x_i, t) = φ_i(x_i) = k_i(x_i) x_i, where k_i(x_i) ∈ [0, k_i] is an autonomous parameter dependent on x_i only, and in the case of time-varying feedback as φ_i(x_i, t) = k_i(t) x_i, where k_i(t) ∈ [0, k_i] is a non-autonomous time-varying parameter.
Thus we can rewrite the original system (1) as

  x' = -x + W K(x) x + u(t)   (time-invariant)   (5)
  x' = -x + W K(t) x + u(t)   (time-varying)    (6)

where instead of f there appears the diagonal interval matrix K(x) = diag(k_i(x_i)) or K(t) = diag(k_i(t)) with entries bounded by the sector limits. For instance the function φ(x) = tanh(x) is time-invariant and belongs to the sector [0, 1], i.e. it can be parametrised as tanh(x) = k(x) x, k(x) ∈ [0, 1]. In the following we assume w.l.o.g. φ_i ∈ [0, 1].

¹In principle C, G can also be defined in the time domain as convolution kernels derived from the inverse Laplace transform of C(s), G(s) [28].
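The parametrisation (5) can be checked numerically for tanh. The snippet below is our illustration only (not part of the original analysis): it computes the autonomous parameter k(x) = tanh(x)/x and verifies that it stays in the sector [0, 1].

```python
import numpy as np

def sector_parameter(x, eps=1e-12):
    """Autonomous parameter k(x) with tanh(x) = k(x)*x; tanh(x)/x has a
    removable singularity at 0 with limit 1, which we guard explicitly."""
    x = np.asarray(x, dtype=float)
    safe = np.where(np.abs(x) < eps, 1.0, x)
    return np.where(np.abs(x) < eps, 1.0, np.tanh(safe) / safe)

xs = np.linspace(-5.0, 5.0, 1001)
ks = sector_parameter(xs)
# k(x) lies in (0, 1] and reproduces tanh exactly: tanh belongs to [0, 1]
in_sector = bool(np.all(ks > 0.0) and np.all(ks <= 1.0))
exact = bool(np.allclose(ks * xs, np.tanh(xs)))
```

The same check works for any candidate activation: a sector bound k_i is valid exactly when φ_i(x)/x stays in [0, k_i] for all x ≠ 0.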

III. TIME-VARYING WEIGHTS AND ON-LINE ADAPTATION

A main motivation to apply neural networks for input-output learning is the incremental nature of common learning algorithms, which allows on-line adaptation of the weights to account for instance for changing system parameters or moving noise bias. However, in this case stability becomes very difficult to assure, because the weights change in an unpredictable non-autonomous way. In the following we introduce a new scheme to reparametrise this time-variation of the weights as additional sector bounded time-varying feedback. Then the resulting auxiliary system has the form (6), which allows to investigate stability of the time-invariant and time-varying systems within a unified framework. This further shows that the standard network (1) with time-varying non-linear functions in fact is the generic case to be investigated. We assume that the time-variation of the weights is bounded and thus the system can be described as

  x' = -x + (W + ΔW(t)) f(x) + u(t)   (7)

where the matrix ΔW(t) has unknown, independently varying parameters Δw_ij(t) ∈ [-d_ij, d_ij]. The corresponding system diagram is shown in Fig. 2 (a). For each Δw_ij(t) we introduce two positive parameters θ_ij(t) and ψ_ij(t) such that

  Δw_ij(t) φ_j(x_j) = (θ_ij(t) - ψ_ij(t)) k_j(x_j) x_j = θ_ij(t) k_j(x_j) x_j - ψ_ij(t) k_j(x_j) x_j

with θ_ij(t), ψ_ij(t) ∈ [0, d_ij]. It is equivalent to show stability for all Δw_ij(t) ∈ [-d_ij, d_ij] or for θ_ij(t) and ψ_ij(t) in their respective positive ranges. Note that the time-independent k_j(x_j) parametrising the activation functions φ_j ∈ [0, k_j] can be subsumed in the time-varying θ_ij(t), ψ_ij(t), because their multiplication does not change the positive sectors for θ_ij(t), ψ_ij(t). To incorporate the θ_ij(t), ψ_ij(t) as additional feedback components into the network equations, we enlarge the weight matrix and introduce an additional output matrix R~. Consider for instance a 2-neuron network with W = (w_ij) and only one time-varying weight w_21(t) = w_21 + Δw_21(t). We rewrite (7) as

  x' = -x + W~ K~(t) y + u(t),   y = R~ x   (8)

with

  W~ = [ w_11 w_12 0 0 ; w_21 w_22 1 -1 ],  R~ = [ 1 0 ; 0 1 ; 1 0 ; 1 0 ],  K~(t) = diag(k_1(y_1), k_2(y_2), θ_21(t), ψ_21(t))

Fig. 2. (a) The time-varying network (7) and its reparametrisation (b).

For each further time-varying weight the matrix W~ is enlarged by two additional columns and the output matrix R~ by two further rows in the obvious way. The auxiliary system (8) shown in Fig. 2 (b) has the same form as (1) with sector bounded feedbacks parametrised as in (6). The complex transfer function of the corresponding time-invariant feedforward operator for N time-varying weights is

  G(s) = R~ (sI + I)^(-1) W~ ∈ C^((n+2N)×(n+2N))   (9)

IV. FREQUENCY DOMAIN RESULTS FOR INPUT-OUTPUT STABILITY

In the traditional state space stability theory, conditions involving the frequency function² G(jω) corresponding to (9) are mostly used to assure the existence of a Lyapunov function of the Lur'e-Postnikov type [19], [22]. In the single-input single-output case this is very useful, because simple graphical tests involving the plot of G(jω) for all ω can easily be evaluated. For the general case, however, it has been pointed out (e.g. [5], Remark 4) that their application is unsatisfactory, because checking positive definiteness of a frequency dependent matrix for every frequency is practically not possible and because the construction of the corresponding Lyapunov function requires a controllability assumption on the weight matrix. However, the last remark does not hold for input-output stability criteria, which lead in a natural way directly to frequency conditions without a need to resort to Lyapunov methods. Further, in Section V we will use a version of the Kalman-Yakubovich-Lemma to replace the frequency condition by an equivalent linear matrix inequality, which can efficiently be solved by interior point methods. Finally, in Section VI we will point out that even state space results can directly be obtained from the frequency conditions.
Therefore we believe that frequency methods provide powerful and effective tools, which we will apply and specialize in this section to derive a number of input-output stability theorems for recurrent networks.

²ω ∈ R denotes frequency and j the imaginary unit.

A. Gains and input-output stability

We first provide the basic definitions related to input-output stability. We assume that input and output functions and operators are defined in the space L2 of square integrable functions. L2-stability (input-output stability) of the dynamical operator H_net given by the loop (G, F) in Fig. 1 means that the L2-norm of the output, defined by

  ||x(t)||^2 = ∫_0^∞ x(t)^T x(t) dt,

must not exceed the L2-norm of the input u(t) by more than a positive gain γ_net:

  ||x|| = ||H_net(u)|| ≤ γ_net ||u||   (10)

Note that for unbiased operators

  γ(H) = ||H|| = sup_x ||H(x)|| / ||x||

defines a multiplicative norm in operator space, and thus by ||x|| = ||H_net(u)|| ≤ ||H_net|| ||u||, γ_net = γ(H_net), we obtain an estimate of the network gain. This norm can be evaluated by means of the frequency function H(jω) as

  γ(H) = sup_ω [λ_max(H*(jω) H(jω))]^(1/2)   (11)

Now the condition γ(H_net) < ∞ implies stability, whereas γ(H_net) ≤ M yields a concrete upper bound M for the maximal network amplification. As we lack access to H_net in closed form, we have to resort to the classical approach of non-linear feedback system theory to estimate γ(H_net) in terms of properties of G and F.

Example 1: The small gain theorem for RNN. Taking norms on both sides of (3) yields

  ||x|| ≤ γ(C) ||u|| + γ(G) γ(F) ||x||,
  ||x|| ≤ γ(C) (1 - γ(G) γ(F))^(-1) ||u||.

Therefore the system is stable if the small gain condition γ(G) γ(F) < 1 holds, and the network gain is bounded in terms of the component gains by γ_net ≤ (1 - k ||W||)^(-1), because from (11) we easily obtain γ(C) = 1, γ(F) ≤ k, and γ(G) ≤ ||W|| = [λ_max(W^T W)]^(1/2).

Remark 2: A similar small gain theorem was given by Guzelis & Chua [30]. They include the weight matrix in a modified feedback F W to simplify the forward path to C, but the upper sector bounds for F W then become k ||W||, and the respective gain formula is more conservative.

B. The passivity approach

The small gain approach used in Example 1 gives conservative results, because the norms employ only amplitude information of the signals involved. To include also frequency information of an arbitrary operator R : L2 → L2, we use in the following the scalar product <y, R(y)> between its input and output.
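The small gain estimate of Example 1 needs only the spectral norm of W. The helper below is our numerical sketch of those bounds (γ(C) = 1, γ(G) ≤ ||W||, γ(F) ≤ k); the example matrix is ours and purely illustrative.

```python
import numpy as np

def small_gain(W, k=1.0):
    """Small-gain test for the loop of Fig. 1: with gamma(C) = 1,
    gamma(G) <= ||W||_2 and gamma(F) <= k, the loop is L2-stable if
    k*||W||_2 < 1, with network gain bounded by 1/(1 - k*||W||_2)
    (conservative: amplitude information only)."""
    loop = k * np.linalg.norm(W, 2)   # k times the largest singular value of W
    return (loop < 1.0, 1.0 / (1.0 - loop) if loop < 1.0 else np.inf)

W = np.array([[0.2, -0.3],
              [0.1, 0.25]])
stable, gain_bound = small_gain(W)
```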
Evaluating the inequality

  <y, R(y)> ≥ δ(R) ||y||^2   (12)

we obtain a dissipation coefficient δ(R). The operator R is called (strictly) passive if δ(R) ≥ 0 (> 0). In Appendix A it is shown that this coefficient can be evaluated by means of the real part of the frequency matrix,

  δ(R) = inf_ω λ_min(Re R(jω)),  where Re R(jω) = ½ (R(jω) + R*(jω)).

Passivity of F ∈ [0, k] directly follows from the sector conditions, because

  <y, F(y)> = Σ_i ∫_0^∞ φ_i(y_i(t), t) y_i(t) dt ≥ 0   (13)

The coefficient δ(F^(-1)) relates passivity of F to the inverse sector bound, i.e. δ(F^(-1)) = 1/k for the standard time-invariant case, or δ(F^(-1)) = 1/max_i k_i if the parametrisation described in Section III is applied to replace F by K(t). A positive sum of the dissipation coefficients for the forward path and the feedback now indicates input-output stability:

Theorem 1: The loop (G, F) shown in Fig. 1 is L2-stable if

  δ(-G) + δ(F^(-1)) > 0,  δ(-G) = inf_ω λ_min(Re[-G(jω)])   (14)

It holds the loop gain estimate

  γ(H) ≤ γ(C) / (δ(-G) + δ(F^(-1)))   (15)

The proof is found in Appendix B. As pointed out before, a direct evaluation of the frequency condition (14) requires checking eigenvalues of G(jω) for all ω, which is hardly possible in practice. Therefore we delay the evaluation of this (and the further frequency conditions we develop below) until Section V. Note the dependency of the gain estimation formula (15) on the stability condition (14): the closer the sum δ(-G) + δ(F^(-1)) approaches zero, the larger is the bound on the gain.

Remark 3: There are known passivity results leading to identical passivity conditions ([20], [21], [22]) for the negative feedback configuration described in Remark 1. In comparison with these, Theorem 1 holds for the positive feedback of the recurrent network and yields a simpler gain formula with a sharper gain estimation (because W^(-1) is not used).

C. Improvements by loop transforms

We can improve the frequency condition (14) by considering the auxiliary system shown in Fig. 3, which is equivalent with respect to stability to the original loop (G, F) of Fig. 1. It is derived by performing the following steps: first we define for r ∈ (0, 1) the scaled operators W_r = r^(-1) W and F_r = r F, such that W_r F_r = W F, and replace W and F by W_r and F_r.
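The margin of Theorem 1 can be scanned numerically. The following sketch is our illustration only: checking every frequency is impossible, so it evaluates δ(-G) + δ(F^(-1)) over a finite frequency grid for G(s) = (sI + I)^(-1) W, with an example matrix of our own choosing.

```python
import numpy as np

def theorem1_margin(W, omegas, k=1.0):
    """Grid evaluation of the Theorem 1 margin delta(-G) + delta(F^-1)
    for G(s) = (sI + I)^(-1) W and delta(F^-1) = 1/k.

    A positive return value means condition (14) holds on the sampled
    grid (a numerical illustration, not a proof for all frequencies).
    """
    worst = np.inf
    for w in omegas:
        G = W / (1.0 + 1j * w)                 # (jw*I + I)^(-1) W
        ReG = 0.5 * (G + G.conj().T)           # Hermitian part Re G(jw)
        worst = min(worst, float(np.linalg.eigvalsh(-ReG).min()))
    return worst + 1.0 / k

W = np.array([[0.2, -0.3],
              [0.1, 0.25]])
omegas = np.concatenate(([0.0], np.logspace(-3, 3, 400)))
margin = theorem1_margin(W, omegas)   # positive: (14) holds on the grid
```

Section V replaces exactly this kind of frequency sweep by an equivalent LMI, which avoids gridding altogether.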
Then we apply a standard loop transformation, explained in Fig. 3, to obtain new feedforward and feedback operators (shown in dashed boxes)

  G_r = (C W_r - K^(-1)) T,   F~_r = T^(-1) F_r (I - K^(-1) F_r)^(-1)

Fig. 3. The loop transformation for the recurrent network passivity criterion. We supply around the feedback operator F_r the positive feedback K^(-1) to introduce the componentwise sector bounds. This is balanced in the forward path by subtraction of K^(-1). Finally we multiply by the scaling matrices T, T^(-1). The new forward operator and feedback are indicated by the dashed boxes.

where T = diag(t_i), t_i > 0, is an arbitrary diagonal scaling matrix and K = diag(k_i) contains the upper sector bounds for the original φ_i and the additional feedbacks θ_ij(t), ψ_ij(t) introduced by the time-variation. The parameter r ∈ (0, 1) is included to assure that the operator (I - K^(-1) F_r)^(-1) exists. There are no changes to the input signal u, but the output is transformed accordingly. As long as (I - K^(-1) F_r)^(-1) exists, i.e. r < 1, the original loop (G, F) and the new loop (G_r, F~_r) are equivalent with respect to stability. We obtain

Theorem 2 (RNN passivity criterion): The loop (G, F) is input-output stable if there exists a diagonal matrix T = diag(t_i) > 0 such that

  inf_ω λ_min(Re[(K^(-1) - G(jω)) T]) > 0,  G(jω) = R~ (jωI + I)^(-1) W~   (16)

Proof: From the sector conditions we obtain that for every r ∈ (0, 1) the feedback F~_r is passive, i.e. δ(F~_r) ≥ 0. Further, if the forward operator is strictly passive for r = 1, as (16) requires, then the continuous dependency of the eigenvalues on the matrix entries yields that also for some r < 1 we have δ(G_r) > 0. Thus δ(G_r) + δ(F~_r) > 0, and application of Theorem 1 to (G_r, F~_r) yields the result.

The condition (16) improves Theorem 1 in two important aspects: it includes the arbitrary scaling matrix T, whose entries appear in Section V as free parameters to be optimized, and it introduces componentwise sector bounds as entries of K. Thus we avoid replacing the sector limits by their maximum as in Theorem 1, and their explicit occurrence is the key to optimizing the stability range. Theorem 1 is the special case of (16) with T = I when all sector bounds are equal to their maximum k.

D. The Popov multiplier for time-invariant feedback

Theorems 1 and 2 hold for arbitrary time-varying f(x, t) = K(t)x, where the k_i(t) are restricted only by the sector conditions.
For most standard recurrent networks, however, f is in a positive sector, differentiable, and time-invariant, i.e. f(x, t) = f(x) = K(x)x, k_i(x_i) ∈ [0, k_i], and we can draw on this additional information using a Popov multiplier (I + sQ), Q = diag(q_i), q_i ≥ 0, in the feedforward path [19], [21], [22]: we replace G by G_P = (I + sQ)G(s), where Q is an additional scaling matrix. Here only those q_i which correspond to time-invariant feedbacks may be positive, but it is possible to handle a mixture of time-invariant and time-varying components, which occur by means of the reparametrisation scheme introduced in Section III. Typically there are at first n time-invariant components k_i(x_i) corresponding to φ_i(x_i) and then 2N time-varying θ_ij(t), ψ_ij(t) for N time-varying weights. In the Popov multiplier this corresponds to a matrix Q = diag(q_i), where only the first n entries may be positive and the remaining 2N entries q_(n+1), ..., q_(n+2N) are set to zero. The corresponding forward operator G_P is passive, and we can apply the same loop transformations as for Theorem 2 to obtain the auxiliary G_rP and F_rP. As the transformed F_rP is passive as well and the required inverse operator exists ([22], [29]), it is possible to apply Theorem 2 to the modified forward operator G_P:

Theorem 3 (RNN Popov criterion): The system (7) is input-output stable for all Δw_ij(t) ∈ [-d_ij, d_ij] and all non-linear time-stationary feedback functions φ_i(x_i) = k_i(x_i) x_i, k_i(x_i) ∈ [0, k_i], if there exist diagonal matrices T = diag(t_i) > 0, Q = diag(q_i) ≥ 0 such that

  inf_ω λ_min(Re[(K^(-1) - (I + jωQ) G(jω)) T]) > 0   (17)

where G(jω) = R~ (jωI + I)^(-1) W~ ∈ C^((n+2N)×(n+2N)) is the frequency matrix of the corresponding reparametrised system and K = diag(k_i).

It is interesting that in case there are only time-invariant feedbacks the Popov theorem has a direct algebraic equivalent.

Lemma 1: Consider the system (G, F) with G(s) = (sI + I)^(-1) W and time-invariant feedback F ∈ [0, K]. Then the Popov frequency condition (17) is equivalent to the condition that there exists T = diag(t_i) > 0 such that

  λ_min(Re[(K^(-1) - W) T]) > 0   (18)

Proof: (17) ⇒ (18): choose in (17) the frequency ω = 0. (18) ⇒ (17): choose in (17) Q = I; then (I + jωI)(jωI + I)^(-1) = I and substitution into (17) directly yields (18).
V. NUMERICAL EVALUATION

The frequency conditions (14), (16), and (17) can all be transferred into a respective feasibility problem for a linear matrix inequality (LMI) obtained by the Kalman-Yakubovich-Lemma, and (18) is already in the desired LMI form. The LMI problem can then be solved efficiently by interior point methods ([31], for links to implementations see [32]), which in this case means to decide whether the LMI is feasible, or to choose T optimally in order to obtain the largest possible stability range. Application of the K.-Y.-Lemma in the form given in [33] to the most general condition (17) yields that it is equivalent to the existence of H = H^T ∈ R^(n×n) and diagonal T, Q such that

  [ -2H                      H W~ + R~^T T + R~^T Q  ]
  [ W~^T H + T R~ + Q R~     Q R~ W~ + W~^T R~^T Q - 2 T K^(-1) ]  < 0   (19)
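For the time-invariant special case, the algebraic form (18) can be checked directly, and a uniform sector bound can even be maximized by bisection. The sketch below is our illustration only: it fixes T and a uniform bound k, whereas the LMI formulation above jointly optimizes all free parameters with an interior point solver; Re[A] denotes the symmetric part (A + A^T)/2 as in the text.

```python
import numpy as np

def algebraic_popov_feasible(W, k=1.0, T=None):
    """Check condition (18) for a uniform sector bound k and a given
    diagonal T: Re[(I/k - W)T] > 0, with Re[A] = (A + A^T)/2
    (a fixed-T sketch, not the full LMI feasibility problem)."""
    n = W.shape[0]
    T = np.eye(n) if T is None else np.diag(T)
    M = (np.eye(n) / k - W) @ T
    return bool(np.linalg.eigvalsh(M + M.T).min() > 0.0)

def max_uniform_sector(W, tol=1e-6):
    """Bisect for the largest uniform k with (18) still feasible for
    T = I; a scalar stand-in for maximizing the bounds in the LMI."""
    lo, hi = 0.0, 1.0
    while algebraic_popov_feasible(W, hi) and hi < 1e6:
        lo, hi = hi, 2.0 * hi                  # expand until infeasible
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if algebraic_popov_feasible(W, mid) else (lo, mid)
    return lo

W = np.array([[0.0, 1.0],
              [0.0, 0.0]])
k_max = max_uniform_sector(W)   # (18) with T = I fails once k reaches 2 here
```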

Successive specialization of (19) for W~ = W, R~ = I yields the time-invariant case, then Q = 0 yields (16), and finally T = I leads to (14). For given sector bounds, i.e. a known fixed matrix K, we only have to evaluate feasibility of (19). For the case of time-varying weights, however, we rather want to optimize in order to obtain large sector bounds, which define the maximal allowed deviation of a weight compliant with the stability conditions. In this case both T and K have free parameters, and to avoid the quadratic terms in T K^(-1) we substitute a matrix P = diag(p_i), p_i = t_i / k_i, in (19) and solve the optimization problem

  maximize Σ_i p_i  subject to (19),  H = H^T,  T = diag(t_i) > 0,  Q = diag(q_i) ≥ 0,  P = diag(p_i) ≥ 0

A very convenient feature of this LMI formulation is the possibility to add constraints on the parameters. For instance, to freeze a weight w_ij we simply have to require the corresponding deviation bound to be zero, or, to enforce a minimal allowed stability range for every weight, we can require a positive lower bound on the respective p_i. This is especially useful for the evaluation of noise effects.

VI. RELATION TO STATE SPACE RESULTS

In this section we point out implications which the frequency domain results for input-output stability have for global asymptotic stability (GAS) of the state space solutions of

  x' = -x + W f(x) + u(t)   (20)

for the different cases u = 0, u = const, and general u(t).

A. Direct conclusions for state space trajectories

It is known that from L2-stability the stability of the solutions of the unforced state space system follows only under the additional assumption that the system matrices form a controllable pair and the system is either time-invariant or globally Lipschitz continuous ([22])³. The former assumption, however, may not hold for a number of practically important recurrent networks, for instance those derived for quadratic optimization [5], whereas the latter may be violated if time-varying weights are included.
Therefore it may not be possible to construct a corresponding Lyapunov function of the Lur'e-Postnikov type, and it is important, but less well known, that it is also possible to show directly from the frequency conditions that the unforced state space trajectories approach zero for any initial condition x(0) [34]. Therefore all our results, also for time-varying weights, imply that the corresponding unforced state space trajectories for u(t) = 0 asymptotically approach the origin.

B. Time-invariant networks

The case of time-invariant networks together with constant input u(t) = u is the most investigated in the stability literature (see e.g. [5], [26], [27] and the references therein). ³In principle also observability must be required, but this can be skipped here because we treat the state as output. Usually it is assumed that the activation functions φ_i are globally Lipschitz continuous, differentiable, bounded (sigmoid), and that

  0 ≤ (φ_i(x, t) - φ_i(y, t)) / (x - y) ≤ k_i,  x ≠ y   (21)

The inequalities (21) are also called incremental sector conditions and imply the simple sector conditions (4) with sector bounds k_i. Under these assumptions there always exists an equilibrium x* relative to which a change of coordinates z = x - x* can be performed to obtain the auxiliary system

  z' = -z + W g(z),  g(z) = f(z + x*) - f(x*)   (22)

where g is unbiased and obeys the same incremental sector conditions (21) as f. For the system (22) the best known result for global asymptotic stability was obtained by Forti & Tesi [5], who show that the system (22) is GAS if there exists a diagonal scaling matrix T = diag(t_i) > 0 such that

  Re[(K^(-1) - W) T] > 0   (23)

This condition is equivalent to the inequality (18) we obtained from the Popov theorem in the frequency domain: up to the substitution of W by WK, which corresponds to a change of coordinates from the original x in (20) to z = Kx (see [27]), the two conditions coincide, and thus we obtain (23).
This result contains a direct equivalence proof of input-output stability and state-space stability for time-invariant recurrent networks which does not rely on any further assumption of controllability, nor on the K.-Y.-Lemma or other advanced concepts from system theory. In view of this result we conjecture that for this type of system the conditions (18) or (23), respectively, cannot be further improved, neither for state space nor for input-output stability.

C. Time-varying input in state space

If u(t) is arbitrary, there does not exist an equilibrium point and we need to generalize the state space stability concept to trajectories. The best we can expect in this case is that input-output stability implies that, for a given fixed input function u(t), all solutions x(x_0, u(t)) for all initial conditions x_0 ∈ R^n converge to a unique solution, i.e.

  lim_(t→∞) || x(x_0, u(t)) - x(x_1, u(t)) || = 0   (24)

for any x_0, x_1. This can be shown using a similar coordinate transformation as for constant input. Denote by x̄(t) a solution for arbitrary but fixed initial condition x̄_0 and input ū, and define

  z(t) = x(t) - x̄(t),  u_z(t) = u(t) - ū(t),  z(0) = x_0 - x̄_0.

Substitution in (20) yields for the trajectory z(t) relative to the reference solution x̄:

  z' = -z + W [f(z + x̄(t)) - f(x̄(t))] + u_z(t) = -z + W g(z, t) + u_z(t)

Under the assumption that the time-varying g obeys the incremental sector conditions (21), it follows from the frequency conditions (14), (16) that lim_(t→∞) z(t) = 0 for u_z(t) = 0 and arbitrary initial conditions, which in turn implies (24). In comparison with existing results relating L2-stability to stability of solutions [35], we can once more avoid a controllability assumption here.
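The convergence statement (24) can be observed in simulation. The following sketch uses crude forward-Euler integration and an illustrative weight matrix of our own choosing (not from the paper's experiments): two trajectories driven by the same input but started from different initial conditions approach each other.

```python
import numpy as np

def simulate(W, x0, u, dt=1e-3, steps=20000):
    """Forward-Euler integration of x' = -x + W*tanh(x) + u(t)
    (a simple sketch; a proper integrator would be used in practice)."""
    x = np.array(x0, dtype=float)
    for i in range(steps):
        x = x + dt * (-x + W @ np.tanh(x) + u(i * dt))
    return x

W = np.array([[0.2, -0.3],
              [0.1, 0.25]])   # illustrative weights with ||W||_2 < 1
u = lambda t: np.array([np.sin(t), 0.5 * np.cos(2.0 * t)])

# same input, two different initial conditions: trajectories converge as in (24)
xa = simulate(W, [2.0, -1.0], u)
xb = simulate(W, [-3.0, 4.0], u)
distance = float(np.linalg.norm(xa - xb))
```

Since tanh is 1-Lipschitz, the difference dynamics contract at rate at least 1 - ||W||_2 here, so the residual distance after 20 time units is far below the initial separation.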

D. Systems with interval matrices

So far the problem of time-varying weights has mainly been studied as a special case of a time-varying linear system x' = A(t)x, where A(t) is an element of the convex polytope A of n×n matrices obeying the n² interval restrictions on their elements a_ij(t) that result from the constraints on the φ_i and Δw_ij. In this case one can either solve Lyapunov equations for the vertices of A in parallel [36], or a matrix measure must be chosen and evaluated at all these vertices [25], [37]. Though both problems can in principle be solved numerically by interior point methods [31], the large number of vertex matrices of the polytope renders them intractable in practice for a larger number of varying parameters. In principle, the problem of deciding whether all matrices of a polytope are stable is known to be NP-hard [38] and thus cannot be expected to be solved exactly. However, the reparametrisation introduced in Section III reduces the complexity substantially, to the task of numerically evaluating the frequency conditions through the respective LMI for the auxiliary system enlarged by 2N feedback parameters. We found it possible to evaluate the condition for up to 50 time-varying parameters, which is a large number in contrast to the extraordinarily large number of 2^50 vertices of the corresponding matrix polytope.

VII. COMPUTATIONAL RESULTS

A. Numerical validation

In general the stability criteria presented are sufficient conditions, and it is difficult to judge how close they come to being necessary. Further, their numerical accuracy is difficult to estimate. Therefore we start with some generic examples where we can determine from the structure of the system a worst case linear system, which allows us to compute a necessary condition and thus can serve as benchmark. Consider first the problem of computing the largest regular hypercube [-α, α]^N for which the system

  x' = -x + ΔW(t) f(x) + u(t),  Δw_ij ∈ [-α, α]   (25)

is input-output stable for φ_i ∈ [0, 1] (i.e. W = 0 in (7)).
Denoting the matrix with all entries equal to one by J_n ∈ R^(n×n), we easily obtain the necessary condition that the worst case linear system x' = -x + α J_n x = -(I - α J_n) x must be stable in state space. Obviously, for α < 1/n the matrix -(I - α J_n) is strictly diagonally dominant and has negative eigenvalues, such that we even obtain a sharp necessary and sufficient stability margin. Numerical evaluation of the corresponding linear matrix inequality (19), derived from the frequency condition of Theorem 3, approaches this margin with high accuracy. This case corresponds to a fully interconnected recurrent network with independently time-varying connection strengths smaller than α. The situation becomes more complex when we add in (25) a fixed interconnection matrix W (we choose K = I for simplicity):

  x' = -x + W f(x) + ΔW(t) f(x) + u(t)

Fig. 4. For W_1 the margin α* = ᾱ is necessary and sufficient. The gap between the necessary (linear) conditions, indicated by α*, and the sufficient (non-linear) conditions ᾱ is shaded for W_2 and W_3.

We then evaluate the stability margin with respect to the entries of ΔW(t). We still obtain a necessary condition from the linear system x' = -x + W x + α J_n x = -(I - W - α J_n) x by evaluating the maximal α* which preserves negative real parts for all eigenvalues. However, due to the lack of symmetry in W, in general this necessary margin is not sufficient, and we expect a gap to the results for the full non-linear range obtained by the methods proposed above. In Fig. 4 the maximal α* for the linear case, together with the largest sufficient margin ᾱ obtained numerically for the non-linear case, is shown for matrices W_1, W_2, W_3 with entries of magnitude r. Proportionally to the magnitude r and the number of connections defined by the elements of W_2, W_3, the respective stability margins α*, ᾱ shrink. It is interesting that for W_1, which is upper triangular with only positive entries, the margins α* and ᾱ coincide⁴. In the other cases there is a grey shaded region between ᾱ and α*, where we can neither conclude stability nor instability.
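The necessary margin α < 1/n from the worst case linear system can be reproduced in a few lines (our illustrative check: J_n has eigenvalues n and 0, so -(I - α J_n) is Hurwitz exactly for α < 1/n).

```python
import numpy as np

def worst_case_linear_stable(n, alpha):
    """Stability of the worst case linear system x' = -(I - alpha*J_n)x,
    with J_n the all-ones matrix; Hurwitz iff alpha < 1/n, since the
    eigenvalues of -(I - alpha*J_n) are alpha*n - 1 and -1."""
    A = -(np.eye(n) - alpha * np.ones((n, n)))
    return bool(np.linalg.eigvals(A).real.max() < 0.0)

n = 5
below = worst_case_linear_stable(n, 0.9 / n)   # just below the margin 1/n
above = worst_case_linear_stable(n, 1.1 / n)   # just above the margin 1/n
```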
In all cases the corresponding hypercube $[-\hat\alpha, \hat\alpha]^N$ centered at $W$ defines a region such that for all weight changes inside this region the system remains provably stable.

⁴We tested this with the same result for many other positive upper triangular matrices. There may be a whole class of systems where the linear and non-linear margins coincide, i.e. where the linear worst case condition is necessary and sufficient.

B. Application to on-line trajectory learning

Typically we proceed in three steps: (i) we train the network by recurrent backpropagation off-line without monitoring any stability conditions, (ii) we prove that the derived network is input-output stable, and (iii) we derive a stability region in weight space centered at the learned weight matrix to allow on-line adaptation and to tolerate noise. In the following we analyze the stability of a network which is adapted to behave for a given input $u$ like a first order filter $\tau \dot z = -z + u$ with a certain time constant $\tau$. For step one we use a fully connected network of five neurons with input $u(t)$, output $x_o(t)$ at the output node $o$, and reference output $\hat x(t)$, which results in the network equations

$\dot x_i = -x_i + \sum_j w_{ij}\,\varphi_j(x_j) + w_i^u\, u(t)$   (26)

where $\varphi_j(x_j) = \tanh(x_j)$ in the experiments.

Fig. 5. The network learned to behave like a filter from a chaotic sample trajectory (the learned and the reference trajectory are almost coincident). It is validated for a step signal.

For adaptation we employ fully continuous backpropagation (as in [10]) to minimize the error functional

$E(t_0, t_1) = \int_{t_0}^{t_1} \big(x_o(t) - \hat x(t)\big)^2\, dt.$

The input signal is chosen to be the first coordinate function of the well known three dimensional chaotic Roessler dynamics, which was numerically integrated over a large number of time steps. The reference output is the corresponding filtered signal; both are shown (partially) in Fig. 5. As can be seen from the unit-step response, the network indeed learns the corresponding dynamical operator from the chaotic input-output pair, however with some overshoot. The learned matrix $W$ together with the input weights obtained is given in Appendix C. Proceeding with the stability analysis of step (ii), we first evaluate the stability of the time-invariant system (26) for fixed $W$ using Lemma 1. We easily obtain that the corresponding linear matrix inequality $(I - W)T + T(I - W)^T > 0$ is feasible even for $T = I$, i.e. all eigenvalues of $I - \frac{1}{2}(W + W^T)$ are positive, and we obtain that the learned system will for arbitrary $L_2$-input functions yield an $L_2$ output function.⁵ Finally we turn to the most important and interesting step (iii) and investigate the size of the provably stable regions, which can be used for on-line adaptation. The chosen task is to cope with (slowly) varying properties of the reference model, presented here in form of a slowly increasing time constant $\tau(t)$, given for times $t \in [0, T]$ and a constant $M$ as

$\tau(t) = M\,(1 - \cos(\omega t))$   (27)

which results in $\tau_{\max} = 2M$.
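The time-invariant feasibility check with $T = I$ reduces to a plain eigenvalue test on the symmetric part of $W$, which is easy to reproduce. In the sketch below a small random matrix stands in for the learned weights (the actual entries are not reproduced here), and a strongly self-exciting matrix serves as a failing counterexample.

```python
import numpy as np

def l2_stable_T_identity(W):
    """Sufficient stability check with trivial scaling T = I:
    the LMI is feasible if all eigenvalues of I - (W + W.T)/2
    are positive."""
    S = np.eye(W.shape[0]) - 0.5 * (W + W.T)
    return bool(np.min(np.linalg.eigvalsh(S)) > 0.0)

rng = np.random.default_rng(0)
W_small = 0.1 * rng.standard_normal((5, 5))   # hypothetical learned weights
print(l2_stable_T_identity(W_small))          # small weights pass the test
print(l2_stable_T_identity(3.0 * np.eye(5)))  # False: strong self-excitation fails
```

With a non-trivial diagonal scaling $T$ the same test becomes an LMI search over $T$, which is where the interior point solvers come in.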
Using on-line adaptation to balance this non-linear drift has in general two disadvantages: it is time and memory consuming, and the numerical effort to prove stability of the network in the presence of many time-varying weights quickly becomes intractable, even with the methods proposed here. Further, the stability range will be small if many weight changes are allowed, and the changes may interact in complex ways. For the example matrix $W$ this results in a small uniform margin $\alpha$. A better compromise between flexibility and stability of the network, and a practically reasonable approach, is to adapt only a subset of the weights on-line. It is intuitive to choose for adaptation the weights of the output node(s), and the performance results displayed in Fig. 6 indicate that this strategy can indeed be very useful and yield good performance. The uniform stability margin for this subset, the output weights, then increases considerably. Fig. 6 also shows that adaptation of the input weights contributes very much to the performance, though they are irrelevant with respect to stability. In Fig. 7 the maximal deviations which occurred numerically during on-line learning are shown together with the maximal allowed deviations obtained by optimizing the stability range for the output weights using the LMI (19). By optimization we obtained component-wise upper and lower bounds which prove that the on-line adapted network never left the stability region.

⁵Here scaling by $T$ is not necessary, but in more complicated cases, as for instance reported in [39], scaling with a non-trivial $T$ is in general essential.

Fig. 6. The normalized mean square error (NRMS) for the slowly drifting reference model parameter $\tau(t) \in [0, 2M]$ according to (27).

Fig. 7. The bars indicate the maximal allowed range of the output weight deviations according to the stability criteria; also shown are the maximal deviations which occurred during on-line learning. The system learned provably stably.
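Enforcing the component-wise bounds during learning amounts to a simple clipping of each adapted output weight into its certified interval after every gradient step. A minimal sketch, with hypothetical weights and a hypothetical uniform $\pm 0.15$ certified margin:

```python
import numpy as np

def project_to_stability_box(w, lower, upper):
    """Clip each adapted weight into its certified interval
    [lower_i, upper_i] so the network never leaves the
    provably stable region."""
    return np.minimum(np.maximum(w, lower), upper)

# Hypothetical learned output weights and certified +/- 0.15 margins.
w_learned = np.array([0.4, -0.7, 0.1, 0.9, -0.2])
lower, upper = w_learned - 0.15, w_learned + 0.15

w_step = w_learned + np.array([0.05, -0.30, 0.10, 0.00, 0.20])  # raw update
print(project_to_stability_box(w_step, lower, upper))
```

Updates that stay inside the box pass through unchanged; the second and fifth components above are clipped back to the interval boundary.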
In the converse direction we can use these bounds to restrict the learning to the allowed region in order to assure stability.

VIII. CONCLUSIONS

We derived a framework to analyze the stability of the input-output behavior of a recurrent network based on a reparametrisation scheme for the time-variance of the weights, the application of

frequency domain conditions, and the usage of interior point optimization for the corresponding linear matrix inequalities. We gain from this analysis that an on-line adapted network is provably stable as long as the (optimized) bounds for the time-varying weights are respected. Though the methods derived are theoretically involved and use a number of advanced concepts from non-linear feedback system theory, the results can easily be applied, because there are a number of reference implementations for interior point optimization to solve the respective linear matrix inequality problems. It is especially appealing that this requires only the data of the network, i.e. there are no magic parameters or numerical problems hidden. The computational effort remains tractable for a reasonable number of time-varying weights, up to about 50. We regard this approach to guarantee the network's stability as a step towards the application of recurrent networks for dynamical tasks in more critical domains, as for instance engineering systems, where the costs of misbehavior and reset of the system can be very high. A main area of further research is the problem that a priori it is not clear whether the obtained stability ranges are large enough to allow reasonably effective on-line adaptation. The example given here and a second case study in trajectory learning [39] show that this is the case for some non-trivial tasks, and currently more systematic studies are carried out to further clarify this point. Though such studies require a large amount of simulations and are numerically costly, we believe that this investment contributes to a better basis for gaining benefit from the exploitation of the vast dynamical possibilities of recurrent networks in real applications.

APPENDIX B

Proof of Theorem 1: We apply the passivity inequality (13) to the operator $N$: the inner product $\langle N(x), x \rangle$ is expanded along the feedback relation involving the transfer matrix $C(j\omega)$ and the input $u$, where we need the passivity of $N$ as in (12) in the second step and the condition (14) of the theorem in the last step.
It follows that the norm of the output $y$ is bounded in terms of the norm of the input $u$, which establishes input-output stability.

APPENDIX C: EXAMPLE WEIGHT MATRIX

The learned $5 \times 5$ weight matrix $W$ of the trajectory learning task of Section VII-B, used together with the input weights $w^u$.

APPENDIX A: THE DISSIPATION COEFFICIENT

In the time domain the linear operator defined by $G(s)$ is represented by a convolution kernel which is obtained from the inverse Laplace transform $G(t) = \mathcal{L}^{-1}[G(s)]$. Application of $G$ to a time function $e(t)$ yields

$y(t) = (G e)(t) = (G * e)(t) = \int_0^t G(t - \sigma)\, e(\sigma)\, d\sigma.$

Assuming that $G(s)$ is the transfer matrix of a stable linear convolution operator, we only need to consider solutions $x(t)$ for which the Fourier transform $\hat x(j\omega)$ exists. Then we can compute the dissipation coefficient using the Parseval formula:

$\langle x, Gx \rangle = \int x^T(t)\,(Gx)(t)\, dt = \frac{1}{2\pi} \int \hat x^T(-j\omega)\, G(j\omega)\, \hat x(j\omega)\, d\omega \geq \inf_\omega \lambda_{\min}\!\Big(\tfrac{1}{2}\big(G(j\omega) + G^T(-j\omega)\big)\Big)\, \frac{1}{2\pi} \int \hat x^T(-j\omega)\, \hat x(j\omega)\, d\omega = \inf_\omega \lambda_{\min}\!\Big(\tfrac{1}{2}\big(G(j\omega) + G^T(-j\omega)\big)\Big)\, \| x \|_2^2.$
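The infimum in the Parseval bound can be approximated on a frequency grid. The sketch below uses a hypothetical stable $2 \times 2$ system with $G(s) = (sI - A)^{-1}$ standing in for the transfer matrix; the Hermitian part of $G(j\omega)$ plays the role of $\mathrm{Re}\, G(j\omega)$ above.

```python
import numpy as np

def dissipation_coefficient(A, omegas):
    """Grid approximation of inf_w lambda_min(Re G(jw)) for the
    transfer matrix G(s) = (sI - A)^{-1}, where
    Re G(jw) = (G(jw) + G(jw)^H) / 2 is the Hermitian part."""
    n = A.shape[0]
    best = np.inf
    for w in omegas:
        G = np.linalg.inv(1j * w * np.eye(n) - A)
        H = 0.5 * (G + G.conj().T)
        best = min(best, np.min(np.linalg.eigvalsh(H)))
    return best

A = np.array([[-1.0, 0.5], [-0.5, -1.0]])      # hypothetical stable system
omegas = np.linspace(-50.0, 50.0, 2001)
print(dissipation_coefficient(A, omegas))       # small positive value
```

For this $A$ the quantity stays positive on the grid but decays like $1/\omega^2$, so the grid must be chosen wide enough to approximate the infimum over all frequencies.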

ACKNOWLEDGMENTS

The author was supported by the German Research Foundation (DFG) under grant Ri 621/2-1. He would like to acknowledge many helpful discussions with Prof. I. B. Junger and Prof. H. Ritter.

REFERENCES

[1] K. Funahashi and Y. Nakamura, "Approximation of dynamical systems by continuous time recurrent neural networks," Neural Networks, vol. 6.
[2] Elias B. Kosmatopoulos, Marios M. Polycarpou, Manolis A. Christodoulou, and Petros A. Ioannou, "High-order neural network structures for identification of dynamical systems," IEEE Transactions on Neural Networks, vol. 6, no. 2, March.
[3] J. J. Hopfield, "Neurons with graded response have collective computational properties like those of two-state neurons," Proc. Nat. Acad. Sci. USA, vol. 81.
[4] X.-Y. Wu, Y.-S. Xia, J. Li, and W.-K. Chen, "A high-performance neural network for solving linear and quadratic programming problems," IEEE Transactions on Neural Networks, vol. 7, no. 3, May.
[5] M. Forti and A. Tesi, "New conditions for global stability of neural networks with application to linear and quadratic programming problems," IEEE Transactions on Circuits and Systems-I: Fundamental Theory and Applications, vol. 42, no. 7.
[6] Abdesselam Bouzerdoum and Tim R. Pattison, "Neural network for quadratic optimisation with bound constraints," IEEE Transactions on Neural Networks, vol. 4, no. 2, March.
[7] M. Galicki, L. Leistritz, and H. Witte, "Learning continuous trajectories in recurrent neural networks with time-dependent weights," IEEE Trans. Neural Networks, vol. 10, no. 4.
[8] L. Wang, "Oscillatory and chaotic dynamics in neural networks under varying operating conditions," IEEE Transactions on Neural Networks, vol. 7, no. 6, November.
[9] A. Ruiz, D. H. Owens, and S. Townley, "Existence, learning, and replication of periodic motions in recurrent neural networks," IEEE Trans. Neural Networks, vol. 9.
[10] B. A. Pearlmutter, "Gradient calculations for dynamic recurrent neural networks: A survey," IEEE Transactions on Neural Networks, vol. 6, no. 5.
[11] M. K. Sundareshan and T. A. Condarcure, "Recurrent neural-network training by a learning automaton approach for trajectory learning and control system," IEEE Transactions on Neural Networks, vol. 9, no. 3, May.
[12] S. Lu and T. Basar, "Robust nonlinear system identification using neural-network models," IEEE Transactions on Neural Networks, vol. 9, no. 3, May.
[13] E. Rios-Patron and R. D. Braatz, "On the identification and control of dynamical systems using neural networks," IEEE Transactions on Neural Networks, vol. 8, no. 2, p. 452, March.
[14] Asriel U. Levin and Kumpati S. Narendra, "Control of nonlinear dynamical systems using neural networks, part II: Observability, identification, and control," IEEE Transactions on Neural Networks, vol. 7, no. 1, January.
[15] K. J. Hunt, D. Sbarbaro, R. Zbikowski, and P. J. Gawthrop, "Neural networks for control systems: a survey," Automatica, vol. 28, no. 6.
[16] F. Badran and S. Thiria, "Neural network smoothing in correlated time series context," Neural Networks, vol. 10, no. 8.
[17] Vassilios Petridis and Athanasios Kehagias, "A recurrent network implementation of time series classification," Neural Computation, vol. 8, no. 2.
[18] Hong Pi and Carsten Peterson, "Finding the embedding dimension and variable dependencies in time series," Neural Computation, vol. 6, no. 3.
[19] K. S. Narendra and J. H. Taylor, Frequency Domain Criteria for Absolute Stability, Academic Press, New York.
[20] C. Desoer and M. Vidyasagar, Feedback Systems: Input-Output Properties, Academic Press, New York.
[21] C. Harris and J. Valenca, The Stability of Input-Output Dynamical Systems, Academic Press, London.
[22] M. Vidyasagar, Nonlinear Systems Analysis, Prentice Hall, 2nd edition.
[23] H. Ye, A. N. Michel, and K. Wang, "Robust stability of nonlinear time-delay systems with applications to neural networks," IEEE Transactions on Circuits and Systems-I: Fundamental Theory and Applications, vol. 43, no. 7.
[24] K. Tanaka, "An approach to stability criteria of neural-network control systems," IEEE Transactions on Neural Networks, vol. 7, no. 3.
[25] Y. Fang and T. G. Kincaid, "Stability analysis of dynamical neural networks," IEEE Transactions on Neural Networks, vol. 7, no. 4.
[26] X. Liang and L. Wu, "Global exponential stability of Hopfield-type neural network and its applications," Science in China (Series A), vol. 38, no. 6.
[27] K. Matsuoka, "Stability conditions for nonlinear continuous neural networks with asymmetric connection weights," Neural Networks, vol. 5.
[28] Jochen J. Steil and Helge Ritter, "Input-output stability of recurrent neural networks with delays using circle criteria," in Proc. Int. ICSC/IFAC Symp. Neural Computation, ICSC Academic Press.
[29] Jochen J. Steil, Input-Output Stability of Recurrent Neural Networks, Cuvillier Verlag, Göttingen, 1999 (also: PhD dissertation, Faculty of Technology, Bielefeld University, 1999).
[30] C. Guzelis and L. O. Chua, "Stability analysis of generalized cellular neural networks," Int. J. Circuit Theory and Applications, vol. 21, no. 1, pp. 1-33.
[31] S. Boyd, L. El Ghaoui, E. Feron, and V. Balakrishnan, Linear Matrix Inequalities in System and Control Theory, vol. 15 of SIAM Studies in Applied Mathematics, SIAM, Philadelphia.
[32] Interior point methods online.
[33] Anders Rantzer, "On the Kalman-Yakubovich-Popov lemma," Systems & Control Letters, vol. 28, pp. 7-10.
[34] I. B. Junger, "Criterion of absolute stability for automatic systems with nonlinear vector elements," Automation and Remote Control, vol. 50, no. 2.
[35] V. Fromion, S. Monaco, and C. Normand-Cyrot, "A link between input-output stability and Lyapunov stability," Systems & Control Letters, vol. 27.
[36] K. Wang and A. N. Michel, "Stability analysis of differential inclusions in Banach space with applications to nonlinear systems with time delays," IEEE Transactions on Circuits and Systems-I: Fundamental Theory and Applications, vol. 43, no. 8.
[37] Y. Fang, K. A. Loparo, and X. Feng, "A sufficient condition for stability of a polytope of matrices," Systems & Control Letters, vol. 23.
[38] A. Nemirovskij, "Several NP-hard problems arising in robust stability analysis," Mathematics of Control, Signals, and Systems, vol. 6, no. 1.
[39] J. J. Steil and H. Ritter, "Recurrent learning of input-output stable behaviour in function space: A case study with the Roessler attractor," in Proc. ICANN, IEE.


More information

Copyrighted Material. 1.1 Large-Scale Interconnected Dynamical Systems

Copyrighted Material. 1.1 Large-Scale Interconnected Dynamical Systems Chapter One Introduction 1.1 Large-Scale Interconnected Dynamical Systems Modern complex dynamical systems 1 are highly interconnected and mutually interdependent, both physically and through a multitude

More information

LINEAR variational inequality (LVI) is to find

LINEAR variational inequality (LVI) is to find IEEE TRANSACTIONS ON NEURAL NETWORKS, VOL. 18, NO. 6, NOVEMBER 2007 1697 Solving Generally Constrained Generalized Linear Variational Inequalities Using the General Projection Neural Networks Xiaolin Hu,

More information

arxiv: v3 [math.oc] 1 Sep 2018

arxiv: v3 [math.oc] 1 Sep 2018 arxiv:177.148v3 [math.oc] 1 Sep 218 The converse of the passivity and small-gain theorems for input-output maps Sei Zhen Khong, Arjan van der Schaft Version: June 25, 218; accepted for publication in Automatica

More information

A Generalized Uncertainty Principle and Sparse Representation in Pairs of Bases

A Generalized Uncertainty Principle and Sparse Representation in Pairs of Bases 2558 IEEE TRANSACTIONS ON INFORMATION THEORY, VOL 48, NO 9, SEPTEMBER 2002 A Generalized Uncertainty Principle Sparse Representation in Pairs of Bases Michael Elad Alfred M Bruckstein Abstract An elementary

More information

arxiv: v1 [math.dg] 19 Mar 2012

arxiv: v1 [math.dg] 19 Mar 2012 BEREZIN-TOEPLITZ QUANTIZATION AND ITS KERNEL EXPANSION XIAONAN MA AND GEORGE MARINESCU ABSTRACT. We survey recent results [33, 34, 35, 36] about the asymptotic expansion of Toeplitz operators and their

More information

QUANTITATIVE L P STABILITY ANALYSIS OF A CLASS OF LINEAR TIME-VARYING FEEDBACK SYSTEMS

QUANTITATIVE L P STABILITY ANALYSIS OF A CLASS OF LINEAR TIME-VARYING FEEDBACK SYSTEMS Int. J. Appl. Math. Comput. Sci., 2003, Vol. 13, No. 2, 179 184 QUANTITATIVE L P STABILITY ANALYSIS OF A CLASS OF LINEAR TIME-VARYING FEEDBACK SYSTEMS PINI GURFIL Department of Mechanical and Aerospace

More information

Temporal Backpropagation for FIR Neural Networks

Temporal Backpropagation for FIR Neural Networks Temporal Backpropagation for FIR Neural Networks Eric A. Wan Stanford University Department of Electrical Engineering, Stanford, CA 94305-4055 Abstract The traditional feedforward neural network is a static

More information

VARIATIONAL CALCULUS IN SPACE OF MEASURES AND OPTIMAL DESIGN

VARIATIONAL CALCULUS IN SPACE OF MEASURES AND OPTIMAL DESIGN Chapter 1 VARIATIONAL CALCULUS IN SPACE OF MEASURES AND OPTIMAL DESIGN Ilya Molchanov Department of Statistics University of Glasgow ilya@stats.gla.ac.uk www.stats.gla.ac.uk/ ilya Sergei Zuyev Department

More information

A Delay-dependent Condition for the Exponential Stability of Switched Linear Systems with Time-varying Delay

A Delay-dependent Condition for the Exponential Stability of Switched Linear Systems with Time-varying Delay A Delay-dependent Condition for the Exponential Stability of Switched Linear Systems with Time-varying Delay Kreangkri Ratchagit Department of Mathematics Faculty of Science Maejo University Chiang Mai

More information

Adaptive Control of a Class of Nonlinear Systems with Nonlinearly Parameterized Fuzzy Approximators

Adaptive Control of a Class of Nonlinear Systems with Nonlinearly Parameterized Fuzzy Approximators IEEE TRANSACTIONS ON FUZZY SYSTEMS, VOL. 9, NO. 2, APRIL 2001 315 Adaptive Control of a Class of Nonlinear Systems with Nonlinearly Parameterized Fuzzy Approximators Hugang Han, Chun-Yi Su, Yury Stepanenko

More information

Robust Observer for Uncertain T S model of a Synchronous Machine

Robust Observer for Uncertain T S model of a Synchronous Machine Recent Advances in Circuits Communications Signal Processing Robust Observer for Uncertain T S model of a Synchronous Machine OUAALINE Najat ELALAMI Noureddine Laboratory of Automation Computer Engineering

More information

Introduction to Machine Learning Spring 2018 Note Neural Networks

Introduction to Machine Learning Spring 2018 Note Neural Networks CS 189 Introduction to Machine Learning Spring 2018 Note 14 1 Neural Networks Neural networks are a class of compositional function approximators. They come in a variety of shapes and sizes. In this class,

More information

Analysis of stability for impulsive stochastic fuzzy Cohen-Grossberg neural networks with mixed delays

Analysis of stability for impulsive stochastic fuzzy Cohen-Grossberg neural networks with mixed delays Analysis of stability for impulsive stochastic fuzzy Cohen-Grossberg neural networks with mixed delays Qianhong Zhang Guizhou University of Finance and Economics Guizhou Key Laboratory of Economics System

More information

Narrowing confidence interval width of PAC learning risk function by algorithmic inference

Narrowing confidence interval width of PAC learning risk function by algorithmic inference Narrowing confidence interval width of PAC learning risk function by algorithmic inference Bruno Apolloni, Dario Malchiodi Dip. di Scienze dell Informazione, Università degli Studi di Milano Via Comelico

More information

Secure Communications of Chaotic Systems with Robust Performance via Fuzzy Observer-Based Design

Secure Communications of Chaotic Systems with Robust Performance via Fuzzy Observer-Based Design 212 IEEE TRANSACTIONS ON FUZZY SYSTEMS, VOL 9, NO 1, FEBRUARY 2001 Secure Communications of Chaotic Systems with Robust Performance via Fuzzy Observer-Based Design Kuang-Yow Lian, Chian-Song Chiu, Tung-Sheng

More information

Riccati difference equations to non linear extended Kalman filter constraints

Riccati difference equations to non linear extended Kalman filter constraints International Journal of Scientific & Engineering Research Volume 3, Issue 12, December-2012 1 Riccati difference equations to non linear extended Kalman filter constraints Abstract Elizabeth.S 1 & Jothilakshmi.R

More information

The Kalman-Yakubovich-Popov Lemma for Differential-Algebraic Equations with Applications

The Kalman-Yakubovich-Popov Lemma for Differential-Algebraic Equations with Applications MAX PLANCK INSTITUTE Elgersburg Workshop Elgersburg February 11-14, 2013 The Kalman-Yakubovich-Popov Lemma for Differential-Algebraic Equations with Applications Timo Reis 1 Matthias Voigt 2 1 Department

More information

The Rationale for Second Level Adaptation

The Rationale for Second Level Adaptation The Rationale for Second Level Adaptation Kumpati S. Narendra, Yu Wang and Wei Chen Center for Systems Science, Yale University arxiv:1510.04989v1 [cs.sy] 16 Oct 2015 Abstract Recently, a new approach

More information

Prediction-based adaptive control of a class of discrete-time nonlinear systems with nonlinear growth rate

Prediction-based adaptive control of a class of discrete-time nonlinear systems with nonlinear growth rate www.scichina.com info.scichina.com www.springerlin.com Prediction-based adaptive control of a class of discrete-time nonlinear systems with nonlinear growth rate WEI Chen & CHEN ZongJi School of Automation

More information

State feedback gain scheduling for linear systems with time-varying parameters

State feedback gain scheduling for linear systems with time-varying parameters State feedback gain scheduling for linear systems with time-varying parameters Vinícius F. Montagner and Pedro L. D. Peres Abstract This paper addresses the problem of parameter dependent state feedback

More information

ADAPTIVE FILTER THEORY

ADAPTIVE FILTER THEORY ADAPTIVE FILTER THEORY Fourth Edition Simon Haykin Communications Research Laboratory McMaster University Hamilton, Ontario, Canada Front ice Hall PRENTICE HALL Upper Saddle River, New Jersey 07458 Preface

More information

An Observation on the Positive Real Lemma

An Observation on the Positive Real Lemma Journal of Mathematical Analysis and Applications 255, 48 49 (21) doi:1.16/jmaa.2.7241, available online at http://www.idealibrary.com on An Observation on the Positive Real Lemma Luciano Pandolfi Dipartimento

More information

The Connectivity of Boolean Satisfiability: Computational and Structural Dichotomies

The Connectivity of Boolean Satisfiability: Computational and Structural Dichotomies The Connectivity of Boolean Satisfiability: Computational and Structural Dichotomies Parikshit Gopalan Georgia Tech. parik@cc.gatech.edu Phokion G. Kolaitis Ý IBM Almaden. kolaitis@us.ibm.com Christos

More information

A Cross-Associative Neural Network for SVD of Nonsquared Data Matrix in Signal Processing

A Cross-Associative Neural Network for SVD of Nonsquared Data Matrix in Signal Processing IEEE TRANSACTIONS ON NEURAL NETWORKS, VOL. 12, NO. 5, SEPTEMBER 2001 1215 A Cross-Associative Neural Network for SVD of Nonsquared Data Matrix in Signal Processing Da-Zheng Feng, Zheng Bao, Xian-Da Zhang

More information

Passivity Indices for Symmetrically Interconnected Distributed Systems

Passivity Indices for Symmetrically Interconnected Distributed Systems 9th Mediterranean Conference on Control and Automation Aquis Corfu Holiday Palace, Corfu, Greece June 0-3, 0 TuAT Passivity Indices for Symmetrically Interconnected Distributed Systems Po Wu and Panos

More information

Gramians based model reduction for hybrid switched systems

Gramians based model reduction for hybrid switched systems Gramians based model reduction for hybrid switched systems Y. Chahlaoui Younes.Chahlaoui@manchester.ac.uk Centre for Interdisciplinary Computational and Dynamical Analysis (CICADA) School of Mathematics

More information

The ϵ-capacity of a gain matrix and tolerable disturbances: Discrete-time perturbed linear systems

The ϵ-capacity of a gain matrix and tolerable disturbances: Discrete-time perturbed linear systems IOSR Journal of Mathematics (IOSR-JM) e-issn: 2278-5728, p-issn: 2319-765X. Volume 11, Issue 3 Ver. IV (May - Jun. 2015), PP 52-62 www.iosrjournals.org The ϵ-capacity of a gain matrix and tolerable disturbances:

More information

Research Article Stabilization Analysis and Synthesis of Discrete-Time Descriptor Markov Jump Systems with Partially Unknown Transition Probabilities

Research Article Stabilization Analysis and Synthesis of Discrete-Time Descriptor Markov Jump Systems with Partially Unknown Transition Probabilities Research Journal of Applied Sciences, Engineering and Technology 7(4): 728-734, 214 DOI:1.1926/rjaset.7.39 ISSN: 24-7459; e-issn: 24-7467 214 Maxwell Scientific Publication Corp. Submitted: February 25,

More information

Lecture 21: Spectral Learning for Graphical Models

Lecture 21: Spectral Learning for Graphical Models 10-708: Probabilistic Graphical Models 10-708, Spring 2016 Lecture 21: Spectral Learning for Graphical Models Lecturer: Eric P. Xing Scribes: Maruan Al-Shedivat, Wei-Cheng Chang, Frederick Liu 1 Motivation

More information

Control of Mobile Robots

Control of Mobile Robots Control of Mobile Robots Regulation and trajectory tracking Prof. Luca Bascetta (luca.bascetta@polimi.it) Politecnico di Milano Dipartimento di Elettronica, Informazione e Bioingegneria Organization and

More information

Characterization of Convex and Concave Resource Allocation Problems in Interference Coupled Wireless Systems

Characterization of Convex and Concave Resource Allocation Problems in Interference Coupled Wireless Systems 2382 IEEE TRANSACTIONS ON SIGNAL PROCESSING, VOL 59, NO 5, MAY 2011 Characterization of Convex and Concave Resource Allocation Problems in Interference Coupled Wireless Systems Holger Boche, Fellow, IEEE,

More information

Graph and Controller Design for Disturbance Attenuation in Consensus Networks

Graph and Controller Design for Disturbance Attenuation in Consensus Networks 203 3th International Conference on Control, Automation and Systems (ICCAS 203) Oct. 20-23, 203 in Kimdaejung Convention Center, Gwangju, Korea Graph and Controller Design for Disturbance Attenuation in

More information

A Language for Task Orchestration and its Semantic Properties

A Language for Task Orchestration and its Semantic Properties DEPARTMENT OF COMPUTER SCIENCES A Language for Task Orchestration and its Semantic Properties David Kitchin, William Cook and Jayadev Misra Department of Computer Science University of Texas at Austin

More information

THIS paper deals with robust control in the setup associated

THIS paper deals with robust control in the setup associated IEEE TRANSACTIONS ON AUTOMATIC CONTROL, VOL 50, NO 10, OCTOBER 2005 1501 Control-Oriented Model Validation and Errors Quantification in the `1 Setup V F Sokolov Abstract A priori information required for

More information

USING DYNAMIC NEURAL NETWORKS TO GENERATE CHAOS: AN INVERSE OPTIMAL CONTROL APPROACH

USING DYNAMIC NEURAL NETWORKS TO GENERATE CHAOS: AN INVERSE OPTIMAL CONTROL APPROACH International Journal of Bifurcation and Chaos, Vol. 11, No. 3 (2001) 857 863 c World Scientific Publishing Company USING DYNAMIC NEURAL NETWORKS TO GENERATE CHAOS: AN INVERSE OPTIMAL CONTROL APPROACH

More information

Interval solutions for interval algebraic equations

Interval solutions for interval algebraic equations Mathematics and Computers in Simulation 66 (2004) 207 217 Interval solutions for interval algebraic equations B.T. Polyak, S.A. Nazin Institute of Control Sciences, Russian Academy of Sciences, 65 Profsoyuznaya

More information

Indirect Model Reference Adaptive Control System Based on Dynamic Certainty Equivalence Principle and Recursive Identifier Scheme

Indirect Model Reference Adaptive Control System Based on Dynamic Certainty Equivalence Principle and Recursive Identifier Scheme Indirect Model Reference Adaptive Control System Based on Dynamic Certainty Equivalence Principle and Recursive Identifier Scheme Itamiya, K. *1, Sawada, M. 2 1 Dept. of Electrical and Electronic Eng.,

More information

SUBOPTIMALITY OF THE KARHUNEN-LOÈVE TRANSFORM FOR FIXED-RATE TRANSFORM CODING. Kenneth Zeger

SUBOPTIMALITY OF THE KARHUNEN-LOÈVE TRANSFORM FOR FIXED-RATE TRANSFORM CODING. Kenneth Zeger SUBOPTIMALITY OF THE KARHUNEN-LOÈVE TRANSFORM FOR FIXED-RATE TRANSFORM CODING Kenneth Zeger University of California, San Diego, Department of ECE La Jolla, CA 92093-0407 USA ABSTRACT An open problem in

More information

FINITE HORIZON ROBUST MODEL PREDICTIVE CONTROL USING LINEAR MATRIX INEQUALITIES. Danlei Chu, Tongwen Chen, Horacio J. Marquez

FINITE HORIZON ROBUST MODEL PREDICTIVE CONTROL USING LINEAR MATRIX INEQUALITIES. Danlei Chu, Tongwen Chen, Horacio J. Marquez FINITE HORIZON ROBUST MODEL PREDICTIVE CONTROL USING LINEAR MATRIX INEQUALITIES Danlei Chu Tongwen Chen Horacio J Marquez Department of Electrical and Computer Engineering University of Alberta Edmonton

More information

1 Lyapunov theory of stability

1 Lyapunov theory of stability M.Kawski, APM 581 Diff Equns Intro to Lyapunov theory. November 15, 29 1 1 Lyapunov theory of stability Introduction. Lyapunov s second (or direct) method provides tools for studying (asymptotic) stability

More information

Optimization of Quadratic Forms: NP Hard Problems : Neural Networks

Optimization of Quadratic Forms: NP Hard Problems : Neural Networks 1 Optimization of Quadratic Forms: NP Hard Problems : Neural Networks Garimella Rama Murthy, Associate Professor, International Institute of Information Technology, Gachibowli, HYDERABAD, AP, INDIA ABSTRACT

More information

A Strict Stability Limit for Adaptive Gradient Type Algorithms

A Strict Stability Limit for Adaptive Gradient Type Algorithms c 009 IEEE. Personal use of this material is permitted. However, permission to reprint/republish this material for advertising or promotional A Strict Stability Limit for Adaptive Gradient Type Algorithms

More information

Delay-dependent Stability Analysis for Markovian Jump Systems with Interval Time-varying-delays

Delay-dependent Stability Analysis for Markovian Jump Systems with Interval Time-varying-delays International Journal of Automation and Computing 7(2), May 2010, 224-229 DOI: 10.1007/s11633-010-0224-2 Delay-dependent Stability Analysis for Markovian Jump Systems with Interval Time-varying-delays

More information

On the Power of Robust Solutions in Two-Stage Stochastic and Adaptive Optimization Problems

On the Power of Robust Solutions in Two-Stage Stochastic and Adaptive Optimization Problems MATHEMATICS OF OPERATIONS RESEARCH Vol. 35, No., May 010, pp. 84 305 issn 0364-765X eissn 156-5471 10 350 084 informs doi 10.187/moor.1090.0440 010 INFORMS On the Power of Robust Solutions in Two-Stage

More information

On Information Maximization and Blind Signal Deconvolution

On Information Maximization and Blind Signal Deconvolution On Information Maximization and Blind Signal Deconvolution A Röbel Technical University of Berlin, Institute of Communication Sciences email: roebel@kgwtu-berlinde Abstract: In the following paper we investigate

More information

Acceleration of Levenberg-Marquardt method training of chaotic systems fuzzy modeling

Acceleration of Levenberg-Marquardt method training of chaotic systems fuzzy modeling ISSN 746-7233, England, UK World Journal of Modelling and Simulation Vol. 3 (2007) No. 4, pp. 289-298 Acceleration of Levenberg-Marquardt method training of chaotic systems fuzzy modeling Yuhui Wang, Qingxian

More information

Cascade Neural Networks with Node-Decoupled Extended Kalman Filtering

Cascade Neural Networks with Node-Decoupled Extended Kalman Filtering Cascade Neural Networks with Node-Decoupled Extended Kalman Filtering Michael C. Nechyba and Yangsheng Xu The Robotics Institute Carnegie Mellon University Pittsburgh, PA 15213 Abstract Most neural networks

More information

THE NOTION of passivity plays an important role in

THE NOTION of passivity plays an important role in 2394 IEEE TRANSACTIONS ON SIGNAL PROCESSING, VOL. 46, NO. 9, SEPTEMBER 1998 Passivity Analysis and Passification for Uncertain Signal Processing Systems Lihua Xie, Senior Member, IEEE, Minyue Fu, and Huaizhong

More information