A Systematic Design of Emulators for Multivariable Non Square and Nonlinear Systems


Nesrine Bahri 1, Asma Atig 1, Ridha Ben Abdennour 1, Fabrice Druaux 2, Dimitri Lefebvre 2
1 University of Gabes, National School of Engineers of Gabes, Street Omar Ibn Khattab, 6029 Gabes, Tunisia
2 University of Le Havre, 25 Rue Philippe Lebon, Le Havre, France

Research Article. Manuscript received April 2, 2014; accepted March 4, 2015; published online June 20, 2016. Recommended by Associate Editor Sheng Chen. © Institute of Automation, Chinese Academy of Sciences and Springer-Verlag GmbH Germany 2016

Abstract: In this paper, multimodel and neural emulators are proposed for uncoupled multivariable nonlinear plants with unknown dynamics. The contributions of this paper are to extend the emulators to multivariable non square systems and to propose a systematic method to compute the multimodel synthesis parameters. The effectiveness of the proposed emulators is shown through two simulation examples. The obtained results are very satisfactory: they illustrate the performance of both emulators and show the advantages of the multimodel emulator relative to the neural one.

Keywords: Uncoupled multimodel, neural networks, emulation, multivariable nonlinear systems, parameter estimation.

1 Introduction

The study of industrial processes generally requires a precise knowledge of the process dynamics. The presence of strong nonlinearities in complex system dynamics makes linearized models inefficient, so the use of nonlinear models becomes necessary. Promising alternatives to the usual linearization methods are obtained with neural networks and multimodel approaches.

Thanks to their approximation properties, neural networks have been successfully investigated and applied for the emulation of nonlinear plants [1-3]. In our recent works, we have developed emulators based on recurrent neural networks and a real time recurrent learning algorithm [4, 5]. These emulators are composed of a small number of nodes with a set of parameters that evolve autonomously from zero initial conditions. They were proved to be able to track the system dynamics within a short time window. Consequently, they provide useful information for the control design of nonlinear systems and have been successfully applied in the domain of chemical reactors [5, 6]. Unfortunately, these emulators suffer from two strong limitations: they were developed only for square multi-input multi-output (MIMO) systems, and their performance depends strongly on a specific initialization parameter which is difficult to compute.

The present paper aims to overcome these limitations. The first contribution is to extend the neural emulator (NE) to non square MIMO nonlinear systems. The second contribution is to replace the NE by a MIMO multimodel emulator in order to avoid the problem related to the selection of the initialization parameter. For this purpose, we continue our investigation with multimodel emulators that were originally developed for single input single output (SISO) nonlinear systems [7, 8]. The basic idea of this approach is the decomposition of the full operating range of the process into several operating regimes. In each operating regime, a simple local model is considered. The parameters of the local models result from an offline identification procedure of the considered nonlinear system.
The global multimodel output can be obtained by fusion of the different outputs of a library of models [9-14]. In this paper, an uncoupled multimodel approach is preferred [7-13]. The main advantages of this structure are that the sub-models are completely independent and that linear system techniques can be applied easily. The fusion requires the determination of the weighting function parameters. Another contribution of this work is to extend the classification methods introduced for this determination in the case of SISO systems [9, 15, 16] to MIMO nonlinear systems. The performance of the obtained emulators is finally compared.

This paper is organized as follows. Section 2 presents the neural emulator algorithm. The multimodel emulator is presented in Section 3, where the systematic determination of the synthesis parameters is also detailed. Section 4 concludes the paper.

2 Neural emulator (NE)

Let us define N_IN and N_OUT, respectively, as the number of plant inputs and outputs, where IN and OUT represent the sets of inputs and outputs. The NE developed in this section extends our previous works that were originally proposed for SISO systems [8] and square MIMO systems [6]. In this work, we investigate non square MIMO systems. The proposed NE is developed with fully connected recurrent neural networks (Fig. 1). Input and output signals are normalized in the range [-1, 1]. In the proposed structures,

any node is either an input node or an output one, but not both at the same time. There is a total number N_e of neurons that satisfies N_e = N_IN + N_OUT. These neurons are separated into input neurons and output neurons in order to decouple the output and the input signals. The N_IN inputs of the NE are the system inputs and the N_OUT outputs are the estimated system outputs. The size of such networks depends only on the number of inputs and outputs. Moreover, input signals do not perturb the outputs. It is also important to notice that the aim of the NE is to emulate the instantaneous outputs but not to memorize the dynamics of the system. For this reason, the size of the resulting NE remains small even for complex systems.

Fig. 1 Neural emulator with fully connected structure

2.1 Autonomous adaptation algorithm

In discrete time, k stands for the time variable with sampling period \Delta T. For i = 1, ..., N_e, the dynamics of the N_e neurons of the neural emulator, which generalize our previous work [8], are defined by the following equation:

s_i(k+1) = e^{-\tau_e(k)\Delta T} s_i(k) + (1 - e^{-\tau_e(k)\Delta T}) \tanh\Big(\sum_{j=1}^{N_e} w_{ij}(k) s_j(k) + x_i(k)\Big)   (1)

where s_i(k) is the state of the i-th neuron whose input is x_i(k), w_{ij}(k) and \tau_e(k) represent respectively the weight from the j-th neuron to the i-th neuron and the adaptive time parameter, x_i(k) = u_i(k) if i \in \{1, ..., N_IN\}, x_i(k) = 0 if i \in \{N_IN+1, ..., N_e\}, and y_{n,i-N_IN}(k) = s_i(k) if i \in \{N_IN+1, ..., N_e\} represents the estimation of the plant outputs.

As in [8], the real time recurrent learning (RTRL) algorithm is used in discrete time to update the parameters of the NE. For each time k, the global estimation error e_e(k) is introduced as

e_e(k) = \frac{1}{2} \sum_{l=1}^{N_OUT} (\tilde{y}_l(k))^2 = \sum_{l=1}^{N_OUT} e_{el}(k)   (2)

where \tilde{y}_l(k) = y_{nl}(k) - y_l(k) is the scalar instantaneous emulation error for output y_l(k). A gradient method is used to update the parameters w_{ij}(k) with (3):

\Delta w_{ij}(k) = -\eta_e(k) \Delta T \sum_{l=1}^{N_OUT} \tilde{y}_l(k-1) \frac{\partial y_{nl}(k-1)}{\partial w_{ij}}   (3)

where \eta_e(k) is the emulator learning rate. The variations of y_{nl} with respect to w_{ij} are estimated with the introduction of sensitivity functions whose dynamics have been studied in our previous work [8] and that are trivially extended to the non square MIMO case in the present work. An advantage of the proposed method is that the adaptive time parameter \tau_e(k) and the learning rate \eta_e(k) are also updated, with (4) and (5). As a consequence, the NE was proved to emulate nonlinear dynamics efficiently [4, 6]:

\Delta \eta_e(k) = -\Delta T \sum_{l=1}^{N_OUT} \tilde{y}_l(k-1) \frac{\partial y_{nl}(k-1)}{\partial \eta_e}   (4)

\Delta \tau_e(k) = -\eta_e(k) \Delta T \sum_{l=1}^{N_OUT} \tilde{y}_l(k-1) \frac{\partial y_{nl}(k-1)}{\partial \tau_e}.   (5)

The variations of y_{nl} with respect to \eta_e and \tau_e are estimated with the introduction of another set of sensitivity functions v_l(k) whose variations are given by (6):

v_l(k+1) = e^{-\tau_e(k)\Delta T} v_l(k) + (1 - e^{-\tau_e(k)\Delta T}) \Big(\alpha_l(k) \sum_{h=1}^{N_e} w_{lh}(k) v_h(k) + \frac{\varepsilon_e}{\tau_e(k)}\Big).   (6)

In (6), let us focus on the parameter \varepsilon_e. This parameter has been introduced and studied in our previous works [4, 6, 8] in order to start with zero initial conditions for all other parameters: w_{ij}, \eta_e and \tau_e. Consequently, another characteristic of the proposed NE is that its performance depends on the value of a single parameter. The selection of this parameter is also the main difficulty of this method.
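To make the adaptation loop above concrete, the following minimal sketch implements the state update (1) and a gradient step in the spirit of (2) and (3) for a fully connected recurrent emulator. It keeps tau_e and eta_e fixed for readability (the paper also adapts them online through (4) and (5)), derives the RTRL sensitivities directly from (1), and all function names and numerical values are illustrative assumptions rather than the paper's settings.

```python
import numpy as np

def make_emulator(n_in, n_out, dT=0.1, tau_e=1.0, eta_e=0.05):
    """Fully connected recurrent emulator with N_e = n_in + n_out neurons.

    Weights start at zero, as in the paper; tau_e and eta_e are kept
    constant here (illustrative values), whereas the paper adapts them."""
    Ne = n_in + n_out
    return {
        "W": np.zeros((Ne, Ne)),            # weights w_ij
        "s": np.zeros(Ne),                  # neuron states s_i(k)
        "P": np.zeros((Ne, Ne, Ne)),        # RTRL sensitivities ds_i/dw_pq
        "n_in": n_in,
        "a": np.exp(-tau_e * dT),           # e^{-tau_e * dT}
        "eta_dT": eta_e * dT,
    }

def step(em, u, y_meas):
    """One emulation + adaptation step; returns the emulated outputs."""
    W, s, P, a, n_in = em["W"], em["s"], em["P"], em["a"], em["n_in"]
    Ne = W.shape[0]

    x = np.zeros(Ne)
    x[:n_in] = u                            # only input neurons receive u

    net = W @ s + x
    phi = np.tanh(net)
    dphi = 1.0 - phi ** 2

    # State update, eq. (1)
    s_next = a * s + (1.0 - a) * phi

    # RTRL recursion for ds_i/dw_pq obtained by differentiating eq. (1):
    # P'[i,p,q] = a P[i,p,q] + (1-a) dphi[i] (delta_ip s[q] + sum_j W[i,j] P[j,p,q])
    P_next = a * P + (1.0 - a) * dphi[:, None, None] * np.einsum("ij,jpq->ipq", W, P)
    for p in range(Ne):
        P_next[p, p, :] += (1.0 - a) * dphi[p] * s

    # Output neurons are the last n_out ones; instantaneous errors as in eq. (2)
    y_n = s_next[n_in:]
    err = y_n - np.asarray(y_meas)

    # Gradient step on the weights, in the spirit of eq. (3)
    grad = np.einsum("l,lpq->pq", err, P_next[n_in:])
    em["W"] = W - em["eta_dT"] * grad

    em["s"], em["P"] = s_next, P_next
    return y_n

# Example: one-input, two-output emulator (N_e = 3), driven by a test signal
em = make_emulator(n_in=1, n_out=2)
for k in range(200):
    u = np.array([np.sin(0.05 * k)])
    y_meas = np.array([0.0, 0.0])           # replace with the measured plant outputs
    y_hat = step(em, u, y_meas)
```

Repeating step(em, u, y_meas) over the recorded input and output sequences reproduces the open-loop emulation scenario used in the examples below.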
2.2 Numerical examples

In this subsection, simulations are proposed to illustrate the performance of the NE for square and non square MIMO nonlinear systems. The structure and the dynamics of the considered systems (subsequently defined by (9) and (10)) are assumed to be completely unknown to the adaptation process. For each output y_l of the considered systems, the mean square error (MSE_l) and the variance-accounted-for (VAF_l) are respectively computed with (7) and (8):

MSE_l = \frac{1}{N_H} \sum_{k=1}^{N_H} (y_{nl}(k) - y_l(k))^2   (7)

VAF_l = \max\Big\{1 - \frac{\mathrm{var}\{y_{nl}(k) - y_l(k) : k = 1, ..., N_H\}}{\mathrm{var}\{y_{nl}(k) : k = 1, ..., N_H\}}, 0\Big\} \times 100\%   (8)

where y_{nl}(k) and y_l(k) are respectively the emulator and the system outputs and N_H is the simulation horizon. MSE is a mean indicator of the estimation error over a complete trajectory. In contrast, VAF gives an estimation of the dispersion of the instantaneous errors.
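As a small illustration, the two criteria can be computed directly from the recorded trajectories; the function names below are ours, not the paper's.

```python
import numpy as np

def mse(y_n, y):
    """Mean square error (7) over the simulation horizon."""
    y_n, y = np.asarray(y_n), np.asarray(y)
    return np.mean((y_n - y) ** 2)

def vaf(y_n, y):
    """Variance-accounted-for (8), clipped at 0 and expressed in percent."""
    y_n, y = np.asarray(y_n), np.asarray(y)
    return max(1.0 - np.var(y_n - y) / np.var(y_n), 0.0) * 100.0
```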

2.2.1 NE for square MIMO nonlinear systems

To evaluate the effectiveness of the neural emulator for a square MIMO nonlinear system, we consider the following two-input, two-output nonlinear system:

y_1(k) = 0.2 y_1(k-1) + u_2(k-2)^3 + u_1(k-2)^2 - 0.2 y_1(k-1) u_1(k-2) - 0.1 u_2(k-1)

y_2(k) = 0.05 y_2(k-1) + u_1(k-2)^3 + u_2(k-1) - y_2(k-2) u_2(k-1).

A four-neuron neural network is then capable of emulating the considered nonlinear system. This number of neurons depends only on the numbers of inputs and outputs (N_e = N_IN + N_OUT = 4, N_IN = 2 and N_OUT = 2). According to the system's dynamics, the sampling period is set to 0.1 s. The term ε_e is chosen to ensure the starting of the system emulation with zero initial conditions for all other parameters. With a good choice of ε_e, we obtain good performance. For example, results of neural emulation using ε_e = 10 are given in Fig. 2 (the emulated outputs are represented with solid lines and the system outputs with dashed lines). The neural emulator provides, in this case, a satisfactory estimation of the process outputs. The low values of the output estimation errors plotted in logarithmic scale (Fig. 3) confirm this observation.

Fig. 2 Variations of the square MIMO nonlinear system outputs and the corresponding NE outputs (ε_e = 10)

2.2.2 NE for non square MIMO nonlinear systems

To evaluate the effectiveness of the neural emulator for a non square MIMO process, we consider the following one-input, two-output nonlinear system:

y_1(k) = \frac{1}{2}\big((-z_{11}(k-1)) y_1(k-1) + z_{11}(k-1) u_1(k-1)\big)   (9)

y_2(k) = \frac{1}{2}\big((1.5 - z_{12}(k-1) - 0.2 z_{22}(k-1)) y_2(k-1) + z_{12}(k-1) u_1(k-1)\big)   (10)

with

z_{11}(k-1) = \frac{0.6 u_1(k-1) - 0.06 y_1(k-1)}{1 + 0.2 y_1(k-1)}

z_{12}(k-1) = \frac{0.5 u_1(k-1)}{1 + 0.5 y_2(k-1)}

z_{22}(k-1) = \frac{0.07 y_2(k-1)}{1 + 0.1 y_2(k-1)}.

In this case, a three-neuron neural network is capable of emulating the considered nonlinear system. This number of neurons still depends only on the numbers of inputs and outputs (N_e = N_IN + N_OUT = 3, N_IN = 1 and N_OUT = 2). The sampling period is again set to 0.1 s. With a suitable choice of ε_e = 20, we obtain the results shown in Fig. 4, and Fig. 5 shows the variations of the output estimation errors plotted in logarithmic scale. These figures show that the neural emulator provides, in this case, a relatively satisfactory estimation of the process outputs.

It should be noted that an appropriate value of ε_e is not obtained in a systematic way: this choice is made by trial and error. Indeed, an arbitrary choice of ε_e can affect the performance. For example, choosing ε_e equal to 0.4 for the square MIMO system and ε_e equal to 0.8 for the non square MIMO system leads to an emulation with relatively large output estimation errors. To confirm that the obtained performance depends on this emulator parameter, the mean square error (MSE_l) and the variance-accounted-for (VAF_l) have been calculated for an arbitrary and for a good choice of ε_e. They are summarized in Tables 1 and 2, respectively, for the square and non square MIMO nonlinear systems. We also note that the online neural emulation needs an important computing load. To overcome these problems, a multimodel emulator for multivariable nonlinear systems is proposed in the next section.
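For completeness, a short simulation of the non square benchmark, as reconstructed in (9) and (10) above, can be written as follows. It only generates the plant trajectories that the emulators are asked to reproduce, and the excitation signal is an arbitrary choice of ours, not the paper's identification signal.

```python
import numpy as np

def simulate_nonsquare(u, y0=(0.0, 0.0)):
    """Simulate the one-input, two-output benchmark (9)-(10), as reconstructed above."""
    N = len(u)
    y1, y2 = np.zeros(N), np.zeros(N)
    y1[0], y2[0] = y0
    for k in range(1, N):
        z11 = (0.6 * u[k-1] - 0.06 * y1[k-1]) / (1.0 + 0.2 * y1[k-1])
        z12 = 0.5 * u[k-1] / (1.0 + 0.5 * y2[k-1])
        z22 = 0.07 * y2[k-1] / (1.0 + 0.1 * y2[k-1])
        y1[k] = 0.5 * (-z11 * y1[k-1] + z11 * u[k-1])                      # eq. (9)
        y2[k] = 0.5 * ((1.5 - z12 - 0.2 * z22) * y2[k-1] + z12 * u[k-1])   # eq. (10)
    return y1, y2

# Variable-amplitude excitation in [-1, 1], in the spirit of Section 3
t = np.arange(1000)
u = np.clip(np.sin(0.03 * t) + 0.3 * np.sin(0.011 * t), -1.0, 1.0)
y1, y2 = simulate_nonsquare(u)
```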

Fig. 3 Variations of the neural emulation squared errors in logarithmic scale (square MIMO nonlinear system, ε_e = 10)

Fig. 4 Variations of the non square MIMO nonlinear system outputs and the corresponding NE outputs (ε_e = 20)

Fig. 5 Variations of the neural emulation squared errors in logarithmic scale (non square MIMO nonlinear system, ε_e = 20)

Table 1 MSE_l and VAF_l (l = 1, 2) for both outputs, in the open-loop emulation case, with an arbitrary and a good choice of ε_e (square MIMO nonlinear system)

          Neural emulator (ε_e = 0.4)    Neural emulator (ε_e = 10)
MSE_1
MSE_2
VAF_1     72.93%                         95.49%
VAF_2     82.9%                          96.99%

Table 2 MSE_l and VAF_l (l = 1, 2) for both outputs, in the open-loop emulation case, with an arbitrary and a good choice of ε_e (non square MIMO nonlinear system)

          Neural emulator (ε_e = 0.8)    Neural emulator (ε_e = 20)
MSE_1
MSE_2
VAF_1                                    91.33%
VAF_2                                    93.51%

3 Multimodel emulator (ME)

The multimodel formalism is based on the decomposition of the full operating range of the process into a finite number of operating regimes. In each operating regime, a linear local model is considered. A nonlinear interpolation between these linear submodels is then used to yield the global model (Fig. 6). Thus, the multimodel structure makes it possible to reduce the complexity of nonlinear systems.

Different techniques exist to obtain the multimodel. In all cases, three major problems should be resolved. First, the premise variables ξ(k) of the weighting functions should be defined. In the present work, these variables are defined as the input signals. Secondly, the operating space should be decomposed and the corresponding weighting functions characterized. Initially, the static characteristic of the considered system was used for the operating space decomposition [7, 8, 17, 18]. Then, methods based on classification using the Kohonen map and the Chiu classification were proposed for SISO nonlinear systems [9-11, 16, 19, 20]. In the present work, this method is extended to the MIMO case. Finally, the structure of the multimodel should be determined and the parameters of each submodel should be identified. In the present work, a Levenberg-Marquardt optimization type algorithm is used for that purpose.

3.1 The multimodel emulator structure

Our goal is to represent a multivariable nonlinear system with a decoupled multimodel. For simplicity reasons, the MIMO model is decomposed into several multi-input single-output (MISO) multimodels: each MISO multimodel is attached to one of the system outputs. Thereafter, identification tools available in [8, 12, 17], although developed in the context of SISO systems, are directly used to identify the MIMO system. The uncoupled structure, used for the l-th base of models B_l representing every MISO multimodel, l ∈ {1, ..., N_OUT}, is given by a state space representation in discrete time (Fig. 6):

X_{l,i}(k+1) = A_{l,i}(\theta_{l,i}) X_{l,i}(k) + B_{l,i}(\theta_{l,i}) U(k)   (11)

y_{l,i}(k) = C_{l,i}(\theta_{l,i}) X_{l,i}(k)   (12)

where X_{l,i} \in R^{n_{l,i}} and U(k) = [u_1(k), ..., u_e(k), ..., u_{N_IN}(k)]^T \in R^{N_IN} are the state vector and the input vector, and y_{l,i}(k) and n_{l,i} are, respectively, the output and the dimension of the i-th local model. A_{l,i}(\theta_{l,i}), B_{l,i}(\theta_{l,i}) and C_{l,i}(\theta_{l,i}) are, respectively, the state matrix, the input matrix and the output matrix, the latter of dimension 1 x n_{l,i}. The multimodel representing the l-th system output y_{ml}(k) is defined by

y_{ml}(k) = \sum_{i=1}^{N_{ml}} \mu_{l,i}(\xi(k)) y_{l,i}(k)   (13)

where N_{ml} is the number of local models for the l-th system output and \xi(k) = [\xi_1(k), ..., \xi_N(k)]^T is the decision variable vector (N = N_IN or N = N_OUT depending on the choice of the decision variables). In this work, the input vector is set as the decision variables (\xi(k) = U(k)). The weighting function \mu_{l,i}(\xi(k)) quantifies the relative weight of the i-th submodel (A_{l,i}, B_{l,i}, C_{l,i}) with respect to the other ones in the global model. Such functions are generated by the decision variable vector to model the nonlinearities of the system.

Fig. 6 The ME-based structure for multivariable nonlinear systems
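To make the decoupled structure concrete, the sketch below propagates a small library of local state-space models (11) and (12) and fuses their outputs with (13). The weighting values are assumed to be already computed (their Gaussian form is given next), and the matrices are illustrative placeholders rather than identified parameters.

```python
import numpy as np

class LocalModel:
    """One local model (A_{l,i}, B_{l,i}, C_{l,i}) of the l-th MISO multimodel."""
    def __init__(self, A, B, C):
        self.A, self.B, self.C = np.atleast_2d(A), np.atleast_2d(B), np.atleast_2d(C)
        self.x = np.zeros(self.A.shape[0])          # state X_{l,i}(k)

    def step(self, u):
        y = float(self.C @ self.x)                               # eq. (12)
        self.x = self.A @ self.x + self.B @ np.atleast_1d(u)     # eq. (11)
        return y

def multimodel_output(models, weights, u):
    """Fused output (13): y_ml(k) = sum_i mu_{l,i}(xi(k)) y_{l,i}(k)."""
    ys = np.array([m.step(u) for m in models])
    return float(np.dot(weights, ys))

# Illustrative base of two first-order local models for one output
models = [LocalModel(A=[[0.8]], B=[[0.2]], C=[[1.0]]),
          LocalModel(A=[[0.5]], B=[[0.5]], C=[[1.0]])]
y_ml = multimodel_output(models, weights=np.array([0.3, 0.7]), u=np.array([0.1]))
```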

These functions are either sigmoidal, triangular or Gaussian. In the present work, Gaussian functions are used to generate the weighting functions:

w_{l,i}(\xi(k)) = \prod_{e=1}^{N_IN} e^{-\frac{(\xi_e(k) - c_{l,ie})^2}{\sigma_{l,ie}^2}}.   (14)

The weighting functions \mu_{l,i}(\xi(k)) are obtained by normalizing the Gaussian functions w_{l,i}(\xi(k)) (see Fig. 6):

\mu_{l,i}(\xi(k)) = \frac{w_{l,i}(\xi(k))}{\sum_{p=1}^{N_{ml}} w_{l,p}(\xi(k))}.   (15)

These functions must satisfy the following convex sum properties [13, 14]:

\sum_{i=1}^{N_{ml}} \mu_{l,i}(\xi(k)) = 1, \quad 0 \le \mu_{l,i}(\xi(k)) \le 1, \quad i = 1, ..., N_{ml}

where c_{l,ie} and \sigma_{l,ie} are respectively the center and the dispersion of the i-th weighting function [7, 8, 11]. The choice of N_{ml}, c_{l,ie} and \sigma_{l,ie} has a great influence on the modeling precision. In our previous work, the choice of these parameters was made according to a priori information about the static characteristics of the considered system [18]. In this work, they are selected systematically using clustering techniques. In the next section, we present a systematic method to generate the weighting function centers based on Chiu's classification method [15].

3.2 Systematic generation of weighting function centers and dispersions

Given a set of numerical data, the classification procedure consists of selecting the cluster centers among these data using Chiu's method for numerical data classification. In this work, we aim to extend this method to a systematic determination of the weighting function parameters for the multimodel representation of MIMO nonlinear systems. Considering a set of persistent identification data sufficiently rich to cover the whole operating range of the considered MIMO nonlinear system, we define, for each MISO multimodel, a set of regression vectors \vartheta_{l,j} = [y_l(j), y_l(j-1), u_1(j-1), ..., u_{N_IN}(j-1)]^T, j = 1, ..., N_H. Using Chiu's algorithm to classify these vectors, we first associate a potential Pot_{l,j}(1) to each regression vector \vartheta_{l,j} as follows:

Pot_{l,j}(1) = \sum_{h=1}^{N_H} e^{-\frac{4 \|\vartheta_{l,j} - \vartheta_{l,h}\|^2}{r_{l,a}^2}}   (16)

where r_{l,a} is a positive parameter controlling the decreasing rate of the potential. Each potential is a function of the distances between the corresponding regression vector and all the other ones: a regression vector with many close neighbours therefore receives a high potential. After calculating all potentials, the first regression vector cluster center \vartheta_{l,1} is selected as the vector whose potential Pot_{l,1}, given by (16), is maximal. In the next steps, after the (i-1)-th cluster center \vartheta_{l,i-1} with the maximum potential Pot_{l,i-1} has been chosen, the potential of each regression vector is revised as follows:

Pot_{l,j}(i) = Pot_{l,j}(i-1) - Pot_{l,i-1} e^{-\frac{4 \|\vartheta_{l,j} - \vartheta_{l,i-1}\|^2}{r_{l,b}^2}}   (17)

where Pot_{l,j}(i-1) are the potentials calculated previously. Chiu [15] introduced two positive thresholds \varepsilon_1 and \varepsilon_2 (with the restriction \varepsilon_1 > \varepsilon_2) to control the cluster selection. He proposed the following algorithm as a stopping criterion for the selection of the cluster centers.

Algorithm 1.
if Pot_{l,j} > \varepsilon_1 Pot_{l,1}
    Selection sanctioned: accept \vartheta_{l,j} as a cluster center and continue
else if Pot_{l,j} < \varepsilon_2 Pot_{l,1}
    Selection stopped: reject \vartheta_{l,j} and end the clustering process
else
    Let d_min be the shortest distance between \vartheta_{l,j} and all previously found cluster centers
    if d_min / r_{l,a} + Pot_{l,j} / Pot_{l,1} >= 1
        Accept \vartheta_{l,j} as a cluster center and continue
    else
        Reject \vartheta_{l,j}, set the corresponding potential to 0, select the regression vector with the next highest potential as the new \vartheta_{l,j} and re-test
    end if
end if

In Algorithm 1, \vartheta_{l,j} is the current candidate regression vector and \vartheta_{l,1}, \vartheta_{l,2}, ..., \vartheta_{l,j-1} are the previously selected cluster centers. Once the regression vector cluster centers are selected, the weighting function centers are chosen with reference to the decision variables under consideration. Since in this work the input vector serves as the decision variables (\xi(k) = U(k)), the weighting function centers are the N_IN last elements of the selected regression vectors. Finally, it remains to determine the corresponding dispersions. To do this, one of the following two expressions can be used.

Expression 1. The dispersion \sigma_{l,ie} can be adjusted according to the mean distance to the n nearest neighbours [20]:

\sigma_{l,ie} = \alpha_l \frac{1}{n} \sum_{k=1}^{n} |c_{l,ie} - c_{l,ke}|   (18)

where c_{l,ie} is the current center and the c_{l,ke} are the n nearest neighbouring centers of c_{l,ie}. \alpha_l is a scaling factor defining the degree of overlap between the weighting functions.

Expression 2. The dispersion \sigma_{l,ie} is taken proportional to the distance separating the center c_{l,ie} from the n nearest centers [20]:

\sigma_{l,ie} = \frac{1}{\alpha_l} \min_{k=1,...,n} |c_{l,ie} - c_{l,ke}|   (19)

where \alpha_l is a positive factor defining the degree of overlap between the weighting functions.
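A compact sketch of this center and dispersion selection is given below. It follows (16)-(19) and the accept/reject logic of Algorithm 1; the threshold values, radii and data are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def chiu_centers(V, r_a=0.5, r_b=0.75, eps1=0.5, eps2=0.15):
    """Select cluster centers among the regression vectors V (one per row)
    with Chiu's method, following (16), (17) and Algorithm 1."""
    d2 = np.sum((V[:, None, :] - V[None, :, :]) ** 2, axis=-1)   # pairwise squared distances
    pot = np.exp(-4.0 * d2 / r_a ** 2).sum(axis=1)               # initial potentials, eq. (16)
    centers, first_pot = [], None
    while True:
        j = int(np.argmax(pot))
        if first_pot is None:
            first_pot = pot[j]
        if pot[j] > eps1 * first_pot:
            accept = True
        elif pot[j] < eps2 * first_pot:
            break                                                # stop the clustering process
        else:                                                    # gray-zone test of Algorithm 1
            d_min = min(np.linalg.norm(V[j] - c) for c in centers)
            accept = d_min / r_a + pot[j] / first_pot >= 1.0
        if accept:
            centers.append(V[j].copy())
            pot = pot - pot[j] * np.exp(-4.0 * d2[j] / r_b ** 2)  # potential revision, eq. (17)
        else:
            pot[j] = 0.0                                          # reject and re-test the next best
    return np.array(centers)

def dispersions(centers, alpha=2.0, n=1):
    """Per-center, per-coordinate dispersion with Expression 1 (mean distance to n nearest neighbours)."""
    sig = np.zeros_like(centers)
    for i, c in enumerate(centers):
        others = np.delete(centers, i, axis=0)
        if len(others) == 0:
            continue
        for e in range(centers.shape[1]):
            nearest = np.sort(np.abs(others[:, e] - c[e]))[:n]
            sig[i, e] = alpha * nearest.mean()
    return sig
```

The last columns of the returned centers (those corresponding to the input components of the regression vectors) are then kept as weighting function centers, as described above.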

3.3 Parametric estimation procedure

For each MISO multimodel, let us define the vector of unknown parameters \theta_l as follows:

\theta_l = [\theta_{l,1}^T \cdots \theta_{l,i}^T \cdots \theta_{l,N_{ml}}^T]^T.   (20)

This vector is partitioned into N_{ml} column blocks, where each block \theta_{l,i} is formed by the q_{l,i} unknown parameters (A_{l,i}, B_{l,i}, C_{l,i}) of the corresponding submodel. The identification of the whole set of parameters is then based on the minimization of a quadratic global criterion expressed as a function of the difference between the system outputs and the multimodel ones over a simulation horizon N_H:

J_{ml} = \frac{1}{2} \sum_{k=1}^{N_H} (y_{ml}(k) - y_l(k))^2 = \sum_{k=1}^{N_H} e_{ml}(k).   (21)

Different algorithms can be used for the minimization procedure. In this work, the Levenberg-Marquardt algorithm is used [21]:

\theta_l(it+1) = \theta_l(it) - \Delta_l(it) (H_l(\theta_l) + \lambda_l(it) I)^{-1} G_l(\theta_l)   (22)

where H_l(\theta_l) and G_l(\theta_l) are respectively the Hessian matrix and the gradient vector. They are computed from the sensitivity functions of the multimodel output with respect to the local model parameters, \partial y_{l,i}(k)/\partial \theta_l, as follows:

G_l(\theta_l) = \frac{\partial J_{ml}}{\partial \theta_l} = \sum_{k=1}^{N_H} (y_{ml}(k) - y_l(k)) \frac{\partial y_{ml}(k)}{\partial \theta_l}

H_l(\theta_l) = \frac{\partial^2 J_{ml}}{\partial \theta_l \partial \theta_l^T} = \sum_{k=1}^{N_H} \frac{\partial y_{ml}(k)}{\partial \theta_l} \frac{\partial y_{ml}(k)}{\partial \theta_l^T}

\frac{\partial y_{ml}(k)}{\partial \theta_l} = \sum_{i=1}^{N_{ml}} \mu_{l,i}(\xi(k)) \frac{\partial y_{l,i}(k)}{\partial \theta_l}   (23)

where I is the identity matrix of appropriate dimension. At each iteration it, \theta_l(it) is the vector of the l-th multimodel parameters, \Delta_l(it) is a relaxation coefficient introduced to minimize the criterion in the direction of the vector H_l^{-1} G_l, and \lambda_l(it) is the regularization parameter. The values of \lambda_l(it) and \Delta_l(it) are adjusted, usually by means of a heuristic based on the evolution of the criterion [21]. To improve understanding, an algorithm describing the parametric estimation procedure of the multimodel is provided in the Appendix.
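As an illustration of (20)-(23) and of Algorithm 2 in the Appendix, a generic Levenberg-Marquardt loop can be sketched as follows. The simulation routine simulate(theta), which returns the multimodel output and its sensitivities over the horizon, is assumed to be provided by the user, and the adjustment heuristic for lambda and Delta is a common choice rather than the paper's exact one.

```python
import numpy as np

def levenberg_marquardt(simulate, y, theta0, n_iters=50, lam=1.0, delta=1.0):
    """Minimize J = 0.5 * sum((y_m - y)^2) over theta with the update (22).

    simulate(theta) must return (y_m, S), where y_m is the multimodel output over
    the horizon (shape (N_H,)) and S[k, :] = d y_m(k) / d theta (shape (N_H, n_params))."""
    theta = np.asarray(theta0, dtype=float)
    y = np.asarray(y, dtype=float)
    y_m, S = simulate(theta)
    J = 0.5 * np.sum((y_m - y) ** 2)
    for _ in range(n_iters):
        G = S.T @ (y_m - y)                       # gradient vector, eq. (23)
        H = S.T @ S                               # Gauss-Newton Hessian, eq. (23)
        step = np.linalg.solve(H + lam * np.eye(len(theta)), G)
        theta_new = theta - delta * step          # update, eq. (22)
        y_m_new, S_new = simulate(theta_new)
        J_new = 0.5 * np.sum((y_m_new - y) ** 2)
        if J_new <= J:                            # criterion decreased: keep the step
            theta, y_m, S, J = theta_new, y_m_new, S_new, J_new
            lam, delta = lam * 0.5, min(delta * 1.5, 1.0)
        else:                                     # criterion increased: damp more
            lam, delta = lam * 2.0, delta * 0.5
    return theta
```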

3.4 Numerical examples

In order to illustrate the performance of the multimodel emulator, we consider the square and non square MIMO nonlinear systems described previously, the latter by (9) and (10).

3.4.1 ME for square MIMO nonlinear systems

The identification of the two MISO multimodels is realized with the global criterion (21). The inputs u_e(k), e = 1, 2, of the system, which are set as decision variables for the weighting functions (ξ(k) = U(k)), consist of a signal with variable amplitude (u_e(k) ∈ [-1, 1], e = 1, 2). The identification data set is used to generate the set of regression vectors \vartheta_{l,j} = [y_l(j), y_l(j-1), u_1(j-1), u_2(j-1)]^T for every MISO multimodel. Using the extended classification procedure, we treat these vectors to generate the regression vector cluster centers. For each MISO multimodel, the algorithm generates three cluster centers. By exploiting these selected regression vectors and the expressions (13) and (14), every MISO multimodel comprises N_m = 9 submodels. The corresponding dispersions are deduced using the relation (18). Fig. 7 gives the resulting weighting functions for each MISO multimodel. Tables 3 and 4 detail the centers and dispersions obtained with the systematic approach (Table 3) and with the static characteristics (Table 4). One can notice that the systematic method generates additional models in comparison with the other method.

Fig. 7 The weighting functions using the systematic generation of synthesis parameters (square MIMO nonlinear system)

Table 3 The weighting function parameters for both MISO multimodels using the systematic generation method (square MIMO nonlinear system)

Table 4 The weighting function parameters for both MISO multimodels using the static characteristics (square MIMO nonlinear system)

Multimodel synthesis parameters (classical choice [18])
c_{1,ie} and \sigma_{1,ie}: c_{1,11} = -0.6, \sigma_{1,11} = 0.4; c_{1,21} = 0.6, \sigma_{1,21} = 0.4; c_{1,12} = -0.7, \sigma_{1,12} = 0.2; c_{1,22} = 0.4, \sigma_{1,22} = 0.2; c_{1,32} = 0.7, \sigma_{1,32} = 0.2
c_{2,ie} and \sigma_{2,ie}: c_{2,11} = -0.6, \sigma_{2,11} = 0.4; c_{2,21} = 0, \sigma_{2,21} = 0.4; c_{2,31} = 0.6, \sigma_{2,31} = 0.4; c_{2,12} = -0.5, \sigma_{2,12} = 0.6; c_{2,22} = 0.6, \sigma_{2,22} = 0.6

Fig. 8 Variations of the square MIMO nonlinear system and multimodel emulator outputs using the systematic generation of synthesis parameters

Fig. 9 Variations of the multimodel emulation squared errors in logarithmic scale (square MIMO nonlinear system)

We note that, in general, the local models must have a structure as simple as possible; only the offline validation phase of the base of models may lead to an increase of the structural complexity. In this case, a base of first order models provides sufficient precision. Indeed, Fig. 8 (the emulated outputs are represented with solid lines and the system outputs with dashed lines) illustrates the validation of the multimodel emulation results obtained using the systematic generation of the weighting function parameters. This figure shows the variations of the real nonlinear system and multimodel outputs (y_l(k) and y_{ml}(k), l = 1, 2). Fig. 9 shows that the multimodel emulation squared errors obtained in this case and plotted in logarithmic scale are smaller than those obtained using the neural emulator (Fig. 3). The simulation results confirm that the uncoupled multimodel emulator offers a satisfactory modeling precision compared to the neural one.

Table 5 summarizes the mean square error and the variance-accounted-for calculated in the open-loop emulation case for the neural emulator and both multimodel emulators (using the usual and the proposed method to select the weighting function parameters). In this table, we note that the neural emulation depends on the choice of the starting parameter ε_e. The results obtained using a multimodel emulator remain better than the ones obtained with the neural emulator. The performances of both multimodel emulators are quite similar. However, it is important to notice that the systematic method proposed in the present paper avoids any tedious effort to find a good value of the synthesis parameters. In comparison, the method based on the static characteristics is efficient only if numerous candidates are tested and checked for the centers and dispersions. To conclude, the proposed determination is a convenient way to quickly obtain an accurate multimodel.

Table 5 MSE_l and VAF_l (l = 1, 2) for both outputs, in the open-loop emulation case, for the neural and multimodel emulators (square MIMO nonlinear system)

          Neural emulator                          Classical multimodel emulator [18]    Systematic multimodel emulator
MSE_1
MSE_2
VAF_1     72.93% (ε_e = 0.4), 95.49% (ε_e = 10)    99.26%                                99.15%
VAF_2     82.9% (ε_e = 0.4), 96.99% (ε_e = 10)     99.93%                                99.65%

3.4.2 ME for non square MIMO nonlinear systems

For the non square MIMO system, two SISO multimodels are identified with a global criterion. The input u(k) of the system, which serves as decision variable for the weighting functions (ξ(k) = u(k)), consists of a signal with variable amplitude (u(k) ∈ [-1, 1]). The identification data set is used to generate the set of regression vectors \vartheta_{l,j} = [y_l(j), y_l(j-1), u(j-1)]^T for every MISO multimodel. Using the extended classification procedure, we treat these vectors to generate the regression vector cluster centers. For each MISO multimodel, the algorithm generates three cluster centers. By exploiting these selected regression vectors and the expressions (13) and (14), every MISO multimodel comprises N_m = 3 submodels. The corresponding dispersions are deduced using the relation (18). Fig. 10 gives the evolution of the resulting weighting functions for each MISO multimodel. The centers and dispersions obtained with the systematic approach and with the static characteristics are presented in Tables 6 and 7, respectively.
In this case, we can also notice that the systematic method generates additional models in comparison with the classical method. Figs. 11 and 12 illustrate the multimodel validation results. Fig. 11 gives the variations of the non square MIMO nonlinear system and the multimodel emulator outputs: the multimodel describes the nonlinear system behaviour properly. Fig. 12 gives the variations of the multimodel emulation squared errors in logarithmic scale. Compared with Fig. 5, this figure confirms that the performance recorded using the multimodel emulator is far better than that of the neural one.

Table 6 The weighting function parameters for both MISO multimodels using the systematic generation method (non square MIMO nonlinear system)

Table 7 The weighting function parameters for both MISO multimodels using the static characteristics (non square MIMO nonlinear system)

Multimodel synthesis parameters (classical choice [18])
c_{1,i} and \sigma_{1,i}: c_{1,1} = -0.9, \sigma_{1,1} = 0.8; c_{1,2} = 0.9, \sigma_{1,2} = 0.8
c_{2,i} and \sigma_{2,i}: c_{2,1} = -0.7, \sigma_{2,1} = 0.8; c_{2,2} = 0.7, \sigma_{2,2} = 0.8

Fig. 10 The weighting functions using the systematic generation of synthesis parameters (non square MIMO nonlinear system)

Fig. 11 Variations of the non square MIMO nonlinear system and multimodel emulator outputs using the systematic generation of synthesis parameters

Fig. 12 Variations of the multimodel emulation squared errors in logarithmic scale (non square MIMO nonlinear system)

Table 8 summarizes the mean square error and the variance-accounted-for calculated in the open-loop emulation case for both the neural and the multimodel emulators. In this table, we note that the neural emulation depends on the choice of the starting parameter ε_e. The results obtained using a multimodel emulator with a systematic selection of the weighting function parameters are far better than those obtained with both the classical multimodel and the neural emulator.

Table 8 MSE_l and VAF_l (l = 1, 2) for both outputs, in the open-loop emulation case, for the neural and multimodel emulators (non square MIMO nonlinear system)

          Neural emulator           Classical multimodel emulator [18]    Systematic multimodel emulator
MSE_1
MSE_2
VAF_1     91.33% (ε_e = 20)         98.41%                                99.67%
VAF_2     93.51% (ε_e = 20)         97.10%                                99.62%

4 Conclusions

In this paper, neural and multimodel emulators are proposed for multivariable non square and nonlinear system emulation. In particular, two contributions have been presented. First, the NE and ME approaches have been extended to non square MIMO systems. Secondly, for the ME approach, each system output is emulated by using a library of models. These models result from an offline identification procedure and a systematic generation of the synthesis parameters that are used to combine the submodels. The proposed method is based on a classification procedure over an identification data set. This systematic approach presents an important advantage: it avoids any tedious effort to find an optimal value of the weighting function parameters. Another advantage of the ME in comparison with the NE is that the selection of the initialization parameter is no longer required. This scheme also reduces the computational complexity of the multivariable emulation [8, 18]. Several open-loop simulations clearly illustrate that the proposed multimodel emulator leads to good performance relative to the case where the neural emulator is applied.

Appendix

In this section, we give an algorithm describing the parametric estimation procedure.

Algorithm 2.
Input data: N_H, U(k), y_l(k) and n_iters_l
Input data from Algorithm 1: N_ml, c_{l,ie} and \sigma_{l,ie}
Initialization: \Delta_l(0) and \lambda_l(0)
it <- 1
updateJ_ml <- 1 (updateJ_ml is a Boolean variable)
while it <= n_iters_l
    if updateJ_ml = 1
        for each k in [1, N_H]
            Compute \mu_{l,i}(\xi(k)), X_{l,i} \in R^{n_{l,i}}, y_{l,i}(k), y_{ml}(k) and the sensitivity functions \partial y_{l,i}(k)/\partial \theta_l
        end for
        if it = 1
            Compute the quadratic global criterion for the first iteration, J
        end if
        Compute the gradient vector G_l and the Hessian matrix H_l
    end if
    Update the vector of parameters \theta_l
    Compute the quadratic global criterion for the updated parameters, J_ml
    if J_ml <= J
        Decrease \lambda_l(it) and increase \Delta_l(it)
        J <- J_ml
        updateJ_ml <- 1
        it <- it + 1
    else
        Increase \lambda_l(it) and decrease \Delta_l(it)
        updateJ_ml <- 0
    end if
end while

References

[1] K. S. Narendra, K. Parthasarathy. Identification and control of dynamical systems using neural networks. IEEE Transactions on Neural Networks, vol. 1, no. 1, pp. 4-27, 1990.

[2] O. Nelles. Nonlinear System Identification. Berlin, Heidelberg, Germany: Springer, 2001.

[3] R. J. Williams. Adaptive state representation and estimation using recurrent connectionist networks. Neural Networks for Control, Cambridge, USA: MIT Press, 1990.

[4] S. Zerkaoui, F. Druaux, E. Leclercq, D. Lefebvre. Stable adaptive control with recurrent neural networks for square MIMO non-linear systems.
Engineering Applications of Artificial Intelligence, vol. 22, no. 4-5, 2009.

[5] A. Atig, F. Druaux, D. Lefebvre, K. Abderrahim, R. Ben Abdennour. Neural emulation applied to chemical reactors. In Proceedings of the IEEE International Multi-Conference on Systems, Signals and Devices, IEEE, Amman, Jordan, pp. 1-6.

[6] A. Atig, F. Druaux, D. Lefebvre, K. Abderrahim, R. Ben Abdennour. Adaptive control design using stability analysis and tracking errors dynamics for nonlinear square MIMO systems. Engineering Applications of Artificial Intelligence, vol. 25, no. 7, 2012.

[7] N. Bahri, A. Messaoud, R. Ben Abdennour. A multimodel emulator for nonlinear system controls. International Journal of Sciences and Techniques of Automatic Control & Computer Engineering (IJ-STA), vol. 5, no. 1, 2011.

[8] N. Bahri, A. Atig, R. Ben Abdennour, F. Druaux, D. Lefebvre. Multimodel and neural emulators for non-linear systems: Application to an indirect adaptive neural control. International Journal of Modelling, Identification and Control, vol. 17, no. 4.

[9] M. Ltaief, K. Abderrahim, R. Ben Abdennour, M. Ksouri. A fuzzy fusion strategy for the multimodel approach application. WSEAS Transactions on Circuits and Systems, vol. 2, no. 4.

[10] A. Messaoud, M. Ltaief, R. Ben Abdennour. Supervision based on partial predictors for a multimodel generalised predictive control: Experimental validation on a semi-batch reactor. International Journal of Modelling, Identification and Control, vol. 6, no. 4.

[11] A. Messaoud, M. Ltaief, R. Ben Abdennour. Supervision based on a multi-predictor for an uncoupled state multimodel predictive control. In Proceedings of the 6th International Conference on Electrical Systems and Automatic Control, Hammamet, Tunisia.

[12] R. Orjuela, B. Marx, J. Ragot, D. Maquin. State estimation for non-linear systems using a decoupled multiple model. International Journal of Modelling, Identification and Control, vol. 4, no. 1.

[13] R. Orjuela. Contribution à l'estimation d'état et au diagnostic des systèmes représentés par des multimodèles, Ph.D. dissertation, National Polytechnic Institute of Lorraine, France. (in French)

[14] D. Ichalal. Estimation et diagnostic de systèmes non linéaires décrits par un modèle de Takagi-Sugeno, Ph.D. dissertation, National Polytechnic Institute of Lorraine, France. (in French)

[15] S. L. Chiu. Fuzzy model identification based on cluster estimation. Journal of Intelligent and Fuzzy Systems, vol. 2, no. 3, 1994.

[16] M. Ltaief, R. Ben Abdennour, K. Abderrahim, P. Borne. A multimodel numerical control based on a new approach for the determination of a model's library for uncertain systems. In Proceedings of the 2002 IEEE International Conference on Systems, Man and Cybernetics, IEEE, Yasmine Hammamet, Tunisia, 2002.

[17] R. Orjuela, D. Maquin, J. Ragot. Nonlinear system identification using uncoupled state multiple-model approach. In Proceedings of the Workshop on Advanced Control and Diagnosis, Nancy, France.

[18] N. Bahri, A. Atig, R. Ben Abdennour, F. Druaux, D. Lefebvre. Emulation of multivariable non square and nonlinear systems. In Proceedings of the 2013 14th International Conference on Sciences and Techniques of Automatic Control and Computer Engineering, IEEE, Sousse, Tunisia, 2013.

[19] M. Ltaief, K. Abderrahim, R. Ben Abdennour, M. Ksouri. Contributions to the multimodel approach: Systematic determination of a models base and validities estimation. International Journal of Automation and Systems Engineering, vol. 2, no. 3.

[20] M. Ltaief, A. Messaoud, R. Ben Abdennour. Optimal systematic determination of models base for multimodel representation: Real time application. International Journal of Automation and Computing, vol. 11, no. 6, pp. 644-652, 2014.

[21] D. W. Marquardt. An algorithm for least-squares estimation of nonlinear parameters. Journal of the Society for Industrial and Applied Mathematics, vol. 11, no. 2, pp. 431-441, 1963.

Nesrine Bahri received the engineering diploma in electric-automatic engineering in 2009 and the master degree in automatic control and intelligent techniques in 2010 from the National School of Engineers of Gabes, Tunisia.
She is currently a Ph.D. degree candidate at the Research Unit of Numerical Control of Industrial Processes at ENIG (CONPRI) and at the Electrical and Automatic Engineering Research Group at Le Havre University (GREAH). Her research interests include nonlinear process identification, multimodel and multicontrol approaches, neural and multimodel emulation, and adaptive control.
E-mail: bahri.nesrine@gmail.com (Corresponding author)

Asma Atig received the engineering diploma in electrical engineering in 2007 and the master degree in automatic control in 2008 from the National School of Engineering of Sfax, Tunisia (ENIS), and the Ph.D. degree in electrical engineering from the National School of Engineering of Gabes, Tunisia (ENIG) and from the University of Le Havre in automatic control, signal processing and computing. She is currently a teaching assistant in the Electrical Engineering Department at the High Institute of Industrial Systems of Gabes, Tunisia. She is a member of the Research Unit of Numerical Control of Industrial Processes at ENIG (CONPRI). Her research interests include nonlinear process identification, neural emulation and adaptive control.
E-mail: asma.atig@issig.rnu.tn

Ridha Ben Abdennour received the Speciality Ph.D. degree in automatic control from the Higher School of Technical Education in 1987, and the Ph.D. degree in electrical engineering from the National School of Engineering of Tunis, Tunisia. He is a professor in automatic control at the National School of Engineering of Gabes, Tunisia. He was chairman of the Electrical Engineering Department and director of the High Institute of Technological Studies of Gabes. He is the head of the Research Unit of Numerical Control of Industrial Processes and the founder and honorary President of the Tunisian Association of Automatic Control and Numerisation. He is the co-author of a book on identification and numerical control of industrial processes and the author of more than 300 publications. He participated in the organization of many conferences and was a member of the scientific committees of a number of congresses. His research interests include identification, multimodel and multicontrol approaches, numerical control and supervision of industrial processes.
E-mail: ridha.benabdennour@enig.rnu.tn

Fabrice Druaux received the B.Sc. degree in physics and mathematics in 1976, the M.Sc. degree in physics in 1981 and the Ph.D. degree in physics from the University of Rouen, France. Since 1988, he has been an assistant professor at the Faculty of Sciences and Technology of Le Havre, France. Since 1999, he has been with the Electric and Automatic Engineering Research Group. His current research interests include modelling, control and fault detection using dynamical neural networks. His principal applications under focus are electro-technical processes such as motors and wind generators.

Dimitri Lefebvre graduated from the Ecole Centrale of Lille, France. He received the Ph.D. degree in automatic control and computer science from the University of Sciences and Technologies, Lille, in 1994, and the HDR degree from the University of Franche-Comté, Belfort, France. Since 2001, he has been a professor at the Institute of Technology and Faculty of Sciences, University Le Havre, France. He is with the Electric and Automatic Engineering Research Group (GREAH). His research interests include Petri nets and discrete event systems (DESs), learning processes, adaptive control, fault detection and diagnosis, and applications to electrical engineering.
E-mail: dimitri.lefebvre@univ-lehavre.fr


More information

NEURAL NETWORKS APPLICATION FOR MECHANICAL PARAMETERS IDENTIFICATION OF ASYNCHRONOUS MOTOR

NEURAL NETWORKS APPLICATION FOR MECHANICAL PARAMETERS IDENTIFICATION OF ASYNCHRONOUS MOTOR NEURAL NETWORKS APPLICATION FOR MECHANICAL PARAMETERS IDENTIFICATION OF ASYNCHRONOUS MOTOR D. Balara, J. Timko, J. Žilková, M. Lešo Abstract: A method for identification of mechanical parameters of an

More information

Neural Networks Lecture 4: Radial Bases Function Networks

Neural Networks Lecture 4: Radial Bases Function Networks Neural Networks Lecture 4: Radial Bases Function Networks H.A Talebi Farzaneh Abdollahi Department of Electrical Engineering Amirkabir University of Technology Winter 2011. A. Talebi, Farzaneh Abdollahi

More information

Introduction to Support Vector Machines

Introduction to Support Vector Machines Introduction to Support Vector Machines Andreas Maletti Technische Universität Dresden Fakultät Informatik June 15, 2006 1 The Problem 2 The Basics 3 The Proposed Solution Learning by Machines Learning

More information

Selected method of artificial intelligence in modelling safe movement of ships

Selected method of artificial intelligence in modelling safe movement of ships Safety and Security Engineering II 391 Selected method of artificial intelligence in modelling safe movement of ships J. Malecki Faculty of Mechanical and Electrical Engineering, Naval University of Gdynia,

More information

Vasil Khalidov & Miles Hansard. C.M. Bishop s PRML: Chapter 5; Neural Networks

Vasil Khalidov & Miles Hansard. C.M. Bishop s PRML: Chapter 5; Neural Networks C.M. Bishop s PRML: Chapter 5; Neural Networks Introduction The aim is, as before, to find useful decompositions of the target variable; t(x) = y(x, w) + ɛ(x) (3.7) t(x n ) and x n are the observations,

More information

Zdzislaw Bubnicki Modern Control Theory

Zdzislaw Bubnicki Modern Control Theory Zdzislaw Bubnicki Modern Control Theory Zdzislaw Bubnicki Modern Control Theory With 04 figures Professor Zdzislaw Bubnicki, PhD Wroclaw University of Technology Institute of Information Science and Engineering

More information

Modelling Multivariate Data by Neuro-Fuzzy Systems

Modelling Multivariate Data by Neuro-Fuzzy Systems In Proceedings of IEEE/IAFE Concerence on Computational Inteligence for Financial Engineering, New York City, 999 Modelling Multivariate Data by Neuro-Fuzzy Systems Jianwei Zhang and Alois Knoll Faculty

More information

Introduction to Neural Networks

Introduction to Neural Networks Introduction to Neural Networks What are (Artificial) Neural Networks? Models of the brain and nervous system Highly parallel Process information much more like the brain than a serial computer Learning

More information

MODELING USING NEURAL NETWORKS: APPLICATION TO A LINEAR INCREMENTAL MACHINE

MODELING USING NEURAL NETWORKS: APPLICATION TO A LINEAR INCREMENTAL MACHINE MODELING USING NEURAL NETWORKS: APPLICATION TO A LINEAR INCREMENTAL MACHINE Rawia Rahali, Walid Amri, Abdessattar Ben Amor National Institute of Applied Sciences and Technology Computer Laboratory for

More information

A Hybrid Time-delay Prediction Method for Networked Control System

A Hybrid Time-delay Prediction Method for Networked Control System International Journal of Automation and Computing 11(1), February 2014, 19-24 DOI: 10.1007/s11633-014-0761-1 A Hybrid Time-delay Prediction Method for Networked Control System Zhong-Da Tian Xian-Wen Gao

More information

CONTROL SYSTEMS, ROBOTICS, AND AUTOMATION - Vol. V - Prediction Error Methods - Torsten Söderström

CONTROL SYSTEMS, ROBOTICS, AND AUTOMATION - Vol. V - Prediction Error Methods - Torsten Söderström PREDICTIO ERROR METHODS Torsten Söderström Department of Systems and Control, Information Technology, Uppsala University, Uppsala, Sweden Keywords: prediction error method, optimal prediction, identifiability,

More information

DIAGNOSIS OF DISCRETE-TIME SINGULARLY PERTURBED SYSTEMS BASED ON SLOW SUBSYSTEM

DIAGNOSIS OF DISCRETE-TIME SINGULARLY PERTURBED SYSTEMS BASED ON SLOW SUBSYSTEM acta mechanica et automatica, vol.8 no.4 (4), DOI.478/ama-4-3 DIAGNOSIS OF DISCRETE-TIME SINGULARLY PERTURBED SYSTEMS BASED ON SLOW SUBSYSTEM Adel TELLILI *,***, Nouceyba ABDELKRIM *,**, Bahaa JAOUADI

More information

A FUZZY NEURAL NETWORK MODEL FOR FORECASTING STOCK PRICE

A FUZZY NEURAL NETWORK MODEL FOR FORECASTING STOCK PRICE A FUZZY NEURAL NETWORK MODEL FOR FORECASTING STOCK PRICE Li Sheng Institute of intelligent information engineering Zheiang University Hangzhou, 3007, P. R. China ABSTRACT In this paper, a neural network-driven

More information

Need for Deep Networks Perceptron. Can only model linear functions. Kernel Machines. Non-linearity provided by kernels

Need for Deep Networks Perceptron. Can only model linear functions. Kernel Machines. Non-linearity provided by kernels Need for Deep Networks Perceptron Can only model linear functions Kernel Machines Non-linearity provided by kernels Need to design appropriate kernels (possibly selecting from a set, i.e. kernel learning)

More information

Design of Multivariable Neural Controllers Using a Classical Approach

Design of Multivariable Neural Controllers Using a Classical Approach Design of Multivariable Neural Controllers Using a Classical Approach Seshu K. Damarla & Madhusree Kundu Abstract In the present study, the neural network (NN) based multivariable controllers were designed

More information

Artificial Neural Networks Francesco DI MAIO, Ph.D., Politecnico di Milano Department of Energy - Nuclear Division IEEE - Italian Reliability Chapter

Artificial Neural Networks Francesco DI MAIO, Ph.D., Politecnico di Milano Department of Energy - Nuclear Division IEEE - Italian Reliability Chapter Artificial Neural Networks Francesco DI MAIO, Ph.D., Politecnico di Milano Department of Energy - Nuclear Division IEEE - Italian Reliability Chapter (Chair) STF - China Fellow francesco.dimaio@polimi.it

More information

Acceleration of Levenberg-Marquardt method training of chaotic systems fuzzy modeling

Acceleration of Levenberg-Marquardt method training of chaotic systems fuzzy modeling ISSN 746-7233, England, UK World Journal of Modelling and Simulation Vol. 3 (2007) No. 4, pp. 289-298 Acceleration of Levenberg-Marquardt method training of chaotic systems fuzzy modeling Yuhui Wang, Qingxian

More information

A Modified Incremental Principal Component Analysis for On-Line Learning of Feature Space and Classifier

A Modified Incremental Principal Component Analysis for On-Line Learning of Feature Space and Classifier A Modified Incremental Principal Component Analysis for On-Line Learning of Feature Space and Classifier Seiichi Ozawa 1, Shaoning Pang 2, and Nikola Kasabov 2 1 Graduate School of Science and Technology,

More information

On an internal multimodel control for nonlinear multivariable systems - A comparative study

On an internal multimodel control for nonlinear multivariable systems - A comparative study On an internal multimodel control for nonlinear multivariable systems A comparative study Nahla Touati Karmani Dhaou Soudani Mongi Naceur Mohamed Benrejeb Abstract An internal multimodel control designed

More information

Immediate Reward Reinforcement Learning for Projective Kernel Methods

Immediate Reward Reinforcement Learning for Projective Kernel Methods ESANN'27 proceedings - European Symposium on Artificial Neural Networks Bruges (Belgium), 25-27 April 27, d-side publi., ISBN 2-9337-7-2. Immediate Reward Reinforcement Learning for Projective Kernel Methods

More information

Several ways to solve the MSO problem

Several ways to solve the MSO problem Several ways to solve the MSO problem J. J. Steil - Bielefeld University - Neuroinformatics Group P.O.-Box 0 0 3, D-3350 Bielefeld - Germany Abstract. The so called MSO-problem, a simple superposition

More information

EEE 241: Linear Systems

EEE 241: Linear Systems EEE 4: Linear Systems Summary # 3: Introduction to artificial neural networks DISTRIBUTED REPRESENTATION An ANN consists of simple processing units communicating with each other. The basic elements of

More information

GAIN SCHEDULING CONTROL WITH MULTI-LOOP PID FOR 2- DOF ARM ROBOT TRAJECTORY CONTROL

GAIN SCHEDULING CONTROL WITH MULTI-LOOP PID FOR 2- DOF ARM ROBOT TRAJECTORY CONTROL GAIN SCHEDULING CONTROL WITH MULTI-LOOP PID FOR 2- DOF ARM ROBOT TRAJECTORY CONTROL 1 KHALED M. HELAL, 2 MOSTAFA R.A. ATIA, 3 MOHAMED I. ABU EL-SEBAH 1, 2 Mechanical Engineering Department ARAB ACADEMY

More information

Fault tolerant tracking control for continuous Takagi-Sugeno systems with time varying faults

Fault tolerant tracking control for continuous Takagi-Sugeno systems with time varying faults Fault tolerant tracking control for continuous Takagi-Sugeno systems with time varying faults Tahar Bouarar, Benoît Marx, Didier Maquin, José Ragot Centre de Recherche en Automatique de Nancy (CRAN) Nancy,

More information

18.6 Regression and Classification with Linear Models

18.6 Regression and Classification with Linear Models 18.6 Regression and Classification with Linear Models 352 The hypothesis space of linear functions of continuous-valued inputs has been used for hundreds of years A univariate linear function (a straight

More information

LECTURE NOTE #NEW 6 PROF. ALAN YUILLE

LECTURE NOTE #NEW 6 PROF. ALAN YUILLE LECTURE NOTE #NEW 6 PROF. ALAN YUILLE 1. Introduction to Regression Now consider learning the conditional distribution p(y x). This is often easier than learning the likelihood function p(x y) and the

More information

Lecture 10. Neural networks and optimization. Machine Learning and Data Mining November Nando de Freitas UBC. Nonlinear Supervised Learning

Lecture 10. Neural networks and optimization. Machine Learning and Data Mining November Nando de Freitas UBC. Nonlinear Supervised Learning Lecture 0 Neural networks and optimization Machine Learning and Data Mining November 2009 UBC Gradient Searching for a good solution can be interpreted as looking for a minimum of some error (loss) function

More information

Gain Scheduling Control with Multi-loop PID for 2-DOF Arm Robot Trajectory Control

Gain Scheduling Control with Multi-loop PID for 2-DOF Arm Robot Trajectory Control Gain Scheduling Control with Multi-loop PID for 2-DOF Arm Robot Trajectory Control Khaled M. Helal, 2 Mostafa R.A. Atia, 3 Mohamed I. Abu El-Sebah, 2 Mechanical Engineering Department ARAB ACADEMY FOR

More information

Application of Artificial Neural Networks in Evaluation and Identification of Electrical Loss in Transformers According to the Energy Consumption

Application of Artificial Neural Networks in Evaluation and Identification of Electrical Loss in Transformers According to the Energy Consumption Application of Artificial Neural Networks in Evaluation and Identification of Electrical Loss in Transformers According to the Energy Consumption ANDRÉ NUNES DE SOUZA, JOSÉ ALFREDO C. ULSON, IVAN NUNES

More information

Design of Decentralized Fuzzy Controllers for Quadruple tank Process

Design of Decentralized Fuzzy Controllers for Quadruple tank Process IJCSNS International Journal of Computer Science and Network Security, VOL.8 No.11, November 2008 163 Design of Fuzzy Controllers for Quadruple tank Process R.Suja Mani Malar1 and T.Thyagarajan2, 1 Assistant

More information

New Concepts for the Identification of Dynamic Takagi-Sugeno Fuzzy Models

New Concepts for the Identification of Dynamic Takagi-Sugeno Fuzzy Models New Concepts for the Identification of Dynamic Takagi-Sugeno Fuzzy Models Christoph Hametner Institute for Mechanics and Mechatronics, Vienna University of Technology, Vienna, Austria hametner@impa.tuwien.ac.at

More information

Observer Design for a Class of Takagi-Sugeno Descriptor Systems with Lipschitz Constraints

Observer Design for a Class of Takagi-Sugeno Descriptor Systems with Lipschitz Constraints Applied Mathematical Sciences, Vol. 10, 2016, no. 43, 2105-2120 HIKARI Ltd, www.m-hikari.com http://dx.doi.org/10.12988/ams.2016.64142 Observer Design for a Class of Takagi-Sugeno Descriptor Systems with

More information

Lecture 6. Regression

Lecture 6. Regression Lecture 6. Regression Prof. Alan Yuille Summer 2014 Outline 1. Introduction to Regression 2. Binary Regression 3. Linear Regression; Polynomial Regression 4. Non-linear Regression; Multilayer Perceptron

More information

Learning Gaussian Process Models from Uncertain Data

Learning Gaussian Process Models from Uncertain Data Learning Gaussian Process Models from Uncertain Data Patrick Dallaire, Camille Besse, and Brahim Chaib-draa DAMAS Laboratory, Computer Science & Software Engineering Department, Laval University, Canada

More information

Statistical Machine Learning from Data

Statistical Machine Learning from Data January 17, 2006 Samy Bengio Statistical Machine Learning from Data 1 Statistical Machine Learning from Data Other Artificial Neural Networks Samy Bengio IDIAP Research Institute, Martigny, Switzerland,

More information

Machine Learning

Machine Learning Machine Learning 10-601 Maria Florina Balcan Machine Learning Department Carnegie Mellon University 02/10/2016 Today: Artificial neural networks Backpropagation Reading: Mitchell: Chapter 4 Bishop: Chapter

More information

Introduction to Artificial Neural Networks

Introduction to Artificial Neural Networks Facultés Universitaires Notre-Dame de la Paix 27 March 2007 Outline 1 Introduction 2 Fundamentals Biological neuron Artificial neuron Artificial Neural Network Outline 3 Single-layer ANN Perceptron Adaline

More information

Support Vector Machines

Support Vector Machines Support Vector Machines Le Song Machine Learning I CSE 6740, Fall 2013 Naïve Bayes classifier Still use Bayes decision rule for classification P y x = P x y P y P x But assume p x y = 1 is fully factorized

More information

REGRESSION TREE CREDIBILITY MODEL

REGRESSION TREE CREDIBILITY MODEL LIQUN DIAO AND CHENGGUO WENG Department of Statistics and Actuarial Science, University of Waterloo Advances in Predictive Analytics Conference, Waterloo, Ontario Dec 1, 2017 Overview Statistical }{{ Method

More information

Degenerate Expectation-Maximization Algorithm for Local Dimension Reduction

Degenerate Expectation-Maximization Algorithm for Local Dimension Reduction Degenerate Expectation-Maximization Algorithm for Local Dimension Reduction Xiaodong Lin 1 and Yu Zhu 2 1 Statistical and Applied Mathematical Science Institute, RTP, NC, 27709 USA University of Cincinnati,

More information

1348 IEEE TRANSACTIONS ON SYSTEMS, MAN, AND CYBERNETICS PART B: CYBERNETICS, VOL. 34, NO. 3, JUNE 2004

1348 IEEE TRANSACTIONS ON SYSTEMS, MAN, AND CYBERNETICS PART B: CYBERNETICS, VOL. 34, NO. 3, JUNE 2004 1348 IEEE TRANSACTIONS ON SYSTEMS, MAN, AND CYBERNETICS PART B: CYBERNETICS, VOL 34, NO 3, JUNE 2004 Direct Adaptive Iterative Learning Control of Nonlinear Systems Using an Output-Recurrent Fuzzy Neural

More information

ADAPTIVE INVERSE CONTROL BASED ON NONLINEAR ADAPTIVE FILTERING. Information Systems Lab., EE Dep., Stanford University

ADAPTIVE INVERSE CONTROL BASED ON NONLINEAR ADAPTIVE FILTERING. Information Systems Lab., EE Dep., Stanford University ADAPTIVE INVERSE CONTROL BASED ON NONLINEAR ADAPTIVE FILTERING Bernard Widrow 1, Gregory Plett, Edson Ferreira 3 and Marcelo Lamego 4 Information Systems Lab., EE Dep., Stanford University Abstract: Many

More information

Worst-Case Analysis of the Perceptron and Exponentiated Update Algorithms

Worst-Case Analysis of the Perceptron and Exponentiated Update Algorithms Worst-Case Analysis of the Perceptron and Exponentiated Update Algorithms Tom Bylander Division of Computer Science The University of Texas at San Antonio San Antonio, Texas 7849 bylander@cs.utsa.edu April

More information

Cheng Soon Ong & Christian Walder. Canberra February June 2018

Cheng Soon Ong & Christian Walder. Canberra February June 2018 Cheng Soon Ong & Christian Walder Research Group and College of Engineering and Computer Science Canberra February June 2018 (Many figures from C. M. Bishop, "Pattern Recognition and ") 1of 254 Part V

More information

Design of Observers for Takagi-Sugeno Systems with Immeasurable Premise Variables : an L 2 Approach

Design of Observers for Takagi-Sugeno Systems with Immeasurable Premise Variables : an L 2 Approach Design of Observers for Takagi-Sugeno Systems with Immeasurable Premise Variables : an L Approach Dalil Ichalal, Benoît Marx, José Ragot, Didier Maquin Centre de Recherche en Automatique de ancy, UMR 739,

More information

Curriculum Vitae Wenxiao Zhao

Curriculum Vitae Wenxiao Zhao 1 Personal Information Curriculum Vitae Wenxiao Zhao Wenxiao Zhao, Male PhD, Associate Professor with Key Laboratory of Systems and Control, Institute of Systems Science, Academy of Mathematics and Systems

More information

Introduction to Neural Networks: Structure and Training

Introduction to Neural Networks: Structure and Training Introduction to Neural Networks: Structure and Training Professor Q.J. Zhang Department of Electronics Carleton University, Ottawa, Canada www.doe.carleton.ca/~qjz, qjz@doe.carleton.ca A Quick Illustration

More information

Neural Network Training

Neural Network Training Neural Network Training Sargur Srihari Topics in Network Training 0. Neural network parameters Probabilistic problem formulation Specifying the activation and error functions for Regression Binary classification

More information

Nonlinear System Identification Based on a Novel Adaptive Fuzzy Wavelet Neural Network

Nonlinear System Identification Based on a Novel Adaptive Fuzzy Wavelet Neural Network Nonlinear System Identification Based on a Novel Adaptive Fuzzy Wavelet Neural Network Maryam Salimifard*, Ali Akbar Safavi** *School of Electrical Engineering, Amirkabir University of Technology, Tehran,

More information

Nonlinear System Identification Using MLP Dr.-Ing. Sudchai Boonto

Nonlinear System Identification Using MLP Dr.-Ing. Sudchai Boonto Dr-Ing Sudchai Boonto Department of Control System and Instrumentation Engineering King Mongkut s Unniversity of Technology Thonburi Thailand Nonlinear System Identification Given a data set Z N = {y(k),

More information

Introduction to Natural Computation. Lecture 9. Multilayer Perceptrons and Backpropagation. Peter Lewis

Introduction to Natural Computation. Lecture 9. Multilayer Perceptrons and Backpropagation. Peter Lewis Introduction to Natural Computation Lecture 9 Multilayer Perceptrons and Backpropagation Peter Lewis 1 / 25 Overview of the Lecture Why multilayer perceptrons? Some applications of multilayer perceptrons.

More information

Neural Networks, Computation Graphs. CMSC 470 Marine Carpuat

Neural Networks, Computation Graphs. CMSC 470 Marine Carpuat Neural Networks, Computation Graphs CMSC 470 Marine Carpuat Binary Classification with a Multi-layer Perceptron φ A = 1 φ site = 1 φ located = 1 φ Maizuru = 1 φ, = 2 φ in = 1 φ Kyoto = 1 φ priest = 0 φ

More information

10-701/ Machine Learning, Fall

10-701/ Machine Learning, Fall 0-70/5-78 Machine Learning, Fall 2003 Homework 2 Solution If you have questions, please contact Jiayong Zhang .. (Error Function) The sum-of-squares error is the most common training

More information

A PSO Approach for Optimum Design of Multivariable PID Controller for nonlinear systems

A PSO Approach for Optimum Design of Multivariable PID Controller for nonlinear systems A PSO Approach for Optimum Design of Multivariable PID Controller for nonlinear systems Taeib Adel Email: taeibadel@live.fr Ltaeif Ali Email: ltaief24@yahoo.fr Chaari Abdelkader Email: nabile.chaari@yahoo.fr

More information