Automatic Structure and Parameter Training Methods for Modeling of Mechanical System by Recurrent Neural Networks


C. James Li and Tung-Yung Huang
Department of Mechanical Engineering, Aeronautical Engineering and Mechanics
Rensselaer Polytechnic Institute, Troy, NY

ABSTRACT

Automatic nonlinear-system identification is useful in many disciplines, including automatic control, mechanical diagnostics, and financial market prediction. This paper describes a fully automatic structural and weight learning method for recurrent neural networks (RNNs). The basic idea is training with residuals: a single-hidden-neuron RNN is trained to track the residuals of an existing network before it is augmented to the existing network to form a larger and, hopefully, better network. The network continues to grow until either a desired level of accuracy or a preset maximum number of neurons is reached. The method requires the user to guess neither initial weight values nor the number of neurons in the hidden layer. The new structural and weight learning algorithm is used to find RNN models for a two-degree-of-freedom planar robot, a Van der Pol oscillator, and the Mackey-Glass equation from their simulated responses to excitations. The algorithm finds good RNN models in all three cases.

1. INTRODUCTION

Neural networks are well suited for nonlinear modeling in applications such as control and model-based diagnostics [1,2]. It is feasible to construct fast, parallel devices that implement these models for real-time applications. The class of models is universal in the sense that essentially any function can be approximated to any desired degree of accuracy by a sufficiently large network [3,4]. Another salient characteristic of neural networks is their use of novel classes of nonlinear models. It is essential that the basic model be nonlinear, because linear systems then become a special case; there is no satisfactory way to go the other direction and generalize linear systems to any broad range of nonlinear cases. Because of these characteristics, neural networks have been taken up by many researchers with great enthusiasm. Unfortunately, many researchers experiment with the currently available training techniques without the aid of automatic schemes that guide their application and provide guarantees about the results. The utility of neural networks is explored in a highly experimental fashion: several different network architectures are examined to see which produces the best performance, the networks are tinkered with, different initial values are tried, and so on.

Despite the apparent need for more automatic schemes, the application of neural networks continues to be taken up with great enthusiasm. Results reported by most researchers are difficult to reproduce consistently because the researcher has to be part of the loop to make his or her scheme work for a given case. More often than not, a system based on a neural net model, frequently obtained at great cost after numerous trials and errors by the researcher, fails to respond to changes in the environment because the part that is supposed to update the neural net model breaks down, even while the system is claimed to be "intelligent" because it includes a neural net. Ironically, when a neural-network-based system is reported to work well for a problem involving an "unknown" environment, such as an unknown plant in servo control, the researcher has usually learned so much about the environment that he or she knows exactly the type of neuron, the number of hidden units, and the initial values that should be used.

The essential issues of system identification using neural network models are as follows.

1. Nonlinear models such as neural networks can generate error surfaces with many local minima, so the final parameter estimates depend strongly on the initial estimates and the vagaries of training. There is no guarantee that the parameter estimates converge to globally optimal parameters, and convergence of any kind can take a considerable amount of training, since essentially only steepest descent and its variants have been used. More efficient and effective learning algorithms for training neural network models need to be investigated.

2. A perhaps more important, and little studied, aspect of system identification is structural learning. At a minimum, structural learning involves determining the number of layers and hidden units. In practice, the researcher tends to be an essential part of the structure learning loop, experimentally searching for a network having enough, but not too many, hidden units. Systematic methods for structural learning that lend themselves to straightforward machine implementation need to be developed.

This study investigates automatic modeling with recurrent neural networks (RNNs), as opposed to feedforward, back-propagation, or static neural networks. Although more difficult to train, RNNs have several attractive properties: they attenuate noise by interacting with signals through their own dynamics, they can deal with time-varying input-output relationships through their special temporal operation [1,2], and they can model a wide class of nonlinear dynamic systems with a concise size [1,5,6]. Our goal is to establish practical and proven procedures that, given measurements of the inputs and outputs of a system, can identify a recurrent neural network model that behaves like the system. This implies identifying an appropriate structure, including the number of layers and neurons, together with the associated weight values.

Identifying a near-optimal structure for a recurrent neural network model is usually the most difficult part of acquiring such a model. Although many training methods have been proposed for weight learning in neural networks, little attention has been paid to structure learning. In general, structural learning of neural networks is tackled in two ways: the constructive addition of neurons [7-10] or the pruning of unnecessary neurons [11-13]. However, these works were limited to feedforward neural networks. Although Chen et al. [14] propose a structural learning method for RNNs, it applies only to binary sequences in finite-state automata; a structural learning method for general RNNs is still not available. Recently, Tsoi and Tan [15] proposed a constructive algorithm for output-feedback recurrent neural networks based on radial basis functions. Instead of using predetermined clusters, as in the conventional radial basis function neural network approach, they construct new neurons in the regions where the desired degree of accuracy has not yet been obtained; consequently, a point in the input space may produce a response from more than one radial basis function.

This paper is organized as follows. The structure of a recurrent neural network is described in section 2. Section 3 briefly presents the weight training algorithm, which is based on the quasi-Newton method, the objective function and its gradient, and the initial guessing of weight values.

Section 4 then discusses the structural learning algorithm for the recurrent neural network. Section 5 presents the modeling experiments. Section 6 is the conclusion.

2. RECURRENT NEURAL NETWORK MODELS

A typical recurrent neural network is shown in figure 1. The notation is defined below:

o(k)  (n_o x 1)  output vector of the output layer
y(k)  (n_h x 1)  output vector of the hidden layer
w^{(i)}  (n_h x n_i)  weight matrix of the input layer
w^{(h)}  (n_h x n_h)  weight matrix of the hidden layer
w^{(o)}  (n_o x n_h)  weight matrix of the output layer
x(k)  (n_h x 1)  state vector of the hidden layer
\tilde{u}(k)  ((n_i - 1) x 1)  actual external input vector
u(k)  (n_i x 1)  extended input vector, u(k) = [\tilde{u}(k)^T, 1]^T

where k is the time step. The RNN consists of one hidden layer of n_h nonlinear elements interconnected by means of a weight matrix w^{(h)}. The n_i inputs are mapped onto the nonlinear elements via a weight matrix w^{(i)}. (Note that the bias of a neuron is implemented as the weight of an additional unity input; therefore n_i equals the number of actual external inputs to the network plus one.) Similarly, the output layer collects the outputs of the hidden layer and maps them onto n_o outputs via a weight matrix w^{(o)}. The input and output layers are static and perform linear branching and summing respectively, while the hidden layer provides the network with its dynamic behavior. The i-th hidden neuron can be described by the difference equation

x_i(k+1) = \gamma_i x_i(k) + \sum_{j=1}^{n_i+n_h} \tilde{w}_{ij} z_j(k)    (1)

where \gamma_i is a time constant and

\tilde{w} = [ w^{(i)}, w^{(h)} ]    (2)

z = [ u_1, u_2, \ldots, u_{n_i}, y_1, y_2, \ldots, y_{n_h} ]^T    (3)

The i-th output of the network is computed as

o_i(k) = \sum_{j=1}^{n_h} w^{(o)}_{ij} y_j(k)    (4)

where

y_i(k) = f( x_i(k) ) = \frac{1 - e^{-x_i(k)}}{1 + e^{-x_i(k)}}    (5)
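To make Eqs. (1)-(5) concrete, the following sketch simulates the hidden-layer state recursion and the static output layer one step at a time. It is only an illustration of the equations; the paper provides no code, and all variable names are assumptions.

import numpy as np

def f(x):
    # Hidden-neuron activation, Eq. (5)
    return (1.0 - np.exp(-x)) / (1.0 + np.exp(-x))

def simulate_rnn(gamma, w_i, w_h, w_o, u_seq):
    # gamma: (n_h,) time constants; w_i: (n_h, n_i); w_h: (n_h, n_h); w_o: (n_o, n_h);
    # u_seq: (N, n_i) extended inputs whose last column is the unity bias input.
    w_tilde = np.concatenate([w_i, w_h], axis=1)   # Eq. (2)
    x = np.zeros(w_h.shape[0])                     # state x(0) = 0
    outputs = []
    for u in u_seq:
        z = np.concatenate([u, f(x)])              # z = [u, y], Eq. (3)
        x = gamma * x + w_tilde @ z                # Eq. (1)
        outputs.append(w_o @ f(x))                 # o(k) = w^(o) y(k), Eq. (4)
    return np.array(outputs)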

3. WEIGHT LEARNING ALGORITHM

Li and Yan [16] previously described a recurrent neural network learning algorithm based on quasi-Newton methods; the same weight training algorithm is employed here. It is summarized briefly below to make this paper self-contained.

The Objective Function and Its Gradient

One possible objective in modeling is to obtain a model that behaves similarly to the actual system. Hence we choose to minimize a sum-of-squared-errors objective function [17-21], defined as

J( \gamma, w^{(i)}, w^{(h)}, w^{(o)} ) = \frac{1}{2} \sum_{k=1}^{N} \sum_{m=1}^{n_o} ( o_m(k) - d_m(k) )^2    (6)

where d_m denotes the desired output of the system. The difference between the desired output, d_m, and the neural network output, o_m, is the residual. Note that the output of a neural network is a function of its thresholds and weights, and so is the objective function. Define \hat{w} to be a vector containing all the elements of the weight matrices w^{(i)} and w^{(h)}, and w to be a vector containing the neuron time constants and all the elements of the weight matrices w^{(i)}, w^{(h)}, and w^{(o)}. The gradient of the objective function consists of its partial derivatives with respect to the individual components of w. These partial derivatives can be calculated as follows [22-24]:

\frac{\partial J}{\partial w^{(o)}_{ij}} = \sum_{k=1}^{N} ( o_i(k) - d_i(k) ) y_j(k)    (7)

\frac{\partial J}{\partial \gamma_i} = \sum_{k=1}^{N} \sum_{m=1}^{n_o} ( o_m(k) - d_m(k) ) \sum_{l=1}^{n_h} w^{(o)}_{ml} \frac{\partial y_l(k)}{\partial \gamma_i}    (8)

and

\frac{\partial J}{\partial \hat{w}_{ij}} = \sum_{k=1}^{N} \sum_{m=1}^{n_o} ( o_m(k) - d_m(k) ) \sum_{l=1}^{n_h} w^{(o)}_{ml} \frac{\partial y_l(k)}{\partial \hat{w}_{ij}}    (9)

where \partial y_l(k)/\partial \gamma_i and \partial y_l(k)/\partial \hat{w}_{ij} can be computed recursively as

\frac{\partial y_l(k)}{\partial \gamma_i} = f'( x_l(k) ) \left[ \delta_{li} x_l(k-1) + \gamma_l \frac{\partial x_l(k-1)}{\partial \gamma_i} + \sum_{p=1}^{n_h} w^{(h)}_{lp} \frac{\partial y_p(k-1)}{\partial \gamma_i} \right]    (10)

\frac{\partial y_l(k)}{\partial \hat{w}_{ij}} = f'( x_l(k) ) \left[ \gamma_l \frac{\partial x_l(k-1)}{\partial \hat{w}_{ij}} + \sum_{p=1}^{n_h} w^{(h)}_{lp} \frac{\partial y_p(k-1)}{\partial \hat{w}_{ij}} + \delta_{li} z_j(k-1) \right]    (11)

with initial conditions

\frac{\partial x_l(0)}{\partial \gamma_i} = 0, \quad \frac{\partial y_l(0)}{\partial \gamma_i} = 0, \quad \frac{\partial x_l(0)}{\partial \hat{w}_{ij}} = 0, \quad \frac{\partial y_l(0)}{\partial \hat{w}_{ij}} = 0    (12)

where f'(x) denotes the derivative of f(x) with respect to x and \delta_{li} is the Kronecker delta.
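As a small illustration of how the objective and the simplest part of its gradient are evaluated, the sketch below computes J of Eq. (6) and the output-weight derivatives of Eq. (7) from quantities recorded during a forward run; the remaining derivatives (8)-(12) require the sensitivity recursions above. Variable names are assumptions, not the paper's code.

import numpy as np

def objective_and_output_grad(o, d, y):
    # o: (N, n_o) network outputs; d: (N, n_o) desired outputs;
    # y: (N, n_h) hidden-layer outputs from the same forward run.
    e = o - d                      # residuals o_m(k) - d_m(k)
    J = 0.5 * np.sum(e ** 2)       # Eq. (6)
    dJ_dwo = e.T @ y               # Eq. (7): dJ/dw^(o)_ij = sum_k e_i(k) y_j(k)
    return J, dJ_dwo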

Quasi-Newton Method for Training Neural Network Weights

The quasi-Newton method has been shown to be one of the most efficient gradient-based methods for tuning the weights of a recurrent neural network [16]. Starting from an initial guess of the weights, w_0, the weights are updated iteratively:

w_{N+1} = w_N + \alpha s_N    (13)

where \alpha is the step size along the line-search direction s_N in weight space. The golden-section method is used as the line search to obtain \alpha^*, the optimal value of \alpha [22]. The search direction s_N is provided by the BFGS quasi-Newton method [25-28]:

s_N = -H_N g_N    (14)

H_{N+1} = H_N + \left( 1 + \frac{\Delta g_N^T H_N \Delta g_N}{\Delta x_N^T \Delta g_N} \right) \frac{\Delta x_N \Delta x_N^T}{\Delta x_N^T \Delta g_N} - \frac{\Delta x_N \Delta g_N^T H_N + H_N \Delta g_N \Delta x_N^T}{\Delta x_N^T \Delta g_N}    (15)

where H_N is the approximated inverse Hessian matrix at the N-th iteration, g_N the gradient at the N-th iteration, \Delta x_N the difference between x_N and x_{N-1}, and \Delta g_N the difference between g_N and g_{N-1}. Usually H_0 = I, the identity matrix, which makes the method equivalent to steepest descent initially.

Selection of the Initial Weight Values

According to equation (4), for a network with a single hidden neuron the output is

o(k) = w^{(o)} f( x(k) )    (16)

where x(k) is given in equation (1) and w^{(o)} is a scalar output weight. In this case f, as defined in (5), is a continuous function whose inverse exists, so the foregoing equation can be written as

f^{-1}\left( \frac{o(k)}{w^{(o)}} \right) = x(k) = \gamma x(k-1) + \sum_{j=1}^{n_i+n_h} \hat{w}_j z_j(k-1)    (17)

Specifically, the inverse function is

x = -\ln \frac{1 - y}{1 + y}    (18)

With a set of training data consisting of o(k) and u(k), and a chosen w^{(o)}, Eq. (17) yields a system of linear equations in which the \hat{w}_j and \gamma are the unknowns. If the number of equations is larger than the number of unknowns, the unknowns can be determined by the least squares technique and used as the initial weight values. If no a priori knowledge exists for the selection of w^{(o)}, it is normally chosen so that the desired hidden-neuron outputs, o(k)/w^{(o)}, lie between -0.5 and +0.5 to avoid saturation.
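The least-squares setup just described can be sketched as follows. This is a minimal illustration of Eqs. (16)-(18) for a single hidden neuron, assuming the training inputs and outputs are available as arrays; it is not code from the paper.

import numpy as np

def initial_weights(d, u, w_o=1.0):
    # d: (N,) desired outputs; u: (N, n_i) extended inputs (bias column included);
    # w_o: chosen scalar output weight, picked so that d/w_o stays within (-0.5, 0.5).
    # Returns (gamma, w_hat) for the single-neuron recursion
    #   x(k) = gamma x(k-1) + w_hat . [u(k-1), y(k-1)].
    y = d / w_o                                    # desired hidden-neuron outputs
    x = -np.log((1.0 - y) / (1.0 + y))             # inverse activation, Eq. (18)
    A = np.column_stack([x[:-1], u[:-1], y[:-1]])  # one equation per step, Eq. (17)
    b = x[1:]
    theta, *_ = np.linalg.lstsq(A, b, rcond=None)  # least-squares solution
    return theta[0], theta[1:]                     # gamma, [w_u ..., w_y]

For completeness, the BFGS update of Eqs. (14)-(15) used in the weight training above can also be written compactly; the step size along the returned direction would come from a golden-section line search, as stated in the text. Again, this is a generic hedged sketch with assumed variable names.

def bfgs_direction(H, g, g_prev, w, w_prev):
    # One update of the approximate inverse Hessian, Eq. (15), followed by the
    # search direction of Eq. (14).  H: (n, n); g, g_prev, w, w_prev: (n,).
    dx = w - w_prev                 # Delta x_N
    dg = g - g_prev                 # Delta g_N
    denom = dx @ dg
    H_new = (H
             + (1.0 + (dg @ H @ dg) / denom) * np.outer(dx, dx) / denom
             - (np.outer(dx, dg @ H) + np.outer(H @ dg, dx)) / denom)
    return H_new, -H_new @ g        # s_N = -H_N g_N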

4. THE STRUCTURAL LEARNING

The basic idea behind the structure learning algorithm is training with residuals. The algorithm takes an incremental approach in which a separate neural network is trained to match the residuals of an existing network and is subsequently augmented to the existing network to form a new and larger network. The parameters of the new network are then further tuned by the aforementioned quasi-Newton method. This process continues until a desired accuracy is reached, the number of hidden neurons exceeds a preset limit, or no significant improvement is seen. The procedure is detailed below.

First, weight training is carried out on an existing network so that its output approaches the desired output of the training data. If the stopping criterion is not met after a preset number of iterations, the weight training stops. Assume that, at this point, the network has n_h hidden neurons and its input, hidden, and output weights are w^{(i)}, w^{(h)}, and w^{(o)}, respectively. Incremental structural learning then begins by training the weights of another network to track the residual of the existing network. (Here, we assume the new network has only a single hidden neuron. While more hidden neurons could certainly be used, a single neuron simplifies the coding and the discussion.) Say that, after weight training, this new network's weights are w^{(i)}_{res}, w^{(h)}_{res}, and w^{(o)}_{res}. The new network is then augmented to the previous one to form a larger network with the augmented weights

w^{(i)}_{aug} = \begin{bmatrix} w^{(i)} \\ w^{(i)}_{res} \end{bmatrix}    (19)

w^{(h)}_{aug} = \begin{bmatrix} w^{(h)} & 0 \\ 0 & w^{(h)}_{res} \end{bmatrix}    (20)

w^{(o)}_{aug} = [ w^{(o)}, w^{(o)}_{res} ]    (21)

Subsequently, weight training is carried out on the augmented network until the stopping criterion is met. If that cannot be accomplished within a preset number of iterations, another run of structural learning is carried out. Since the weight training algorithm adjusts the new neural network to track the residual, it is likely to produce a new neural network that focuses on the part of the behavior that has not been picked up by the existing network. Another benefit comes from the lower complexity of the residuals, which translates into simpler learning and a higher success rate in learning.

The foregoing method trains one hidden neuron at a time before it is augmented into an existing recurrent neural network. The new neuron is trained without the benefit of being connected to the existing hidden neurons, which is different from how it will be used after the augmentation, when there are interconnections among the hidden neurons. As illustrated in figure 2, this situation is corrected by connecting the existing neural network's output, which is the weighted sum of all the outputs of its hidden neurons, to the new neuron. With the output o of the existing neural network as an additional input, the new neuron's input is u_{res} = [o, u]. Therefore the input weight w^{(i)}_{res} consists of two parts: w^{(i)}_o, which maps the existing net's output o to the new hidden neuron, and w^{(i)}_U, which maps the external inputs to the new hidden neuron, i.e., w^{(i)}_{res} = [ w^{(i)}_o, w^{(i)}_U ]. Once the new neuron is trained, the product of the input weight connecting the existing net's output to the new neuron and the output weights of the existing net becomes the initial weights connecting the existing hidden neurons to the new neuron. Say that, after training, the new network has weights w^{(i)}_{res}, w^{(h)}_{res}, and w^{(o)}_{res}. Then the augmented network has its initial weights as follows:

w^{(i)}_{aug} = \begin{bmatrix} w^{(i)} \\ w^{(i)}_U \end{bmatrix}    (22)

w^{(h)}_{aug} = \begin{bmatrix} w^{(h)} & 0 \\ w^{(i)}_o w^{(o)} & w^{(h)}_{res} \end{bmatrix}    (23)

w^{(o)}_{aug} = [ w^{(o)}, w^{(o)}_{res} ]    (24)
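The augmentation of Eqs. (22)-(24) amounts to simple block assembly of the weight matrices. The sketch below illustrates it with numpy, assuming a single network output so that the product w^{(i)}_o w^{(o)} is a scalar times a row vector; the names are assumptions, not the paper's code.

import numpy as np

def augment(w_i, w_h, w_o, wi_res_u, wh_res, wo_res, wi_res_o):
    # Augment an n_h-neuron RNN with one residual-trained neuron (Eqs. 22-24).
    # wi_res_o: weight from the existing net's output to the new neuron;
    # wi_res_u: (n_i,) weights from the external inputs to the new neuron.
    n_h = w_h.shape[0]
    w_i_aug = np.vstack([w_i, wi_res_u.reshape(1, -1)])          # Eq. (22)
    w_h_aug = np.zeros((n_h + 1, n_h + 1))
    w_h_aug[:n_h, :n_h] = w_h                                    # existing block
    w_h_aug[n_h, :n_h] = wi_res_o * w_o.reshape(-1)              # Eq. (23), new row
    w_h_aug[n_h, n_h] = wh_res
    w_o_aug = np.hstack([w_o.reshape(1, -1),
                         np.array([[wo_res]])])                  # Eq. (24)
    return w_i_aug, w_h_aug, w_o_aug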

5. MODELING EXPERIMENTS

The proposed algorithm was evaluated in several modeling experiments. Three nonlinear systems, a two-link robot, a Van der Pol oscillator, and the Mackey-Glass equation, are modeled from their input/output data. In all the experiments the models are evaluated on their ability to simulate the system, i.e., the models generate responses solely from the inputs, with no past samples of the actual output available to the model. This is different from, and more difficult than, the one-step-ahead prediction that is frequently used in control.

Two-link robot: The governing equations of the robot are

D_{11} \ddot{\theta}_1 + D_{12} \ddot{\theta}_2 - D_{122} ( \dot{\theta}_2^2 + 2 \dot{\theta}_1 \dot{\theta}_2 ) + c_1 \dot{\theta}_1 + D_1 = u_1    (25)

D_{12} \ddot{\theta}_1 + D_{22} \ddot{\theta}_2 + D_{122} \dot{\theta}_1^2 + c_2 \dot{\theta}_2 + D_2 = u_2    (26)

D_{11} = m_1 l_1^2 + m_2 l_1^2 + m_2 l_2^2 + 2 m_2 l_1 l_2 \cos\theta_2    (27)

D_{12} = m_2 l_2^2 + m_2 l_1 l_2 \cos\theta_2    (28)

D_{22} = m_2 l_2^2    (29)

D_{122} = m_2 l_1 l_2 \sin\theta_2    (30)

D_1 = ( m_1 + m_2 ) g l_1 \sin\theta_1 + m_2 g l_2 \sin( \theta_1 + \theta_2 )    (31)

D_2 = m_2 g l_2 \sin( \theta_1 + \theta_2 )    (32)

Subscripts 1 and 2 denote the first and second links, respectively; u_i is the torque applied to the i-th joint, \theta_i the angle of the i-th joint, l_i the length of the i-th link, c_i the damping coefficient of the i-th link, and m_i the mass of the i-th link. The following parameter values are used in the simulation: m_1 = m_2 = 1 kg, c_1 = c_2 = 0.1 N m/sec, l_1 = 0.2 m, l_2 = 0.1 m, and u_1 = u_2 = 0.7 N m. The sampling interval is 0.1 sec and 62 points are generated; the first 31 points are used for training and the remaining points for testing. The trained recurrent neural network needs 3 hidden neurons to satisfy the accuracy requirement (previously, 6 hidden neurons were chosen arbitrarily in Li and Yan [16]). The output of the recurrent neural network during training (between 0 and 3 seconds) and testing (between 3 and 6 seconds) and the corresponding desired output are plotted in figure 3. The discrepancies are plotted in figure 4. The root mean square error is rad.

Van der Pol oscillator: The governing equation of the Van der Pol oscillator is
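As a concrete illustration of how the robot training data could be produced, the sketch below integrates Eqs. (25)-(32) with the stated parameter values and samples the response every 0.1 s. The integrator, the gravity constant, and the zero initial conditions are assumptions; the paper does not state them.

import numpy as np
from scipy.integrate import solve_ivp

m1 = m2 = 1.0            # kg
c1 = c2 = 0.1            # N m/sec
l1, l2 = 0.2, 0.1        # m
u1 = u2 = 0.7            # N m, constant joint torques
g = 9.81                 # assumed gravitational acceleration, m/s^2

def robot(t, s):
    th1, th2, dth1, dth2 = s
    D11 = m1*l1**2 + m2*l1**2 + m2*l2**2 + 2*m2*l1*l2*np.cos(th2)
    D12 = m2*l2**2 + m2*l1*l2*np.cos(th2)
    D22 = m2*l2**2
    D122 = m2*l1*l2*np.sin(th2)
    D1 = (m1 + m2)*g*l1*np.sin(th1) + m2*g*l2*np.sin(th1 + th2)
    D2 = m2*g*l2*np.sin(th1 + th2)
    # Solve Eqs. (25)-(26) for the joint accelerations
    M = np.array([[D11, D12], [D12, D22]])
    rhs = np.array([u1 + D122*(dth2**2 + 2*dth1*dth2) - c1*dth1 - D1,
                    u2 - D122*dth1**2 - c2*dth2 - D2])
    a1, a2 = np.linalg.solve(M, rhs)
    return [dth1, dth2, a1, a2]

t_samp = np.arange(62) * 0.1                          # 62 points, 0.1 s apart
sol = solve_ivp(robot, (0.0, t_samp[-1]), [0.0, 0.0, 0.0, 0.0],
                t_eval=t_samp, rtol=1e-8)
theta1, theta2 = sol.y[0], sol.y[1]                   # joint angles for training/testing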

\frac{d^2 y}{dt^2} + ( y^2 - 1 ) \frac{dy}{dt} + y = 0    (33)

This system exhibits limit-cycle behavior in the phase plane [29]. 400 points are generated; the first 200 points are used for training and the rest for testing. Figure 5 shows the generated data in the phase plane. Our method resulted in an RNN with two hidden neurons. The output of the trained recurrent neural network and the desired output are shown in figure 6 for comparison. The errors are plotted in figure 7 and the root mean square error is .

The Mackey-Glass equation [30]: The governing equation is

\frac{dy(t)}{dt} = \frac{a\, y(t-r)}{1 + y^{10}(t-r)} - b\, y(t)    (34)

A set of 500 points is generated for system identification using r = 17, a = 0.2, and b = 0.1. The past states before time instant 0 are assumed to be zero here, though that is not necessarily the case. Half of the data are used for training and the other half for testing. The output of the trained recurrent neural network and the desired output are plotted in figure 8, and the errors are plotted in figure 9. It is clear that good tracking is obtained except for the first few points. The root mean square error is .
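For reference, a Mackey-Glass series like the one used above can be generated by stepping Eq. (34) with a stored delay history. The sketch below is a minimal illustration assuming a simple Euler scheme, a unit time step, a nonzero starting value, and the zero past history mentioned in the text; none of these discretization choices are stated in the paper.

import numpy as np

def mackey_glass(n=500, a=0.2, b=0.1, r=17, dt=1.0, y0=1.2):
    # Euler integration of dy/dt = a y(t-r) / (1 + y(t-r)^10) - b y(t), Eq. (34)
    delay = int(round(r / dt))
    y = np.zeros(n + delay)      # entries before t = 0 stay at zero (assumed history)
    y[delay] = y0                # assumed nonzero initial value at t = 0
    for k in range(delay, n + delay - 1):
        y_lag = y[k - delay]
        y[k + 1] = y[k] + dt * (a * y_lag / (1.0 + y_lag**10) - b * y[k])
    return y[delay:]

series = mackey_glass()          # 500 samples, to be split into training and testing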

6. CONCLUSIONS

A fully automated recurrent neural network structural and weight learning algorithm has been developed. Its notable characteristics include: a useful modeling technique, in the sense that functions of a number of classes can be approximated to very good accuracy; automatic selection of initial weight values; automatic structural learning; good learning efficiency; and, frequently, near-optimal convergence. Its effectiveness has been demonstrated by the identification of three dynamic systems of different natures from their input/output data: a simulated two-degree-of-freedom planar robot moving in a vertical plane, a Van der Pol oscillator, and the Mackey-Glass equation. In all three cases, the algorithm quickly constructed an RNN model containing a small number of hidden neurons that exhibits very small discrepancies between its outputs and those of the actual nonlinear dynamic system. When supplied with data never seen during training, the models also demonstrated good generalization capability.

REFERENCES

1. Jin, L., Nikiforuk, P. N., and Gupta, M. M., 1994, "Dynamic Recurrent Neural Networks for Control of Unknown Nonlinear Systems," Transactions of the ASME, Journal of Dynamic Systems, Measurement, and Control, vol. 116.
2. Sastry, P. S., Santharam, G., and Unnikrishnan, K. P., 1994, "Memory Neuron Networks for Identification and Control of Dynamical Systems," IEEE Transactions on Neural Networks, vol. 5, no. 2.
3. Cybenko, G., 1989, "Approximation by Superpositions of a Sigmoidal Function," Mathematics of Control, Signals, and Systems, vol. 2, no. 4.
4. Hornik, K., Stinchcombe, M., and White, H., 1989, "Multilayer Feedforward Networks Are Universal Approximators," Neural Networks, vol. 2, no. 5.
5. Ong, S., You, C., Choi, S., and Hong, D., 1997, "A Decision Feedback Recurrent Neural Equalizer as an Infinite Impulse Response Filter," IEEE Transactions on Signal Processing, vol. 45, no. 11.
6. Parisi, R., Di Claudio, E. D., Orlandi, G., and Rao, B. D., 1997, "Fast Adaptive Digital Equalization by Recurrent Neural Networks," IEEE Transactions on Signal Processing, vol. 45, no. 11.
7. Lee, T.-C., and Peterson, A. M., 1989, "SPAN: A Neural Network That Grows," 1st International Joint Conference on Neural Networks.
8. Lee, T.-C., 1991, Structure Level Adaptation for Artificial Neural Networks, Kluwer Academic Publishers, Boston.
9. Hirose, Y., Yamashita, K., and Hijiya, S., 1991, "Back-Propagation Algorithm Which Varies the Number of Hidden Units," Neural Networks, vol. 4.
10. Li, C. J., and Kim, T., 1995, "A New Feedforward Neural Network Structural Learning Algorithm - Augmentation by Training with Residuals," Journal of Dynamic Systems, Measurement and Control, vol. 117, no. 3.
11. Karnin, E. D., 1990, "A Simple Procedure for Pruning Backpropagation Trained Neural Networks," IEEE Transactions on Neural Networks, vol. 1, no. 2.
12. Reed, R., 1993, "Pruning Algorithms - A Survey," IEEE Transactions on Neural Networks, vol. 4, no. 5.
13. Ishikawa, M., 1996, "Structural Learning with Forgetting," Neural Networks, vol. 9, no. 3.
14. Chen, D., Giles, C. L., Sun, G. Z., Chen, H. H., Lee, Y. C., and Goudreau, M. W., 1995, "Constructive Learning of Recurrent Neural Networks," in Neural Networks Theory, Technology, and Applications (P. K. Simpson, ed.), IEEE Technology Update Series, IEEE, New York.
15. Tsoi, A. C., and Tan, S., 1997, "Recurrent Neural Networks: A Constructive Algorithm, and Its Properties," Neurocomputing, vol. 15.
16. Li, C. J., and Yan, L., 1995, "Mechanical System Modeling Using Recurrent Neural Networks via Quasi-Newton Learning Methods," Applied Mathematical Modeling, vol. 19.
17. Hopfield, J. J., 1982, "Neural Networks and Physical Systems with Emergent Collective Computational Abilities," Proceedings of the National Academy of Sciences USA, vol. 79.
18. Hopfield, J. J., 1984, "Neurons with Graded Response Have Collective Computational Properties," Proceedings of the National Academy of Sciences USA, vol. 81.
19. Werbos, P., 1988, "Generalization of Backpropagation with Application to a Recurrent Gas Market Model," Neural Networks, vol. 1.
20. Pineda, F. J., 1989, "Recurrent Backpropagation and the Dynamical Approach to Adaptive Neural Computation," Neural Computation, vol. 1.
21. Williams, R. J., and Zipser, D., 1989, "A Learning Algorithm for Continually Running Fully Recurrent Networks," Neural Computation, vol. 1.
22. Vanderplaats, G. N., 1984, Numerical Optimization Techniques for Engineering Design: With Applications, McGraw-Hill.
23. Luenberger, D. G., 1984, Linear and Nonlinear Programming, Addison-Wesley.
24. Bazaraa, M. S., 1993, Nonlinear Programming: Theory and Algorithms, Wiley.
25. Broyden, C. G., 1970, "The Convergence of a Class of Double-Rank Minimization Algorithms, Parts I and II," J. Inst. Maths. Applns., vol. 6.

26. Fletcher, R., 1970, "A New Approach to Variable Metric Algorithms," Computer Journal, vol. 13.
27. Goldfarb, D., 1970, "A Family of Variable Metric Methods Derived by Variational Means," Maths. Comput., vol. 24.
28. Shanno, D. F., 1970, "Conditioning of Quasi-Newton Methods for Function Minimization," Maths. Comput., vol. 24.
29. Cook, P. A., 1986, Nonlinear Dynamical Systems, Prentice-Hall.
30. Lapedes, A., and Farber, R., 1987, "Nonlinear Signal Processing Using Neural Networks: Prediction and System Modeling," Los Alamos National Laboratory Technical Report LA-UR.

Figure 1. The Structure of the Recurrent Neural Network.

Figure 2. The Structural Learning Algorithm of the Recurrent Neural Network. (a) Construct an RNN to track the target output(s) T. (b) Construct another RNN to track the residual e = T - O, with the existing network's output O as an additional input. (c) Augment (b) to (a).

Figure 3. The Response of the Neural Net and the Shoulder Joint of the 2-Link Robot (position in rad vs. time in sec; solid: desired output, dashed: RNN output).

Figure 4. The Error of the RNN for the Shoulder Joint of the 2-Link Robot (position error in rad vs. time in sec).

Figure 5. The Phase-Plane Trajectory of the Van der Pol Oscillator (dy/dt vs. y).

Figure 6. The Output of the Van der Pol Oscillator and the RNN (y vs. time in sec; solid: desired output, dashed: RNN output).

Figure 7. The Error of the RNN for the Van der Pol Oscillator (y error vs. time in sec).

Figure 8. The Output of the Mackey-Glass Equation and the RNN (y vs. time in samples; solid: desired output, dashed: RNN output).

Figure 9. The Error of the RNN for the Mackey-Glass Equation (y error vs. time in samples).
