Parallel layer perceptron


Neurocomputing 55 (2003) 771-778

Letters

Walmir M. Caminhas*, Douglas A.G. Vieira, João A. Vasconcelos

Department of Electrical Engineering, Federal University of Minas Gerais, Belo Horizonte, MG, Brazil

*Corresponding author. E-mail address: caminhas@cpdee.ufmg.br (W.M. Caminhas).

Abstract

In this paper, both the architecture and the learning procedure underlying the parallel layer perceptron are presented. This topology, different from the previous ones, uses parallel layers of perceptrons to map nonlinear input-output relationships. Comparisons between the parallel layer perceptron, the multi-layer perceptron and ANFIS are included and show the effectiveness of the proposed topology.
(c) 2003 Elsevier B.V. All rights reserved.

Keywords: ANFIS; Multi-layer perceptron; Parallel layer perceptron

1. Introduction

In recent years the multi-layer perceptron (MLP) has gained popularity in a vast range of applications due to its universal approximation characteristic [1,2] and the popularization of the backpropagation algorithm [7]. Another popular network is the adaptive-network-based fuzzy inference system (ANFIS) [4]. This network has some advantages compared with MLPs because its output depends linearly on the consequent parameters; therefore, a more efficient learning algorithm can be used. However, the complexity of the ANFIS topology increases exponentially, as the rules are generated from all possible combinations of premises. The number of rules N generated for a system with n inputs and P premises per input is N = P^n (for instance, n = 10 inputs with P = 3 premises each already give 3^10 = 59049 rules), hence it is prohibitive to use an ANFIS for problems with several variables. In this paper a novel network, called the parallel layer perceptron (PLP), is proposed, which tries to combine the advantages of both the MLP and ANFIS topologies. Moreover, as learning demands a large computational effort for most problems, training neural networks can be viewed as a natural application of parallelism.

The topology proposed here is a natural extension of artificial neural networks to parallel environments. Several aspects of the PLP are treated in this paper. Firstly, the architecture of the proposed topology is presented, together with a particular case for which the error surface with respect to half of the parameters is quadratic. Afterwards, a learning procedure is discussed. Subsequently, the universal approximation theorem is presented. Lastly, some computational results are presented and discussed.

2. PLP architecture

The output y_t of the PLP, with n inputs and m perceptrons per parallel layer, is calculated as

y_t = \Gamma\Big( \sum_{j=1}^{m} \phi(a_{jt})\,\psi(b_{jt}) \Big),    (1)

where \Gamma(\cdot), \phi(\cdot) and \psi(\cdot) are activation functions (hyperbolic tangent, Gaussian, linear, etc.), a_{jt} = \sum_{i=0}^{n} p_{ji} x_{it} and b_{jt} = \sum_{i=0}^{n} v_{ji} x_{it}, p_{ji} and v_{ji} are components of the weight matrices P and V, x_{it} is the ith input of the tth sample (x_{0t} being the perceptron bias input), and y_t is the tth component of the output vector y. The PLP topology is shown in Fig. 1.

Similarly to the traditional multi-layer perceptron, all network parameters can be adapted using the backpropagation method. However, some differences must be highlighted. Firstly, in the MLP case, the network uses compositions of functions to perform the input-output mapping.

Fig. 1. PLP architecture.
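To make Eq. (1) concrete, the following is a minimal sketch of the PLP forward pass in Python/NumPy; the function name plp_forward and the particular choices of activations (identity for phi and Gamma, Gaussian for psi) are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def plp_forward(X, P, V,
                phi=lambda a: a,                 # linear-side activation (assumed identity)
                psi=lambda b: np.exp(-b ** 2),   # nonlinear-side activation (assumed Gaussian)
                gamma=lambda s: s):              # output activation (assumed identity)
    """Sketch of Eq. (1): y_t = Gamma(sum_j phi(a_jt) * psi(b_jt)).

    X : (T, n+1) inputs, first column holding the bias input x_0.
    P : (m, n+1) weights of the linear-layer perceptrons.
    V : (m, n+1) weights of the nonlinear-layer perceptrons.
    """
    A = X @ P.T                                  # a_{jt}, shape (T, m)
    B = X @ V.T                                  # b_{jt}, shape (T, m)
    return gamma(np.sum(phi(A) * psi(B), axis=1))

# Tiny usage example: 2 inputs plus bias, m = 3 perceptrons per parallel layer.
rng = np.random.default_rng(0)
X = np.hstack([np.ones((5, 1)), rng.normal(size=(5, 2))])
print(plp_forward(X, rng.normal(size=(3, 3)), rng.normal(size=(3, 3))))
```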

The PLP, in contrast, is mainly based on products of functions. Moreover, as can be seen in Fig. 1, the proposed topology is composed of parallel layers. This feature simplifies the implementation of the network on parallel machines or clusters.

One particular case of the topology shown in Fig. 1 is obtained by taking both \Gamma(\cdot) and \phi(\cdot) as identity functions. In this case, the network output is computed as

y_t = \sum_{j=1}^{m} a_{jt}\,\psi(b_{jt}) = \sum_{j=1}^{m} L_j N_j,    (2)

where L_j and N_j denote the outputs of the linear and nonlinear perceptrons, respectively. It is important to note that the particular case described in Eq. (2) has some desirable characteristics. The error surface with respect to p_{ji}, which in this case is a linear parameter, is quadratic, hence a more efficient learning algorithm can be used: a hybrid learning scheme that combines backpropagation and the least-squares estimate (LSE) is used to adapt the network parameters. In the next sections, only the network given by Eq. (2) will be considered, and the learning algorithm for it is presented first. The learning procedure for the most general case is similar to traditional backpropagation and is therefore not covered in this paper.

3. Hybrid learning algorithm

When the particular network presented in Eq. (2) is employed, a method that combines the gradient and the LSE is more interesting. As the output y_t is a linear function of the parameters p_{ji}, their optimum values can be calculated using a simple algorithm based on linear algebra. This feature is similar to that of the consequent parameters in ANFIS. To simplify the explanation of the LSE, let l_k = p_{ji}, where k = n(j - 1) + i; this is simply the rearrangement of the matrix P into a vector l with the same components. First, the outputs of the nonlinear perceptrons are calculated and a matrix C, which combines the nonlinear outputs and the inputs, is generated; its components are c_{tk} = x_{it}\,\psi(b_{jt}). In matrix notation, Eq. (2) reads y = C l. The aim of the learning process, as in most supervised learning algorithms for neural networks, is to minimize the sum of squared errors over the training data, that is,

e = \frac{1}{2}\,(C l - y_d)^T (C l - y_d),    (3)

where y_d is the desired output vector. The optimum value of l is obtained from the condition that the gradient of Eq. (3) is zero, so

l = (C^T C)^{-1} C^T y_d.    (4)

After l is evaluated, its components are returned to the matrix form P. The output is then calculated and the nonlinear weights, i.e. the matrix V, are adapted according to the classical backpropagation method, v_{ji}(iter + 1) = v_{ji}(iter) - \eta\,\partial e / \partial v_{ji}, where \eta is the learning rate, calculated as described by Jang [4], and iter is the iteration number.
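The sketch below illustrates one pass of this hybrid scheme under stated assumptions: Eq. (4) is solved with a pseudo-inverse (rather than forming (C^T C)^{-1} explicitly) and the backpropagation step on V uses the analytic gradient of Eq. (3) for a Gaussian psi with a fixed learning rate, instead of the adaptive rate of Jang [4]. All names (hybrid_epoch, psi, dpsi) are hypothetical.

```python
import numpy as np

def psi(b):   # Gaussian activation, as used in the paper's examples
    return np.exp(-b ** 2)

def dpsi(b):  # derivative of the Gaussian activation
    return -2.0 * b * np.exp(-b ** 2)

def hybrid_epoch(X, yd, V, eta=0.01):
    """One LSE + gradient pass for the PLP of Eq. (2).

    X  : (T, n+1) inputs (bias column included), yd : (T,) desired outputs,
    V  : (m, n+1) current nonlinear weights, eta : fixed learning rate (assumption).
    """
    T, n1 = X.shape
    m = V.shape[0]

    # LSE step (Eq. (4)): optimal linear weights P for the current V.
    B = X @ V.T                                               # b_{jt}, shape (T, m)
    N = psi(B)                                                # nonlinear perceptron outputs
    C = (N[:, :, None] * X[:, None, :]).reshape(T, m * n1)    # c_{tk} = x_{it} psi(b_{jt})
    l = np.linalg.pinv(C) @ yd                                # pseudo-inverse solution of Eq. (4)
    P = l.reshape(m, n1)

    # Backpropagation step on V: de/dv_{ji} = sum_t (y_t - yd_t) a_{jt} psi'(b_{jt}) x_{it}.
    A = X @ P.T                                               # a_{jt}
    y = np.sum(A * N, axis=1)                                 # Eq. (2)
    err = y - yd
    grad = (err[:, None] * A * dpsi(B)).T @ X                 # shape (m, n+1)
    V = V - eta * grad

    e = 0.5 * float(err @ err)                                # Eq. (3)
    return P, V, e
```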

4. PLP as a universal function approximator

One of the most desirable features in this kind of network is the ability to map a nonlinear input-output model. Consider the universal approximation theorem stated as follows [1,2]: Let \sigma(\cdot) be a nonconstant, bounded and monotone-increasing continuous function. Let I_p denote the p-dimensional unit hypercube, and let C(I_p) denote the space of continuous functions on I_p. Then, given any function f \in C(I_p) and \varepsilon > 0, there exist an integer M and sets of real constants \alpha_i, w_{ij} and b_i, with i = 1, ..., M and j = 1, ..., p, such that

F(x_1, ..., x_p) = \sum_{i=1}^{M} \alpha_i\,\sigma\Big( \sum_{j=1}^{p} w_{ij} x_j + b_i \Big)    (5)

is an approximate realization of the function f(\cdot); that is, |F(x_1, ..., x_p) - f(x_1, ..., x_p)| < \varepsilon for all {x_1, ..., x_p} in I_p.

By inspection it can be seen that Eq. (2) has the same form as Eq. (5). As the logistic and hyperbolic tangent functions are nonconstant, bounded and monotone increasing, both can be used in the nonlinear layer to satisfy the conditions imposed by the theorem. For radial basis functions, the approximation theorem can be stated as in [6].

A graphical interpretation can be given to the PLP approximation property: this network can be understood as a linear combination of nonlinear functions, or vice versa. A network with one pair of parallel layers and two perceptrons per layer, m = 2, is capable of approximating two periods of a sine function. Fig. 2 shows the linear functions generated by the linear perceptrons, Fig. 3 the nonlinear functions generated when \psi is a Gaussian function, Fig. 4 the linear-nonlinear products and, lastly, Fig. 5 the resulting approximation; a code sketch of this construction is given after the figure captions.

Fig. 2. Linear parameters.

Fig. 3. Nonlinear parameters.

Fig. 4. Product of linear and nonlinear parameters.

Fig. 5. Approximated function.
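As a concrete illustration of this construction, the sketch below trains a PLP of the form of Eq. (2) with m = 2 Gaussian perceptrons on two periods of a sine wave; it reuses the hypothetical hybrid_epoch and psi helpers from the sketch in Section 3, and the input range, learning rate and number of epochs are illustrative choices rather than the paper's settings.

```python
import numpy as np

# Two periods of a sine function on a single input (plus the bias column).
t = np.linspace(0.0, 2.0, 200)
X = np.column_stack([np.ones_like(t), t])        # x_0 = bias, x_1 = t
yd = np.sin(2.0 * np.pi * t)                     # two periods on [0, 2]

# PLP of Eq. (2): one pair of parallel layers, m = 2 perceptrons per layer.
rng = np.random.default_rng(1)
V = rng.normal(scale=0.5, size=(2, 2))
for epoch in range(50):                          # 50 epochs, as in Section 5
    P, V, e = hybrid_epoch(X, yd, V, eta=0.01)   # LSE + gradient pass (Section 3 sketch)

A, N = X @ P.T, psi(X @ V.T)                     # columns of A: Fig. 2; columns of N: Fig. 3
y = np.sum(A * N, axis=1)                        # A * N: Fig. 4; their sum: Fig. 5
print("final sum of squared errors:", round(e, 4))
```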

5. Numerical problems

Two test problems were used to compare the computational performance of the PLP. The comparisons were made with ANFIS [4] and with an MLP trained with the Levenberg-Marquardt algorithm [3]. All networks were trained for 50 epochs. The PLP activation function \psi used in the examples was the Gaussian function.

The first test problem is the mapping of a nonlinear input-output model: a sinc function of two variables ranging over [-10, 10] x [-10, 10]. The training data were composed of 121 equally spaced sampled points. The results for this problem are shown in Table 1, where the numbers in parentheses give, for the PLP, the number of parallel layers and of perceptrons per layer (2 x m); for the MLP, the number of perceptrons in its single hidden layer; and for ANFIS, the number of rules.

The second example is the chaotic Mackey-Glass differential delay equation [5],

\dot{x}(t) = \frac{0.2\,x(t - \tau)}{1 + x^{10}(t - \tau)} - 0.1\,x(t),

which is a benchmark largely used in the neural and fuzzy communities [4]. The series was solved using a fourth-order Runge-Kutta method assuming x(0) = 1.2, \tau = 17 and x(t) = 0 for t < 0. From the Mackey-Glass series, 1000 input-output pairs were extracted; the first 500 pairs were used for training and the other 500 for validating the model. The results are presented in Table 2, whose third and fourth columns give the root mean squared error for the training (Trn) and validation (Val) data, respectively.

Table 1
Sinc results

Topology        Time    RMSE
PLP (2 x 5)
MLP (10)
PLP (2 x 16)
MLP (32)
ANFIS (16)
PLP (2 x 25)
MLP (50)
ANFIS (25)

Table 2
Mackey-Glass series results

Topology        Time    Trn (10^-3)    Val (10^-3)
PLP (2 x 16)
ANFIS (16)
MLP (32)
PLP (2 x 20)
MLP (40)
PLP (2 x 30)
MLP (60)
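For reference, the sketch below generates the two benchmark data sets under the settings stated above. The sinc form (a product of one-dimensional sinc functions on an 11 x 11 grid) and the input-output embedding for the time series (four delayed values predicting x(t + 6), as in Jang [4]) are assumptions, since the paper does not spell them out; the delayed term is also held constant within each Runge-Kutta step, which is a simplification.

```python
import numpy as np

# Sinc benchmark: 121 equally spaced samples on [-10, 10] x [-10, 10].
g = np.linspace(-10.0, 10.0, 11)                 # 11 points per axis -> 121 samples
x1, x2 = np.meshgrid(g, g)
sinc = lambda r: np.sinc(r / np.pi)              # np.sinc(x) = sin(pi x)/(pi x), so this is sin(r)/r
y_sinc = sinc(x1) * sinc(x2)                     # assumed two-variable sinc form

# Mackey-Glass series: 4th-order Runge-Kutta with x(0) = 1.2, tau = 17, x(t) = 0 for t < 0.
def mackey_glass(n_steps, tau=17, dt=1.0, x0=1.2):
    hist = int(tau / dt)
    x = np.zeros(n_steps + hist)                 # zeros encode x(t) = 0 for t < 0
    x[hist] = x0
    f = lambda xt, xlag: 0.2 * xlag / (1.0 + xlag ** 10) - 0.1 * xt
    for k in range(hist, n_steps + hist - 1):
        lag = x[k - hist]                        # delayed value, held fixed over the step
        k1 = f(x[k], lag)
        k2 = f(x[k] + 0.5 * dt * k1, lag)
        k3 = f(x[k] + 0.5 * dt * k2, lag)
        k4 = f(x[k] + dt * k3, lag)
        x[k + 1] = x[k] + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0
    return x[hist:]

series = mackey_glass(1200)
t = np.arange(18, len(series) - 6)               # Jang-style embedding (assumed)
X = np.column_stack([series[t - 18], series[t - 12], series[t - 6], series[t]])
y = series[t + 6]
print(y_sinc.shape, X.shape, y.shape)
```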

Comparing the simulations presented in Table 1 for equivalent networks (equivalence is taken here as the total number of functions used in each network), the PLP topology was faster and presented smaller errors, showing its efficiency. For instance, PLP (2 x 5) outperforms MLP (10) in both training time and error, and PLP (2 x 16) outperforms both MLP (32) and ANFIS (16). The same analysis can be made for the other networks, showing the advantage of the proposed topology.

The analysis made for the first example can be repeated for the second test problem (Table 2). In this example, the PLP outperforms both ANFIS and MLP. Moreover, for ANFIS the exponential growth of the number of rules (cf. Section 1) becomes an issue in this example. Even though the validation error does not depend only on the topology, the PLP networks presented good results for unseen samples, as shown in the fourth column of Table 2.

6. Conclusions

In this work, a novel neural network topology, the PLP, was proposed, including its architecture and learning algorithms. One particular case was also proposed due to its special characteristics, which enable a more efficient training algorithm. This network can be used in a wide variety of applications due to its universal function approximation characteristic. The numerical results presented in this paper show the computational efficiency of the proposed topology. Although its parallel characteristics were not deeply explored in this paper, they are an important feature of PLP networks and seem to be an open area for further research.

Acknowledgements

The authors would like to thank CNPq, CAPES, FINEP and FAPEMIG for the financial support.

References

[1] G. Cybenko, Approximation by superpositions of a sigmoidal function, Math. Control Signals Systems 2 (1989).
[2] K. Funahashi, On the approximate realization of continuous mappings by neural networks, Neural Networks 2 (1989).
[3] M.T. Hagan, M.B. Menhaj, Training feedforward networks with the Marquardt algorithm, IEEE Trans. Neural Networks 5 (6) (1994).
[4] J.S.R. Jang, ANFIS: adaptive-network-based fuzzy inference system, IEEE Trans. Systems Man Cybernet. 23 (3) (1993).
[5] M.C. Mackey, L. Glass, Oscillation and chaos in physiological control systems, Science 197 (1977).
[6] J. Park, I.W. Sandberg, Universal approximation using radial-basis-function networks, Neural Comput. 3 (1991).

[7] D.E. Rumelhart, G.E. Hinton, R.J. Williams, Learning internal representations by error propagation, in: D.E. Rumelhart, J.L. McClelland (Eds.), Parallel Distributed Processing: Explorations in the Microstructure of Cognition, Vol. 1, Bradford Books, MIT Press, Cambridge, MA, 1986 (Chapter 8).

Walmir Matos Caminhas is an Adjunct Professor at the Department of Electrical Engineering of the Federal University of Minas Gerais, Brazil. He holds a Doctorate degree in Electrical Engineering from the University of Campinas, Sao Paulo, Brazil. His research interests include computational intelligence and the control of electrical drives.

Douglas Alexandre Gomes Vieira was born in Belo Horizonte, Brazil. He is a student of the electrical engineering course at the Federal University of Minas Gerais, Brazil. His research interests include computational intelligence, multi-objective optimization and design, and stochastic and deterministic optimization methods.

João Antônio de Vasconcelos was born in Monte Carmelo, Brazil. He obtained his Ph.D. in 1984 at the Ecole Centrale de Lyon, France. He is a Professor at the Electrical Engineering Department of the Federal University of Minas Gerais. His research interests include vector optimization (evolutionary multi-objective optimization) and design, computational intelligence, and computational electromagnetics (finite element methods, boundary integral equation methods and others).
