Process modeling and optimization of mono ethylene glycol quality in commercial plant integrating artificial neural network and differential evolution
From the SelectedWorks of Nadeem Khalfe, Winter December 7, 2008. Process modeling and optimization of mono ethylene glycol quality in commercial plant integrating artificial neural network and differential evolution. Nadeem Muhammed Khalfe. Available at:
CHEMCON, Process modeling and optimization of mono ethylene glycol quality in commercial plant integrating artificial neural network and differential evolution

S.K. Lahiri, Department of Chemical Engineering, National Institute of Technology, Durgapur, India
Nadeem M. Khalfe, Jubail United Petrochemical Company, SABIC
Sunil Kumar Sawke, Jubail United Petrochemical Company, SABIC

ABSTRACT This paper presents artificial-intelligence-based process modeling and optimization strategies, namely artificial neural network-differential evolution (ANN-DE), for modeling and optimization of the ultraviolet (UV) transmittance of mono ethylene glycol (MEG) product. UV transmittance is one of the most important quality variables of MEG and has an impact on the polyester product quality. UV transmittance measures the presence of undesirable compounds in MEG that absorb light in the ultraviolet region of the spectrum and indirectly measures the purity of the MEG product. These compounds are in trace quantities, in the ppb range, and are largely unknown in chemical structure. Thus, they cannot be measured directly. Off-line laboratory measurement of MEG UV is common practice among manufacturers, whereby a sample is withdrawn several times a day from the product stream and analyzed by time-consuming laboratory analysis. In the event of a process malfunction or operation under suboptimal conditions, the plant continues to produce off-spec product until lab results become available. This results in enormous financial losses for a large-scale commercial plant. In the present paper a soft sensor was developed to predict the UV transmittance on a real-time basis, and an online hybrid ANN-DE technique was used to optimize the process parameters so that UV is maximized. This paper describes a systematic approach to the development of inferential measurements of UV transmittance using ANN regression analysis. After predicting the UV accurately, the model inputs are optimized using DE to maximize the UV.
The optimized solutions, when verified in the actual commercial plant, resulted in a significant improvement in the MEG quality.

Keywords: ANN, DE, modeling & optimization
1. INTRODUCTION

Recently, monoethylene glycol (MEG) has emerged as one of the most important petrochemical products, as its demand and price have risen considerably all over the world in the last few years. It is extensively used as a main feed for the production of polyester fibre and polyethylene terephthalate plastics. UV is one of the most important quality parameters of MEG, and it indirectly represents the levels of impurities such as aldehydes, nitrogenous compounds and iron in the MEG product. In the laboratory, a MEG product sample is exposed to UV light of different wavelengths (220, 250, 275 and 350 nm) and the percentage of the UV light transmitted through the MEG sample is measured. UV transmittance measures the presence of compounds in MEG that absorb light in the ultraviolet region of the spectrum. These undesirable compounds are in trace quantities, in the ppb range, and are largely unknown in chemical structure. Samples showing higher transmittance are considered to be of a higher quality grade. In the glycol plant, MEG is drawn off from the MEG column as product; its UV transmittance is affected by many factors, such as impurity formation in the upstream ethylene oxide reactor, impurity formation and accumulation in the MEG column bottoms due to thermal degradation of glycol, and non-removal and accumulation of aldehydes in the system. Because these UV-deteriorating impurities are in the ppb range, they are very difficult to detect during the MEG production process and they have hardly any effect on process parameters. That is why it is very difficult for any phenomenological model of UV prediction to succeed in an industrial scenario. Normally, online UV analyzers are not available to monitor product MEG UV in an ethylene glycol plant, so offline methods for MEG quality control are common practice among manufacturers, whereby a sample is withdrawn from the process and product streams several times a day and analyzed by time-consuming laboratory analysis.
In the event of a process malfunction or operation under suboptimal conditions, the plant will continue to produce off-spec product until lab results become available. For a world-class-capacity plant this represents a huge amount of off-spec production, resulting in enormous financial losses. This necessitates online UV sensors or analyzers which can give UV readings continuously on a real-time basis. Accurate, reliable and robust UV soft sensors can be a viable alternative in this scenario. Building a UV soft sensor is not an easy task, as a rigorous mathematical model of MEG product UV, which could predict UV transmittance and minimize the dependency on lab analysis, is still not available in the literature. A comprehensive process model would have to take into account various subjects, such as chemistry, chemical reactions, and the generation and accumulation of UV-deteriorating compounds, and consequently becomes very complex. Industry needs such a mathematical model to predict MEG UV on a real-time basis so that process parameters can be adjusted before the product goes off specification. Developing such a model from the basic principles of chemical engineering is very difficult due to the unknown reactions taking place. In the last decade, ANNs have emerged as an attractive tool for nonlinear modeling, especially in situations where the development of phenomenological or conventional regression models becomes impractical or cumbersome. The advantages of an ANN-based model are: (i) it can be constructed solely from the historic process input-output data; (ii) detailed knowledge of the process phenomenology is unnecessary; (iii) a properly trained model possesses excellent generalization ability, owing to which it can accurately predict outputs for a new input data set [2].
Once an ANN-based process model is developed, it can be used for predicting the MEG product UV. The model can be utilized for UV soft sensor development and can be interfaced with the online DCS, so that continuous monitoring can be achieved to yield better process control. The model can also be used for process optimization, to obtain the optimal values of the process input variables that maximize the MEG product UV. In such situations, an efficient optimization formalism known as Differential Evolution, which is lenient towards the form of the objective function, can be used [1]. DE was originally developed as a genetic engineering model mimicking population evolution in natural systems. Specifically, DE, like the genetic algorithm (GA), enforces the survival-of-the-fittest and genetic-propagation-of-characteristics principles of biological evolution while searching the solution space of an optimization problem. The principal features possessed by DE are: (i) it requires only scalar values and not the second- and/or first-order derivatives of the objective function; (ii) it is capable of handling nonlinear and noisy objective functions; (iii) it performs a global search and is thus more likely to arrive at or near the global optimum. In the present paper, the ANN formalism is integrated with DE to arrive at modeling and optimization strategies. The strategy (henceforth referred to as ANN-DE) uses an ANN as the nonlinear process modeling paradigm, and DE for optimizing the input space of the ANN model such that an improved process performance is realized. To our knowledge, the hybrid involving ANN and DE is being used for the first time for chemical process modeling and optimization. In this study, the ANN-DE strategy has been used to model and optimize the MEG product UV for a commercial plant, yielding the optimized operating conditions leading to maximized UV of the product (MEG).
The best sets of operating conditions obtained thereby, when subjected to actual plant validation, indeed resulted in significant enhancements in UV.

2. Hybrid ANN and DE Based Modeling

Neural networks are computer algorithms inspired by the way information is processed in the nervous system.

2.1 Network Architecture: The back propagation algorithm [2] assumes a feedforward neural network architecture (as shown in Fig. 1) where nodes are partitioned into layers. The lowermost layer is the input layer and the topmost layer is the output layer. Back propagation addresses networks containing one or more hidden layers. Hidden nodes do not directly receive inputs from, nor send outputs to, the external environment. Input layer nodes merely transmit input values to the hidden layer nodes and do not perform any computations. The number of input nodes equals the dimensionality of the input patterns, and the number of nodes in the output layer is dictated by the problem under consideration. Each hidden node and output node applies an activation function to its net input. Normally, three types of activation function are reported in the literature, namely the sigmoid function, the tan hyperbolic function and the linear function.
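As an illustrative sketch (not the plant model itself), the layered computation described above can be written as a forward pass through one hidden layer; the layer sizes and weights here are assumed example values.

```python
import numpy as np

# Sigmoid activation, one of the three functions named above.
def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(x, W1, b1, W2, b2):
    """One-hidden-layer feedforward pass: sigmoid hidden layer, linear output."""
    h = sigmoid(W1 @ x + b1)   # each hidden node applies the activation to its net input
    return W2 @ h + b2         # linear activation at the output node

# Tiny example with assumed sizes: 9 inputs, 3 hidden nodes, 1 output
rng = np.random.default_rng(0)
x = rng.random(9)
W1, b1 = rng.standard_normal((3, 9)), np.zeros(3)
W2, b2 = rng.standard_normal((1, 3)), np.zeros(1)
y = forward(x, W1, b1, W2, b2)   # one-dimensional output
```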
Fig. 1 Architecture of feedforward network with one hidden layer

2.2 Training

Training a network consists of an iterative process in which the network is given the desired inputs along with the correct outputs for those inputs. It then seeks to alter its weights to try to produce the correct output (within a reasonable error margin). If it succeeds, it has learned the training set and is ready to perform upon previously unseen data. If it fails to produce the correct output, it rereads the input and again tries to produce the correct output by adjusting the weights.

2.3 Generalizability

Neural learning is considered successful only if the system can perform well on test data on which the system has not been trained. This capability of a network is called generalizability. Given a large network, it is possible that repeated training iterations successively improve the performance of the network on the training data, e.g. by memorizing training samples, but the resulting network may perform poorly on test data (unseen data). This phenomenon is called overtraining. The proposed solution is to constantly monitor the performance of the network on the test data. In the literature it is proposed that the weights should be adjusted only on the basis of the training set, but that the error should be monitored on the test set. Here we apply the same strategy: training continues as long as the error on the test set continues to decrease and is terminated if the error on the test set increases. Training may thus be halted even if the network performance on the training set continues to improve.

2.4 DE Based Optimization of ANN Models

Having developed an ANN-based process model, a DE algorithm is used to optimize the N-dimensional input space (x) of the ANN model. Differential Evolution (DE), an improved version of GA, is an exceptionally simple evolution strategy that is significantly faster and more robust at numerical optimization and is more likely to find a function's true global optimum.
The optimization objective underlying the DE-based optimization of an ANN model is defined as: find the N-dimensional optimal decision variable vector, x* = [x1*, x2*, ..., xn*]T, representing optimal process conditions, such that it simultaneously maximizes the process outputs, y. In the DE procedure, the search for an optimal solution (decision) vector, x*, begins from a randomly initialized population of probable (candidate) solutions. The solutions are then tested to measure their fitness in fulfilling the optimization objective. Implementation of this DE algorithm and looping generates a new
population of candidate solutions, which, as compared to the previous population, usually fares better at fulfilling the optimization objective. The best vector that evolves after repeating the above-described loop till convergence forms the solution to the optimization problem (refer to Fig. 2).

Fig. 2: Flowchart for DE based optimization of ANN model
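The population loop just described (random initialization, differential mutation, crossover, and greedy selection until convergence) can be sketched as a minimal DE implementation. This is an illustrative sketch, not the authors' code; the control parameters F (mutation factor) and CR (crossover rate), the population size, and the toy surrogate "model" at the end are all assumed values.

```python
import numpy as np

def de_maximize(f, bounds, pop_size=20, F=0.8, CR=0.9, generations=100, seed=0):
    """Maximize f(x) over box bounds by evolving a population of candidates."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds[:, 0], bounds[:, 1]
    dim = len(bounds)
    # Randomly initialized population of probable (candidate) solutions
    pop = lo + rng.random((pop_size, dim)) * (hi - lo)
    fitness = np.array([f(x) for x in pop])
    for _ in range(generations):
        for i in range(pop_size):
            # Pick three distinct population members other than i
            a, b, c = pop[rng.choice([j for j in range(pop_size) if j != i], 3, replace=False)]
            mutant = np.clip(a + F * (b - c), lo, hi)   # differential mutation
            cross = rng.random(dim) < CR
            cross[rng.integers(dim)] = True             # at least one gene from the mutant
            trial = np.where(cross, mutant, pop[i])
            f_trial = f(trial)
            if f_trial > fitness[i]:                    # greedy selection (maximization)
                pop[i], fitness[i] = trial, f_trial
    best = int(np.argmax(fitness))
    return pop[best], fitness[best]

# Example: maximize a toy surrogate model whose optimum is at x = [0.5, 0.5]
model = lambda x: -np.sum((x - 0.5) ** 2)
x_best, f_best = de_maximize(model, np.array([[0.0, 1.0], [0.0, 1.0]]))
```

In the ANN-DE setting, `f` would be the trained network's prediction of UV and `bounds` the permissible operating limits of the nine inputs.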
3. Case Study of Mono Ethylene Glycol Product UV Transmittance

Fig. 3 gives a brief process description: a glycol (90%) and water (10%) solution is fed to the drying column to remove the water from the drying column top. The bottoms of the drying column are fed to the MEG column to distil MEG from heavier glycols (namely diethylene glycol and triethylene glycol). MEG product (99.9 wt% purity) is withdrawn from the MEG column below the top packing bed. An overhead vapor purge of up to 10% of the product is taken overhead to purge the light compounds. Fig. 3 shows the location of the input parameters from the drying column and MEG column which were used to build the model of UV.

Fig. 3: Process flow diagram of drying and MEG column

3.1 Development of the ANN-Based Correlation

The development of the ANN-based correlation started with the collection of a large databank. The next step was to perform a neural regression and to validate it statistically.

3.2 Collection of Data

The quality and quantity of the data are crucial in ANN modeling, as neural learning is primarily based on these data. Hourly averages of actual plant operating data at steady state were collected for approximately one year. The data were checked and cleaned for obvious inaccuracies, retaining those records where plant operation was steady and smooth. Finally, 6273 records qualified for neural regression. This wide-ranging database includes plant operating data at various capacities, from 75% to 110% of design capacity.

3.3 Identification of Input and Output Parameters

The column performance was monitored in terms of the output variable, namely UV. Based on operating experience in the glycol plant, all physical parameters that influence UV were put in a so-called
wish-list. Out of the inputs in the wish-list, several sets of inputs were made and tested via rigorous trial and error on the ANN. The above-mentioned criteria were then used to identify the most pertinent set of input groups. Based on this analysis, the nine input variables in Table 1 were finalized to predict UV.

Table 1: Input and output variables for the ANN model

Input variables:
Reflux ratio (product flow / reflux flow)
Reflux flow (MT/hr)
MEG column top pressure (mmHg)
MEG column condenser pressure (barg)
MEG column control temperature (deg C)
MEG column feed flow (MT/hr)
Drying column control temperature (deg C)
Drying column bottom temperature (deg C)
Crude glycol reprocessing flow

Output variable:
Mono ethylene glycol UV

3.4 Neural Regression

For modeling purposes, the operating conditions data (see Table 1) can be viewed as an example input matrix (X) of size (6273 x 9), and the corresponding UV data as the example output matrix (Y) of size (6273 x 1). For ANN training, each row of X represents a nine-dimensional input vector x = [x1, x2, ..., x9]T, and the corresponding row of matrix Y denotes the one-dimensional desired (target) output vector y = [y]T. As the magnitudes of the inputs and outputs differ greatly from each other, they are normalized to the 0-1 scale. To avoid the overtraining phenomenon described earlier, 80% of the total dataset was chosen randomly for training and the remaining 20% was selected for validation and testing. It has been reported that multilayer ANN models with only one hidden layer are universal approximators. Hence a three-layer feedforward neural network (as in Fig. 1) was chosen as the regression model. As there was no prior indication of the suitability of any particular activation function, all three activation functions (sigmoid, tan hyperbolic and linear) were tried in all combinations for both the hidden layer and the output layer. The purpose was to find out which combination gives the lowest error.
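The preprocessing described above, min-max normalization to the 0-1 scale followed by a random 80/20 train/test split, can be sketched as follows. The matrices here are random stand-ins of the stated sizes, not the plant data.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.random((6273, 9)) * 100.0   # stand-in for the plant input matrix (6273 x 9)
Y = rng.random((6273, 1)) * 10.0    # stand-in for the UV output matrix (6273 x 1)

def minmax_scale(A):
    """Scale each column of A to the 0-1 range."""
    return (A - A.min(axis=0)) / (A.max(axis=0) - A.min(axis=0))

Xs, Ys = minmax_scale(X), minmax_scale(Y)

# Random 80/20 split to guard against overtraining
idx = rng.permutation(len(Xs))
split = int(0.8 * len(Xs))
X_train, Y_train = Xs[idx[:split]], Ys[idx[:split]]
X_test, Y_test = Xs[idx[split:]], Ys[idx[split:]]
```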
The number of nodes in the hidden layer is up to the discretion of the network designer and generally depends on the problem complexity. With too few nodes, the network may not be powerful enough for a given learning task. With a large number of nodes (and connections), computation becomes too expensive and time consuming. In the present study, the optimum number of nodes was found by trial and error. The statistical analysis of the network prediction is based on the following performance criteria:

1. The average absolute relative error (AARE) should be minimum:
AARE = (1/N) * Σ_{i=1..N} |y_predicted(i) - y_experimental(i)| / y_experimental(i)

2. The standard deviation (σ) should be minimum:

σ = sqrt( Σ_{i=1..N} ( |y_predicted(i) - y_experimental(i)| / y_experimental(i) - AARE )^2 / (N - 1) )

3. The cross-correlation coefficient (R) between the experimental and predicted outputs should be around unity:

R = Σ_{i=1..N} (y_experimental(i) - y_experimental(mean)) (y_predicted(i) - y_predicted(mean)) / sqrt( Σ_{i=1..N} (y_experimental(i) - y_experimental(mean))^2 * Σ_{i=1..N} (y_predicted(i) - y_predicted(mean))^2 )

4. Results and Discussions

4.1 ANN Model Development for MISO (Multi Input Single Output) System: While the training set was utilized for the error-back-propagation-based iterative updating of the network weights, the test set was used for simultaneously monitoring the generalization ability of the multilayer perceptron (MLP) model. The MLP architecture comprised nine input (N = 9) and one output (K = 1) nodes. For developing an optimal MLP model, its structural parameter, namely the number of hidden nodes (L), was varied systematically. For choosing an overall optimal network model, the criterion used was the least AARE for the test set. The optimal MLP model that satisfied this criterion has thirty hidden nodes, with a tan sigmoid activation function at the hidden nodes and a tan sigmoid activation function at the output nodes. The average error (AARE) for the training and test sets was calculated as 0.04% and 0.042%, and the corresponding cross-correlation coefficients (R) as 0.84 and 0.83, respectively. The low and comparable training and test AARE values indicate the good prediction and generalization ability of the trained network model. Good prediction and generalization performance of the model is also evident from the high and comparable R values for the outputs of the training and test sets. Figure 4 depicts a comparison of the outputs as predicted by the MLP model and their target values.
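The three statistical criteria used above (AARE, σ, and R) can be written out directly as follows. This is an illustrative sketch; the small arrays at the end are made-up example values, not plant data.

```python
import numpy as np

def aare(y_pred, y_exp):
    """Average absolute relative error."""
    return np.mean(np.abs((y_pred - y_exp) / y_exp))

def rel_err_std(y_pred, y_exp):
    """Standard deviation of the absolute relative error about the AARE."""
    e = np.abs((y_pred - y_exp) / y_exp)
    return np.sqrt(np.sum((e - e.mean()) ** 2) / (len(e) - 1))

def cross_corr(y_pred, y_exp):
    """Cross-correlation coefficient R between experimental and predicted values."""
    ye, yp = y_exp - y_exp.mean(), y_pred - y_pred.mean()
    return np.sum(ye * yp) / np.sqrt(np.sum(ye ** 2) * np.sum(yp ** 2))

# Made-up example values for illustration
y_exp = np.array([1.0, 2.0, 3.0, 4.0])
y_pred = np.array([1.1, 1.9, 3.2, 3.9])
```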
Considering the fact that all the input-output data come from a real plant with its inherent noise, the very low prediction error indicates an excellent ANN model. Once developed, this ANN model can be used to quantitatively predict the effects of all input parameters on the MEG product UV transmittance.

4.2 Actual vs Predicted UV in Plant: To validate the reliability of the model, actual plant data were taken from the DCS at different plant loads at different points in time, and the actual lab-measured UV was compared with the model-predicted UV. Fig. 4 depicts the actual versus the predicted UV.
4.3 DE-based Optimization of the ANN Model

After development of a successful ANN model of the glycol column, the next step was to find the best set of operating conditions which leads to maximum UV. The DE-based hybrid model was run and the optimum parameters were evaluated (within their permissible operating limits). Fig. 5 depicts the actual versus the optimum UV. From Figure 5 it is clear that by making small changes in the nine input parameters, a 1 to 2% rise in UV can be achieved. The program was put online, where it tells the operator what the nine input parameters should be at different times to maximize the UV on a real-time basis. After verifying all the calculations, the optimum input parameters were maintained in the actual plant and the benefit was found to be exactly as calculated. This confirms the validity and accuracy of the calculation.

Fig. 4: Actual vs predicted UV
Fig. 5: Actual vs optimum UV

5. Conclusion

In this paper, process modeling and optimization strategies integrating artificial neural networks with differential evolution have been employed for the modeling and optimization of a commercial ethylene glycol product column. In the strategy, a process model is developed using an ANN method, following which the input space of that model is optimized using DE such that the process performance is maximized. The major advantage of the ANN-DE strategy is that modeling and optimization can be conducted exclusively from the historic process data, wherein detailed knowledge of the process phenomenology (reaction mechanism, kinetics etc.) is not required. Using the ANN-DE strategy, a number of sets of optimized operating conditions leading to maximized product UV were obtained. The optimized solutions, when verified in the actual plant, resulted in a significant improvement in the product UV.

References

1. Babu, B.V. & Sastry, K.K. (1999). Estimation of heat transfer parameters in a trickle-bed reactor using differential evolution and orthogonal collocation, Comp. Chem. Engg. 23.
2. Tambe S.
S., Kulkarni B. D. and Deshpande P. B. (1996), Elements of Artificial Neural Networks with Selected Applications in Chemical Engineering, and Chemical & Biological Sciences, Simulations & Advanced Controls, Louisville, KY.
Outline Hong Chang Institute of Computing Technology, Chinese Academy of Sciences Machine Learning Methods (Fall 2012) Outline Outline I 1 Introduction 2 Single Perceptron 3 Boolean Function Learning 4
More informationData Mining. Preamble: Control Application. Industrial Researcher s Approach. Practitioner s Approach. Example. Example. Goal: Maintain T ~Td
Data Mining Andrew Kusiak 2139 Seamans Center Iowa City, Iowa 52242-1527 Preamble: Control Application Goal: Maintain T ~Td Tel: 319-335 5934 Fax: 319-335 5669 andrew-kusiak@uiowa.edu http://www.icaen.uiowa.edu/~ankusiak
More informationIntegrated Knowledge Based System for Process Synthesis
17 th European Symposium on Computer Aided Process Engineering ESCAPE17 V. Plesu and P.S. Agachi (Editors) 2007 Elsevier B.V. All rights reserved. 1 Integrated Knowledge Based System for Process Synthesis
More informationArtificial Neural Network Based Approach for Design of RCC Columns
Artificial Neural Network Based Approach for Design of RCC Columns Dr T illai, ember I Karthekeyan, Non-member Recent developments in artificial neural network have opened up new possibilities in the field
More informationForecasting of Rain Fall in Mirzapur District, Uttar Pradesh, India Using Feed-Forward Artificial Neural Network
International Journal of Engineering Science Invention ISSN (Online): 2319 6734, ISSN (Print): 2319 6726 Volume 2 Issue 8ǁ August. 2013 ǁ PP.87-93 Forecasting of Rain Fall in Mirzapur District, Uttar Pradesh,
More informationLearning and Memory in Neural Networks
Learning and Memory in Neural Networks Guy Billings, Neuroinformatics Doctoral Training Centre, The School of Informatics, The University of Edinburgh, UK. Neural networks consist of computational units
More informationIncrease of coal burning efficiency via automatic mathematical modeling. Patrick Bangert algorithmica technologies GmbH 1 Germany
Increase of coal burning efficiency via automatic mathematical modeling Patrick Bangert algorithmica technologies GmbH 1 Germany Abstract The entire process of a coal power plant from coal delivery to
More informationEstimation of Inelastic Response Spectra Using Artificial Neural Networks
Estimation of Inelastic Response Spectra Using Artificial Neural Networks J. Bojórquez & S.E. Ruiz Universidad Nacional Autónoma de México, México E. Bojórquez Universidad Autónoma de Sinaloa, México SUMMARY:
More informationCSE 417T: Introduction to Machine Learning. Final Review. Henry Chai 12/4/18
CSE 417T: Introduction to Machine Learning Final Review Henry Chai 12/4/18 Overfitting Overfitting is fitting the training data more than is warranted Fitting noise rather than signal 2 Estimating! "#$
More informationAN APPROACH TO FIND THE TRANSITION PROBABILITIES IN MARKOV CHAIN FOR EARLY PREDICTION OF SOFTWARE RELIABILITY
International Journal of Latest Research in Science and Technology Volume 2, Issue 6: Page No.111-115,November-December 2013 http://www.mnkjournals.com/ijlrst.htm ISSN (Online):2278-5299 AN APPROACH TO
More informationAddress for Correspondence
Research Article APPLICATION OF ARTIFICIAL NEURAL NETWORK FOR INTERFERENCE STUDIES OF LOW-RISE BUILDINGS 1 Narayan K*, 2 Gairola A Address for Correspondence 1 Associate Professor, Department of Civil
More informationLearning Tetris. 1 Tetris. February 3, 2009
Learning Tetris Matt Zucker Andrew Maas February 3, 2009 1 Tetris The Tetris game has been used as a benchmark for Machine Learning tasks because its large state space (over 2 200 cell configurations are
More informationElectric Load Forecasting Using Wavelet Transform and Extreme Learning Machine
Electric Load Forecasting Using Wavelet Transform and Extreme Learning Machine Song Li 1, Peng Wang 1 and Lalit Goel 1 1 School of Electrical and Electronic Engineering Nanyang Technological University
More informationNeural Networks Lecturer: J. Matas Authors: J. Matas, B. Flach, O. Drbohlav
Neural Networks 30.11.2015 Lecturer: J. Matas Authors: J. Matas, B. Flach, O. Drbohlav 1 Talk Outline Perceptron Combining neurons to a network Neural network, processing input to an output Learning Cost
More informationIntelligent Modular Neural Network for Dynamic System Parameter Estimation
Intelligent Modular Neural Network for Dynamic System Parameter Estimation Andrzej Materka Technical University of Lodz, Institute of Electronics Stefanowskiego 18, 9-537 Lodz, Poland Abstract: A technique
More informationEnhancing a Model-Free Adaptive Controller through Evolutionary Computation
Enhancing a Model-Free Adaptive Controller through Evolutionary Computation Anthony Clark, Philip McKinley, and Xiaobo Tan Michigan State University, East Lansing, USA Aquatic Robots Practical uses autonomous
More informationy(x n, w) t n 2. (1)
Network training: Training a neural network involves determining the weight parameter vector w that minimizes a cost function. Given a training set comprising a set of input vector {x n }, n = 1,...N,
More informationBuilding knowledge from plant operating data for process improvement. applications
Building knowledge from plant operating data for process improvement applications Ramasamy, M., Zabiri, H., Lemma, T. D., Totok, R. B., and Osman, M. Chemical Engineering Department, Universiti Teknologi
More informationA Hybrid Model of Wavelet and Neural Network for Short Term Load Forecasting
International Journal of Electronic and Electrical Engineering. ISSN 0974-2174, Volume 7, Number 4 (2014), pp. 387-394 International Research Publication House http://www.irphouse.com A Hybrid Model of
More informationNeural-based Monitoring of a Debutanizer. Distillation Column
Neural-based Monitoring of a Debutanizer Distillation Column L. Fortuna*, S. Licitra, M. Sinatra, M. G. Xibiliaº ERG Petroli ISAB Refinery, 96100 Siracusa, Italy e-mail: slicitra@ergpetroli.it *University
More informationFall 2003 BMI 226 / CS 426 AUTOMATIC SYNTHESIS OF IMPROVED TUNING RULES FOR PID CONTROLLERS
Notes LL-1 AUTOMATIC SYNTHESIS OF IMPROVED TUNING RULES FOR PID CONTROLLERS Notes LL-2 AUTOMATIC SYNTHESIS OF IMPROVED TUNING RULES FOR PID CONTROLLERS The PID controller was patented in 1939 by Albert
More informationNeural Networks. CSE 6363 Machine Learning Vassilis Athitsos Computer Science and Engineering Department University of Texas at Arlington
Neural Networks CSE 6363 Machine Learning Vassilis Athitsos Computer Science and Engineering Department University of Texas at Arlington 1 Perceptrons x 0 = 1 x 1 x 2 z = h w T x Output: z x D A perceptron
More informationMultilayer Perceptrons and Backpropagation
Multilayer Perceptrons and Backpropagation Informatics 1 CG: Lecture 7 Chris Lucas School of Informatics University of Edinburgh January 31, 2017 (Slides adapted from Mirella Lapata s.) 1 / 33 Reading:
More informationLecture 6. Notes on Linear Algebra. Perceptron
Lecture 6. Notes on Linear Algebra. Perceptron COMP90051 Statistical Machine Learning Semester 2, 2017 Lecturer: Andrey Kan Copyright: University of Melbourne This lecture Notes on linear algebra Vectors
More informationA Novel Activity Detection Method
A Novel Activity Detection Method Gismy George P.G. Student, Department of ECE, Ilahia College of,muvattupuzha, Kerala, India ABSTRACT: This paper presents an approach for activity state recognition of
More informationSerious limitations of (single-layer) perceptrons: Cannot learn non-linearly separable tasks. Cannot approximate (learn) non-linear functions
BACK-PROPAGATION NETWORKS Serious limitations of (single-layer) perceptrons: Cannot learn non-linearly separable tasks Cannot approximate (learn) non-linear functions Difficult (if not impossible) to design
More informationMODULE -4 BAYEIAN LEARNING
MODULE -4 BAYEIAN LEARNING CONTENT Introduction Bayes theorem Bayes theorem and concept learning Maximum likelihood and Least Squared Error Hypothesis Maximum likelihood Hypotheses for predicting probabilities
More informationWind Power Forecasting using Artificial Neural Networks
Wind Power Forecasting using Artificial Neural Networks This paper aims at predicting the power output of wind turbines using artificial neural networks,two different algorithms and models were trained
More informationConfidence Estimation Methods for Neural Networks: A Practical Comparison
, 6-8 000, Confidence Estimation Methods for : A Practical Comparison G. Papadopoulos, P.J. Edwards, A.F. Murray Department of Electronics and Electrical Engineering, University of Edinburgh Abstract.
More informationPrediction of Monthly Rainfall of Nainital Region using Artificial Neural Network (ANN) and Support Vector Machine (SVM)
Vol- Issue-3 25 Prediction of ly of Nainital Region using Artificial Neural Network (ANN) and Support Vector Machine (SVM) Deepa Bisht*, Mahesh C Joshi*, Ashish Mehta** *Department of Mathematics **Department
More informationA Logarithmic Neural Network Architecture for Unbounded Non-Linear Function Approximation
1 Introduction A Logarithmic Neural Network Architecture for Unbounded Non-Linear Function Approximation J Wesley Hines Nuclear Engineering Department The University of Tennessee Knoxville, Tennessee,
More informationLecture 4: Perceptrons and Multilayer Perceptrons
Lecture 4: Perceptrons and Multilayer Perceptrons Cognitive Systems II - Machine Learning SS 2005 Part I: Basic Approaches of Concept Learning Perceptrons, Artificial Neuronal Networks Lecture 4: Perceptrons
More informationFeedforward Neural Nets and Backpropagation
Feedforward Neural Nets and Backpropagation Julie Nutini University of British Columbia MLRG September 28 th, 2016 1 / 23 Supervised Learning Roadmap Supervised Learning: Assume that we are given the features
More informationSolubility Modeling of Diamines in Supercritical Carbon Dioxide Using Artificial Neural Network
Australian Journal of Basic and Applied Sciences, 5(8): 166-170, 2011 ISSN 1991-8178 Solubility Modeling of Diamines in Supercritical Carbon Dioxide Using Artificial Neural Network 1 Mehri Esfahanian,
More informationDifferent Criteria for Active Learning in Neural Networks: A Comparative Study
Different Criteria for Active Learning in Neural Networks: A Comparative Study Jan Poland and Andreas Zell University of Tübingen, WSI - RA Sand 1, 72076 Tübingen, Germany Abstract. The field of active
More informationNeural Network Identification of Non Linear Systems Using State Space Techniques.
Neural Network Identification of Non Linear Systems Using State Space Techniques. Joan Codina, J. Carlos Aguado, Josep M. Fuertes. Automatic Control and Computer Engineering Department Universitat Politècnica
More informationSPSS, University of Texas at Arlington. Topics in Machine Learning-EE 5359 Neural Networks
Topics in Machine Learning-EE 5359 Neural Networks 1 The Perceptron Output: A perceptron is a function that maps D-dimensional vectors to real numbers. For notational convenience, we add a zero-th dimension
More informationECE662: Pattern Recognition and Decision Making Processes: HW TWO
ECE662: Pattern Recognition and Decision Making Processes: HW TWO Purdue University Department of Electrical and Computer Engineering West Lafayette, INDIANA, USA Abstract. In this report experiments are
More information22c145-Fall 01: Neural Networks. Neural Networks. Readings: Chapter 19 of Russell & Norvig. Cesare Tinelli 1
Neural Networks Readings: Chapter 19 of Russell & Norvig. Cesare Tinelli 1 Brains as Computational Devices Brains advantages with respect to digital computers: Massively parallel Fault-tolerant Reliable
More informationProcess design decisions and project economics Dr. V. S. Moholkar Department of chemical engineering Indian Institute of Technology, Guwahati
Process design decisions and project economics Dr. V. S. Moholkar Department of chemical engineering Indian Institute of Technology, Guwahati Module - 02 Flowsheet Synthesis (Conceptual Design of a Chemical
More informationCOMP 551 Applied Machine Learning Lecture 14: Neural Networks
COMP 551 Applied Machine Learning Lecture 14: Neural Networks Instructor: Ryan Lowe (ryan.lowe@mail.mcgill.ca) Slides mostly by: Class web page: www.cs.mcgill.ca/~hvanho2/comp551 Unless otherwise noted,
More informationLecture 6. Regression
Lecture 6. Regression Prof. Alan Yuille Summer 2014 Outline 1. Introduction to Regression 2. Binary Regression 3. Linear Regression; Polynomial Regression 4. Non-linear Regression; Multilayer Perceptron
More informationHow New Information Criteria WAIC and WBIC Worked for MLP Model Selection
How ew Information Criteria WAIC and WBIC Worked for MLP Model Selection Seiya Satoh and Ryohei akano ational Institute of Advanced Industrial Science and Tech, --7 Aomi, Koto-ku, Tokyo, 5-6, Japan Chubu
More informationKeywords- Source coding, Huffman encoding, Artificial neural network, Multilayer perceptron, Backpropagation algorithm
Volume 4, Issue 5, May 2014 ISSN: 2277 128X International Journal of Advanced Research in Computer Science and Software Engineering Research Paper Available online at: www.ijarcsse.com Huffman Encoding
More informationPrediction of Hourly Solar Radiation in Amman-Jordan by Using Artificial Neural Networks
Int. J. of Thermal & Environmental Engineering Volume 14, No. 2 (2017) 103-108 Prediction of Hourly Solar Radiation in Amman-Jordan by Using Artificial Neural Networks M. A. Hamdan a*, E. Abdelhafez b
More informationNeural network modelling of reinforced concrete beam shear capacity
icccbe 2010 Nottingham University Press Proceedings of the International Conference on Computing in Civil and Building Engineering W Tizani (Editor) Neural network modelling of reinforced concrete beam
More informationPOWER SYSTEM DYNAMIC SECURITY ASSESSMENT CLASSICAL TO MODERN APPROACH
Abstract POWER SYSTEM DYNAMIC SECURITY ASSESSMENT CLASSICAL TO MODERN APPROACH A.H.M.A.Rahim S.K.Chakravarthy Department of Electrical Engineering K.F. University of Petroleum and Minerals Dhahran. Dynamic
More informationUnit III. A Survey of Neural Network Model
Unit III A Survey of Neural Network Model 1 Single Layer Perceptron Perceptron the first adaptive network architecture was invented by Frank Rosenblatt in 1957. It can be used for the classification of
More informationRobust Pareto Design of GMDH-type Neural Networks for Systems with Probabilistic Uncertainties
. Hybrid GMDH-type algorithms and neural networks Robust Pareto Design of GMDH-type eural etworks for Systems with Probabilistic Uncertainties. ariman-zadeh, F. Kalantary, A. Jamali, F. Ebrahimi Department
More informationARTIFICIAL NEURAL NETWORK WITH HYBRID TAGUCHI-GENETIC ALGORITHM FOR NONLINEAR MIMO MODEL OF MACHINING PROCESSES
International Journal of Innovative Computing, Information and Control ICIC International c 2013 ISSN 1349-4198 Volume 9, Number 4, April 2013 pp. 1455 1475 ARTIFICIAL NEURAL NETWORK WITH HYBRID TAGUCHI-GENETIC
More informationLecture 7: DecisionTrees
Lecture 7: DecisionTrees What are decision trees? Brief interlude on information theory Decision tree construction Overfitting avoidance Regression trees COMP-652, Lecture 7 - September 28, 2009 1 Recall:
More informationDeep Feedforward Networks
Deep Feedforward Networks Liu Yang March 30, 2017 Liu Yang Short title March 30, 2017 1 / 24 Overview 1 Background A general introduction Example 2 Gradient based learning Cost functions Output Units 3
More informationOcean Based Water Allocation Forecasts Using an Artificial Intelligence Approach
Ocean Based Water Allocation Forecasts Using an Artificial Intelligence Approach Khan S 1, Dassanayake D 2 and Rana T 2 1 Charles Sturt University and CSIRO Land and Water, School of Science and Tech,
More informationNeural Networks for Protein Structure Prediction Brown, JMB CS 466 Saurabh Sinha
Neural Networks for Protein Structure Prediction Brown, JMB 1999 CS 466 Saurabh Sinha Outline Goal is to predict secondary structure of a protein from its sequence Artificial Neural Network used for this
More informationShear Strength of Slender Reinforced Concrete Beams without Web Reinforcement
RESEARCH ARTICLE OPEN ACCESS Shear Strength of Slender Reinforced Concrete Beams without Web Reinforcement Prof. R.S. Chavan*, Dr. P.M. Pawar ** (Department of Civil Engineering, Solapur University, Solapur)
More informationSolution of Stiff Differential Equations & Dynamical Systems Using Neural Network Methods
Advances in Dynamical Systems and Applications. ISSN 0973-5321, Volume 12, Number 1, (2017) pp. 21-28 Research India Publications http://www.ripublication.com Solution of Stiff Differential Equations &
More informationPrediction of gas emission quantity using artificial neural networks
Available online www.jocpr.com Journal of Chemical and Pharmaceutical Research, 2014, 6(6):1653-165 Research Article ISSN : 095-384 CODEN(USA) : JCPRC5 Prediction of gas emission quantity using artificial
More information