Research Article

APPLICATION OF ARTIFICIAL NEURAL NETWORK FOR INTERFERENCE STUDIES OF LOW-RISE BUILDINGS

1 Narayan K*, 2 Gairola A

Address for Correspondence
1 Associate Professor, Department of Civil Engineering, Institute of Engineering & Technology, Lucknow, India
2 Professor, Department of Civil Engineering, I.I.T. Roorkee, India

ABSTRACT
Artificial neural network (ANN) representations are capable of developing functional relationships from discrete values of input-output quantities obtained from computational approaches or experimental results. Data obtained from wind tunnel tests for interference effects, expressed by the Interference Factor (IF), on a gable roof building (25° roof slope) have been used for training the network. Training of the neural network is carried out by inputting data sets consisting of selected locations of the interfering building and the corresponding values of IF for the worst design pressure coefficients (Cpq), determined independently for each zone of the roof. The trained network is then expected to predict the IF for Cpq for locations of the interfering building not covered in the training data set. Correlation plots and contours show that most of the ANN-predicted values are very close to the corresponding experimental values. For the case studied, a 50% reduction in experimental work can be achieved by using neural network modeling for interference studies in low buildings, without sacrificing accuracy.

KEYWORDS: Artificial neural network; gable roof building; interference effects; design pressure coefficients.

INTRODUCTION
The prediction of wind-induced pressure coefficients on the wide variety of building geometries is of considerable practical importance. An extensive amount of wind tunnel testing is required to determine wind loads on low buildings, since a large number of parameters with wide ranges of values affect them.
Time-consuming and costly wind tunnel tests can only cover a limited number of basic configurations. Artificial Neural Networks (ANNs) are inspired by the biological sciences, attempting to emulate the behaviour and complex functioning of the human brain in recognizing patterns. They are composed of several layers of many interconnected neurons operating in parallel. Since ANNs have many inputs and outputs (responses) and allow nonlinearity in the transfer functions of the neurons, they can be used to solve multivariate and nonlinear modeling problems. The use of ANNs has been reported in a number of civil engineering applications. Vanluchene and Sun (1) demonstrated the application of ANNs in structural engineering, presenting the application of back-propagation to concrete beam design and rectangular plate analysis problems. Khanduri et al. (2) presented the abilities of neural networks for solving wind interference problems among tall buildings. They concluded that the ability of a neural network to be trained to generalize, when presented with limited data examples, makes it an attractive tool for knowledge acquisition on wind interference effects, where no acceptable theory or empirical generalization is available at present. Successful applications of ANNs were similarly reported by Deshpande and Mukherjee (3) on the initial design process and on a structural design expert system. Girma (4) applied the cascade correlation learning algorithm for determination of wind pressure distribution in buildings and found that the error in the predicted values was less than 15%. Kwatra (5) successfully predicted interference effects on a gable roof building using the back-propagation training algorithm and showed that the predicted values lie within 5% of the corresponding experimental values.
Chen et al. (6) presented an artificial neural network approach for the prediction of mean and root-mean-square pressure coefficients on the gable roofs of low buildings. The performance of the ANN is demonstrated by the prediction of the pressure coefficients for roof tap locations in a corner bay. The authors concluded that the approach could be used to expand aerodynamic databases to a larger variety of geometries and increase their practical feasibility. Artificial neural network models have the ability to learn and generalize a problem even when the input data contain errors or are incomplete. Neural networks exhibit several characteristics that make them suitable for studying interference effects in wind engineering. Training teaches a network to capture significant features or relationships in the data; a trained network is thus capable of exhibiting these relationships on new, related data. The ability of neural networks to perform a computation when trained with examples makes them applicable in situations where a model is needed, yet no currently acceptable theory exists for describing the input/output pattern. Neural networks are composed of several layers of simple elements operating in parallel. Each layer has a weighted input vector and an output vector. Layers whose outputs become the final network output are called output layers; all other layers are called hidden layers. The first set of neurons, referred to as the input layer, performs no computations and serves only as distribution points. The network function is
determined largely by the connections between neurons or processors. Each neuron accepts a set of inputs from other neurons and computes an output which is propagated to connected neurons. Networks are trained with available test examples to recognize input patterns and produce appropriate output responses. Each connection is associated with a measure of the strength of the connection, called its weight, which is used to modify the signals. For a given neural architecture, it is the weights of the connections between the neurons that determine the output. Changing the strength or weight of connections with experience (new examples) is akin to learning, the memory of a network being embedded in the strengths (weights) of the connections. An elementary back-propagation neuron with n inputs and weighted connections is shown in Fig. 1. The activity pattern in a hidden layer forms an encoding of the significant features of the input. The collective activity of all the hidden units (neurons or nodes) determines the behaviour of a network. Hidden units work as internal representations of the inputs, and outputs are generated from these internal representations rather than from the original example pattern. Thus, an appropriate output pattern can be generated from any input-output pattern by employing an adequate number of hidden units. There are currently no universal rules for selecting the number of hidden nodes and layers; these vary from problem to problem and are generally fixed after several trials with the network. A network may not get trained to the acceptable level of error with too few hidden nodes, whereas with too many hidden nodes the network may fit slightly better but not generalize well. It then becomes merely a look-up table, and training time and the size of the weight matrices in the solution increase.
Generally one hidden layer is enough for most problems, but for very complex, fuzzy and highly non-linear problems, more than one hidden layer may be required to capture the significant features in the data. The changing of interconnection patterns and the setting of the weights that determine the strength of a connection is done by training the network to exhibit the correct behaviour. One of the most common training algorithms is known as the Generalized Delta Rule or back-propagation neural network (BPNN) algorithm (Rumelhart et al., 1986). The present study uses the back-propagation algorithm for modeling wind interference problems.

DEVELOPMENT AND VALIDATION OF ANN ALGORITHM
The back-propagation learning network is assumed here to be composed of only three layers: input-hidden-output. The application of back-propagation for training a network involves two phases. In the first (feed-forward) phase, each input is weighted with an appropriate weight w (initialized to small random numbers, usually between -0.3 and +0.3) and the products are summed up at each neuron (node) of the network. This summation S of the weighted inputs at each neuron is then modified by an activation or transfer function F, thereby generating an output signal O. A continuous, non-linear logistic or sigmoid (meaning S-shaped) transfer function is commonly used, mainly because it meets the differentiability requirement of the back-propagation algorithm. The sigmoid transfer function forces the output to lie between 0 and 1 as the neuron's net input (S) goes from negative to positive infinity. The summation is continued up to the output layer of the network, the outputs of each layer serving as inputs to subsequent layers. The transfer function (see Fig. 1) is given by:

F(S) = 1 / (1 + e^(-S)) (1)

Fig. 1 Basic Neuron Model and the Feed Forward Phase
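The feed-forward phase described above can be sketched in a few lines of Python. This is an illustrative sketch, not the authors' C software; the layer sizes and sample inputs are arbitrary, while the weight range (-0.3 to +0.3) and the sigmoid of equation (1) follow the text:

```python
import math
import random

rnd = random.Random(0)  # fixed seed so the sketch is reproducible

def sigmoid(s):
    # transfer function of equation (1): F(S) = 1 / (1 + e^(-S))
    return 1.0 / (1.0 + math.exp(-s))

def init_weights(n_from, n_to):
    # small random initial weights, usually between -0.3 and +0.3
    return [[rnd.uniform(-0.3, 0.3) for _ in range(n_to)] for _ in range(n_from)]

def feed_forward(layer_inputs, weights):
    # each neuron forms the summation S of its weighted inputs and
    # applies the sigmoid, producing an output signal O in (0, 1)
    return [sigmoid(sum(x * row[j] for x, row in zip(layer_inputs, weights)))
            for j in range(len(weights[0]))]

# the outputs of each layer serve as inputs to the subsequent layer
w_in_hid = init_weights(2, 4)   # 2 inputs, 4 hidden neurons (arbitrary sizes)
w_hid_out = init_weights(4, 1)  # 4 hidden neurons, 1 output neuron
hidden = feed_forward([0.3, 0.7], w_in_hid)
output = feed_forward(hidden, w_hid_out)
```

Note how the sigmoid keeps every neuron output strictly between 0 and 1, which is what makes the derivative O(1 - O) used in the back-propagation phase well behaved.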
Fig. 2 Overall BPNN Architecture and the Back Propagation Phase

In the second (back-propagation) phase, the error between the output of the network and the training set is propagated backward through the network, where it is used to adjust the weights according to the back-propagation algorithm. The algorithm makes a small adjustment to the strength of each connection in such a manner that each adjustment reduces the total network error in the direction of the steepest descent of the error; this is called the gradient descent method. Derivatives of the error are calculated for the network's output layer and then back-propagated through the network until all the weights are adjusted and the sum squared error of the network is within acceptable limits. The back-propagation process is briefly described below and illustrated in Fig. 2. At the output layer, the error (δr) between the actual network output (Or at the r-th neuron) and the desired or training example output (Tr at the r-th neuron) is calculated by multiplying the difference between Tr and Or by the derivative of the transfer function, ∂F(Sr)/∂Sr = Or(1 - Or). Thus,

δr = (Tr - Or) Or(1 - Or) (2)

This error is back-propagated from the output layer, where it is used to adjust the weights of the connections between the hidden layer and the output layer, according to the back-propagation algorithm, as follows:

ΔWqr = η δr Oq (3)

Wqr(m+1) = Wqr(m) + ΔWqr (4)

where η is the training rate coefficient; δr is the error for neuron r in the output layer; Oq is the value of the output for neuron q (hidden layer); Wqr(m) is the value of the weight of the connection from neuron q in the hidden layer to neuron r in the output layer at iteration m, before adjustment, and Wqr(m+1) is the value of the weight at iteration m+1, after adjustment. The training rate coefficient (η) serves to adjust the size of the average weight change, and is typically set between 0.01 and 1.0.
The larger this coefficient, the larger the changes in the weights and the more rapid the learning. However, in some cases, larger training rates lead to oscillation of the weight changes and the model does not converge to a solution. Errors at the output layer are back-propagated to the hidden layers preceding the output layer, and weight adjustments for the connections from the input layer to the hidden layer are made as follows:

δq = (Σ(i=1 to n) δi Wqi) Oq(1 - Oq) (5)

ΔWpq = η δq Op (6)

Wpq(m+1) = Wpq(m) + ΔWpq (7)
where δq is the error for neuron q in the hidden layer; δi is the error for neuron i in the output layer; Wqi is the value of the weight of the connection from neuron q in the hidden layer to neuron i in the output layer; n is the number of neurons in the output layer; Op is the value of the output for neuron p (input layer); Wpq(m) is the value of the weight from neuron p in the input layer to neuron q in the hidden layer at iteration m, before adjustment, and Wpq(m+1) is the value of the weight at iteration m+1, after adjustment. In practice, a momentum term is frequently added to equation (6) as an aid to more rapid convergence in certain problem domains. The momentum takes into account the effect of past weight changes; the momentum constant, β, determines the emphasis to place on this term. Momentum has the effect of smoothing the error surface in weight space by filtering out high-frequency variations in the weight changes. The weight change is adjusted in the presence of momentum by:

ΔWpq(m+1) = η δq Op + β ΔWpq(m) (8)

The above steps are repeated for all training examples and the squares of all the errors between the network outputs and the training example outputs are added up. For a network trained on l examples, each having n outputs, the average mean square error E is calculated as follows:

E = (1/(2l)) Σ(j=1 to l) Σ(i=1 to n) (Tij - Oij)² (9)

where Tij is the i-th output of the j-th training example set and Oij is the network output at the i-th output neuron for the j-th training example set. The entire training process is repeated until the behaviour of the network is satisfactory, i.e., the average mean square error E is less than or equal to a user-defined threshold. After being trained on a large number of cases, a network will not only be able to map the input and output patterns of the training examples, it will also perform well on many cases which were not present in the training set, i.e., it will display some ability to generalize. Software implementing this algorithm has been developed in the C programming language.
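Equations (1) to (9) together define one training cycle. The following sketch (illustrative Python, not the authors' C software) implements both phases for a three-layer network; the default η and β are the starting values used later in this study, and the demo data are a made-up stand-in for real training examples:

```python
import math
import random

class BPNN:
    """Three-layer (input-hidden-output) back-propagation network,
    a sketch of equations (1)-(9); sizes and parameters are illustrative."""

    def __init__(self, n_in, n_hid, n_out, eta=0.05, beta=0.5):
        rnd = random.Random(0)
        make = lambda a, b: [[rnd.uniform(-0.3, 0.3) for _ in range(b)]
                             for _ in range(a)]
        self.w_pq = make(n_in, n_hid)    # input -> hidden weights
        self.w_qr = make(n_hid, n_out)   # hidden -> output weights
        self.dw_pq = [[0.0] * n_hid for _ in range(n_in)]   # past changes
        self.dw_qr = [[0.0] * n_out for _ in range(n_hid)]  # (for momentum)
        self.eta, self.beta = eta, beta

    @staticmethod
    def _f(s):
        return 1.0 / (1.0 + math.exp(-s))  # sigmoid, eq. (1)

    def forward(self, x):
        # feed-forward phase: outputs of each layer feed the next
        o_q = [self._f(sum(x[p] * self.w_pq[p][q] for p in range(len(x))))
               for q in range(len(self.w_pq[0]))]
        o_r = [self._f(sum(o_q[q] * self.w_qr[q][r] for q in range(len(o_q))))
               for r in range(len(self.w_qr[0]))]
        return o_q, o_r

    def train_example(self, x, t):
        o_q, o_r = self.forward(x)
        # output-layer error, eq. (2)
        d_r = [(t[r] - o_r[r]) * o_r[r] * (1.0 - o_r[r])
               for r in range(len(o_r))]
        # hidden-layer error, eq. (5)
        d_q = [sum(d_r[r] * self.w_qr[q][r] for r in range(len(d_r)))
               * o_q[q] * (1.0 - o_q[q]) for q in range(len(o_q))]
        # weight updates with momentum, eqs. (3)-(4) and (6)-(8)
        for q in range(len(o_q)):
            for r in range(len(d_r)):
                self.dw_qr[q][r] = (self.eta * d_r[r] * o_q[q]
                                    + self.beta * self.dw_qr[q][r])
                self.w_qr[q][r] += self.dw_qr[q][r]
        for p in range(len(x)):
            for q in range(len(d_q)):
                self.dw_pq[p][q] = (self.eta * d_q[q] * x[p]
                                    + self.beta * self.dw_pq[p][q])
                self.w_pq[p][q] += self.dw_pq[p][q]
        return sum((t[r] - o_r[r]) ** 2 for r in range(len(t)))

    def train(self, examples, threshold=0.01, max_epochs=20000):
        # repeat until the average mean square error E, eq. (9),
        # drops to the user-defined threshold
        for _ in range(max_epochs):
            sq = sum(self.train_example(x, t) for x, t in examples)
            E = sq / (2 * len(examples))
            if E <= threshold:
                break
        return E

# demo: learn the OR pattern as a stand-in for wind tunnel training data
net = BPNN(n_in=2, n_hid=4, n_out=1)
examples = [([0.0, 0.0], [0.0]), ([0.0, 1.0], [1.0]),
            ([1.0, 0.0], [1.0]), ([1.0, 1.0], [1.0])]
final_error = net.train(examples)
```

The separate `dw_pq`/`dw_qr` arrays hold the previous weight changes, which is what lets equation (8) blend each new gradient step with a fraction β of the last one.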
The software allows selection of the number of neurons in the input layer, the number of hidden layers, the number of neurons in each hidden layer, and the number of neurons in the output layer. It also allows selection of the learning rate parameter and the momentum factor. The software generates random initial weights (within a specified range) according to the architecture of the network.

ANN MODELING FOR INTERFERENCE STUDIES
Extensive wind tunnel experimentation remains the source of knowledge on interference effects on low buildings. It involves detailed parametric studies on scaled building models in the wind tunnel, incorporating a large number of variables, and such an experimental programme is quite demanding in terms of time and resources. The complex nature of the problem and the large number of variables involved make it impossible to test all building interference situations. To date, no analytical approach or mathematical model based on experimental results is available to predict quantitatively the extent of interference, so wind tunnel testing remains the only way to study interference effects on low buildings. To economise on this effort, there is a standing need to explore ways of predicting wind loads, including interference effects, from a comparatively reduced test programme. Artificial neural network (ANN) representations are capable of developing functional relationships from discrete values of input-output quantities obtained from computational approaches or experimental results. This generalization makes it possible to train a network on a representative set of input-output examples and obtain good results for new inputs without training the network on all possible input-output examples. Data obtained from wind tunnel testing for interference effects (IF) on a gable roof building have been used for training the network.
The interference factor (IF) for the worst design pressure coefficients, independent of wind direction, for each zone of the roof under single building interference has been taken as the output parameter of the neural network. The location of the interfering building(s) has been considered as the input parameter. Training of the neural network is carried out with data sets consisting of selected locations of the interfering building(s) and the corresponding values of IF for Cpq for each zone of the roof. The trained network is then expected to predict the IF for Cpq for each zone of the roof for locations of the interfering building(s) not covered in the training data set.

ANN APPLICATION FOR INTERFERENCE WITH A SINGLE BUILDING
Extensive wind tunnel testing has been carried out to study the effect of single building interference on the 25º roofed building with overhangs. Data obtained from this study have been used for training the network, and training has been carried out for each zone of the roof.

SELECTION OF NEURAL NETWORK ARCHITECTURE
The neural network trained for each zone of the roof consists of two hidden layers together with an input layer and an output layer. The input layer has two neurons representing the input parameters, the (x) and (y) coordinates of the position of the interfering building. The output layer has one neuron, representing the Interference Factor (IF) for Cpq for the concerned zone at the corresponding position of the interfering building. Each hidden layer consists of twenty-six neurons.
Fifteen positions of the interfering building have been selected for training the neural network for each zone. The values of the learning rate parameter (η) and momentum coefficient (β) have been changed during training. Training of the neural network started with a value of 0.05 for the learning rate parameter and 0.50 for the momentum coefficient. After some cycles of training, when the convergence of the network became slow, the values of these parameters were increased in steps; finally, the values of η and β converged to 0.35 and 0.95 respectively. Training of the network was carried out until the average mean square error of the network reduced to an acceptably small value. The network has been tested for an additional fifteen interfering positions, i.e., a total of thirty interfering positions of input data for each zone (Fig. 3). Correlation plots and Interference Factor contours for each zone are shown in Figs. 4 to 19.

Fig. 3(a) Location of different zones on building roof
Fig. 3(b) Locations of Single Similar Building used for A.N.N.
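The correlation plots in Figs. 4 to 19 compare experimental and ANN-predicted IF values zone by zone; the agreement they show can be summarized by a correlation coefficient. A minimal sketch follows; the IF values in it are made-up placeholders, not data from this study:

```python
import math

def correlation(experimental, predicted):
    # Pearson correlation coefficient between experimental and
    # ANN-predicted values; 1.0 indicates perfect linear agreement
    n = len(experimental)
    mx = sum(experimental) / n
    my = sum(predicted) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(experimental, predicted))
    sx = math.sqrt(sum((x - mx) ** 2 for x in experimental))
    sy = math.sqrt(sum((y - my) ** 2 for y in predicted))
    return cov / (sx * sy)

# hypothetical IF values for one roof zone at five interfering positions
measured = [0.92, 1.05, 0.88, 1.10, 0.97]
ann_pred = [0.94, 1.03, 0.90, 1.08, 0.96]
r = correlation(measured, ann_pred)
```

A value of r close to 1 for each zone corresponds to the tight clustering about the diagonal visible in the correlation plots.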
Fig 4 Correlation Plot Between Experimental and ANN Predicted Values for Zone A for Single Similar Building
Fig 5 Interference Factor (IF) Contours for Cpq, Experimental and ANN Predicted, for Zone A due to Change in Position of Single Similar Building
Fig 6 Correlation Plot Between Experimental and ANN Predicted Values for Zone B1 for Single Similar Building
Fig 7 I.F. Contours for Cpq, Experimental and ANN Predicted, for Zone B1 due to Change in Position of Single Similar Building
Fig 8 Correlation Plot Between Experimental and ANN Predicted Values for Zone C for Single Similar Building
Fig 9 I.F. Contours for Cpq, Experimental and ANN Predicted, for Zone C due to Change in Position of Single Similar Building
Fig 10 Correlation Plot Between Experimental and ANN Predicted Values for Zone D for Single Similar Building
Fig 11 I.F. Contours for Cpq, Experimental and ANN Predicted, for Zone D due to Change in Position of Single Similar Building
Fig 12 Correlation Plot Between Experimental and ANN Predicted Values for Zone E for Single Similar Building
Fig 13 I.F. Contours for Cpq, Experimental and ANN Predicted, for Zone E due to Change in Position of Single Similar Building
Fig 14 Correlation Plot Between Experimental and ANN Predicted Values for Zone F for Single Similar Building
Fig 15 I.F. Contours for Cpq, Experimental and ANN Predicted, for Zone F due to Change in Position of Single Similar Building
Fig 16 Correlation Plot Between Experimental and ANN Predicted Values for Zone G for Single Similar Building
Fig 17 I.F. Contours for Cpq, Experimental and ANN Predicted, for Zone G due to Change in Position of Single Similar Building
Fig 18 Correlation Plot Between Experimental and ANN Predicted Values for Zone H for Single Similar Building
Fig 19 I.F. Contours for Cpq, Experimental and ANN Predicted, for Zone H due to Change in Position of Single Similar Building

COMPARISON OF ANN PREDICTIONS WITH EXPERIMENTAL VALUES
For each zone of the building roof, training of the neural network has been performed separately. Prediction of the values of IF for Cpq for each zone is then carried out through the trained network for all positions of the interfering building. Correlation plots between ANN predicted values and experimental values of IF for Cpq for the various zones of the roof, together with their contours, are plotted in Figs. 4 to 19. It can be seen from these correlation plots that most of the predicted values are very close to the corresponding experimental values. Contours of the values of IF for Cpq predicted by ANN follow
closely the pattern obtained experimentally for all the zones. Moreover, the contours of the predicted values of Cpq show a generalized trend of variation, as the ANN predictions attempt to map all the input-output cases.

CONCLUSION
It can be concluded from these comparisons that almost a 50% reduction in experimental work can be achieved by using neural network modeling for interference studies in low buildings.

ACKNOWLEDGEMENT
The work presented in this paper is part of the Ph.D. thesis of the first author, awarded by the Department of Civil Engineering, I.I.T. Roorkee, Roorkee, India.

REFERENCES
1. Vanluchene, R.D. and Sun, R. (1990), Neural networks in structural engineering, Microcomputers in Civil Engineering, Vol. 5, Elsevier Science Publishing Co. Inc., New York.
2. Khanduri, A.C., Bedard, C. and Stathopoulos, T. (1995), Neural network modeling of wind-induced interference effects, Proc. 9th Int. Conf. on Wind Engineering, New Delhi, India.
3. Deshpande, J.M. and Mukherjee, A. (1995), Artificial neural network in modeling structural stability of compression members, Proc. International Conf. on Stability of Structures, ICSS.
4. Girma, T.B. (1998), Application of artificial neural network for determination of wind pressure distribution in buildings, M.E. thesis, University of Roorkee, Roorkee, India.
5. Kwatra, N. (2000), Experimental studies and ANN modeling of wind loads on low buildings, Ph.D. thesis, Department of Civil Engineering, University of Roorkee, Roorkee.
6. Chen, Y., Kopp, G.A. and Surry, D. (2003), Prediction of pressure coefficients on roofs of low buildings using artificial neural networks, Journal of Wind Engineering and Industrial Aerodynamics, Vol. 91.
7. Rumelhart, D.E., Hinton, G.E. and Williams, R.J. (1986), Learning internal representations by error propagation, Parallel Distributed Processing: Explorations in the Microstructures of Cognition, Vol. 1: Foundations, MIT Press, Cambridge, MA, USA.
More informationMachine Learning. Neural Networks. (slides from Domingos, Pardo, others)
Machine Learning Neural Networks (slides from Domingos, Pardo, others) For this week, Reading Chapter 4: Neural Networks (Mitchell, 1997) See Canvas For subsequent weeks: Scaling Learning Algorithms toward
More informationMultilayer Perceptron Tutorial
Multilayer Perceptron Tutorial Leonardo Noriega School of Computing Staffordshire University Beaconside Staffordshire ST18 0DG email: l.a.noriega@staffs.ac.uk November 17, 2005 1 Introduction to Neural
More informationAn artificial neural networks (ANNs) model is a functional abstraction of the
CHAPER 3 3. Introduction An artificial neural networs (ANNs) model is a functional abstraction of the biological neural structures of the central nervous system. hey are composed of many simple and highly
More informationArtificial Neural Networks
Artificial Neural Networks Threshold units Gradient descent Multilayer networks Backpropagation Hidden layer representations Example: Face Recognition Advanced topics 1 Connectionist Models Consider humans:
More informationSIMULATION OF FREEZING AND FROZEN SOIL BEHAVIOURS USING A RADIAL BASIS FUNCTION NEURAL NETWORK
SIMULATION OF FREEZING AND FROZEN SOIL BEHAVIOURS USING A RADIAL BASIS FUNCTION NEURAL NETWORK Z.X. Zhang 1, R.L. Kushwaha 2 Department of Agricultural and Bioresource Engineering University of Saskatchewan,
More informationUnit III. A Survey of Neural Network Model
Unit III A Survey of Neural Network Model 1 Single Layer Perceptron Perceptron the first adaptive network architecture was invented by Frank Rosenblatt in 1957. It can be used for the classification of
More informationNeural Networks with Applications to Vision and Language. Feedforward Networks. Marco Kuhlmann
Neural Networks with Applications to Vision and Language Feedforward Networks Marco Kuhlmann Feedforward networks Linear separability x 2 x 2 0 1 0 1 0 0 x 1 1 0 x 1 linearly separable not linearly separable
More informationArtificial Neural Networks" and Nonparametric Methods" CMPSCI 383 Nov 17, 2011!
Artificial Neural Networks" and Nonparametric Methods" CMPSCI 383 Nov 17, 2011! 1 Todayʼs lecture" How the brain works (!)! Artificial neural networks! Perceptrons! Multilayer feed-forward networks! Error
More informationEE04 804(B) Soft Computing Ver. 1.2 Class 2. Neural Networks - I Feb 23, Sasidharan Sreedharan
EE04 804(B) Soft Computing Ver. 1.2 Class 2. Neural Networks - I Feb 23, 2012 Sasidharan Sreedharan www.sasidharan.webs.com 3/1/2012 1 Syllabus Artificial Intelligence Systems- Neural Networks, fuzzy logic,
More informationApproximate solutions of dual fuzzy polynomials by feed-back neural networks
Available online at wwwispacscom/jsca Volume 2012, Year 2012 Article ID jsca-00005, 16 pages doi:105899/2012/jsca-00005 Research Article Approximate solutions of dual fuzzy polynomials by feed-back neural
More informationy(x n, w) t n 2. (1)
Network training: Training a neural network involves determining the weight parameter vector w that minimizes a cost function. Given a training set comprising a set of input vector {x n }, n = 1,...N,
More informationConvergence of Hybrid Algorithm with Adaptive Learning Parameter for Multilayer Neural Network
Convergence of Hybrid Algorithm with Adaptive Learning Parameter for Multilayer Neural Network Fadwa DAMAK, Mounir BEN NASR, Mohamed CHTOUROU Department of Electrical Engineering ENIS Sfax, Tunisia {fadwa_damak,
More informationMachine Learning
Machine Learning 10-601 Maria Florina Balcan Machine Learning Department Carnegie Mellon University 02/10/2016 Today: Artificial neural networks Backpropagation Reading: Mitchell: Chapter 4 Bishop: Chapter
More informationIntroduction to Natural Computation. Lecture 9. Multilayer Perceptrons and Backpropagation. Peter Lewis
Introduction to Natural Computation Lecture 9 Multilayer Perceptrons and Backpropagation Peter Lewis 1 / 25 Overview of the Lecture Why multilayer perceptrons? Some applications of multilayer perceptrons.
More informationSupervised (BPL) verses Hybrid (RBF) Learning. By: Shahed Shahir
Supervised (BPL) verses Hybrid (RBF) Learning By: Shahed Shahir 1 Outline I. Introduction II. Supervised Learning III. Hybrid Learning IV. BPL Verses RBF V. Supervised verses Hybrid learning VI. Conclusion
More informationA Logarithmic Neural Network Architecture for Unbounded Non-Linear Function Approximation
1 Introduction A Logarithmic Neural Network Architecture for Unbounded Non-Linear Function Approximation J Wesley Hines Nuclear Engineering Department The University of Tennessee Knoxville, Tennessee,
More informationChapter 9: The Perceptron
Chapter 9: The Perceptron 9.1 INTRODUCTION At this point in the book, we have completed all of the exercises that we are going to do with the James program. These exercises have shown that distributed
More informationSerious limitations of (single-layer) perceptrons: Cannot learn non-linearly separable tasks. Cannot approximate (learn) non-linear functions
BACK-PROPAGATION NETWORKS Serious limitations of (single-layer) perceptrons: Cannot learn non-linearly separable tasks Cannot approximate (learn) non-linear functions Difficult (if not impossible) to design
More informationIntroduction to feedforward neural networks
. Problem statement and historical context A. Learning framework Figure below illustrates the basic framework that we will see in artificial neural network learning. We assume that we want to learn a classification
More informationNeed for Deep Networks Perceptron. Can only model linear functions. Kernel Machines. Non-linearity provided by kernels
Need for Deep Networks Perceptron Can only model linear functions Kernel Machines Non-linearity provided by kernels Need to design appropriate kernels (possibly selecting from a set, i.e. kernel learning)
More informationArtificial Neural Networks Examination, June 2005
Artificial Neural Networks Examination, June 2005 Instructions There are SIXTY questions. (The pass mark is 30 out of 60). For each question, please select a maximum of ONE of the given answers (either
More informationSGD and Deep Learning
SGD and Deep Learning Subgradients Lets make the gradient cheating more formal. Recall that the gradient is the slope of the tangent. f(w 1 )+rf(w 1 ) (w w 1 ) Non differentiable case? w 1 Subgradients
More informationAI Programming CS F-20 Neural Networks
AI Programming CS662-2008F-20 Neural Networks David Galles Department of Computer Science University of San Francisco 20-0: Symbolic AI Most of this class has been focused on Symbolic AI Focus or symbols
More information18.6 Regression and Classification with Linear Models
18.6 Regression and Classification with Linear Models 352 The hypothesis space of linear functions of continuous-valued inputs has been used for hundreds of years A univariate linear function (a straight
More informationComputational statistics
Computational statistics Lecture 3: Neural networks Thierry Denœux 5 March, 2016 Neural networks A class of learning methods that was developed separately in different fields statistics and artificial
More informationMachine Learning. Neural Networks
Machine Learning Neural Networks Bryan Pardo, Northwestern University, Machine Learning EECS 349 Fall 2007 Biological Analogy Bryan Pardo, Northwestern University, Machine Learning EECS 349 Fall 2007 THE
More informationOn the convergence speed of artificial neural networks in the solving of linear systems
Available online at http://ijimsrbiauacir/ Int J Industrial Mathematics (ISSN 8-56) Vol 7, No, 5 Article ID IJIM-479, 9 pages Research Article On the convergence speed of artificial neural networks in
More informationIntroduction Neural Networks - Architecture Network Training Small Example - ZIP Codes Summary. Neural Networks - I. Henrik I Christensen
Neural Networks - I Henrik I Christensen Robotics & Intelligent Machines @ GT Georgia Institute of Technology, Atlanta, GA 30332-0280 hic@cc.gatech.edu Henrik I Christensen (RIM@GT) Neural Networks 1 /
More informationAnalysis of Multilayer Neural Network Modeling and Long Short-Term Memory
Analysis of Multilayer Neural Network Modeling and Long Short-Term Memory Danilo López, Nelson Vera, Luis Pedraza International Science Index, Mathematical and Computational Sciences waset.org/publication/10006216
More informationConvolutional Neural Networks
Convolutional Neural Networks Books» http://www.deeplearningbook.org/ Books http://neuralnetworksanddeeplearning.com/.org/ reviews» http://www.deeplearningbook.org/contents/linear_algebra.html» http://www.deeplearningbook.org/contents/prob.html»
More informationArtifical Neural Networks
Neural Networks Artifical Neural Networks Neural Networks Biological Neural Networks.................................. Artificial Neural Networks................................... 3 ANN Structure...........................................
More informationRecurrent Neural Networks
Recurrent Neural Networks Datamining Seminar Kaspar Märtens Karl-Oskar Masing Today's Topics Modeling sequences: a brief overview Training RNNs with back propagation A toy example of training an RNN Why
More information8. Lecture Neural Networks
Soft Control (AT 3, RMA) 8. Lecture Neural Networks Learning Process Contents of the 8 th lecture 1. Introduction of Soft Control: Definition and Limitations, Basics of Intelligent" Systems 2. Knowledge
More informationMultivariate Analysis, TMVA, and Artificial Neural Networks
http://tmva.sourceforge.net/ Multivariate Analysis, TMVA, and Artificial Neural Networks Matt Jachowski jachowski@stanford.edu 1 Multivariate Analysis Techniques dedicated to analysis of data with multiple
More informationMachine Learning. Neural Networks. (slides from Domingos, Pardo, others)
Machine Learning Neural Networks (slides from Domingos, Pardo, others) Human Brain Neurons Input-Output Transformation Input Spikes Output Spike Spike (= a brief pulse) (Excitatory Post-Synaptic Potential)
More informationClassification goals: Make 1 guess about the label (Top-1 error) Make 5 guesses about the label (Top-5 error) No Bounding Box
ImageNet Classification with Deep Convolutional Neural Networks Alex Krizhevsky, Ilya Sutskever, Geoffrey E. Hinton Motivation Classification goals: Make 1 guess about the label (Top-1 error) Make 5 guesses
More information(Feed-Forward) Neural Networks Dr. Hajira Jabeen, Prof. Jens Lehmann
(Feed-Forward) Neural Networks 2016-12-06 Dr. Hajira Jabeen, Prof. Jens Lehmann Outline In the previous lectures we have learned about tensors and factorization methods. RESCAL is a bilinear model for
More informationFrequency Selective Surface Design Based on Iterative Inversion of Neural Networks
J.N. Hwang, J.J. Choi, S. Oh, R.J. Marks II, "Query learning based on boundary search and gradient computation of trained multilayer perceptrons", Proceedings of the International Joint Conference on Neural
More informationArtificial Neural Networks. MGS Lecture 2
Artificial Neural Networks MGS 2018 - Lecture 2 OVERVIEW Biological Neural Networks Cell Topology: Input, Output, and Hidden Layers Functional description Cost functions Training ANNs Back-Propagation
More informationARTIFICIAL NEURAL NETWORKS گروه مطالعاتي 17 بهار 92
ARTIFICIAL NEURAL NETWORKS گروه مطالعاتي 17 بهار 92 BIOLOGICAL INSPIRATIONS Some numbers The human brain contains about 10 billion nerve cells (neurons) Each neuron is connected to the others through 10000
More informationPart 8: Neural Networks
METU Informatics Institute Min720 Pattern Classification ith Bio-Medical Applications Part 8: Neural Netors - INTRODUCTION: BIOLOGICAL VS. ARTIFICIAL Biological Neural Netors A Neuron: - A nerve cell as
More informationSimple neuron model Components of simple neuron
Outline 1. Simple neuron model 2. Components of artificial neural networks 3. Common activation functions 4. MATLAB representation of neural network. Single neuron model Simple neuron model Components
More informationProceedings of 12th International Heat Pipe Conference, pp , Moscow, Russia, 2002.
7KHUPDO3HUIRUPDQFH0RGHOLQJRI3XOVDWLQJ+HDW3LSHVE\$UWLILFLDO1HXUDO1HWZRUN Sameer Khandekar (a), Xiaoyu Cui (b), Manfred Groll (a) (a) IKE, University of Stuttgart, Pfaffenwaldring 31, 70569, Stuttgart, Germany.
More informationSupervised Learning in Neural Networks
The Norwegian University of Science and Technology (NTNU Trondheim, Norway keithd@idi.ntnu.no March 7, 2011 Supervised Learning Constant feedback from an instructor, indicating not only right/wrong, but
More informationBackpropagation Neural Net
Backpropagation Neural Net As is the case with most neural networks, the aim of Backpropagation is to train the net to achieve a balance between the ability to respond correctly to the input patterns that
More informationMultilayer Perceptrons (MLPs)
CSE 5526: Introduction to Neural Networks Multilayer Perceptrons (MLPs) 1 Motivation Multilayer networks are more powerful than singlelayer nets Example: XOR problem x 2 1 AND x o x 1 x 2 +1-1 o x x 1-1
More informationCourse 395: Machine Learning - Lectures
Course 395: Machine Learning - Lectures Lecture 1-2: Concept Learning (M. Pantic) Lecture 3-4: Decision Trees & CBC Intro (M. Pantic & S. Petridis) Lecture 5-6: Evaluating Hypotheses (S. Petridis) Lecture
More information2015 Todd Neller. A.I.M.A. text figures 1995 Prentice Hall. Used by permission. Neural Networks. Todd W. Neller
2015 Todd Neller. A.I.M.A. text figures 1995 Prentice Hall. Used by permission. Neural Networks Todd W. Neller Machine Learning Learning is such an important part of what we consider "intelligence" that
More informationPrediction of p-wave velocity and anisotropic property of rock using artificial neural network technique
Journal of Scientific & Industrial Research Vol. 63, January 2004, pp 32-38 Prediction of velocity and anisotropic property of rock using artificial neural network technique T N Singh*, R Kanchan**, K
More informationBidirectional Representation and Backpropagation Learning
Int'l Conf on Advances in Big Data Analytics ABDA'6 3 Bidirectional Representation and Bacpropagation Learning Olaoluwa Adigun and Bart Koso Department of Electrical Engineering Signal and Image Processing
More information22c145-Fall 01: Neural Networks. Neural Networks. Readings: Chapter 19 of Russell & Norvig. Cesare Tinelli 1
Neural Networks Readings: Chapter 19 of Russell & Norvig. Cesare Tinelli 1 Brains as Computational Devices Brains advantages with respect to digital computers: Massively parallel Fault-tolerant Reliable
More informationEquivalence of Backpropagation and Contrastive Hebbian Learning in a Layered Network
LETTER Communicated by Geoffrey Hinton Equivalence of Backpropagation and Contrastive Hebbian Learning in a Layered Network Xiaohui Xie xhx@ai.mit.edu Department of Brain and Cognitive Sciences, Massachusetts
More informationUnit 8: Introduction to neural networks. Perceptrons
Unit 8: Introduction to neural networks. Perceptrons D. Balbontín Noval F. J. Martín Mateos J. L. Ruiz Reina A. Riscos Núñez Departamento de Ciencias de la Computación e Inteligencia Artificial Universidad
More informationInternational Journal of Scientific Research and Reviews
Research article Available online www.ijsrr.org ISSN: 2279 0543 International Journal of Scientific Research and Reviews Prediction of Compressive Strength of Concrete using Artificial Neural Network ABSTRACT
More informationNeural networks. Chapter 20. Chapter 20 1
Neural networks Chapter 20 Chapter 20 1 Outline Brains Neural networks Perceptrons Multilayer networks Applications of neural networks Chapter 20 2 Brains 10 11 neurons of > 20 types, 10 14 synapses, 1ms
More informationNeural Networks, Computation Graphs. CMSC 470 Marine Carpuat
Neural Networks, Computation Graphs CMSC 470 Marine Carpuat Binary Classification with a Multi-layer Perceptron φ A = 1 φ site = 1 φ located = 1 φ Maizuru = 1 φ, = 2 φ in = 1 φ Kyoto = 1 φ priest = 0 φ
More informationNeural networks. Chapter 19, Sections 1 5 1
Neural networks Chapter 19, Sections 1 5 Chapter 19, Sections 1 5 1 Outline Brains Neural networks Perceptrons Multilayer perceptrons Applications of neural networks Chapter 19, Sections 1 5 2 Brains 10
More informationArtificial Neural Network : Training
Artificial Neural Networ : Training Debasis Samanta IIT Kharagpur debasis.samanta.iitgp@gmail.com 06.04.2018 Debasis Samanta (IIT Kharagpur) Soft Computing Applications 06.04.2018 1 / 49 Learning of neural
More informationMachine Learning
Machine Learning 10-315 Maria Florina Balcan Machine Learning Department Carnegie Mellon University 03/29/2019 Today: Artificial neural networks Backpropagation Reading: Mitchell: Chapter 4 Bishop: Chapter
More information