

Contents

Neural Networks
  Introduction
  Multilayer Perceptron Neural Network Algorithm (MLP)
  Biological Neural Structure Presenting The Structural Elements Of Neurons (Dendrites, Axon) And Synapse
  Artificial Neuron Model With Inputs (x0, x1, ..., xn), Weights (w0, w1, ..., wn) And Output y
  Example Of A Neural Network With A Hidden Layer
Data Set A
  Data Characteristics
  Summary Statistics
  Variable Information
  What Does It Look Like?
  SPSS Commands
  Neural Network With Optimal Architecture Obtained After The Algorithm Training Process
  Results Of The Fitting Of The Data
  Residuals Of The Results Of The Fitting Of The Data
  Neural Network Input Variables And Their Respective Relevance To The Fitting Process
  For additional analysis
  Syntax
Data Set B
  Relevant Information
  Attribute Information
  What Does It Look Like?
  SPSS Commands
  For additional analysis
  Syntax
  Neural Network With Optimal Architecture Obtained After The Algorithm Training Process
  Predicted Pseudo-probability
  Neural Network Input Variables And Their Respective Relevance To The Fitting Process

Neural Networks

Introduction

Neural networks offer modelling procedures for nonlinear data, enabling you to discover more complex relationships in your data and thus develop more accurate and effective predictive models. The introductory section of these notes is loosely based on two notes by A.T.S. Carneiro.

Multilayer Perceptron Neural Network Algorithm (MLP)

The scientific literature presents several regression algorithms, several of which are implemented in SPSS Modeler. The multilayer perceptron neural network algorithm (MLP) was selected for these examples. The MLP neural network algorithm is based on the functional principles of biological neural structures, as indicated in the figure. Neural computing researchers attempt to organise mathematical models similarly to the structures and organization of neurons in biological brains, aiming to achieve similar processing abilities, in addition to the inherent capacities of biological brains, such as learning from examples, trial and error, and knowledge generalization, among many others.

Biological Neural Structure Presenting The Structural Elements Of Neurons (Dendrites, Axon) And Synapse

Based on this analogy to biological neurons, the MLP neural network algorithm implements a neural network composed of layers of artificial neurons that are stimulated by input signals, which are transmitted through the network via synapses connecting neurons in different layers. The figure presents an artificial perceptron neuron model with n inputs {x_1, x_2, ..., x_n}, in which each input x_i has an associated synapse weight w_i, and an output y.

Artificial Neuron Model With Inputs (x_0, x_1, ..., x_n), Weights (w_0, w_1, ..., w_n) And Output y

There is also an additional neuron parameter, named w_0 and known as the bias, which can be interpreted as a synapse associated with a fixed input x_0 = -1. The output y of the neuron is based on the product between the input vector x = (x_0, x_1, x_2, ..., x_n) and the vector w = (w_0, w_1, w_2, ..., w_n) composed of the synapse weights, including the bias w_0:

    x . w = \sum_{i=0}^{n} x_i w_i

The neuron output is then obtained through the activation function of the neuron, y = \varphi(x . w), for which a hyperbolic-tangent-type (sigmoid-nature) function is usually adopted, defined for a generic value a by

    \varphi(a) = \frac{1 - e^{-a}}{1 + e^{-a}}

although it is convenient to use other activation functions in certain scenarios.

The artificial neuron model is feed-forward, that is, connections are directed from the inputs (x_0, x_1, ..., x_n) to the output y of the neuron. The figure presents the layout of perceptron neurons in an MLP neural network, in which there are two neuron layers, one hidden and one output. In the neural network presented in the figure, each neuron of the hidden layer is connected to each neuron in the output layer; therefore, the inputs of the output layer neurons correspond to the outputs of the hidden layer neurons. The analyst using the artificial neural network algorithm must choose how many neurons to use in the hidden layer, considering the set of input data, since with too few neurons in the hidden layer the neural

network is not able to generalize each class's data, whereas too many neurons in the hidden layer prompt the overfitting phenomenon, in which the neural network exclusively learns the training data and does not generalize its learning to the data classes.

Example Of A Neural Network With A Hidden Layer

The neural network training process is conducted with a back-propagation algorithm, whose purpose is to adjust the values associated with the synapses so that the neural network maps an input space to an output space; the input vectors x are samples of the input space, and each input vector is associated with an output z, which can be represented by a vector z = (z_1, z_2, ..., z_n), based on a scalar value or a symbolic value. Specifically, for symbolic values, a neuron in the output layer corresponds to each of the possible symbols that can be associated with the input vector. During the neural network training process, a set of input data for which the associated output is known is first determined, and random values are attributed to each synapse in the neural network. The data is presented to the neural network and the supplied output is compared to the actual output, generating an error value. The error value is then employed to adjust the neural network synapses in reverse, from the output towards the inputs (back-propagated).
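Before following the training process further, it may help to make the single-neuron computation of the previous section concrete. The sketch below evaluates y = phi(x . w) for a neuron with three inputs directly in SPSS syntax; it is purely illustrative, with hypothetical variables x1 to x3, arbitrary weights, and the bias folded in as a constant input of -1.

* Illustrative only: output of a single perceptron with three inputs.
* Assumes an active data set containing variables x1, x2 and x3.
COMPUTE a = (-1)*0.2 + x1*0.5 + x2*(-0.3) + x3*0.1.
* Hyperbolic-tangent-type activation, phi(a) = (1 - exp(-a))/(1 + exp(-a)).
COMPUTE y = (1 - EXP(-a)) / (1 + EXP(-a)).
EXECUTE.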

The process of adjusting the synapse values is repeated until an interruption criterion is met, for example a fixed number of repetitions or a minimum error. Thus, in each repetition, the outputs provided by the neural network get closer to the actual outputs. The synapse value correction equations minimize the errors between the output provided by the neural network and the actual output. For conventional regression, the methodology employed consists in using neurons with a linear activation function and assigning an output neuron to map each of the output vector's components. In cases where the output is a single scalar value, the neural network is designed with a single neuron in the output layer.

Two examples are considered.

Data Set A

This data set was downloaded from the UCI Machine Learning Repository. It describes variables affecting concrete compressive strength. Concrete is the most important material in civil engineering. The concrete compressive strength is a highly nonlinear function of age and ingredients. These ingredients include cement, blast furnace slag, fly ash, water, superplasticizer, coarse aggregate, and fine aggregate.

Original Owner and Donor
Prof. I-Cheng Yeh
Department of Information Management
Chung-Hua University, Hsin Chu, Taiwan 30067, R.O.C.

Data Characteristics

The actual concrete compressive strength (MPa) for a given mixture at a specific age (days) was determined in the laboratory. The data is in raw form (not scaled).

Summary Statistics

Number of instances (observations): 1030
Number of attributes: 9
Attribute breakdown: 8 quantitative input variables, and 1 quantitative output variable

Missing attribute values: None

The file used is organised in nine columns, in which each line represents data collected from a concrete mixture analysed in a lab. The first seven columns correspond to the concentrations of components in the mixture, in kg per m3 of concrete; the next column corresponds to the age of the concrete, in days; and the last column corresponds to the compressive strength of the concrete, measured in MPa (megapascal, a pressure measurement unit).

Variable Information

Given are the variable name, variable type, the measurement unit and a brief description. The concrete compressive strength is the regression target. The order of this listing corresponds to the order of the columns in the database.

Name                           Data Type     Measurement          Description
Cement                         quantitative  kg in a m3 mixture   Input Variable
Blast Furnace Slag             quantitative  kg in a m3 mixture   Input Variable
Fly Ash                        quantitative  kg in a m3 mixture   Input Variable
Water                          quantitative  kg in a m3 mixture   Input Variable
Superplasticizer               quantitative  kg in a m3 mixture   Input Variable
Coarse Aggregate               quantitative  kg in a m3 mixture   Input Variable
Fine Aggregate                 quantitative  kg in a m3 mixture   Input Variable
Age                            quantitative  day (1-365)          Input Variable
Concrete compressive strength  quantitative  MPa                  Output Variable

All of these attributes are numerical variables whose values are in the stated measurement units; thus, the neural network used is designed to solve a regression-type problem in which the input space comprises the first eight columns of the file (cement concentration to age) and the output space corresponds to the ninth column of the file (concrete compressive strength).

Past Usage

1. I-Cheng Yeh, "Modeling of strength of high performance concrete using artificial neural networks," Cement and Concrete Research, Vol. 28, No. 12, pp. 1797-1808 (1998).
2. I-Cheng Yeh, "Modeling Concrete Strength with Augment-Neuron Networks," J. of Materials in Civil Engineering, ASCE, Vol. 10, No. 4 (1998).

3. I-Cheng Yeh, "Design of High Performance Concrete Mixture Using Neural Networks," J. of Computing in Civil Engineering, ASCE, Vol. 13, No. 1 (1999).
4. I-Cheng Yeh, "Prediction of Strength of Fly Ash and Slag Concrete By The Use of Artificial Neural Networks," Journal of the Chinese Institute of Civil and Hydraulic Engineering, Vol. 15, No. 4 (2003).
5. I-Cheng Yeh, "A mix Proportioning Methodology for Fly Ash and Slag Concrete Using Artificial Neural Networks," Chung Hua Journal of Science and Engineering, Vol. 1, No. 1 (2003).
6. I-Cheng Yeh, "Analysis of strength of concrete using design of experiments and neural networks," Journal of Materials in Civil Engineering, ASCE, Vol. 18, No. 4 (2006).

What Does It Look Like?

[Scatterplot matrix of the nine variables: Cement, Blast Furnace Slag, Fly Ash, Water, Superplasticizer, Coarse Aggregate, Fine Aggregate, Age, and Concrete compressive strength.]
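Before fitting, a quick numerical check of the variables can stand in for the plot above. A sketch, assuming the variables sit in file order (so the TO keyword picks them all up) and using the variable names that appear in the syntax later in these notes:

DESCRIPTIVES
  VARIABLES=Cement_kg_in_a_m3_mixture TO Concrete_compressive_strengthMPa_megapascals
  /STATISTICS=MEAN STDDEV MIN MAX.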

SPSS Commands

Note that the covariates have been standardised, so that none are numerically dominant.
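The rescaling option in the dialog (COVARIATE=STANDARDIZED in the syntax) does this internally; to see what it amounts to, the same z-scores can be produced manually. A minimal sketch, assuming the variable names used in the syntax later in these notes:

DESCRIPTIVES VARIABLES=Cement_kg_in_a_m3_mixture Age_day
  /SAVE.
* /SAVE appends standardised versions of the listed variables
* (named with a Z prefix), each with mean 0 and standard deviation 1.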

As a first trial a small hidden layer is adopted; this may be increased in succeeding applications of the procedure.


The outputs are saved so that further analysis may be undertaken.

Neural Network With Optimal Architecture Obtained After The Algorithm Training Process

The figure presents a scatter chart with the actual concrete strength values on the horizontal axis and the values estimated by the neural network with two hidden neurons on the vertical axis.

Results Of The Fitting Of The Data

The identity line (the diagonal) represents the ideal result, in which each concrete sample would present exactly the strength estimated by the neural network. In this case, the high concentration of points close to the identity line is visually evident, which is confirmed by the correlation and error reported in the final table, evidencing the efficiency of the proposed methodology.
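Because the predicted values were saved (/SAVE PREDVAL), a chart like this one can be reproduced directly. A sketch, assuming the default name SPSS gives the saved predictions:

GRAPH
  /SCATTERPLOT(BIVAR)=Concrete_compressive_strengthMPa_megapascals WITH MLP_PredictedValue.
* Plots the network predictions against the actual strength values;
* the axes can be swapped in the chart editor if the orientation is not as wanted.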

Residuals Of The Results Of The Fitting Of The Data

As previously mentioned, each neural network input element has an associated synapse, represented by a numerical value that controls the input; the higher the synapse value, the more relevant the input is to the result generated by the neural network. The figure presents a relevance chart for each input field in the fitting process, information that becomes available after the creation of the node corresponding to the generated model.
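The residual chart above can likewise be rebuilt from the saved predictions. A minimal sketch, again assuming the default saved-variable name:

COMPUTE residual = Concrete_compressive_strengthMPa_megapascals - MLP_PredictedValue.
EXECUTE.
* A histogram of the residuals; roughly symmetric residuals centred on zero
* indicate an adequate fit.
GRAPH
  /HISTOGRAM=residual.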

Neural Network Input Variables And Their Respective Relevance To The Fitting Process

The outputs are saved so that further analysis may be undertaken. The metrics applied to the results are the linear correlation coefficient and the mean absolute error. For two generic data sets a = (a_1, ..., a_n) and b = (b_1, ..., b_n) these take their usual forms,

    r = \frac{\sum_i (a_i - \bar a)(b_i - \bar b)}{\sqrt{\sum_i (a_i - \bar a)^2} \sqrt{\sum_i (b_i - \bar b)^2}}

    MAE = \frac{1}{n} \sum_i |a_i - b_i|

The correlation coefficient measures the association between the two data sets, and can also be interpreted as a coefficient of similarity between them; the mean absolute error measures the discrepancies between the two data sets.

For additional analysis

Also



Syntax

SET PRINTBACK=ON. [1]

* Multilayer Perceptron Network.
MLP Concrete_compressive_strengthMPa_megapascals (MLEVEL=S)
  WITH Cement_kg_in_a_m3_mixture Blast_Furnace_Slag_kg_in_a_m3_mixture
    Fly_Ash_kg_in_a_m3_mixture Water_kg_in_a_m3_mixture
    Superplasticizer_kg_in_a_m3_mixture Coarse_Aggregate_kg_in_a_m3_mixture
    Fine_Aggregate_kg_in_a_m3_mixture Age_day
  /RESCALE COVARIATE=STANDARDIZED
  /PARTITION TRAINING=70 TESTING=30 HOLDOUT=0
  /ARCHITECTURE AUTOMATIC=YES (MINUNITS=1 MAXUNITS=2) [2]
  /CRITERIA TRAINING=BATCH OPTIMIZATION=SCALEDCONJUGATE LAMBDAINITIAL=0.0000005
    SIGMAINITIAL=0.00005 INTERVALCENTER=0 INTERVALOFFSET=0.5 MEMSIZE=1000
  /PRINT CPS NETWORKINFO SUMMARY CLASSIFICATION IMPORTANCE
  /PLOT NETWORK PREDICTED RESIDUAL
  /SAVE PREDVAL
  /STOPPINGRULES ERRORSTEPS=1 (DATA=AUTO) TRAININGTIMER=ON (MAXTIME=15)
    MAXEPOCHS=AUTO ERRORCHANGE=1.0E-4 ERRORRATIO=0.001
  /MISSING USERMISSING=EXCLUDE.

CORRELATIONS [3]
  /VARIABLES=Concrete_compressive_strengthMPa_megapascals MLP_PredictedValue
  /PRINT=TWOTAIL NOSIG
  /MISSING=PAIRWISE.

COMPUTE abs = ABS(Concrete_compressive_strengthMPa_megapascals - MLP_PredictedValue).
EXECUTE.

DESCRIPTIVES VARIABLES=abs
  /STATISTICS=MEAN.

DELETE VARIABLES MLP_PredictedValue abs. [4]

[1] This is equivalent to Edit > Options > Viewer, setting the initial state of Notes to Hidden, and turns off the Notes output.
[2] The size of the hidden layer may be adjusted through MINUNITS and MAXUNITS, with MINUNITS < MAXUNITS.
[3] This is the start of the analysis stage.
[4] It is wise to tidy up any analysis variables prior to repeating the process.

Any solution will depend on the random number seed employed, so values will only be broadly similar to those presented here.

Size of Hidden Layer   Correlation   Mean Absolute Deviation   Sum of Squares Error
2                      ...           ...                       ...
4                      ...           ...                       ...
6                      ...           ...                       ...

By analysing the results obtained by the neural network with two, four, and six hidden layer neurons, it is concluded that the best neural network configuration is probably four hidden neurons, which presents both a higher correlation and a decreased error between actual data and estimated values. The goal is to be parsimonious, combining a good fit with as few neurons as possible.

The second example is possibly closer to those you might encounter. The SPSS commands are identical and the secondary analysis simply reduces to a comparative table.

Data Set B

This data set was downloaded from the UCI Machine Learning Repository. The breast cancer database was obtained from the University of Wisconsin Hospitals, Madison, from Dr. William H. Wolberg.

1. O. L. Mangasarian and W. H. Wolberg: "Cancer diagnosis via linear programming", SIAM News, Volume 23, Number 5, September 1990, pp 1 & 18.

2. William H. Wolberg and O. L. Mangasarian: "Multisurface method of pattern separation for medical diagnosis applied to breast cytology", Proceedings of the National Academy of Sciences, U.S.A., Volume 87, December 1990, pp 9193-9196.

   Attributes 2 through 10 have been used to represent instances. Each instance has one of 2 possible classes: benign or malignant.
   -- Size of data set: only 369 instances (at that point in time)
   -- Collected classification results: 1 trial only
   -- Two pairs of parallel hyperplanes were found to be consistent with 50% of the data
   -- Accuracy on remaining 50% of dataset: 93.5%
   -- Three pairs of parallel hyperplanes were found to be consistent with 67% of data
   -- Accuracy on remaining 33% of dataset: 95.9%

3. O. L. Mangasarian, R. Setiono, and W. H. Wolberg: "Pattern recognition via linear programming: Theory and application to medical diagnosis", in: "Large-scale numerical optimization", Thomas F. Coleman and Yuying Li, editors, SIAM Publications, Philadelphia 1990.

4. K. P. Bennett & O. L. Mangasarian: "Robust linear programming discrimination of two linearly inseparable sets", Optimization Methods and Software 1, 1992, pp 23-34 (Gordon & Breach Science Publishers).

5. J. Zhang: "Selecting typical instances in instance-based learning", Proceedings of the Ninth International Machine Learning Conference, 1992, pp 470-479. Aberdeen, Scotland: Morgan Kaufmann.

   Attributes 2 through 10 have been used to represent instances. Each instance has one of 2 possible classes: benign or malignant.
   -- Size of data set: only 369 instances (at that point in time)
   -- Applied 4 instance-based learning algorithms
   -- Collected classification results averaged over 10 trials
   -- Best accuracy result:
      -- 1-nearest neighbor: 93.7%
      -- trained on 200 instances, tested on the other 169
   -- Also of interest:
      -- Using only typical instances: 92.2% (storing only 23.1 instances)
      -- trained on 200 instances, tested on the other 169

Relevant Information

Samples arrive periodically as Dr. Wolberg reports his clinical cases. The database therefore reflects this chronological grouping of the data. This grouping information appears immediately below, having been removed from the data itself:

Group 1: 367 instances (January 1989)
Group 2: 70 instances (October 1989)
Group 3: 31 instances (February 1990)
Group 4: 17 instances (April 1990)
Group 5: 48 instances (August 1990)
Group 6: 49 instances (Updated January 1991)
Group 7: 31 instances (June 1991)
Group 8: 86 instances (November 1991)

Total: 699 points (as of the donated database on 15 July 1992)

Note that the results summarized above refer to a dataset of size 369, while Group 1 has only 367 instances. This is because Group 1 originally contained 369 instances; 2 were removed. After those changes the database contains:

Number of instances: 699 (as of 15 July 1992)
Number of attributes: 10 plus the class attribute

Attribute Information

#   Attribute                     Domain
1   Sample code number            id number
2   Clump Thickness               1-10
3   Uniformity of Cell Size       1-10
4   Uniformity of Cell Shape      1-10
5   Marginal Adhesion             1-10
6   Single Epithelial Cell Size   1-10
7   Bare Nuclei                   1-10
8   Bland Chromatin               1-10
9   Normal Nucleoli               1-10
10  Mitoses                       1-10
11  Class                         2 for benign, 4 for malignant
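Given this coding of the class, value labels can be attached so that output tables read "Benign" and "Malignant" rather than 2 and 4. A sketch, using the class variable name that appears in the syntax below:

VALUE LABELS Class_2_for_benign_4_for_malignant
  2 'Benign'
  4 'Malignant'.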

Missing attribute values: 16

There are 16 instances in Groups 1 to 6 that contain a single missing (i.e., unavailable) attribute value.

Class distribution

Benign: 458 (65.5%)
Malignant: 241 (34.5%)

What Does It Look Like?

[Scatterplot matrix of Clump Thickness, Uniformity of Cell Size, Uniformity of Cell Shape, Marginal Adhesion, Single Epithelial Cell Size, Bare Nuclei, Bland Chromatin, Normal Nucleoli and Mitoses, marked by Class.]

SPSS Commands

For additional analysis

Drag Class_... to columns and Predict to rows.

Syntax

SET PRINTBACK=ON.

* Multilayer Perceptron Network.
MLP Class_2_for_benign_4_for_malignant (MLEVEL=N)
  WITH Clump_Thickness_11 Uniformity_of_Cell_Size_11 Uniformity_of_Cell_Shape_11
    Marginal_Adhesion_11 Single_Epithelial_Cell_Size_11 Bare_Nuclei_11
    Bland_Chromatin_11 Normal_Nucleoli_11 Mitoses_11
  /RESCALE COVARIATE=STANDARDIZED
  /PARTITION TRAINING=70 TESTING=30 HOLDOUT=0
  /ARCHITECTURE AUTOMATIC=YES (MINUNITS=1 MAXUNITS=2)
  /CRITERIA TRAINING=BATCH OPTIMIZATION=SCALEDCONJUGATE LAMBDAINITIAL=0.0000005
    SIGMAINITIAL=0.00005 INTERVALCENTER=0 INTERVALOFFSET=0.5 MEMSIZE=1000
  /PRINT CPS NETWORKINFO SUMMARY CLASSIFICATION IMPORTANCE

  /PLOT NETWORK PREDICTED
  /SAVE PREDVAL
  /STOPPINGRULES ERRORSTEPS=1 (DATA=AUTO) TRAININGTIMER=ON (MAXTIME=15)
    MAXEPOCHS=AUTO ERRORCHANGE=1.0E-4 ERRORRATIO=0.001
  /MISSING USERMISSING=EXCLUDE.

* Custom Tables. [1]
CTABLES
  /VLABELS VARIABLES=MLP_PredictedValue Class_2_for_benign_4_for_malignant DISPLAY=LABEL
  /TABLE MLP_PredictedValue [C] BY Class_2_for_benign_4_for_malignant [C][COUNT F40.0]
  /CATEGORIES VARIABLES=MLP_PredictedValue Class_2_for_benign_4_for_malignant ORDER=A
    KEY=VALUE EMPTY=EXCLUDE.

DELETE VARIABLES MLP_PredictedValue.

[1] For this example it is only necessary to prepare a comparative table of the outputs.
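As noted for the first example, each run depends on the random number seed employed, so repeated runs give somewhat different networks. To make a run repeatable, the seed can be fixed before the MLP command; a sketch (the seed value itself is arbitrary):

SET RNG=MT MTINDEX=20130322.
* Fixing the Mersenne Twister seed makes the random partition and the
* initial synapse weights reproducible from run to run.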

Neural Network With Optimal Architecture Obtained After The Algorithm Training Process

Predicted Pseudo-probability

The columns have been split for clarity. Effectively the plot should display only two columns (2 and 4).

Neural Network Input Variables And Their Respective Relevance To The Fitting Process

A brief summary of the analysis is shown in the table.

Predicted Value for Class:
  Range 1-2 selects 1 hidden neuron
  Range 1-4 selects 3 hidden neurons
  Range 1-6 selects 3 hidden neurons
(Counts by Class for each run followed in the original table.)

The random nature of the procedure employed explains the difference in the final two columns. Adopting a parsimonious approach suggests that a single hidden neuron should suffice.

The decision of which model is most appropriate in this case falls to a medical field expert, since only such a qualified professional is able to assess which error would entail greater damage to patients, considering that a benign tumour erroneously identified as a malignant tumour might cause psychological

damage to patients, and most malignant tumour treatments have severe side effects, while an erroneous diagnosis of a malignant tumour as benign might delay treatment, causing the patient to lose valuable time in his/her recovery process. An application of this kind, with the goal of assisting the medical diagnosis of cancer by identifying whether patients have malignant or benign tumours, could, given its average success rate, serve as the basis for a complete diagnosis support system, which could be implemented in hospitals, medical clinics, or any other health care institution, thus reducing the probability of incorrect diagnoses.
