LR-KFNN: Logistic Regression-Kernel Function Neural Networks and the GFR-NN Model for Renal Function Evaluation
Qun Song, Tianmin Ma and Nikola Kasabov
Knowledge Engineering & Discovery Research Institute, Auckland University of Technology
Private Bag 92006, Auckland 1020, New Zealand

Abstract

This paper introduces a novel knowledge-based neural network model that incorporates and adapts both existing logistic regression formulas and kernel functions in its structure, in order to improve the learning and adaptation ability of a connectionist model when existing knowledge on the problem is available in the form of a logistic regression. Different from standard feed-forward neural networks, the proposed model uses several non-linear function pairs as the neurons in its hidden layer. Each of these pairs consists of one knowledge-based function and one distance-based function. An existing global model, a logistic regression formula, is transformed into a set of sub-models, each representing a cluster of the problem space. These sub-models constitute new, local knowledge functions. Gaussian kernel functions are taken as the distance functions. After modifying each sub-model individually, all sub-models are aggregated through incremental learning, and the gradient descent method is applied for parameter optimization. The method is illustrated on a model called GFR-NN, which creates an accurate estimation of the glomerular filtration rate (GFR) based on an existing formula, MDRD, and real data from the clinic. The performance of the GFR-NN is compared with the MDRD formula, an MLP neural network, and the ANFIS and DENFIS fuzzy models on the same data. The results show that the GFR-NN is effective on this task and superior to the other models.

Keywords: logistic regression neural networks; kernel function neural networks; glomerular filtration rate; renal function evaluation.

1. Introduction: Logistic Regression-Kernel Function Neural Networks (LR-KFNN)

The LR-KFNN introduced here can be represented by the following generic function; its structure is shown in Fig. 1:

$$y(\mathbf{x}) = G_1(\mathbf{x})\,F_1(\mathbf{x}) + G_2(\mathbf{x})\,F_2(\mathbf{x}) + \cdots + G_M(\mathbf{x})\,F_M(\mathbf{x}) \qquad (1)$$

where x = [x_1, x_2, ..., x_P] is the input vector, y is the output, and the G_l are kernel functions and the F_l logistic regression functions, for l = 1, 2, ..., M. Using different G_l and F_l, Eq. 1 can represent different kinds of neural networks: if the G_l are Gaussian kernel functions and the F_l are constants, it is an RBF neural network; if the G_l are sigmoid transfer functions and the F_l are constants, it is a generic MLP neural network; if the G_l are fuzzy-rule-based weighting functions and the F_l are linear functions, it is a first-order TSK fuzzy inference model; and in the simplest case, if each G_l represents a single input variable and the F_l are constants, it is a linear regression function.
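To make Eq. 1 concrete, here is a minimal Python sketch (not from the paper) of the LR-KFNN output computation, assuming Gaussian kernels for the G_l and MDRD-type power-law functions for the F_l as defined in Section 2; all names and array shapes are illustrative assumptions.

```python
import numpy as np

def lr_kfnn_output(x, centres, widths, alphas, b0, b):
    """Evaluate y(x) = sum_l G_l(x) * F_l(x) for one input vector x (Eq. 1).

    x       : (P,) input vector (assumed strictly positive, as in the GFR data)
    centres : (M, P) Gaussian centres m_lj;  widths : (M, P) widths sigma_lj
    alphas  : (M,)  kernel amplitudes alpha_l
    b0      : (M,)  coefficients b_l0;       b      : (M, P) exponents b_lj
    """
    # G_l(x) = alpha_l * exp(-sum_j (x_j - m_lj)^2 / (2 sigma_lj^2))
    G = alphas * np.exp(-np.sum((x - centres) ** 2 / (2.0 * widths ** 2), axis=1))
    # F_l(x) = b_l0 * prod_j x_j^(b_lj)  -- the MDRD-type local formula
    F = b0 * np.prod(x ** b, axis=1)
    return float(np.sum(G * F))
```

Each hidden "neuron pair" thus contributes a locally weighted prediction G_l(x) F_l(x); this is what distinguishes the model from an RBF network, whose F_l would be constants.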
Fig. 1. A generic structure of the LR-KFNN.

The proposed LR-KFNN can also be represented by Eq. 1. Different from standard feed-forward back-propagation (MLP) or radial basis function (RBF) neural networks, which usually use sigmoid transfer functions or Gaussian kernel functions as the neurons in their hidden layer(s), in the LR-KFNN the F_l are non-linear functions that represent the knowledge in local areas, and the G_l are Gaussian kernel functions that control the contribution of each F_l to the system output.

In this paper we develop, as an illustration of the LR-KFNN, a model that incorporates both an existing formula (the MDRD [4]) and new clinical data for the evaluation of renal function. The model is called GFR-NN. The MDRD formula [4], introduced in Section 3, is popular in clinical use and in nephrology research for calculating an estimate of the glomerular filtration rate (GFR), and modified MDRD formulas are taken as the knowledge functions F_l in the LR-KFNN.

The paper is organized as follows: Section 2 presents the LR-KFNN learning algorithm, Section 3 illustrates the implementation of the LR-KFNN on GFR data collected at hospitals in New Zealand and Australia, and conclusions are drawn in Section 4.

2. LR-KFNN Learning Algorithm

The LR-KFNN learning procedure performs the following steps (a sketch of the initialization steps is given after the list):

(1) Cluster the whole training data set into M clusters, each containing a sub-dataset.
(2) In each cluster l, l = 1, 2, ..., M, take an existing logistic formula (e.g. MDRD) as the initial function F_l, and then use the gradient descent method to modify the function on the sub-dataset.
(3) Create a Gaussian kernel function as the distance function G_l: the cluster centre and radius are taken as the initial values of the centre and width of G_l, respectively.
(4) Aggregate the F_l and G_l and optimize the parameters of the LR-KFNN using the gradient descent method on the whole data set.
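As a rough illustration of steps (1)-(3), the following sketch uses k-means for step (1), since the paper does not name a specific clustering method, and a caller-supplied `fit_local_formula` standing in for the per-cluster gradient descent of step (2); it is an assumption-laden outline, not the authors' implementation.

```python
import numpy as np
from sklearn.cluster import KMeans

def init_lr_kfnn(X, t, M, fit_local_formula):
    """Steps (1)-(3): cluster, fit local formulas, initialize the kernels.

    X : (N, P) training inputs;  t : (N,) targets;  M : number of clusters.
    fit_local_formula(X_l, t_l) -> (b0_l, b_l) refines the initial logistic
    (e.g. MDRD) formula on one sub-dataset, as in step (2).
    """
    N, P = X.shape
    # (1) Cluster the training data into M clusters (k-means assumed here).
    km = KMeans(n_clusters=M, n_init=10).fit(X)
    centres = km.cluster_centers_.copy()        # initial Gaussian centres m_l
    widths = np.ones((M, P))
    b0, b = np.ones(M), np.ones((M, P))
    for l in range(M):
        mask = km.labels_ == l
        # (2) Refine the initial formula F_l on this cluster's sub-dataset.
        b0[l], b[l] = fit_local_formula(X[mask], t[mask])
        # (3) The cluster radius initializes the width of the kernel G_l.
        widths[l] = np.linalg.norm(X[mask] - centres[l], axis=1).mean() + 1e-6
    alphas = np.ones(M)                         # initial kernel amplitudes
    return centres, widths, alphas, b0, b
```

Step (4), the global optimization of all parameters, is described next.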
The parameter optimisation procedure of step (4) is described below. Consider a system having P inputs, one output, and M neuron pairs in the hidden layer. The output value of the system on an input vector x_i = [x_{i1}, x_{i2}, ..., x_{iP}] is calculated by rewriting Eq. 1 as:

$$y(\mathbf{x}_i) = G_1(\mathbf{x}_i)\,F_1(\mathbf{x}_i) + G_2(\mathbf{x}_i)\,F_2(\mathbf{x}_i) + \cdots + G_M(\mathbf{x}_i)\,F_M(\mathbf{x}_i)$$

In the case of the GFR-NN model, an MDRD-type logistic regression formula is used:

$$F_l(\mathbf{x}_i) = b_{l0}\, x_{i1}^{\,b_{l1}}\, x_{i2}^{\,b_{l2}} \cdots x_{iP}^{\,b_{lP}} \quad \text{(the MDRD-type logistic regression formula)} \qquad (2)$$

and

$$G_l(\mathbf{x}_i) = \alpha_l \exp\left[-\sum_{j=1}^{P} \frac{(x_{ij}-m_{lj})^2}{2\sigma_{lj}^2}\right] \quad \text{(Gaussian kernel function)} \qquad (3)$$

In the LR-KFNN learning algorithm the following indexes are used: training data pairs i = 1, 2, ..., N; input variables j = 1, 2, ..., P; neuron pairs in the hidden layer l = 1, 2, ..., M; learning iterations k = 1, 2, ...

Suppose the LR-KFNN is given the training input-output data pairs [x_i, t_i]. The system minimizes the following objective (error) function:

$$E = \frac{1}{2}\sum_{i=1}^{N}\left[y(\mathbf{x}_i) - t_i\right]^2 \qquad (4)$$

The gradient descent (back-propagation) algorithm is used to obtain the recursions for updating the parameters b, α, m and σ such that E of Eq. 4 is minimized:

$$b_{l0}(k+1) = b_{l0}(k) - \eta_b \sum_{i=1}^{N} \frac{F_l(\mathbf{x}_i)}{b_{l0}}\, G_l(\mathbf{x}_i)\left[y(\mathbf{x}_i)-t_i\right] \qquad (5)$$

$$b_{lj}(k+1) = b_{lj}(k) - \eta_b \sum_{i=1}^{N} \ln(x_{ij})\, F_l(\mathbf{x}_i)\, G_l(\mathbf{x}_i)\left[y(\mathbf{x}_i)-t_i\right] \qquad (6)$$

$$\alpha_l(k+1) = \alpha_l(k) - \eta_\alpha \sum_{i=1}^{N} \frac{G_l(\mathbf{x}_i)}{\alpha_l}\, F_l(\mathbf{x}_i)\left[y(\mathbf{x}_i)-t_i\right] \qquad (7)$$

$$m_{lj}(k+1) = m_{lj}(k) - \eta_m \sum_{i=1}^{N} F_l(\mathbf{x}_i)\, G_l(\mathbf{x}_i)\left[y(\mathbf{x}_i)-t_i\right] \frac{x_{ij}-m_{lj}}{\sigma_{lj}^2} \qquad (8)$$

$$\sigma_{lj}(k+1) = \sigma_{lj}(k) - \eta_\sigma \sum_{i=1}^{N} F_l(\mathbf{x}_i)\, G_l(\mathbf{x}_i)\left[y(\mathbf{x}_i)-t_i\right] \frac{(x_{ij}-m_{lj})^2}{\sigma_{lj}^3} \qquad (9)$$

Here η_b, η_α, η_m and η_σ are the learning rates for updating the parameters b, α, m and σ, respectively.
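The recursions (5)-(9) amount to one batch gradient step per iteration k. The sketch below implements them with NumPy under the reconstructed gradients above; the names are hypothetical and, for brevity, a single shared learning rate replaces the separate rates η_b, η_α, η_m, η_σ.

```python
import numpy as np

def gd_step(X, t, centres, widths, alphas, b0, b, lr=1e-3):
    """One batch gradient-descent step over Eqs. (5)-(9) (illustrative)."""
    # Forward pass for all N samples and all M neuron pairs.
    d = X[:, None, :] - centres[None, :, :]                            # (N, M, P)
    G = alphas * np.exp(-np.sum(d ** 2 / (2 * widths ** 2), axis=2))   # (N, M)
    F = b0 * np.prod(X[:, None, :] ** b[None, :, :], axis=2)           # (N, M)
    e = (G * F).sum(axis=1) - t                                        # y(x_i) - t_i
    # Eq. (5): dE/db_l0 = sum_i e_i G_l(x_i) F_l(x_i) / b_l0
    b0 -= lr * (e[:, None] * G * F / b0).sum(axis=0)
    # Eq. (6): dE/db_lj = sum_i e_i ln(x_ij) G_l(x_i) F_l(x_i)
    b -= lr * np.einsum('i,il,ij->lj', e, G * F, np.log(X))
    # Eq. (7): dE/dalpha_l = sum_i e_i G_l(x_i) F_l(x_i) / alpha_l
    alphas -= lr * (e[:, None] * G * F / alphas).sum(axis=0)
    # Eq. (8): dE/dm_lj = sum_i e_i F_l G_l (x_ij - m_lj) / sigma_lj^2
    centres -= lr * np.einsum('i,il,ilj->lj', e, F * G, d) / widths ** 2
    # Eq. (9): dE/dsigma_lj = sum_i e_i F_l G_l (x_ij - m_lj)^2 / sigma_lj^3
    widths -= lr * np.einsum('i,il,ilj->lj', e, F * G, d ** 2) / widths ** 3
    return centres, widths, alphas, b0, b
```

In use, the step would be repeated for k = 1, 2, ... until the error E of Eq. (4) stops decreasing on the training set.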
3. LR-KFNN for GFR Estimation: the GFR-NN Model

Here the LR-KFNN is applied to create a knowledge-based neural network model, the GFR-NN, for the evaluation (identification) of the renal function of patients in a renal clinic. Real data were collected in a clinical environment.

The accurate evaluation of renal function is fundamental to sound nephrology practice. Early detection of renal impairment allows the institution of appropriate diagnostic and therapeutic measures, and can potentially maximize the preservation of intact nephrons [4]. The glomerular filtration rate (GFR) is traditionally considered the best overall index of renal function in healthy and diseased people. Most clinicians rely upon the clearance of creatinine (CrCl) as a convenient and inexpensive surrogate for GFR. CrCl can be determined either from a timed urine collection or from serum creatinine using equations developed from regression analyses, such as the Cockcroft-Gault formula, but the accuracy of CrCl is limited by methodological imprecision and systematic bias [4]. Recently, the Modification of Diet in Renal Disease (MDRD) study group developed a new formula to evaluate the GFR more accurately [4]. The formula uses six input variables: Age, Gender, Screat, Race, Salb and Surea, and is defined as follows (a code sketch of this formula is given below):

$$\mathrm{GFR} = 170 \times \mathrm{Screat}^{-0.999} \times \mathrm{Age}^{-0.176} \times 0.762\ \text{(if female)} \times 1.180\ \text{(if race is black)} \times \mathrm{Surea}^{-0.170} \times \mathrm{Salb}^{0.318} \qquad (10)$$

In Eq. (10), Screat (serum creatinine) is expected to be filtered in the kidneys, with the residual released into the blood; the creatinine level in the serum is determined by the rate at which the kidney removes it, and is therefore itself a measure of kidney function. Surea (serum urea) is a substance produced in the liver as a means of disposing of ammonia from protein metabolism; it is filtered by the kidney and can be reabsorbed into the bloodstream. Salb (serum albumin) is the protein with the highest concentration in plasma; decreased serum albumin may result from kidney disease, which allows albumin to escape into the urine, but may also be explained by malnutrition or liver disease.

However, prediction of GFR with the existing formulas, which constitute global and fixed models, can be misleading as to the presence and progression of renal disease [4]. Using the proposed model on a GFR data set (447 samples) collected in hospitals in New Zealand and Australia, we have obtained more accurate results than with the MDRD formula or with other connectionist models.

For comparison, the results produced by the MDRD function [4], MLP and RBF neural networks [5], the adaptive neural fuzzy inference system (ANFIS) [2] and the dynamic evolving neural fuzzy inference system (DENFIS) [3], along with the results produced by the proposed GFR-NN, are listed in Table 1. The results include the number of fuzzy rules (for ANFIS and DENFIS) or neurons in the hidden layer (for RBF and MLP), the testing RMSE (root mean square error) and the testing MAE (mean absolute error). All experimental results reported here are based on 10 cross-validation experiments with the same model and parameter values, and the results are averaged. In each experiment, 70% of the whole data set is randomly selected as training data and the remaining 30% as testing data.
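As a reference for Eq. (10), here is a hedged one-function sketch of the six-variable MDRD estimate; the input units follow [4] (creatinine and urea nitrogen in mg/dL, albumin in g/dL, GFR in mL/min/1.73 m²), and the example values are invented for illustration.

```python
def mdrd_gfr(screat, age, female, black, surea, salb):
    """Six-variable MDRD estimate of GFR, Eq. (10) (illustrative sketch)."""
    gfr = 170.0 * screat ** -0.999 * age ** -0.176 \
          * surea ** -0.170 * salb ** 0.318
    if female:
        gfr *= 0.762      # correction factor for female patients
    if black:
        gfr *= 1.180      # correction factor if race is black
    return gfr

# Hypothetical example: 50-year-old non-black male, Screat 1.2, Surea 20, Salb 4.0
print(round(mdrd_gfr(1.2, 50, female=False, black=False, surea=20, salb=4.0), 1))
```

In the GFR-NN, each cluster carries its own modified copy of this formula (different b coefficients), so the single global MDRD model becomes a set of local sub-models.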
The proposed model, GFR-NN, performs better than the other neural network and fuzzy models. This is a result of the fine tuning of each local model in the GFR-NN and of the fine aggregation of these local models.

Table 1. Experimental results on GFR data: the number of neurons or rules, the testing RMSE and the testing MAE for the MDRD, MLP, ANFIS, DENFIS, RBF and GFR-NN models, averaged over the cross-validation experiments.

4. Conclusions

This paper presents a knowledge-based neural network, the LR-KFNN, for the integration of sub-models and new data related to the same problem, resulting in an incrementally adaptive model. The GFR-NN model is an application of the LR-KFNN to GFR estimation. The GFR-NN performs local generalization, and its learning is a procedure of knowledge insertion, knowledge modification and knowledge extraction. New knowledge can be extracted from the GFR-NN: (1) in each local area there is a modified MDRD function (or sub-model) that best represents the knowledge in that area; (2) for a new input vector, the contribution each local function makes to the output can be known.

Further directions for research include: (1) selecting, in each local area, the best knowledge function from several different formulas, e.g. MDRD, CG, Gates and Walser [4]; (2) applying the weighted data normalization of [6] to the GFR-NN.

Acknowledgements

The research presented in this paper is funded by the New Zealand Foundation for Research, Science and Technology under grant NERF/AUTX02-01. The authors acknowledge the assistance of Dr Mark Marshall from Middlemore Hospital in Auckland in providing data and expertise for the analysis and the validation of the results.

References

[1] Fuzzy Logic Toolbox User's Guide, The MathWorks Inc., ver. 2, 2002.
[2] Jang, J.-S. R., "ANFIS: Adaptive-network-based fuzzy inference system", IEEE Trans. on Systems, Man, and Cybernetics, vol. 23, no. 3, pp. 665-685, 1993.
[3] Kasabov, N. and Song, Q., "DENFIS: Dynamic evolving neural-fuzzy inference system and its application for time-series prediction", IEEE Trans. on Fuzzy Systems, vol. 10, pp. 144-154, 2002.
[4] Levey, A.S., Bosch, J.P., Lewis, J.B., Greene, T., Rogers, N., Roth, D., for the Modification of Diet in Renal Disease Study Group, "A More Accurate Method to Estimate Glomerular Filtration Rate from Serum Creatinine: A New Prediction Equation", Annals of Internal Medicine, vol. 130, pp. 461-470, 1999.
[5] Neural Network Toolbox User's Guide, The MathWorks Inc., ver. 4, 2002.
[6] Song, Q. and Kasabov, N., "Weighted Data Normalizations and Feature Selection for Evolving Connectionist Systems", Proc. of the Eighth Australian and New Zealand Intelligent Information Systems Conference (ANZIIS 2003), Sydney, Australia, December 2003.