Module Based Neural Networks for Modeling Gene Regulatory Networks
Module Based Neural Networks for Modeling Gene Regulatory Networks

Paresh Chandra Barman (1), Term Project: BiS732 Bio-Network, Department of BioSystems, Korea Advanced Institute of Science and Technology, Daejeon, Republic of Korea, pcbarman@yahoo.com; pcbarman@neuron.kaist.ac.kr

ABSTRACT. A new hybrid neural network model is proposed for modeling gene regulatory networks. Most of a cell's activity is organized as a network of interacting modules: sets of genes co-regulated to respond to different conditions. The hybrid network consists of two parts: the first part extracts gene modules, following the Non-negative Matrix Factorization (NMF) algorithm, and the second part is a Single Layer Perceptron (SLP) that represents the regulatory relationship between regulators and gene modules as a function of a weight matrix. Our procedure identifies modules, or clusters of semantic genes, on the basis of their expression levels, following the hypothesis "module Y regulates regulator X under condition W". The proposed network model has been tested on the Leukemia cancer data. The network performance is investigated as a function of the number of gene modules and of the slope of the sigmoidal nonlinearity at the hidden neurons. A reverse engineering approach is also proposed, i.e., predicting the gene expression profile for a given cell response. The developed model demonstrates a well-defined gene regulatory network even for a small number of experimental data points.

1 Introduction

Systematic gene expression analyses provide comprehensive information about the transcriptional response to different environmental and developmental conditions. As these expression analysis technologies mature, biologists will be presented with accumulated data sets detailing the transcriptional response of a cell, tissue, or organism to many environmental, genetic, and developmental stimuli.
In addition to elucidating the cellular response to such stimuli, these experimental results provide an opportunity to understand the regulatory pathways that underlie the observed gene expression patterns. While our ability to predict such regulatory pathways will remain rudimentary with limited data, as more data points are collected we will be able to make ever more accurate predictions of the transcriptional regulatory pathways. It has been observed that most microarray data sets contain a limited number of experimental data points compared to the number of genes, so it is very difficult to predict the regulatory pathway simply by using weighted matrices [1, 2]. In our model we try to solve this problem by defining a small number of modules of semantic genes according to their expression profiles. The NMF part [3] of the proposed hybrid network provides the modeling, or clustering, of the genes. The NMF algorithm is currently widely used for clustering microarray data [4, 5, 6] or text data [7]; we have extended it to identify the gene regulatory network by adding a single layer perceptron. Probabilistic graphical models [8] have identified regulatory modules and their condition-specific regulators from gene expression data, providing a clear global view of functional modules, but one important limitation of that method is the need to define a proper number of modules. In our approach this limitation is much less important. In the proposed NMF_SLP model, a bipolar sigmoid function is used as the transfer function for signal transmission from the NMF hidden layer to the input of the SLP layer. The transfer function parameter η1 is adjusted by minimizing the average Sum Squared Error (SSE) of the trained SLP network. The regulative interaction matrix, or weight matrix Wr, between the modules and the cells is determined by using the gradient descent update rule.
Finally, the cell response is determined for a test set of gene expression profiles. We have also modeled the reverse engineering, i.e., for a given cell response, predicting the corresponding gene expression profile.

(1) To whom correspondence should be addressed: Paresh Chandra Barman, Computational Neuro Systems Lab, Dept. of BioSystems, KAIST, Republic of Korea, pcbarman@yahoo.com
2 Proposed NMF_SLP Model for Modeling Gene Regulatory Networks

The NMF_SLP model consists of two adaptive neural-network layers; the basic block diagram is shown in Figure 1.

Input layer: the gene-expression profile V (n x m) is applied here, where m represents the number of experimental time points or cell responses and n is the number of genes.

Hidden layer (NMF feature extraction layer): this is the unsupervised adaptation layer. It extracts r modules from the input matrix V. The modules are semantic representations of the gene expression levels. The connection weight matrix W (n x r) represents the r module vectors of dimension n, and the hidden layer response, or module coefficient matrix H (r x m), represents the coefficients of the m time points corresponding to the r modules. The unsupervised adaptation of this layer uses the NMF algorithm (see Sect. 2.1), minimizing the KL divergence between the input matrix V and the product of the weight matrix W and the coefficient matrix H, i.e. V^ = WH.

Fig. 1. The simple diagram of the NMF_SLP model for modeling gene regulatory networks.

Classification or output (Single Layer Perceptron) layer: this is the supervised layer. It classifies the output of the hidden layer H = f(H^) into a given number of classes by minimizing the error between the network output and the target, i.e., the well-known gradient-descent learning algorithm.

Activation or transfer functions: two types of activation function are used. The function f for the hidden layer is the bipolar sigmoid

    H = f(H^) = (1 - exp(-η1 H^)) / (1 + exp(-η1 H^)),    (1)

where H^ = W^-1 V, and the activation function for the output layer is the unipolar sigmoid

    O = f(O^) = 1 / (1 + exp(-η1 O^)),    (2)

where O^ = U H; U (c x r) is the hidden-to-output connection weight matrix, c is the number of cell classes, and r is the number of basis clusters.
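The two transfer functions of equations (1) and (2) can be sketched as follows. This is a minimal illustrative sketch, not the authors' code; the slope value η1 = 5 follows the value used later in the experiments, and the function names are assumptions:

```python
import numpy as np

def bipolar_sigmoid(x, eta=5.0):
    """Hidden-layer transfer function, eq. (1): for inputs in [0, 1] and
    eta around 5, outputs spread across almost the whole unit interval."""
    return (1.0 - np.exp(-eta * x)) / (1.0 + np.exp(-eta * x))

def unipolar_sigmoid(x, eta=5.0):
    """Output-layer transfer function, eq. (2): maps 0 to 0.5 and
    compresses positive inputs into the interval [0.5, 1)."""
    return 1.0 / (1.0 + np.exp(-eta * x))

print(bipolar_sigmoid(np.array([0.0, 0.5, 1.0])))
print(unipolar_sigmoid(np.array([0.0, 0.5, 1.0])))
```

Note that for the same non-negative input range the two functions start from 0 and 0.5 respectively, which is the distinction the Results section relies on.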
2.1 Unsupervised Adaptation Rule

For a given non-negative matrix V (n x m), find non-negative factors, the semantic modules W (n x r) and the encoding matrix H (r x m), such that V ≈ WH.
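Under the assumption that the adaptation follows the standard Lee-Seung multiplicative updates for the KL divergence (ref. [3]), extended with the column normalization of the encoding matrix used in this model, the rule can be sketched as follows (function name, iteration count, and random initialization are illustrative assumptions):

```python
import numpy as np

def nmf_kl(V, r, n_iter=500, eps=1e-9, seed=0):
    """Sketch of the unsupervised layer: Lee-Seung multiplicative updates
    minimizing KL(V || WH), with both W and H normalized column-wise."""
    V = V / (V.sum(axis=0, keepdims=True) + eps)  # the model assumes a normalized V
    n, m = V.shape
    rng = np.random.default_rng(seed)
    W = rng.random((n, r)) + eps
    W /= W.sum(axis=0, keepdims=True)
    H = rng.random((r, m)) + eps
    H /= H.sum(axis=0, keepdims=True)
    for _ in range(n_iter):
        H *= W.T @ (V / (W @ H + eps))      # multiplicative H update
        W *= (V / (W @ H + eps)) @ H.T      # multiplicative W update
        W /= W.sum(axis=0, keepdims=True)   # normalize module vectors
        H /= H.sum(axis=0, keepdims=True)   # normalize encodings (this model's extra step)
    return W, H
```

Because the columns of W are kept normalized, the usual denominators of the Lee-Seung updates reduce to constants that the normalization steps absorb, which is why the updates above can be written without them.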
The factorization is obtained by minimizing the KL divergence between V and V^ = WH, where r is chosen such that r < nm/(n + m). For our application we make a single modification to the update rules of [3, 7]: we also normalize the encoding matrix H column-wise, like W. The resulting iteration, with k the iteration index, is:

    V^(k) = W(k) H(k)                                  (3)
    H(k+1) = H(k) ⊗ [W(k)^T (V ⊘ V^(k))]               (4)
    W(k+1) = W(k) ⊗ [(V ⊘ W(k)H(k+1)) H(k+1)^T]        (5)
    W_ia(k+1) ← W_ia(k+1) / Σ_j W_ja(k+1)              (6)
    H_aj(k+1) ← H_aj(k+1) / Σ_b H_bj(k+1)              (7)

where ⊗ and ⊘ indicate that the noted multiplications and divisions are computed element by element, and equations (6) and (7) normalize each column of W and H.

2.2 Supervised Single Layer Perceptron Learning

This layer takes H, obtained from equation (1), as the input to the SLP. The net value is O^ = Wr H, and the output O is given by equation (2). The Sum Squared Error (SSE) is E = (T - O)^2 / 2, where T is the target matrix. The weight matrix Wr is adapted with the gradient descent update rule [8]:

    Wr(k+1) = Wr(k) + η2 [(T - O) ⊗ f'(O^)] H^T,       (8)

where k represents the iteration step and η2 is the learning rate constant.

2.3 Proposed Algorithm for Reverse Engineering

Suppose that for a certain gene expression profile we have already estimated the model parameters: the regulative interaction matrix Wr, the transfer function parameters, and the module weight matrix W. If we are given the values of a regulated cell response O_r at a certain time t, then we can predict the corresponding gene expression profile by the following algorithm:

1. Input: W, Wr, cell response O_r, η1, η2. Output: gene expression profile V_r.
2. Determine O^_r by inverting the output sigmoid (2): O^_r = -(1/η1) ln(1/O_r - 1).
3. Determine H = Wr^-1 O^_r.
4. Determine H^ by inverting the bipolar sigmoid (1): H^ = -(1/η1) ln((1 - H)/(1 + H)).
5. Finally, V_r = W H^.

3 Data Set and Preprocessing

Leukemia data: Todd Golub's genomic and computational approaches to cancer biology and cancer medicine represent seminal efforts in cancer microarray study design and analysis.
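The reverse-engineering steps simply invert each stage of the forward model in turn. A minimal sketch, assuming Wr is square or pseudo-invertible and the responses lie strictly inside the sigmoid ranges so the logarithms are defined (names are illustrative):

```python
import numpy as np

def reverse_engineer(W, Wr, O_r, eta1=5.0):
    """Steps 2-5: invert the output sigmoid, the SLP weights, the
    bipolar sigmoid, and finally map back through the modules W."""
    O_hat = -np.log(1.0 / O_r - 1.0) / eta1         # step 2: invert eq. (2)
    H = np.linalg.pinv(Wr) @ O_hat                  # step 3: invert the SLP layer
    H_hat = -np.log((1.0 - H) / (1.0 + H)) / eta1   # step 4: invert eq. (1)
    return W @ H_hat                                # step 5: V_r = W H^
```

A round trip through the forward model recovers the original profile exactly when Wr is invertible and the profile lies in the span of the module vectors.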
Golub's 1999 [9] analysis of his leukemia data sets using hierarchical clustering (HC) and self-organizing map (SOM) techniques is a first-generation benchmark methodology for molecular classification. It is well known that acute leukemias may be classified as acute lymphoblastic leukemia (ALL), originating from lymphoid precursors, or acute myeloid leukemia (AML), originating from myeloid precursors. ALL types can be further classified into T-lineage and B-lineage subtypes. Ramaswamy and Golub noted that distinct cellular precursors likely account for the robust expression signatures that distinguish these
two cancers [10]. This distinction is critical for treatment planning, but it is clinically difficult to determine. Golub applied a variety of clustering algorithms to systematically determine cell class without subjective analysis. Here we use this database to predict the module-based gene regulatory networks corresponding to the cell responses. In our experiment we divided the database into training (75%) and testing (25%) sample sets and made the data non-negative by shifting the average above zero. The gene expression profile V is represented as in Figure 2. The graphical representation of the database clustered by NMF is shown in Figure 3.

Fig. 2. A simple example of the gene expression profile V: rows correspond to the genes g_1, ..., g_n, columns to the cells C_1, ..., C_m, and v_{i,j} represents the expression level of gene i at the experimental time point or corresponding cell C_j.

4 Experiments

Apply the normalized V to the input of the NMF_SLP neural network. Using the NMF update equations (3) to (7), determine the modules W, where each column of W represents a module vector (the i-th column w_i represents the i-th module vector, i = 1, 2, ..., r, where r is the number of modules), and the coefficient matrix H. Then, using the transfer function of equation (1), calculate the hidden layer output H, and adjust the perceptron layer connection weight matrix Wr using equation (8). After training the network on the training set of gene expression profiles, store both the basis cluster matrix W and the SLP connection weight matrix Wr. Then, for the test set of gene expression profiles V_ts (ts denotes the gene expression matrix for the test profile set, where n is the same in both cases), calculate H^ = W^-1 V_ts. Using equation (1) calculate H, then by using the trained weight matrix Wr and the activation function of equation (2) calculate the output or cell response O.
We observe the NMF-SLP hybrid neural network's performance with respect to the following parameters and methods: We adjust the parameters η1 and η2; in our experiments we used η1 = 5 and η2 = 0.005 (as in one of our previous works, though the values may depend on the experimental data set). We extract different numbers of modules and investigate the corresponding regulative interaction matrix Wr (Figs. 5 and 7). Finally, we observe the performance for various numbers of modules, r = 3, 5, 7 (Fig. 8 shows the result).

5 Results

We chose the bipolar sigmoid as the transfer function from the output of the NMF hidden layer to the input of the SLP network and adjusted the parameter η1. Initially we varied η1 from 0.1 to 10 and observed the training and testing accuracy as well as the SSE of the SLP network (results not shown here). Each column of the coefficient matrix H of the NMF hidden layer is positive and column-wise normalized, i.e., each column sums to one, so the distribution of each column is a probability distribution. It has also been observed (Fig. 4 shows some examples) that the output of the bipolar sigmoid for an independent variable x_i in the range 0 ≤ x_i ≤ 1 is distributed over nearly the whole range from 0 to 1 when the parameter η1 = 5. For the unipolar sigmoid the distribution is always incomplete compared to the bipolar sigmoid. For this reason we chose the bipolar sigmoid as the transfer function from the NMF hidden layer output to the SLP input layer.
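This claim about the two transfer functions can be checked numerically: for inputs spread uniformly over [0, 1] and η1 = 5, the bipolar sigmoid covers nearly the whole unit interval while the unipolar sigmoid is confined to [0.5, 1). A small check (the grid of 101 points is an arbitrary choice):

```python
import numpy as np

x = np.linspace(0.0, 1.0, 101)   # normalized coefficients in [0, 1]
eta1 = 5.0
bipolar = (1 - np.exp(-eta1 * x)) / (1 + np.exp(-eta1 * x))
unipolar = 1 / (1 + np.exp(-eta1 * x))
print(round(bipolar.min(), 3), round(bipolar.max(), 3))    # 0.0 0.987
print(round(unipolar.min(), 3), round(unipolar.max(), 3))  # 0.5 0.993
```

So the bipolar outputs span roughly [0, 0.99] while the unipolar outputs occupy only the upper half of the unit space, which is the basis for preferring the bipolar form here.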
Fig. 3. The Leukemia data set in clustered format (using the NMF clustering algorithm); there are three categories of Leukemia cancer cells: ALL-B type (19 cells), ALL-T type (8 cells), and AML type (11 cells).

Fig. 4. Training and testing accuracy of the NMF_SLP network for different values of the parameter η1 (from 0.1 to 50) of the bipolar sigmoid transfer function.

Figure 5 shows the gene modules for different numbers of modules. It has been observed that each module represents a different set of semantic, or co-regulated, genes. The top 25 genes for each module, ranked by the significance of their expression levels, are shown in Figure 6. A few genes show significant expression levels for each module or group, and the rest are insignificant. Although we have not tested it, it should be possible to reduce the dimension of the module vectors by excluding the insignificant expression levels as noise. In Figures 5a and 5b it can be observed that one or two modules do not contain any significant gene expression levels; such modules can be excluded from the final model. From Figures 7b and 7c it can be seen that a few connection weights are almost zero. For example, Module 1 of Fig. 5b does not contain highly significant gene expression levels, and correspondingly the weight elements W1 in Figure 7b are almost zero, which means that the regulative interaction from this module to the cell response is negligible, or that there is no interaction between that gene group and the corresponding cell or cell types. Figure 8 shows that the regulated cell responses are maximal for the corresponding cell categories, which confirms the network's function and performance. Even though the number of samples in our experimental data set is not large, the model works well. From Figure
7 it has been observed that for a certain category of cells some of the modules are stimulating, i.e., the interaction weight is positive; some have no interaction, i.e., the weight is zero; and some modules show a repressive interaction, i.e., a negative weight. For example, the top panel of Figure 7a shows that for cell category one (T-type), module one interacts as stimulating, while the others show repressive interactions.

Fig. 5a. 3 modules extracted by the NMF layer.

Fig. 5b. 5 modules extracted by the NMF layer.
Fig. 5c. 7 modules extracted by the NMF layer.

Fig. 6a. Top 25 genes according to their significant expression corresponding to each module.
Fig. 6b. Top 25 genes according to their significant expression corresponding to each module.

Fig. 6c. Top 25 genes according to their significant expression corresponding to each module.
Fig. 7a. Adjusted interaction (weight) matrix between the modules (W1, W2, W3) + 1 bias (W4) and the 3 regulated cell types.

Fig. 7b. Adjusted interaction (weight) matrix between the modules (W1, W2, W3, W4, W5) + 1 bias (W6) and the 3 regulated cell types.
Fig. 7c. Adjusted interaction (weight) matrix between the modules (W1, ..., W7) + 1 bias (W8) and the 3 regulated cell types.

Fig. 8. (a) The test set of cells and (b) their responses (B-type, T-type, AML-type) for 3, 5, and 7 modules; the numeric response values appear in the figure. Test cells:

1. ALL_5982_B-cell
2. ALL_7092_B-cell
3. ALL_R11_B-cell
4. ALL_R23_B-cell
5. ALL_17638_T-cell
6. ALL_22474_T-cell
7. AML_5
8. AML_6
9. AML_7
6 Conclusion and Future Works

The main focus of this term project is to introduce a two-layer hybrid (NMF-SLP) neural network for predicting the module-based gene regulatory network. The first (NMF) layer is adopted for unsupervised module extraction, and supervised learning is used to train the Single Layer Perceptron (SLP) network, adjusting the regulative interactions between the extracted gene modules and the corresponding regulated cell responses or experimental time points. We used a bipolar sigmoid as the transfer (activation) function to transmit the hidden layer signal to the input of the SLP network, distributing the module effects over a wide region of the unit space so that the interaction effects of the modules become distinguishable. Since the NMF hidden layer response is always positive, i.e., in [0, 1], with a unipolar sigmoid transfer function the distribution of the responses is limited to half of the unit space, i.e., 0.5 to 1, so the module coefficients are much more compact compared to the bipolar sigmoid. It has been observed that if the number of modules exceeds the number of sample categories, some modules become insignificant and do not provide significant interactions with the regulated cells or cell groups; in some cases more than one module shares a common semantic gene group, and such modules also share the same interaction with the regulated cell group. The choice of the number of modules is therefore somewhat flexible within the limit r < nm/(n + m) of Section 2.1. From Figure 8 it has also been observed that the regulated cell responses are maximal for the corresponding cell categories, which confirms the network's function and performance. Even though the number of samples in our experimental data set is not large, the model works well. This work can easily be extended to predict artificial or approximate gene expression profiles.
To validate this approach we need to test it on different types of microarray data.

References

1. D. C. Weaver, C. T. Workman, G. D. Stormo: Modeling Regulatory Networks with Weight Matrices. Pacific Symposium on Biocomputing 4 (1999).
2. Jennifer Hallinan, Janet Wiles: Evolving Regulatory Networks Using an Artificial Genome. Asia-Pacific Bioinformatics Conference (APBC).
3. D. D. Lee and H. S. Seung: Learning the parts of objects by non-negative matrix factorization. Nature 401 (1999).
4. A. Pascual-Montano, P. Carmona-Sáez, R. D. Pascual-Marqui, F. Tirado, J. M. Carazo: Two-way clustering of gene expression profiles by sparse matrix factorization. Proceedings of the 2005 IEEE Computational Systems Bioinformatics Conference Workshops (CSBW'05), IEEE.
5. Guoli Wang, Andrew V. Kossenkov and Michael F. Ochs: LS-NMF: A modified non-negative matrix factorization algorithm utilizing uncertainty estimates. BMC Bioinformatics 7 (2006).
6. Philip M. Kim and Bruce Tidor: Subsystem Identification Through Dimensionality Reduction of Large-Scale Gene Expression Data. Genome Research.
7. P. C. Barman, Nadeem Iqbal, Soo-Young Lee: Non-negative Matrix Factorization Based Text Mining: Feature Extraction and Classification. In: I. King et al. (Eds.): ICONIP 2006, Part II, LNCS 4233, Springer-Verlag (2006).
8. Jacek M. Zurada: Introduction to Artificial Neural Systems, Chapters 2 & 3. West Publishing Company.
9. T. R. Golub, D. K. Slonim, et al.: Molecular classification of cancer: Class discovery and class prediction by gene expression monitoring. Science 286 (1999).
10. P. Tamayo and S. Ramaswamy: Expression Profiling of Human Tumors: Diagnostic and Research Applications. June 2002.
More informationIntroduction to Artificial Neural Networks
Facultés Universitaires Notre-Dame de la Paix 27 March 2007 Outline 1 Introduction 2 Fundamentals Biological neuron Artificial neuron Artificial Neural Network Outline 3 Single-layer ANN Perceptron Adaline
More informationUsing a Hopfield Network: A Nuts and Bolts Approach
Using a Hopfield Network: A Nuts and Bolts Approach November 4, 2013 Gershon Wolfe, Ph.D. Hopfield Model as Applied to Classification Hopfield network Training the network Updating nodes Sequencing of
More informationAdministration. Registration Hw3 is out. Lecture Captioning (Extra-Credit) Scribing lectures. Questions. Due on Thursday 10/6
Administration Registration Hw3 is out Due on Thursday 10/6 Questions Lecture Captioning (Extra-Credit) Look at Piazza for details Scribing lectures With pay; come talk to me/send email. 1 Projects Projects
More informationNeural Networks, Computation Graphs. CMSC 470 Marine Carpuat
Neural Networks, Computation Graphs CMSC 470 Marine Carpuat Binary Classification with a Multi-layer Perceptron φ A = 1 φ site = 1 φ located = 1 φ Maizuru = 1 φ, = 2 φ in = 1 φ Kyoto = 1 φ priest = 0 φ
More informationPart 8: Neural Networks
METU Informatics Institute Min720 Pattern Classification ith Bio-Medical Applications Part 8: Neural Netors - INTRODUCTION: BIOLOGICAL VS. ARTIFICIAL Biological Neural Netors A Neuron: - A nerve cell as
More informationNeural Networks DWML, /25
DWML, 2007 /25 Neural networks: Biological and artificial Consider humans: Neuron switching time 0.00 second Number of neurons 0 0 Connections per neuron 0 4-0 5 Scene recognition time 0. sec 00 inference
More informationARTIFICIAL NEURAL NETWORKS گروه مطالعاتي 17 بهار 92
ARTIFICIAL NEURAL NETWORKS گروه مطالعاتي 17 بهار 92 BIOLOGICAL INSPIRATIONS Some numbers The human brain contains about 10 billion nerve cells (neurons) Each neuron is connected to the others through 10000
More informationTraining Multi-Layer Neural Networks. - the Back-Propagation Method. (c) Marcin Sydow
Plan training single neuron with continuous activation function training 1-layer of continuous neurons training multi-layer network - back-propagation method single neuron with continuous activation function
More informationClassification with Perceptrons. Reading:
Classification with Perceptrons Reading: Chapters 1-3 of Michael Nielsen's online book on neural networks covers the basics of perceptrons and multilayer neural networks We will cover material in Chapters
More informationNeural Networks and Ensemble Methods for Classification
Neural Networks and Ensemble Methods for Classification NEURAL NETWORKS 2 Neural Networks A neural network is a set of connected input/output units (neurons) where each connection has a weight associated
More informationArtificial Neural Networks
Artificial Neural Networks Threshold units Gradient descent Multilayer networks Backpropagation Hidden layer representations Example: Face Recognition Advanced topics 1 Connectionist Models Consider humans:
More informationInferring Transcriptional Regulatory Networks from High-throughput Data
Inferring Transcriptional Regulatory Networks from High-throughput Data Lectures 9 Oct 26, 2011 CSE 527 Computational Biology, Fall 2011 Instructor: Su-In Lee TA: Christopher Miles Monday & Wednesday 12:00-1:20
More informationContingency Table Analysis via Matrix Factorization
Contingency Table Analysis via Matrix Factorization Kumer Pial Das 1, Jay Powell 2, Myron Katzoff 3, S. Stanley Young 4 1 Department of Mathematics,Lamar University, TX 2 Better Schooling Systems, Pittsburgh,
More informationRevision: Neural Network
Revision: Neural Network Exercise 1 Tell whether each of the following statements is true or false by checking the appropriate box. Statement True False a) A perceptron is guaranteed to perfectly learn
More informationSerious limitations of (single-layer) perceptrons: Cannot learn non-linearly separable tasks. Cannot approximate (learn) non-linear functions
BACK-PROPAGATION NETWORKS Serious limitations of (single-layer) perceptrons: Cannot learn non-linearly separable tasks Cannot approximate (learn) non-linear functions Difficult (if not impossible) to design
More informationCourse Structure. Psychology 452 Week 12: Deep Learning. Chapter 8 Discussion. Part I: Deep Learning: What and Why? Rufus. Rufus Processed By Fetch
Psychology 452 Week 12: Deep Learning What Is Deep Learning? Preliminary Ideas (that we already know!) The Restricted Boltzmann Machine (RBM) Many Layers of RBMs Pros and Cons of Deep Learning Course Structure
More informationECE521 Lectures 9 Fully Connected Neural Networks
ECE521 Lectures 9 Fully Connected Neural Networks Outline Multi-class classification Learning multi-layer neural networks 2 Measuring distance in probability space We learnt that the squared L2 distance
More informationModified Learning for Discrete Multi-Valued Neuron
Proceedings of International Joint Conference on Neural Networks, Dallas, Texas, USA, August 4-9, 2013 Modified Learning for Discrete Multi-Valued Neuron Jin-Ping Chen, Shin-Fu Wu, and Shie-Jue Lee Department
More informationProtein Structure Prediction Using Multiple Artificial Neural Network Classifier *
Protein Structure Prediction Using Multiple Artificial Neural Network Classifier * Hemashree Bordoloi and Kandarpa Kumar Sarma Abstract. Protein secondary structure prediction is the method of extracting
More informationCSE 352 (AI) LECTURE NOTES Professor Anita Wasilewska. NEURAL NETWORKS Learning
CSE 352 (AI) LECTURE NOTES Professor Anita Wasilewska NEURAL NETWORKS Learning Neural Networks Classifier Short Presentation INPUT: classification data, i.e. it contains an classification (class) attribute.
More informationMachine Learning. Neural Networks
Machine Learning Neural Networks Bryan Pardo, Northwestern University, Machine Learning EECS 349 Fall 2007 Biological Analogy Bryan Pardo, Northwestern University, Machine Learning EECS 349 Fall 2007 THE
More informationADALINE for Pattern Classification
POLYTECHNIC UNIVERSITY Department of Computer and Information Science ADALINE for Pattern Classification K. Ming Leung Abstract: A supervised learning algorithm known as the Widrow-Hoff rule, or the Delta
More informationStatistical Machine Learning from Data
January 17, 2006 Samy Bengio Statistical Machine Learning from Data 1 Statistical Machine Learning from Data Multi-Layer Perceptrons Samy Bengio IDIAP Research Institute, Martigny, Switzerland, and Ecole
More informationSolving SVM: Quadratic Programming
MA 751 Part 7 Solving SVM: Quadratic Programming 1. Quadratic programming (QP): Introducing Lagrange multipliers α 4 and. 4 (can be justified in QP for inequality as well as equality constraints) we define
More informationMultilayer Feedforward Networks. Berlin Chen, 2002
Multilayer Feedforard Netors Berlin Chen, 00 Introduction The single-layer perceptron classifiers discussed previously can only deal ith linearly separable sets of patterns The multilayer netors to be
More informationANN Control of Non-Linear and Unstable System and its Implementation on Inverted Pendulum
Research Article International Journal of Current Engineering and Technology E-ISSN 2277 4106, P-ISSN 2347-5161 2014 INPRESSCO, All Rights Reserved Available at http://inpressco.com/category/ijcet ANN
More informationLogistic Regression & Neural Networks
Logistic Regression & Neural Networks CMSC 723 / LING 723 / INST 725 Marine Carpuat Slides credit: Graham Neubig, Jacob Eisenstein Logistic Regression Perceptron & Probabilities What if we want a probability
More informationCourse 395: Machine Learning - Lectures
Course 395: Machine Learning - Lectures Lecture 1-2: Concept Learning (M. Pantic) Lecture 3-4: Decision Trees & CBC Intro (M. Pantic & S. Petridis) Lecture 5-6: Evaluating Hypotheses (S. Petridis) Lecture
More informationRegression Model In The Analysis Of Micro Array Data-Gene Expression Detection
Jamal Fathima.J.I 1 and P.Venkatesan 1. Research Scholar -Department of statistics National Institute For Research In Tuberculosis, Indian Council For Medical Research,Chennai,India,.Department of statistics
More informationMultilayer Neural Networks
Multilayer Neural Networks Multilayer Neural Networks Discriminant function flexibility NON-Linear But with sets of linear parameters at each layer Provably general function approximators for sufficient
More informationArtificial Neural Networks The Introduction
Artificial Neural Networks The Introduction 01001110 01100101 01110101 01110010 01101111 01101110 01101111 01110110 01100001 00100000 01110011 01101011 01110101 01110000 01101001 01101110 01100001 00100000
More informationArtifical Neural Networks
Neural Networks Artifical Neural Networks Neural Networks Biological Neural Networks.................................. Artificial Neural Networks................................... 3 ANN Structure...........................................
More informationSeveral ways to solve the MSO problem
Several ways to solve the MSO problem J. J. Steil - Bielefeld University - Neuroinformatics Group P.O.-Box 0 0 3, D-3350 Bielefeld - Germany Abstract. The so called MSO-problem, a simple superposition
More informationNeural Networks Lecture 4: Radial Bases Function Networks
Neural Networks Lecture 4: Radial Bases Function Networks H.A Talebi Farzaneh Abdollahi Department of Electrical Engineering Amirkabir University of Technology Winter 2011. A. Talebi, Farzaneh Abdollahi
More informationNeural Networks. Nicholas Ruozzi University of Texas at Dallas
Neural Networks Nicholas Ruozzi University of Texas at Dallas Handwritten Digit Recognition Given a collection of handwritten digits and their corresponding labels, we d like to be able to correctly classify
More informationArtificial Neural Networks. MGS Lecture 2
Artificial Neural Networks MGS 2018 - Lecture 2 OVERVIEW Biological Neural Networks Cell Topology: Input, Output, and Hidden Layers Functional description Cost functions Training ANNs Back-Propagation
More informationLearning and Memory in Neural Networks
Learning and Memory in Neural Networks Guy Billings, Neuroinformatics Doctoral Training Centre, The School of Informatics, The University of Edinburgh, UK. Neural networks consist of computational units
More informationFEATURE SELECTION COMBINED WITH RANDOM SUBSPACE ENSEMBLE FOR GENE EXPRESSION BASED DIAGNOSIS OF MALIGNANCIES
FEATURE SELECTION COMBINED WITH RANDOM SUBSPACE ENSEMBLE FOR GENE EXPRESSION BASED DIAGNOSIS OF MALIGNANCIES Alberto Bertoni, 1 Raffaella Folgieri, 1 Giorgio Valentini, 1 1 DSI, Dipartimento di Scienze
More informationSimple Neural Nets For Pattern Classification
CHAPTER 2 Simple Neural Nets For Pattern Classification Neural Networks General Discussion One of the simplest tasks that neural nets can be trained to perform is pattern classification. In pattern classification
More informationMultilayer Perceptron
Outline Hong Chang Institute of Computing Technology, Chinese Academy of Sciences Machine Learning Methods (Fall 2012) Outline Outline I 1 Introduction 2 Single Perceptron 3 Boolean Function Learning 4
More informationClassification goals: Make 1 guess about the label (Top-1 error) Make 5 guesses about the label (Top-5 error) No Bounding Box
ImageNet Classification with Deep Convolutional Neural Networks Alex Krizhevsky, Ilya Sutskever, Geoffrey E. Hinton Motivation Classification goals: Make 1 guess about the label (Top-1 error) Make 5 guesses
More informationCMSC 421: Neural Computation. Applications of Neural Networks
CMSC 42: Neural Computation definition synonyms neural networks artificial neural networks neural modeling connectionist models parallel distributed processing AI perspective Applications of Neural Networks
More informationComputational Biology Course Descriptions 12-14
Computational Biology Course Descriptions 12-14 Course Number and Title INTRODUCTORY COURSES BIO 311C: Introductory Biology I BIO 311D: Introductory Biology II BIO 325: Genetics CH 301: Principles of Chemistry
More informationLecture 4: Perceptrons and Multilayer Perceptrons
Lecture 4: Perceptrons and Multilayer Perceptrons Cognitive Systems II - Machine Learning SS 2005 Part I: Basic Approaches of Concept Learning Perceptrons, Artificial Neuronal Networks Lecture 4: Perceptrons
More informationNeuro-Fuzzy Comp. Ch. 4 March 24, R p
4 Feedforward Multilayer Neural Networks part I Feedforward multilayer neural networks (introduced in sec 17) with supervised error correcting learning are used to approximate (synthesise) a non-linear
More informationNote on Algorithm Differences Between Nonnegative Matrix Factorization And Probabilistic Latent Semantic Indexing
Note on Algorithm Differences Between Nonnegative Matrix Factorization And Probabilistic Latent Semantic Indexing 1 Zhong-Yuan Zhang, 2 Chris Ding, 3 Jie Tang *1, Corresponding Author School of Statistics,
More informationIntroduction To Artificial Neural Networks
Introduction To Artificial Neural Networks Machine Learning Supervised circle square circle square Unsupervised group these into two categories Supervised Machine Learning Supervised Machine Learning Supervised
More information