
PROTEIN SECONDARY STRUCTURE PREDICTION USING ARTIFICIAL NEURAL NETWORKS

Judit Kisistók, Bakhtawar Noor

Master's Thesis, June 2018
Supervisor: Christian Nørgaard Storm Pedersen
Aarhus University

Bakhtawar Noor, Judit Kisistók: Protein secondary structure prediction using artificial neural networks, Master's Thesis, June 2018

ABSTRACT

Protein secondary structure prediction is an important step in the process of attempting to infer a protein's tertiary structure and its function. This thesis explores the use of feed-forward artificial neural networks to solve this problem. As a frame of reference, we present experimental and machine learning methods used to determine and predict protein secondary structure, with an in-depth overview of artificial neural networks. We give an overview of the four algorithms implemented as part of this thesis: a simple, one-layer neural network described by Qian and Sejnowski [1]; an extension of that implementation incorporating multiple sequence alignments by majority voting; a cascaded neural network utilizing a profile table created from multiple sequence alignment data, described by Rost and Sander [2]; and a convolutional neural network learning from the position-specific scoring matrices of proteins, described by Liu and Cheng [3]. We conducted experiments to optimize the performance of our models and tested the optimal networks on previously unseen data. In each case, we obtained results comparable to those presented in the papers our implementations are based on. Taken together, the experiments suggest that our models are robust, and we believe they will generalize well to data not featured in this thesis.


ACKNOWLEDGEMENTS

First and foremost, we want to thank our supervisor, Christian Nørgaard Storm Pedersen, for encouraging us to freely explore and experiment within our project. We appreciate all the valuable advice, time and work he devoted to this thesis, and the faith he put in the quality of our work - we hope we could live up to it. We want to express our gratitude to our close friends, Emil Malta-Müller and Tine Sneibjerg Ebsen, a.k.a. the Knights of 420, for seeing us at our most annoyingly frustrated states and still wanting to hang out with us. They never failed to lift our spirits, and we appreciate every single insider joke, deep conversation and cup of coffee we shared. Last but not least, we would like to acknowledge our respective families in Pakistan and Hungary for always supporting us, believing in us and being there for us.


CONTENTS

Part I: Theoretical framework
1 Introduction
  1.1 Proteins
      Primary structure · Secondary structure · Tertiary structure · Quaternary structure
  1.2 Objective
2 Experimental protein secondary structure determination
  2.1 Spectroscopic methods
      Circular dichroism spectroscopy · Fourier transform infrared (FT-IR) spectroscopy · Raman spectroscopy · NMR spectroscopy
  2.2 X-ray crystallography
3 Machine learning approaches to predict protein secondary structure
  3.1 Support Vector Machines
  3.2 Hidden Markov Models
  3.3 Neural networks
      The biological neuron · The artificial neuron · Convolutional neural networks · Training a neural network · Commonly used PSSP tools utilizing neural networks
4 Tools and methods
  4.1 Simple neural network (jnn)
  4.2 Using multiple sequence alignments (jsnn)
  4.3 Cascaded neural network (mnn)
  4.4 Convolutional neural network (snn)

Part II: Practical experiments
5 fnn - our command line tool for secondary structure prediction
  5.1 Dataset
  5.2 Framework used
  5.3 Implemented algorithms
      Parsing · Encoding · Neural networks
  5.4 User manual
      System specifications · Getting started · Structure prediction
6 Experiments
  6.1 Workbench
      Preliminaries: Batch size · Epochs · Dropout regularization · L1/L2 regularization · Q3 score
      Experiments: JNN · JSNN · MNN · SNN
7 Conclusion and outlook

Part III: Appendix
A Appendix
  A.1 Derivation of the backpropagation algorithm
  A.2 Supplementary material

Bibliography

LIST OF FIGURES

Figure 1: Structure of an amino acid
Figure 2: Levels of protein structure. [8]
Figure 3: The schematic diagram of SVMs as given by [22]. (a) shows the linearly separable and (b) the non-linearly separable case
Figure 4: The schematic diagram of HMMs, where x_i are observables, z_i are hidden variables, A_i are transition probabilities and φ_i are emission probabilities
Figure 5: Structure of a biological neuron [33]
Figure 6: Structure of a single artificial neuron
Figure 7: Topology of a fully connected feed-forward neural network [33]
Figure 8: Computing output values of a convolutional layer
Figure 9: Max pooling on the output obtained from some convolutional layer. Different patches are represented by a different color
Figure 10: Part of the neural network considered for the derivation of backpropagation. See section A.1 in the appendix for the whole derivation
Figure 11: The outline of the PHD method, as given by Rost and Sander in [36]
Figure 12: The outline of the PSIPRED method, as given by Jones in [37]
Figure 13: The outline of the JPred method, as given by Cuff and Barton in [11]
Figure 14: The neural network architecture described by Qian and Sejnowski [1]
Figure 15: Majority voting on a multiple sequence alignment
Figure 16: Sequence-to-structure neural network architecture described by Rost and Sander [2]. Structure-to-structure neural network is not shown
Figure 17: The convolutional neural network described by Liu and Cheng in [3]
Figure 18: The process of encoding target sequences before presenting them to the neural networks
Figure 19: Steps followed to find the optimal set of hyperparameters for jnn and jsnn
Figure 20: Validation accuracies observed using five different batch sizes and twenty different numbers of nodes in the hidden layer of jnn. Window sizes 13, 17 and 21 were considered
Figure 21: Validation accuracies obtained after doing local search around batch size of 100 in jnn
Figure 22: Accuracy and loss plot for jnn. The neural network was trained without regularization
Figure 23: Accuracy and loss plots of jnn with L1 regularizer and Adam optimization
Figure 24: Validation and training accuracies obtained by iteratively increasing the number of hidden layers in jnn
Figure 25: Final cross-validation accuracies of jnn
Figure 26: Validation accuracy and loss obtained by jnn using the TSP1607 dataset
Figure 27: Validation accuracies observed using five different batch sizes in jsnn. Window sizes of 13, 17 and 21 were considered
Figure 28: Validation accuracy using different batch sizes for window sizes of 17 and 21 in jsnn
Figure 29: Validation and loss plots for jsnn using L2 regularizer with a dropout layer and Adam optimizer
Figure 30: Validation accuracy and test accuracy obtained by iteratively increasing the number of hidden layers in jsnn
Figure 31: Validation accuracy and test accuracy observed after performing K-fold cross-validation on jsnn
Figure 32: Steps followed to find the optimal set of hyperparameters for mnn
Figure 33: Validation accuracy using different numbers of nodes in the first neural network of mnn
Figure 34: Validation accuracy using different numbers of nodes in the second neural network of mnn
Figure 35: Validation accuracy using different batch sizes in mnn
Figure 36: Accuracy and loss plots of mnn using 10 and 100 nodes in the first and second neural network and a batch size of
Figure 37: Accuracy and loss plots of mnn using Adam optimizer and one dropout layer in each network with a dropout rate of
Figure 38: Training and validation accuracy using different numbers of hidden layers in mnn
Figure 39: Validation accuracy of mnn using different numbers of folds in K-fold cross-validation
Figure 40: Steps followed to find the optimal set of hyperparameters for snn
Figure 41: Accuracy and loss plots of snn without regularization
Figure 42: Accuracy and loss plots of snn with L2 norm regularization and two dropout layers (one in each convolutional layer)
Figure 43: Accuracy and loss plots of snn with L1 norm regularization
Figure 44: Mean validation accuracy for batch sizes 100, 500 and 1000 in snn
Figure 45: Mean validation accuracy for number of filters in the first convolutional layer of snn
Figure 46: Mean validation accuracy for number of filters in the second convolutional layer of snn
Figure 47: Accuracy and loss plots for snn obtained using 96 and 10 filters in the first and second convolutional layers, respectively, and a batch size of
Figure 48: Accuracy and loss plots of snn obtained using Adam and 5*5 and 2*2 filters in the first and second convolutional layers
Figure 49: Validation accuracy using different numbers of folds in K-fold cross-validation in snn
Figure 50: Part of the neural network considered in order to do the derivation for backpropagation
Figure 51: Left: Training and validation accuracies of jnn using L2 regularizer with Adam optimizer. Right: Training and validation losses. Batch size used is 50 and number of nodes is
Figure 52: Left: Training and validation accuracies of jnn using L2 regularizer with Adam optimizer. Right: Training and validation losses. Batch size used is 60 and number of nodes is
Figure 53: Left: Training and validation accuracies of jnn using L2 regularizer with Adam optimizer. Right: Training and validation losses. Batch size used is 60 and number of nodes is
Figure 54: Left: Training and validation accuracies of jnn using L2 regularizer with Adam optimizer. Right: Training and validation losses. Batch size used is 50 and number of nodes is
Figure 55: Left: Training and validation accuracies of jnn using L2 regularizer with Adam optimizer. Right: Training and validation losses. Batch size used is 60 and number of nodes is
Figure 56: Left: Training and validation accuracies of jnn using L2 regularizer with Adam optimizer. Right: Training and validation losses. Batch size used is 60 and number of nodes is
Figure 57: Left: Training and validation accuracies of jnn using L1 regularizer with Adam optimizer. Right: Training and validation losses. Batch size used is 50 and number of nodes is
Figure 58: Left: Training and validation accuracies of jnn using L1 regularizer with Adam optimizer. Right: Training and validation losses. Batch size used is 60 and number of nodes is
Figure 59: Left: Training and validation accuracies of jsnn using L1 regularizer with Adam optimizer. Right: Training and validation losses. Batch size, number of nodes and window size are 100, 90 and 21, respectively
Figure 60: Left: Training and validation accuracies of jsnn using L1 regularizer with Adam optimizer. Right: Training and validation losses. Batch size, number of nodes and window size are 100, 180 and 21, respectively
Figure 61: Left: Training and validation accuracies of jsnn using L1 regularizer with Adam optimizer. Right: Training and validation losses. Batch size, number of nodes and window size are 100, 90 and 21, respectively
Figure 62: Accuracy and loss plots obtained using 20 and 10 nodes in the first and second convolutional layers, respectively, and a batch size of 500 in snn
Figure 63: Accuracy and loss plots of snn obtained using Adam and 5*5 filters in the first and second convolutional layers
Figure 64: Accuracy and loss plots of snn obtained using Adam and 10*10 and 5*5 filters in the first and second convolutional layers

LIST OF TABLES

Table 1: Mean validation accuracies for window sizes 13, 17 and 21 after experimenting with the number of nodes and batch size in jnn
Table 2: Hyperparameters chosen to find the optimal regularization method and optimizer
Table 3: Hyperparameters chosen to investigate the effect of the number of hidden layers on jnn
Table 4: Final architecture of jnn
Table 5: Q3 accuracy of jnn (trained on the CB513 dataset) for each test sequence
Table 6: Q3 accuracy of jnn (trained on the TSP1607 dataset) for each test sequence
Table 7: Mean validation accuracies for window sizes 13, 17 and 21 after experimenting with the number of nodes and batch size in jsnn
Table 8: Hyperparameters chosen to find the optimal regularization method and optimizer for jsnn
Table 9: Final architecture of jsnn
Table 10: Q3 accuracy of jsnn (trained by virtue of majority voting) for each test sequence
Table 11: Q3 accuracy of jnn for each test sequence
Table 12: Mean validation accuracies for window sizes 7, 13, 17 and 21 after experimenting with the number of nodes and batch size in mnn. Validation accuracy 1 indicates the results obtained with only the first neural network and validation accuracy 2 indicates the total accuracy obtained from the entire cascaded system
Table 13: The effect of the choice of regularizer and optimizer on the validation accuracy in mnn
Table 14: Final hyperparameters chosen to be used in mnn
Table 15: Q3 accuracy of mnn for each test sequence
Table 16: Mean validation accuracy for window sizes 13, 17 and 21 after regularization experiments using snn
Table 17: The effect of different regularization methods on the accuracy of snn
Table 18: Mean validation accuracy for window sizes 13, 17 and 21 after filter number and batch size experiments in snn
Table 19: Validation accuracy using different filter sizes and optimizers in snn
Table 20: Final hyperparameters chosen for snn
Table 21: Results obtained by testing snn on unseen data
Table 22: Training and validation accuracies obtained after performing regularization and optimizer experiments on jnn
Table 23: Validation accuracies obtained by performing regularization and optimizer experiments on jsnn
Table 24: The effect of regularization on the accuracy of snn, for window sizes 13, 17 and 21
Table 25: The effect of the number of nodes in the convolutional layers and the batch size on the accuracy of snn, for window size

ACRONYMS

CD     Circular Dichroism
NMR    Nuclear Magnetic Resonance
FT-IR  Fourier Transform Infrared
SVM    Support Vector Machines
ANN    Artificial Neural Network
PSSM   Position-Specific Scoring Matrix
CNN    Convolutional Neural Network
PSSP   Protein Secondary Structure Prediction
HMM    Hidden Markov Model
ML     Machine Learning
NN     Neural Network


Part I: THEORETICAL FRAMEWORK


1 INTRODUCTION

In this chapter we give an overview of proteins and the four levels of protein structure required to understand the reported work. We also explain the main objective (in section 1.2) behind implementing neural network models for protein secondary structure prediction (PSSP).

1.1 proteins

Proteins are macromolecules that play a pivotal role in various processes in all organisms. Proteins serve as catalysts that can be part of a large complex or temporarily associate with a cofactor, thus either accelerating or inhibiting chemical processes in living organisms. They are also responsible for various other functions, such as the transport of other molecules, immunity, cell growth and differentiation, mechanical support, and storage. All of these functions are dictated by the three-dimensional (3D) structure of a protein, which in turn is determined by the linear sequence of amino acids. The process of going from a linear sequence of amino acids to a 3D protein structure is referred to as protein folding [4]. Protein folding is an important biological process which, if it goes wrong, can lead to neurological diseases such as Alzheimer's disease, Parkinson's disease and Huntington's disease, as well as several cancer-related diseases. Therefore, understanding the process of protein folding and elucidating a protein's structure is important in order to understand its function, which in turn provides useful insights for various medical and pharmaceutical applications. Although the amino acid sequence drives the process of protein folding, there is an intermediate step - the formation of helices, β-sheets and coil regions - that also contributes to the final 3D protein structure [4].

Primary structure

Proteins are linear polymer chains composed of 20 naturally occurring amino acids. Each amino acid consists of a carboxyl group, an amino group, a hydrogen atom and an R-side chain attached to a central carbon atom called the α-carbon (shown in Figure 1). It is this side chain that distinguishes one amino acid from another and determines whether it will be hydrophilic, hydrophobic or neutral [5].

Figure 1: Structure of an amino acid

Amino acids can form a peptide bond with each other, whereby the carboxyl group of one amino acid forms a bond with the amino group of another. Peptide bonds hold amino acids in a linear polypeptide chain, which is known as the primary structure of a protein [5].

Secondary structure

Polypeptide chains have various segments that are either coiled or folded, stabilized by hydrogen bonds between atoms of the polypeptide backbone. These coiled and folded segments are referred to as the secondary structure of a protein. The two major folds in a protein are α-helices and β-sheets [6]. α-helices are rigid, rod-like coiled strands held together by hydrogen bonds between every fourth amino acid, whereas β-sheets are strands arranged side by side and held together by hydrogen bonds. β-sheets can be either parallel or anti-parallel, depending on whether the directions of the strands are the same or opposite. Anti-parallel sheets have better-aligned hydrogen bonds, making them more stable than parallel sheets [6].

Tertiary structure

The three-dimensional structure of a protein is referred to as its tertiary structure, which is formed when a protein molecule twists, folds and bends in a certain way. Protein folding minimizes the energy of the structure, thus maximizing its structural stability. Unlike secondary structure, tertiary structure is formed by various interactions between the R-side chains of the amino acids. Hydrophobic interactions ensure that polypeptide chains fold into the correct shape, such that the non-polar amino acids cluster together to form a nonpolar protein core away from the aqueous environment. Weak van der Waals forces act on this hydrophobic core to further stabilize the protein. Polar amino acids interacting with each other via hydrogen

bonds and ionic interactions also contribute to the structural stability of a protein. Individually, each of these interactions is weak, but their cumulative effect is enough to give the protein a unique 3D structure. Lastly, there are covalent bonds formed between two cysteine residues which further stabilize the structure; these covalent bonds are known as disulfide bridges [7].

Quaternary structure

Quaternary structure is the result of several proteins interacting with each other and arranging themselves to form a protein complex. This protein complex is stabilized by various interactions: hydrogen bonding, disulfide bridges and salt bridges. The four levels of protein structure are shown in Figure 2.

Figure 2: Levels of protein structure. [8]

1.2 objective

A protein's function is determined by its 3D structure, which in turn is dictated by its amino acid sequence. Therefore, elucidating protein structure is important to understand protein function. Experimental approaches such as X-ray crystallography and nuclear magnetic resonance spectroscopy have played a major role in determining protein structures [9]. There are 40,000 proteins in the Protein Data Bank whose structure has been determined experimentally [9] [10]. These structures have provided useful insights into how protein chains fold into their unique 3D structure, how chains interact to form complexes, and how to use the amino acid sequence of a protein to predict its structure. However, due to current high-throughput DNA and protein sequencing technologies, the number of proteins with known sequences has increased exponentially, and structures are not being resolved at the same pace. It has been estimated that only 40,000 out of 2.5 million known sequences have resolved structures. This gap keeps increasing because experimental approaches are not only expensive but also time-consuming, labor-intensive and at times impossible to apply. Therefore, computational approaches are required to narrow the gap between known sequences and solved structures [9].

In our thesis, we focused on one of these computational approaches, Artificial Neural Networks (ANNs), to decipher a protein's secondary structure from its primary sequence. We implemented classic neural network algorithms that predict helices, β-sheets and coil regions in a protein given its primary sequence. Although all of the models are well documented in the literature, they are largely unexplored in practice in terms of their validation accuracy and Q3 score when trained and tested on a different dataset. Our thesis gives a working implementation of the algorithms in addition to the theoretical background. We use two different datasets (the CB513 dataset [11] and a combination of the TMP166 and SP1441 datasets [12]) and examine whether we obtain results similar to those reported by the authors. We also compare the different implementations based on their accuracies.

The thesis is divided into two parts: theory and experiments. Chapter 2 focuses on experimental approaches used to determine protein structure. From there, in Chapter 3 we delve into the details of machine learning approaches used to study the challenging problem of protein secondary structure prediction, including ANNs. Following these, Chapter 4 focuses on the different neural network models that we implemented. The second part of the thesis focuses solely on how the models worked in practice. Chapter 5 presents the datasets and framework used in our project. In this chapter we also explain how you can use our command line tool to predict the structure of any protein using the models we implemented. In Chapter 6 we explain the experiments

carried out to find the optimal set of hyperparameters for the different neural network models. We compare our results to those reported by the authors and discuss how similar our hyperparameters are to those used in the original publications. We end our thesis by stating our conclusions and possible future work in Chapter 7.


2 EXPERIMENTAL PROTEIN SECONDARY STRUCTURE DETERMINATION

In this chapter we give an overview of several analytical techniques used in experimental protein secondary structure determination - spectroscopic methods, including circular dichroism, Fourier transform infrared, Raman and NMR spectroscopy, and X-ray crystallography.

2.1 spectroscopic methods

Circular dichroism spectroscopy

Circular dichroism spectroscopy is a method that enables the quick determination of protein secondary structure. It is based on the concept of circular dichroism, defined as the differential absorption of left and right circularly polarized light. In proteins, the intensity and the wavelength of the optical transition depend on the orientation of the peptide bonds; therefore, many secondary structure motifs give rise to characteristic CD spectra, allowing the structure of unknown proteins to be estimated. The CD of the protein molecule is measured over a range of wavelengths, and secondary structure information is obtained from the resulting CD spectra under the assumption that the spectrum of a protein molecule is given by a linear combination of the spectra of its secondary structure elements plus a noise term. For the analysis, one can use polypeptide standards with defined compositions in known conformations, or proteins whose secondary structures have been determined with X-ray crystallography. A variety of methods exist to compare the reference spectra to the spectrum of the unknown protein, including linear regression, which fits the spectrum of the unknown protein to the spectra of fixed standards, and neural networks trained on the CD spectra of reference sequences to predict the secondary structure of unknown proteins. [13]

Fourier transform infrared (FT-IR) spectroscopy

Fourier transform infrared spectroscopy is a type of vibrational spectroscopy that uses a beam combining many frequencies of light to obtain the interferogram, the raw light absorption data. This raw data is then turned into a spectrum using the Fourier transform. [14]

The strength and polarity of the vibrating bonds of the molecules influence the wavelength and probability of absorption; hence, the spectrum is influenced by the conformation. In proteins, there are nine characteristic IR absorption bands - amides A, B and I-VII. The amide I band is related to the backbone conformation; it is therefore quite sensitive to the secondary structure composition and has been widely used to determine protein secondary structures. [15]

Raman spectroscopy

Raman spectroscopy, similarly to FT-IR, is a vibrational spectroscopy method; however, it differs in the manner in which the vibrational state of the molecule is changed, thus providing complementary information. [16] It uses a monochromatic beam of radiation, normally a laser, to illuminate the sample and measures the molecular vibrations triggered by the inelastic scattering of light, resulting in characteristic spectra. [14] [17] In practice, primarily the amide I and amide III bands are used to determine protein secondary structures. [18] The spectra are analyzed as a linear combination of the spectra of proteins with known structures. [19]

NMR spectroscopy

Nuclei in a magnetic field absorb and re-emit electromagnetic radiation, a phenomenon that serves as the basis of nuclear magnetic resonance spectroscopy. The nuclear chemical shift observed in NMR spectra is a reliable indicator of biomolecular structure. [20] This method can provide detailed structural information at atomic resolution. The link between chemical shifts and secondary structure elements is not fully understood; however, a library of NMR chemical shift data exists for peptides and small proteins that can be used to infer the structure of larger sequences. [18]

2.2 x-ray crystallography

X-ray crystallography enables the study of protein structure at near-atomic resolution. The proteins have to be arranged into three-dimensional periodic arrays, also known as crystals, to amplify the scattering signal; otherwise no meaningful X-ray diffraction data could be obtained, as scattering from individual molecules is rather weak. Crystallization is a time-consuming process influenced by many factors such as protein concentration and purity, temperature and ionic strength.

X-ray diffraction can be used to determine atomic structures because X-rays have wavelengths comparable to atomic bond distances. As the waves encounter the molecules, they bend and interfere, and the intensities of these diffracted waves can be analyzed using diffraction theory to reconstruct the atomic structures of the molecules. [21]
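As a point of reference (this relation is not stated in the thesis, but is the standard description of constructive interference in diffraction theory), the condition under which the scattered X-rays reinforce each other is given by Bragg's law:

\[ n\lambda = 2d\sin\theta \]

where \(\lambda\) is the X-ray wavelength, \(d\) is the spacing between lattice planes in the crystal, \(\theta\) is the scattering angle and \(n\) is a positive integer.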


3 MACHINE LEARNING APPROACHES TO PREDICT PROTEIN SECONDARY STRUCTURE

In this chapter we give an overview of several machine learning techniques used to predict protein secondary structure, including Support Vector Machines, Hidden Markov Models and Neural Networks. In the section covering neural networks we also describe PHD, PSIPRED and JPred, three protein secondary structure prediction tools utilizing this machine learning technique.

3.1 support vector machines

When presented with a linearly separable problem, Support Vector Machines aim to find a separating hyperplane with maximum margin, meaning that the method attempts to create a hyperplane such that the distance between the hyperplane and the closest training samples is maximized. SVMs can also be used to find the boundary between classes in datasets that are not linearly separable. This is achieved using a kernel function, which maps the input data into a higher-dimensional feature space where it may indeed be linearly separable. The decision boundary found in the higher-dimensional space is then projected back to the original space, giving a nonlinear decision boundary. A schematic diagram of SVMs can be seen in Figure 3.

The task of protein secondary structure prediction is high-dimensional and nonlinear. Due to SVMs' ability to implicitly map nonlinear data into a high-dimensional space using kernel functions, this method is well suited to the PSSP problem, as the complexity of the problem continues to depend on the dimensionality of the input data and not on that of the feature space. This kernel trick helps to avoid the "curse of dimensionality", the problem of overfitting when the number of parameters is too large with respect to the number of training samples. [22]

A variety of methods exist to implement SVMs in the context of protein secondary structure prediction. The frequency patterns of consecutive amino acids, or of amino acid groups with common properties, can be used as the input vector, as introduced by Birzele and Kramer in [23]. In this case, only the patterns exceeding a frequency threshold are considered to ensure good predictivity. Karypis' method, discussed in [24], is based on an input coding scheme combining both position-specific and non-position-specific information, generated by PSI-BLAST and BLOSUM62, and utilizes a kernel function designed to capture sequence conservation signals around the local window of each residue.
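As a minimal illustration of this idea (not part of the thesis; the window length, random data and labels are assumptions made purely for the example), a kernel SVM can be trained on flattened, one-hot encoded sequence windows to classify the central residue into the three secondary structure states. scikit-learn's SVC handles the multi-class case by combining binary classifiers internally.

```python
import numpy as np
from sklearn.svm import SVC

# Hypothetical toy data: 200 windows of 13 residues, one-hot encoded into
# 13 * 20 binary features; the label is the secondary structure state of
# the central residue (H = helix, E = strand, C = coil).
rng = np.random.RandomState(0)
X = rng.randint(0, 2, size=(200, 13 * 20)).astype(float)
y = rng.choice(["H", "E", "C"], size=200)

# The RBF kernel implicitly maps the windows into a higher-dimensional
# feature space, where a maximum-margin separating hyperplane is sought.
clf = SVC(kernel="rbf", C=1.0, gamma="scale")
clf.fit(X, y)
print(clf.predict(X[:5]))
```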

Figure 3: The schematic diagram of SVMs as given by [22]. (a) shows the linearly separable and (b) the non-linearly separable case.

Zamani and Kremer [25] propose the usage of amino acid codon encoding, which incorporates evolutionary information into the prediction model, since the encoding is based on the genetic code. SVMs were originally designed to perform binary classification - how to extend them effectively to handle multiclass classification is still an ongoing research issue. [26]

3.2 hidden markov models

Hidden Markov Models are probabilistic graphical models that can be represented as directed acyclic graphs reflecting a series of probabilistic dependency relationships among variables. In an HMM, the states cannot be observed directly - they are latent - however, they each emit an observation. The nth observable in a chain of observations depends only on the corresponding hidden variable, and the nth hidden variable depends only on the (n-1)st hidden variable, as shown in Figure 4. The parameters governing the model are π, A and φ: the initial, transition and emission probabilities, respectively. The initial probability π_k expresses the probability of state k being the initial state, the transition probability A_jk gives the probability of going from state j to state k, and the emission probability φ_kn describes the probability of emitting observable k from latent state n. [27]

In the context of protein secondary structure prediction, the secondary structure of a given residue is the hidden variable and the amino acids are the observables. The first attempt to use HMMs for protein secondary structure prediction was due to Asai et al. [28] In their approach, four submodels are trained separately, representing the four secondary structures: helix, sheet, turn and other. At the end, the four submodels are combined to give a network suitable for practical use. Other methods have been proposed to add more biologically relevant details, such as the solvent accessibility status, the length distribution of the secondary structure segments [29], the distinction between N-cap and C-cap positions, or the explicit modeling of amphipathic helices and β-turns [30]; however, good results have also been obtained with models that do not take prior biological knowledge into account. [31]
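To make the dependency structure explicit (a standard result for HMMs, not an equation taken from the thesis), the joint probability of a chain of N observables $x_{1:N}$ and hidden states $z_{1:N}$ factorizes according to the parameters described above:

\[ p(x_{1:N}, z_{1:N}) = p(z_1 \mid \pi) \prod_{n=2}^{N} p(z_n \mid z_{n-1}, A) \prod_{n=1}^{N} p(x_n \mid z_n, \phi) \]

In the PSSP setting, each $z_n$ is the secondary structure state of residue n and each $x_n$ is the observed amino acid at that position.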

Figure 4: The schematic diagram of HMMs, where x_i are observables, z_i are hidden variables, A_i are transition probabilities and φ_i are emission probabilities.

3.3 neural networks

Neural networks are non-linear hypothesis sets that are widely used in Machine Learning (ML). The topology of the present-day ANN was developed by simulating a network of biological neurons.

The biological neuron

A neuron (shown in Figure 5) consists of three basic units: dendrites, soma and axon. Dendrites are input wires arising from the main cell body, the soma, receiving signals from other neurons. The axon is an output wire that sends signals to other neurons. Neurons communicate with each other by sending electrical impulses via their axons. The axon's terminal makes contact with a dendrite through a synapse. A neuron receiving these pulses of electricity adjusts its cell potential and fires an electrical impulse if the cell potential exceeds a certain threshold. The incoming signals from each synapse are categorized as either excitatory or inhibitory. The neuron assigns a "positive weight" to an excitatory signal and a "negative weight" to an inhibitory signal. The excitatory signal must exceed the inhibitory signal by a certain threshold within a certain amount of time, which then causes the neuron to fire electrical impulses [32].

The artificial neuron

The artificial neuron is a very simple model of what a biological neuron does (shown in Figure 6). An artificial neuron receives an input x_1, x_2, x_3, ..., x_N from the input layer, where each input has some

Figure 5: Structure of a biological neuron [33]

weight associated with it. Therefore, a neuron/node receives a weighted sum of the inputs:

\[ \sum_{i=1}^{N} x_i w_i \]

This weighted sum of the inputs is passed through an activation function, such as the sigmoid, ReLU or tanh, which then outputs some value. It is the activation function that introduces non-linearity into the model.

Figure 6: Structure of a single artificial neuron
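A minimal sketch of a single artificial neuron (illustrative only; the input values, weights and the choice of a sigmoid activation are assumptions for the example, and a bias term is included for completeness):

```python
import numpy as np

def neuron(x, w, b=0.0):
    """A single artificial neuron: the weighted sum of the inputs is
    passed through a sigmoid activation function."""
    a = np.dot(w, x) + b              # sum_i x_i * w_i (+ bias)
    return 1.0 / (1.0 + np.exp(-a))   # sigmoid activation

x = np.array([0.2, 0.8, 0.5])   # inputs x_1 ... x_N
w = np.array([0.4, -0.6, 0.9])  # weight associated with each input
print(neuron(x, w))             # a single output value in (0, 1)
```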

A neural network is simply a group of these neurons stacked together. The most common neural network is the fully connected feed-forward neural network shown in Figure 7, in which the first layer is the input layer, the last is the output layer and all other layers are termed hidden layers. Each layer can have a different number of nodes, and a given layer is fully connected to its adjacent layers, making the neural network an acyclic graph. The output of a given layer serves as the input to the next [33].

Figure 7: Topology of a fully connected feed-forward neural network [33]

Convolutional neural networks

Convolutional neural networks (CNNs) have been very successful in the field of image recognition and classification. They are feed-forward neural networks inspired by the organization of the visual cortex. Mathematically, convolution measures how much two functions overlap as one function is passed over the other; it can be thought of as multiplying two functions (or matrices, in the case of a NN) in order to mix them. The convolutional layer has small matrices called kernels, also known as filters, that slide across the input matrix [34]. This layer involves element-wise multiplication of the values in the filter and the input matrix, as shown in Figure 8.

Figure 8: Computing output values of a convolutional layer
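A small NumPy sketch of the operations illustrated in Figures 8 and 9 (the input values, filter and sizes are made up for the example; real layers use many filters and learn their values during training):

```python
import numpy as np

def conv2d(inp, kernel):
    """Slide the filter over the input with stride 1 and no padding;
    each output value is the sum of the element-wise product of the
    filter and the local receptive field it covers."""
    kh, kw = kernel.shape
    h = inp.shape[0] - kh + 1
    w = inp.shape[1] - kw + 1
    out = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(inp[i:i+kh, j:j+kw] * kernel)
    return out

def max_pool(inp, size=2):
    """Keep only the maximum value of each non-overlapping size x size
    patch, shrinking the feature map."""
    h, w = inp.shape[0] // size, inp.shape[1] // size
    out = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = inp[i*size:(i+1)*size, j*size:(j+1)*size].max()
    return out

x = np.arange(36, dtype=float).reshape(6, 6)   # toy 6x6 input matrix
k = np.array([[1., 0., -1.],
              [1., 0., -1.],
              [1., 0., -1.]])                  # toy 3x3 filter
feature_map = conv2d(x, k)                     # 4x4 feature map
pooled = max_pool(feature_map, 2)              # 2x2 after max pooling
```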

For each filter, one takes a local receptive field of the input matrix, applies element-wise multiplication, moves the filter by a stride and repeats the process over the entire input matrix. Padding can be applied around the input matrix to ensure that the output of the convolutional layer does not get smaller [34]. The next layer in a CNN is the pooling layer, which reduces the size of the input matrix and therefore reduces the total number of computations required to train the NN. In Figure 9, max pooling is applied to a matrix: it simply takes the maximum value from a patch of the matrix and stores it in a new matrix, while the rest of the values in the patch are discarded. No learning takes place in a pooling layer; it only identifies and stores the locations that have the strongest correlation with a given feature [34].

Figure 9: Max pooling on the output obtained from some convolutional layer. Different patches are represented by a different color.

After the convolutional layer(s), a CNN can have one or more fully connected layers that are similar to the layers in a regular multi-layered neural network [34].

Training a neural network

Like any supervised learning model, a NN is implemented such that it approximates the unknown target function h(x) well by making the objective function as small as possible:

\[ E = \| y - \hat{y} \|^2 \qquad (1) \]

The weights in a NN are responsible for amplifying the input signal and dampening the noise. A large weight signifies a higher correlation between the signal and the NN's output [33]; therefore, a NN has to learn these weights such that the objective function is minimized. Following the procedure of other learning algorithms, one has to take the derivative of the objective function with respect to the parameters to be optimized, the weights in this case. This derivative could be set to zero if a closed-form solution exists, or one can perform gradient descent, in which the objective function is iteratively decreased until convergence. A NN is trained using backpropagation, which is a way of taking the derivative of the objective function by applying the chain rule. The derivation in section A.1 shows that the

Figure 10: Part of the neural network considered for the derivation of backpropagation. See section A.1 in the appendix for the whole derivation.

basis of backpropagation is to find the error of each node in some layer i, which is simply the weighted sum of the errors of the following layer:

\[ \delta_i = \sigma'(a_i) \sum_j \delta_j u_{ji} \qquad (2) \]

where σ is the activation function (σ' its derivative), a_i is the weighted sum of inputs from the previous layer and u_ji is the weight connecting a node in layer i to a node in layer j. The error of the last layer (k) is simply the derivative of the objective function:

\[ \delta_k = \frac{\partial E}{\partial \hat{y}} = \frac{\partial}{\partial \hat{y}} \| y - \hat{y} \|^2 = -2(y - \hat{y}) \qquad (3) \]

Equation (2) shows that the error of some layer n depends on layer n + 1. Backpropagation works as follows: one initializes the weights randomly, because if the weights were all the same, the nodes in the NN would follow the same gradient and thus become identical. The network is given the first datapoint; each node in a layer receives a weighted signal from the previous layer and, if this signal exceeds some threshold, that node is activated. This way the signal is propagated forward, which allows the network to produce an output in the last layer. That output can be used to compute the δ (the derivative of the objective function) of the last layer, which can be used to compute the δ of the previous layer, and so on. Once one has these δs, the derivative with respect to every weight can be taken:

\[ \frac{\partial E}{\partial u_{il}} = \delta_i z_l \qquad (4) \]
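A compact NumPy sketch of equations (2)-(4) for a network with one hidden layer (illustrative only: the shapes and data are made up, the hidden activation is assumed to be the sigmoid and the output layer is assumed to be linear so that the output error matches equation (3) exactly):

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

rng = np.random.RandomState(0)
x  = rng.rand(3)              # input to the network (z_l of the first layer)
y  = np.array([1.0, 0.0])     # target output
W1 = 0.1 * rng.randn(4, 3)    # weights into the hidden layer
W2 = 0.1 * rng.randn(2, 4)    # weights into the (linear) output layer, u_ji

# Forward pass: each node receives the weighted sum of its inputs (a)
# and passes it through its activation function.
a1 = W1 @ x
z1 = sigmoid(a1)
y_hat = W2 @ z1               # linear output layer

# Backward pass:
delta_out = -2.0 * (y - y_hat)                        # eq. (3): error of the last layer
delta_hid = z1 * (1 - z1) * (W2.T @ delta_out)        # eq. (2): sigma'(a_i) * sum_j delta_j u_ji
grad_W2 = np.outer(delta_out, z1)                     # eq. (4): dE/du_il = delta_i * z_l
grad_W1 = np.outer(delta_hid, x)
```

These gradients are then plugged into the weight update of equation (5) below.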

Having the derivatives with respect to all of the weights, gradient descent can be performed, in which the weights are updated:

\[ u_{il} = u_{il} - \mathrm{lr} \cdot \frac{\partial E}{\partial u_{il}} \qquad (5) \]

where lr is the learning rate. This process is repeated until convergence.

Commonly used PSSP tools utilizing neural networks

In this section we describe three protein secondary structure prediction tools utilizing neural networks - PHD, PSIPRED and JPred.

PHD

PHD is an automatic server presented by Rost, Sander and Schneider in [35]. The algorithm it is based on is described in detail in [36]. The method incorporates evolutionary information coming from multiple sequence alignments, which serve as inputs to a system of networks consisting of 3 levels. The first level is a sequence-to-structure net, using sliding windows of 13 residues and classifying each central residue into the 3 secondary structure classes (helix, strand and loop). The second level is a structure-to-structure net, taking into account the correlations between consecutive patterns. In this network, a window of 17 basic cells is used, each cell corresponding to the 3 output values of the secondary structure prediction for the central residue. In both of these networks, the target output is the secondary structure of the central residue. 2*2 different architectures were created and trained independently in an effort to reduce noise and improve accuracy, one quartet trained with a real coding of sequences and the other by adding conservation weights introducing additional evolutionary information. As a third level, a jury decision step is implemented, which takes the arithmetic averages for alpha, beta and loop based on the outputs coming from the different networks. An outline of the network system is shown in Figure 11. According to the authors, the method achieves a three-state accuracy of 70.8%. The original server tool is inaccessible: we attempted to submit a protein for prediction to the address given in [36] and received an error message stating that the submission was undeliverable.

PSIPRED

PSIPRED [37] is a cascaded system of two neural networks, used to predict protein secondary structure based on the position-specific scoring matrices generated by PSI-BLAST. First, a sequence profile is created using the profiles PSI-BLAST generates as an intermediate step during its search process.

Figure 11: The outline of the PHD method, as given by Rost and Sander in [36]

As PSI-BLAST is sensitive to biases in the sequence data banks, a custom sequence data bank was created for PSIPRED. The final PSSM, which is a 20*M matrix where M is the length of the target sequence, is used as input to the neural network. The first neural network uses a standard feed-forward architecture with one hidden layer and a window of 15 amino acid residues. The final input layer is made up of 15 groups of 21 units, where the extra unit per amino acid is used to indicate where the window spans either the N or C terminus of the protein. The hidden layer contains 75 units and the output layer includes three units, corresponding to the three secondary structure elements - helix, strand or coil. The second neural network also uses a feed-forward architecture with a window size of 15, and it is used to filter the outputs of the main network. This network comprises 15 groups of 4 input units, where 3 input units correspond to the secondary structure element and the 4th is used to indicate where the window spans an N or C terminus, similarly to the first network. The hidden layer is made up of 60 units. An outline of the method is shown in Figure 12. In order to train the network, an on-line backpropagation training procedure is used, meaning that the weights are updated each time the network is presented with a pattern. Using a new testing set and three-way cross-validation, the author claims to have achieved an average Q3 score of 75.5% to 78.3%.

JPred4

JPred4 [38] is a secondary structure prediction server providing predictions using the JNet algorithm, presented in [11]. The PSSP

Figure 12: The outline of the PSIPRED method, as given by Jones in [37].

algorithm is trained with different types of multiple sequence alignment profiles originating from the same sequences, including a PSI-BLAST profile (PSSM), a Multiple Sequence Alignment, an HMMer2 GCG profile and PSI-BLAST profile frequencies. The outline of the method is shown in Figure 13. JNet uses a network ensemble consisting of two artificial neural networks. The first one utilizes a sliding window of 17 residues over each amino acid in the multiple sequence alignment, with the addition of a conservation number. The network comprises 9 nodes in the hidden layer and 3 nodes in the output layer. The input to the second network is the output of the first network windowed into 19 residues, plus a conservation number. The second network also has 9 nodes in the hidden layer and 3 nodes in the output layer. As Figure 13 shows, each type of alignment data trains a different neural network. At the end, the networks are combined to yield a consensus solution based on the average taken for each predicted state. The positions where the predictions given by all methods were identical ("jury agreement") were taken as final predictions, while the other ("no jury") positions were used to train a separate neural network, and the final predictions were obtained by replacing the original "no jury" positions with these predictions. Cuff and Barton claim to have achieved an average accuracy of 76.4%.

Figure 13: The outline of the JPred method, as given by Cuff and Barton in [11].


4 TOOLS AND METHODS

In this chapter we give an overview of the four neural network models we implemented for our thesis.

4.1 simple neural network (jnn)

Qian and Sejnowski [1] made the first attempt to predict protein secondary structure using a neural network. Their approach serves as the basis for jnn, our multi-layered feed-forward neural network implementation. The objective behind the method was to use the information of known protein structures in a database to predict the structures of unresolved proteins for which no homologous structures are available. The proposed model consisted of 13 groups where each group had 21 units: 20 units corresponded to one of the amino acids and 1 unit was used as a spacer between the sliding windows. In the sliding window method, each window serves as a training pattern for predicting the structure of the amino acid at the center of the window. Qian and Sejnowski used local encoding to encode their dataset (consisting of 106 proteins), where only the one unit corresponding to a particular amino acid was set to 1 and the rest of the units in a group were set to 0. The network was trained using backpropagation with 40 units in the hidden layer, resulting in a Q3 accuracy of 62.7%. Qian and Sejnowski investigated the dependence of the test accuracy on the number of nodes in the hidden layer (0, 3, 5, 7, 10, 15, 20, 30, 40, 60); a neural network with 40 hidden units gave the optimal performance. They then used these 40 hidden nodes to explore the dependence of the neural network's performance on the window size, and observed peak performance at a window size of 13.
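A sketch of the input encoding and network shape described above (this is not the thesis' fnn code; the amino-acid ordering, the sigmoid activation and the loss function are assumptions made for the example):

```python
import numpy as np
from keras.models import Sequential
from keras.layers import Dense

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"   # 20 residue types; unit 21 is the spacer

def encode_windows(sequence, window=13):
    """One training pattern per residue: a window of 13 positions, each
    locally (one-hot) encoded into 21 units, with the spacer unit used
    for positions that fall outside the sequence."""
    half = window // 2
    patterns = []
    for center in range(len(sequence)):
        pattern = np.zeros((window, 21))
        for k, pos in enumerate(range(center - half, center + half + 1)):
            if 0 <= pos < len(sequence):
                pattern[k, AMINO_ACIDS.index(sequence[pos])] = 1.0
            else:
                pattern[k, 20] = 1.0      # spacer
        patterns.append(pattern.flatten())
    return np.array(patterns)             # shape: (len(sequence), 13 * 21)

X = encode_windows("MKTAYIAKQRQISFVK")

# A feed-forward network with one hidden layer of 40 units and three
# output nodes (helix, strand, coil), mirroring the architecture above.
model = Sequential()
model.add(Dense(40, activation="sigmoid", input_dim=13 * 21))
model.add(Dense(3, activation="softmax"))
model.compile(loss="categorical_crossentropy", optimizer="sgd",
              metrics=["accuracy"])
```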

Figure 14: The neural network architecture described by Qian and Sejnowski [1]

4.2 using multiple sequence alignments (jsnn)

In this section we describe the approach we used to improve on the previous implementation by incorporating multiple sequence alignments. The basic topology of jsnn is similar to the one proposed by Qian and Sejnowski [1]. The input was still a single sequence; however, this sequence was generated by performing a majority vote on a multiple sequence alignment (shown in Figure 15). We one-hot encoded the sequence generated by majority voting and used the sliding window method to generate the input for the network. The input was propagated forward to the output layer via a hidden layer. The output layer had three nodes, for helix, strand and coil. This method gave us a prediction accuracy of 67.8%.
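A minimal sketch of the majority-voting step illustrated in Figure 15 (illustrative only; the toy alignment and the gap handling are assumptions):

```python
from collections import Counter

def majority_vote(alignment):
    """Collapse a multiple sequence alignment (equal-length, gapped
    sequences) into a single consensus sequence by taking the most
    frequent residue in every column."""
    consensus = []
    for column in zip(*alignment):
        counts = Counter(c for c in column if c != "-")   # ignore gaps
        consensus.append(counts.most_common(1)[0][0] if counts else "-")
    return "".join(consensus)

msa = ["MKV-LL",
       "MKVALL",
       "MRV-LI"]
print(majority_vote(msa))   # -> "MKVALL"
```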

Figure 15: Majority voting on a multiple sequence alignment

4.3 cascaded neural network (mnn)

Rost and Sander [2] proposed a cascaded neural network, which served as the basis for our mnn implementation. The model makes use of multiple sequence alignments because, unlike a single sequence, multiple alignments carry more information, as amino acid substitutions reflect the protein family's folding properties. Figure 16 shows the architecture of the first neural network of the proposed model. It is a regular sequence-to-structure neural network, as proposed by Qian and Sejnowski [1]. The second neural network is a structure-to-structure neural network, which refines the structural information obtained from the first neural network.

For each protein in the dataset, a set of aligned homologous proteins was created. Instead of a single sequence, an entire alignment is given to the network in the form of a profile table. Each sequence position in the profile table is represented by residue frequencies, which are determined from the alignment. Therefore, the input to the network is a residue frequency vector for each residue in the sequence. The first neural network uses sliding windows of size 13, corresponding to 13 * 21 (273) input units. The input signal is passed through a network with one hidden layer and an output layer with three nodes. The three nodes in the output layer correspond to the three secondary structures: coil, α-helix, and β-sheet. The output values are between 0 and 1. The output of this network serves as the input to the second, structure-to-structure neural network. For the second network, overlapping windows of size 17 are used. In addition to a spacer, the inputs to the second network are three real numbers, where each number corresponds to one of the three secondary structure elements; therefore, the second network has 17 * 4 (68) input units. Like in the first network, the signal is propagated through one hidden layer to the output layer, which consists of three nodes for helix, sheet, and coil.
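A short sketch of how such a profile table could be built from an alignment (not the thesis' implementation; the toy alignment, the amino-acid ordering and the use of a 21st column for gaps or unknown residues are assumptions):

```python
import numpy as np

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def profile_table(alignment):
    """For every position of the alignment, compute the frequency of each
    of the 20 residue types (column 21 collects gaps/unknowns), yielding
    an L x 21 matrix that the first, sequence-to-structure network reads
    window by window."""
    length = len(alignment[0])
    profile = np.zeros((length, 21))
    for seq in alignment:
        for pos, res in enumerate(seq):
            col = AMINO_ACIDS.index(res) if res in AMINO_ACIDS else 20
            profile[pos, col] += 1.0
    return profile / len(alignment)       # residue frequencies per position

msa = ["MKVALL", "MKVALL", "MRVALI"]
print(profile_table(msa).shape)           # (6, 21)
```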

Using the model described above, the authors claim to have obtained a Q3 accuracy of 69.7%.

Figure 16: Sequence-to-structure neural network architecture described by Rost and Sander [2]. The structure-to-structure neural network is not shown.

4.4 convolutional neural network (snn)

In this section we describe the approach proposed by Liu and Cheng in [3], which serves as the basis for snn, our convolutional neural network implementation. Liu and Cheng propose the usage of a 2D convolutional neural network architecture utilizing position-specific scoring matrices (PSSMs). The improvement in protein secondary structure prediction comes from the introduction of the evolutionary information carried by the PSSM. The PSSM was obtained by running the PSI-BLAST software with the BLOSUM62 scoring matrix on multiple sequence alignments, giving a two-dimensional matrix of size 20*N, where 20 is the number of different amino acid types and N is the protein length. In order to obtain information about the sequence interactions of a residue and to predict the secondary structure of the central residue, a consecutive sliding window of length 21 is used. The architecture of the convolutional neural network is shown in Figure 17. The first convolutional layer has 96 filters of size 5*5, followed by a max pooling layer of size 2*2 and a second convolutional layer with 24 filters of size 2*2. The features extracted by the second convolutional layer are used as the features for prediction.

Figure 17: The convolutional neural network described by Liu and Cheng in [3]

This layer is followed by a fully connected layer with 3 units, as this approach predicts three classes corresponding to the three different secondary structure elements - H (α-helix), E (extended strand) and C (coil). The multi-class classification problem is solved using a softmax classifier. On the widely used benchmark dataset 25PDB, the authors claim to have obtained a Q3 accuracy of 77.7%.
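A Keras sketch of a network with this layout (a minimal sketch, not the thesis' snn code: the input shape of 21 residues x 20 PSSM columns with one channel, the 'same' padding and the ReLU activations are assumptions):

```python
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, Flatten, Dense

model = Sequential()
# First convolutional layer: 96 filters of size 5*5 over the 21x20 PSSM window.
model.add(Conv2D(96, (5, 5), padding="same", activation="relu",
                 input_shape=(21, 20, 1)))
# 2*2 max pooling layer.
model.add(MaxPooling2D(pool_size=(2, 2)))
# Second convolutional layer: 24 filters of size 2*2.
model.add(Conv2D(24, (2, 2), padding="same", activation="relu"))
# Fully connected layer with 3 units and a softmax classifier (H, E, C).
model.add(Flatten())
model.add(Dense(3, activation="softmax"))
model.compile(loss="categorical_crossentropy", optimizer="adam",
              metrics=["accuracy"])
model.summary()
```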


Part II: PRACTICAL EXPERIMENTS


5 FNN - OUR COMMAND LINE TOOL FOR SECONDARY STRUCTURE PREDICTION

In this chapter we give an overview and usage manual of fnn, the command line tool we created. fnn is a tool for predicting the secondary structure of proteins using their primary sequence, multiple sequence alignment data (in FASTA format), or a position-specific scoring matrix (PSSM).

5.1 dataset

Two datasets were used in our work: the CB513 dataset [11] and the combination of the TMP166 and SP1441 datasets [12], which we will call TSP1607 from now on. Cuff and Barton [11] used the CB513 dataset to train the JNet neural networks. This dataset contains 513 non-redundant sequences, which we used to train and test our neural network models. The CB513 dataset consists of 396 sequences from the 3Dee database of protein domains, 117 proteins from Rost, and 126 non-redundant proteins. All of the proteins were compared pairwise and are non-redundant to a 5 standard deviation cut-off. Each file in the CB513 dataset contains secondary structure definitions from the DSSP, DEFINE and STRIDE definition methods. DSSP, DEFINE and STRIDE have 8 categories of secondary structure: G (3-turn helix), H (4-turn helix), I (5-turn helix), T (hydrogen bond turn), E (extended strand in parallel or anti-parallel β-sheet conformation), B (residue in isolated β-bridge), S (bend), and _ or C (coil). For our project, we utilized the widely used DSSP definition and carried out an 8-state to 3-state reduction on the data: H, G and I were translated to H, E and B to S, and all other states to C (represented as a blank space in our data).

The TSP1607 dataset was used to train and test our convolutional neural network. This dataset was originally used to develop TMSEG [12], a method to predict transmembrane helices. In addition to the sequences, this dataset provides evolutionary information for each of the sequences in the form of Position-Specific Scoring Matrices (PSSMs). These matrices were generated using PSI-BLAST. However, this dataset did not contain secondary structure information, so we obtained 3-state structure predictions - helices (H), β-sheets (B) and coil (C) - from PSIPRED [37]. PSIPRED took a long time (approximately 2-3 hours) to predict the secondary structure of a given sequence and only accepted 20 job submissions from a given IP address. Therefore, due to time constraints and the restriction of submitting at most

20 jobs at a time, we decided to work with 511 sequences from the TSP1607 dataset. This dataset contains helical transmembrane proteins and short signal peptides, making it an overall easier target with fewer patterns to learn.

5.2 framework used

Several numerical and machine learning libraries have been developed, for example TensorFlow, Keras, Theano and NumPy. We used Keras [39] for the construction of our neural networks. Keras is an open-source neural network library in Python, capable of running on top of TensorFlow, CNTK or Theano. We used Keras with Theano as the backend. Theano is a Python library allowing the efficient definition, optimization and evaluation of mathematical expressions involving multi-dimensional arrays. [40] Keras is slow compared to other libraries, mostly because it first constructs a computational graph using the backend infrastructure and then uses it to perform operations. We chose it because it is relatively easy to implement a neural network in Keras, and it provides useful utilities such as data preprocessing, model compilation, result evaluation and graph visualization.

5.3 implemented algorithms

Parsing

We implemented our own parsers for PSSM and FASTA files. In the case of the secondary structures, we included a three-state generator which performs the 8-state to 3-state reduction of the data as described in section 5.1.

Encoding

As both the input and output sequences are strings, we needed to encode the variables before presenting them to the networks. In the case of the target sequences, each secondary structure symbol is integer-encoded first, forming an N*1 array, where N is the length of the protein and the possible values in the array are 0, 1 and 2, corresponding to coil, strand and helix. The elements of this array are then one-hot encoded, yielding an N*3 matrix. Figure 18 shows this encoding process.
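A sketch of this target-encoding step (not the actual fnn code; the reduction dictionary follows the mapping given in the Dataset section, and the integer labels follow the coil/strand/helix order stated above):

```python
import numpy as np
from keras.utils import to_categorical

# 8-state to 3-state reduction: H, G, I -> H; E, B -> S; everything else -> C.
REDUCTION = {"H": "H", "G": "H", "I": "H", "E": "S", "B": "S"}
STATE_TO_INT = {"C": 0, "S": 1, "H": 2}   # coil, strand, helix

def encode_targets(dssp_string):
    """Reduce an 8-state DSSP string to 3 states, integer-encode it
    (an N*1 array) and one-hot encode the result into an N*3 matrix."""
    three_state = [REDUCTION.get(symbol, "C") for symbol in dssp_string]
    integers = np.array([STATE_TO_INT[s] for s in three_state])
    return to_categorical(integers, num_classes=3)

print(encode_targets("HHHHTT EEEES"))   # blanks count as coil
```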

Figure 18: The process of encoding target sequences before presenting them to the neural networks.

Neural networks

The core functionality of fnn is given by the four neural networks we implemented - jnn, jsnn, mnn and snn - as described in Chapter 4.

5.4 user manual

In this section we explain how to use fnn.

System specifications

For the tool to work properly, the following packages are required:

Python 3.6
Keras
Scikit-learn

Getting started
In order to download fnn, you should clone the repository and enter its directory via the commands
git clone
cd fnn
Once the above process has completed, you can run
python tool.py -h
to list all of the command-line options. If this command fails, something went wrong during the installation process.

Structure prediction
In this section you will predict the secondary structure of a protein using its primary sequence. We will assume you have already followed the instructions in the previous subsections for downloading Python 3.6, Keras, Scikit-learn and fnn. The following command will allow you to predict the secondary structure of a protein:
python tool.py <file_name>.fasta
Protein structure prediction will take approximately 12 minutes if a single FASTA sequence is given as input and approximately 27 minutes if the input is a multiple sequence alignment. The time is mostly spent on training the neural network. You can also specify which neural network you want to give your data to. In that case, you can use one of the following flags:

-j JNN
This flag will run the neural network described by Qian and Sejnowski [1], which is a simple one-hidden-layer feed-forward neural network that requires a single sequence as input. This neural network runs by default if you provide a single FASTA sequence without any flag. The command is:
python tool.py -j JNN <file_name>.fasta

-js MSA
This flag will run the neural network explained in section 4.2, which is a standard feed-forward neural network that also requires a single sequence as input, but that sequence is generated from a multiple sequence alignment by majority voting (a sketch of this majority-voting step is given at the end of this section). This neural network is used by default if a multiple sequence alignment is provided without a flag.

The command is:
python tool.py -js MSA <file_name>.fasta

-m mnn
This flag will predict the secondary structure using a network similar to the one described by Rost and Sander [2]. It is a cascaded neural network in which the first network is a sequence-to-structure network and the second is a structure-to-structure network. The command is as follows:
python tool.py -m mnn <file_name>.fasta

-s snn
This flag runs a convolutional neural network based on the approach of Liu and Cheng [3]. This neural network runs by default if the user provides a PSSM as input. The command is:
python tool.py -s snn <file_name>.pssm

You can also have the prediction written to a text file using the -o flag:
python tool.py -o <file_name>.fasta
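As a rough illustration of the majority-voting step assumed by the MSA mode, the sketch below collapses a set of aligned sequences into a single consensus sequence by taking the most frequent residue in every alignment column, ignoring gaps where possible. The function name and the treatment of gaps are assumptions for illustration, not the exact fnn implementation.

from collections import Counter

def majority_vote_consensus(aligned_sequences):
    """Collapse equal-length aligned sequences into one consensus sequence."""
    consensus = []
    for column in zip(*aligned_sequences):
        # prefer non-gap characters; fall back to the raw column if it is all gaps
        counts = Counter(c for c in column if c != '-') or Counter(column)
        consensus.append(counts.most_common(1)[0][0])
    return ''.join(consensus)

msa = ['MKV-LA', 'MRV-LA', 'MKVQLA']
print(majority_vote_consensus(msa))   # prints MKVQLA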


6 experiments

In this chapter we discuss the experiments conducted to find the optimal neural networks for PSSP. Our search for optimal parameters differed between the models, therefore the experiments for each model are explained separately. That being said, we generally tweaked the same set of parameters in each model, although the steps might differ between them. We experimented with the following parameters:
Number of nodes in a hidden layer
Batch size
L1 and L2 norm regularization
Dropout
Adam or Stochastic Gradient Descent (SGD) optimizer
Number of hidden layers

A major drawback of using an artificial neural network is that it takes a long time to converge to some minimum compared to other machine learning techniques. There are various hyperparameters, such as the number of nodes, the number of hidden layers, the type of optimizer and the type of regularization, that can be tuned to optimize a neural network. Scikit-learn has a GridSearchCV function that allows the user to specify different hyperparameters and in turn returns the best-performing set of parameters. However, GridSearchCV is a brute-force method that runs the base model with every combination of parameters the user specifies, so it can take a very long time if a large number of hyperparameters is to be tested and/or the model in question is generally slow. Therefore, it was not feasible for us to do a complete grid search: we had a large number of hyperparameter combinations to test, and training a neural network is a time-consuming process overall, which would have made the entire search extremely slow.
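For reference, the sketch below shows how GridSearchCV can be wrapped around a Keras model; the model architecture, the window size of 17 with 21 input units per position, and the small hyperparameter grid are placeholder assumptions for illustration rather than the search space actually used in this thesis.

from keras.models import Sequential
from keras.layers import Dense
from keras.wrappers.scikit_learn import KerasClassifier
from sklearn.model_selection import GridSearchCV

def build_model(hidden_nodes=40):
    # one-hidden-layer feed-forward network over a flattened input window
    model = Sequential()
    model.add(Dense(hidden_nodes, activation='relu', input_dim=17 * 21))
    model.add(Dense(3, activation='softmax'))   # H / S / C
    model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
    return model

classifier = KerasClassifier(build_fn=build_model, epochs=50, verbose=0)
param_grid = {'hidden_nodes': [40, 80, 120], 'batch_size': [100, 200]}
search = GridSearchCV(classifier, param_grid, cv=3)
# search.fit(X_train, y_train)      # X_train: encoded windows, y_train: integer class labels
# print(search.best_params_)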

6.1 workbench

All experiments were conducted on the GenomeDK HPC cluster and two regular laptops. The specifications of the cluster and the laptops are as follows:
GenomeDK HPC cluster: 190 nodes (3384 compute cores) connected with 10GigE/InfiniBand, each node having 16 to 32 cores and either 64 GB, 128 GB, 256 GB, 512 GB or 1 TB of RAM, with 3.5 PB of storage capacity
MacBook Pro Retina, 13-inch, macOS High Sierra, 3.1 GHz Intel Core i7 processor, 16 GB of memory
MacBook Pro Retina, 15-inch, macOS High Sierra, 2.3 GHz Intel Core i7 processor, 8 GB of memory

We used Python 3.6 to implement the neural networks. We made sure that all other programs were closed while conducting the experiments.

6.2 preliminaries

In this section we briefly explain terms that will be used repeatedly in the next section (6.3).

Batch size
Batch size refers to the number of training examples given to the neural network at a time. The dataset is divided into such batches/chunks because the entire dataset cannot be given to the neural network at once.

Epochs
One epoch is one complete forward and backward pass of the whole dataset through the neural network. One epoch is never enough to train a neural network - instead, we pass the complete dataset through the network multiple times. This is necessary because the data is limited and the network is trained with gradient descent, an iterative process in which the weights are updated step by step.

Dropout regularization
Dropout is a technique in which randomly selected nodes are dropped/ignored during training. Nodes that are ignored do not contribute to the signal for the next layer in the forward pass, and no weight updates are applied to them during backpropagation.
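To connect these terms to code, the toy Keras fragment below shows where the batch size, the number of epochs and a dropout layer appear in practice; the layer sizes and values are placeholders, not the tuned settings reported later in this chapter.

from keras.models import Sequential
from keras.layers import Dense, Dropout

model = Sequential()
model.add(Dense(100, activation='relu', input_dim=357))   # e.g. a flattened input window
model.add(Dropout(0.2))   # randomly ignore 20% of the nodes during training
model.add(Dense(3, activation='softmax'))
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
# model.fit(X_train, y_train, batch_size=100, epochs=50, validation_split=0.1)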

L1/L2 regularization
L1 and L2 regularization are the most common regularization techniques. Both add a penalty to the objective function for each coefficient in the model; what makes them different is the form of this penalty. L2 regularization adds the squared magnitude of every weight in the network, $\frac{1}{2}\lambda w^2$, to the objective function. L1 regularization, on the other hand, adds the absolute magnitude of the weight, $\lambda |w|$, as the penalty.

Q3 score
Q3 is the most common performance measure used in PSSP. It is the percentage of correctly predicted residues for a sequence:

$$Q_3 = \frac{\sum_{i \in \{H,S,C\}} \text{correctly predicted}_i}{\sum_{i \in \{H,S,C\}} \text{observed}_i} \times 100 \qquad (6)$$
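A minimal Python sketch of this measure, computing both the overall Q3 and the per-class scores used in the result tables below, could look as follows (the symbols H, S and C follow the 3-state alphabet defined in Chapter 5):

import numpy as np

def q3_scores(observed, predicted):
    """Return the overall Q3 (%) and the per-class Q3 for H, S and C."""
    observed, predicted = np.array(list(observed)), np.array(list(predicted))
    overall = 100.0 * np.mean(observed == predicted)
    per_class = {s: 100.0 * np.mean(predicted[observed == s] == s)
                 for s in 'HSC' if np.any(observed == s)}
    return overall, per_class

print(q3_scores('CCHHHHSSCC', 'CCHHHSSSCC'))   # (90.0, {'H': 75.0, 'S': 100.0, 'C': 100.0})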

6.3 experiments

In this section we give a detailed overview of the experiments we performed to obtain the optimal hyperparameters for jnn, jsnn, mnn and snn. In each case, we present cross-validation results to evaluate the performance of the network with the optimal hyperparameters, and we finish each implementation's experiment section by testing the prediction accuracy of the optimal network on unseen data.

JNN
We started our experiments by investigating different numbers of nodes (10, 20, 30, ..., 200) in the hidden layer and batch sizes (100, 200, 300, 400, 500) for overlapping window sizes 13, 17 and 21. We then fine-tuned the batch size and the number of nodes by doing a local search around the optimal batch size and number of nodes found in the global search. After that, we experimented with regularization methods (L1 and L2) in addition to a dropout layer (with a dropout rate of 0.2) and optimizers (Adam and SGD). Finally, we experimented with the number of hidden layers to see whether we could observe any improvement in the implemented model. Figure 19 summarizes the steps we followed to find the optimal hyperparameters for jnn.

Figure 19: Steps followed to find the optimal set of hyperparameters for jnn and jsnn

Number of nodes and batch size
First, we determined the optimal number of nodes and batch size (100, 200, 300, 400, 500) in jnn. For each batch size we considered 20 different numbers of nodes in the hidden layer (10, 20, 30, ..., 200). We ran these experiments using three different window sizes: 13, 17 and 21. Figure 20 shows the validation accuracies obtained after training and validating the neural networks using the different batch sizes. The network trained with a batch size of 100 gave far superior results compared to the neural networks trained with the other batch sizes.

Figure 20: Validation accuracies observed using five different batch sizes and twenty different numbers of nodes in the hidden layer of jnn. Window sizes 13, 17 and 21 were considered.

Table 1: Mean validation accuracies for window sizes 13, 17 and 21 after experimenting with the number of nodes and batch size in jnn.
Window size    Mean validation accuracy (%)

On average, we obtained the best accuracies with window size 17, as shown by Table 1; therefore, we are going to concentrate on this window size in our discussion. Because we got peak performance using a batch size of 100, we decided to do a local search around it and considered batch sizes of 50, 60, 70, ..., 150 with the same numbers of nodes as before. Figure 21 shows that the neural networks trained with batch sizes 50 and 60 gave better results overall. Therefore, we decided to test the regularization methods using batch sizes 50 and 60. For both batch sizes, we chose the node numbers corresponding to the peaks (110 and 190, respectively), and we also kept other node numbers that gave slightly, but not considerably, smaller validation accuracies (180 with batch size 50, and 130 and 180 with batch size 60). Table 2 summarizes the hyperparameters selected so far.

Figure 21: Validation accuracies obtained after doing a local search around a batch size of 100 in jnn
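The kind of manual search described above can be expressed as a simple nested loop; the sketch below assumes the build_model function from the GridSearchCV example in the previous section and pre-encoded data X and Y, and the metric key 'val_acc' corresponds to the Keras version used at the time (newer versions use 'val_accuracy').

results = {}
for batch_size in [50, 60, 70, 80, 90, 100]:
    for hidden_nodes in range(10, 210, 10):
        model = build_model(hidden_nodes=hidden_nodes)
        history = model.fit(X, Y, batch_size=batch_size, epochs=50,
                            validation_split=0.1, verbose=0)
        results[(batch_size, hidden_nodes)] = history.history['val_acc'][-1]

best = max(results, key=results.get)
print('best (batch size, hidden nodes):', best, 'validation accuracy:', results[best])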

Table 2: Hyperparameters chosen to find the optimal regularization method and optimizer
Window size    Batch size    Number of nodes
17             50            110, 180
17             60            130, 180, 190

Regularization and optimizer
Figure 22 shows that without regularization and early stopping, the network started overfitting the data after 30 epochs: the training accuracy kept increasing, whereas the validation accuracy flattened out.

Figure 22: Accuracy and loss plot for jnn. The neural network was trained without regularization.

In order to overcome the problem of overfitting, we explored the L1 and L2 regularization methods, the addition of a dropout layer (with a dropout rate of 0.2), and the two optimizers (Adam and SGD) on the neural network. We tested these parameters using the hyperparameters listed in Table 2. Table 22 (in the appendix) summarizes the results of the experiments we conducted to optimize the performance of our neural network. Regardless of the number of nodes in the hidden layer and the batch size, L2 regularization gave higher validation accuracies with both Adam and SGD; however, the gap between the training and validation accuracies and losses was larger. In contrast, the L1 regularizer together with the Adam optimizer gave lower accuracies, but the gap between the training and validation accuracy and loss curves was small, implying good generalization properties (see Figure 23). This is not what we expected to see, as L2 regularization is generally reported to work better in practice because it has a stable, analytical solution. However, since L1 regularization with the Adam optimizer gave us better generalization behaviour, we proceeded with this set of hyperparameters. We believe this could be due to the greedy approach through which we are trying to find the optimal set of hyperparameters, and we do not rule out the possibility that training the network with the L2 regularizer and a different set of hyperparameters may yield a better result. Furthermore, when we used the SGD optimizer with L1 regularization, the model did not learn anything, and using the L1 regularizer together with a dropout layer resulted in underfitting. Using the L2 regularizer with a dropout layer gave us a higher validation accuracy, but there were fluctuations in the learning curve, so we decided not to use these hyperparameters in our neural network. We believe this is because jnn is a simple feed-forward neural network with one hidden layer and therefore does not need to be regularized that strongly.
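In Keras, the options compared above are expressed by attaching a penalty to a layer via kernel_regularizer and by choosing the optimizer at compile time; the sketch below uses the jnn-style settings discussed in this section (window size 17, 110 hidden nodes), while the 0.01 penalty weight is a placeholder.

from keras.models import Sequential
from keras.layers import Dense
from keras.regularizers import l1, l2
from keras.optimizers import Adam, SGD

model = Sequential()
model.add(Dense(110, activation='relu', input_dim=17 * 21,
                kernel_regularizer=l1(0.01)))   # swap in l2(0.01) to test L2 regularization
model.add(Dense(3, activation='softmax'))
model.compile(loss='categorical_crossentropy',
              optimizer=Adam(),                 # or SGD() for the comparison runs
              metrics=['accuracy'])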

Figure 23: Accuracy and loss plots of jnn with the L1 regularizer and Adam optimizer

Table 3 summarizes the hyperparameters we decided to proceed with when experimenting with the number of hidden layers in jnn.

Table 3: Hyperparameters chosen to investigate the effect of the number of hidden layers on jnn
Window size    Batch size    Regularization    Optimizer    Number of nodes
17                           L1                Adam         110

Number of hidden layers
Figure 24 suggests that adding more hidden layers did not enable the model to learn anything - instead, the model seems to suffer from the vanishing gradient problem: as more layers are added, the gradient becomes so small that the weights can no longer be updated during backpropagation.

Figure 24: Validation and training accuracies obtained by iteratively increasing the number of hidden layers in jnn

Performance of the neural network
Table 4 shows the final architecture of jnn. We then cross-validated our model using k-fold cross-validation with k ∈ {2, 3, ..., 15}.

Table 4: Final architecture of jnn
Window size    Batch size    Regularization    Optimizer    Number of nodes    Number of hidden layers
17                           L1                Adam         110                1
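A minimal sketch of this k-fold procedure (here with k = 5) is given below; it assumes the build_model function from the earlier sketches and pre-encoded data X and Y, and the batch size and number of epochs are illustrative values.

import numpy as np
from sklearn.model_selection import KFold

accuracies = []
for train_idx, val_idx in KFold(n_splits=5, shuffle=True).split(X):
    model = build_model(hidden_nodes=110)
    model.fit(X[train_idx], Y[train_idx], batch_size=50, epochs=50, verbose=0)
    _, accuracy = model.evaluate(X[val_idx], Y[val_idx], verbose=0)
    accuracies.append(accuracy)

print('mean cross-validation accuracy: %.3f' % np.mean(accuracies))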

Figure 25 shows that the performance of the neural network remained more or less the same regardless of the number of folds used. We got the highest cross-validation accuracy (62.2%) using 6 folds; however, 5 folds and 10 folds gave us accuracies of 62.1% and 61.9%, respectively, which are not considerably inferior to the 6-fold result. Since 5-fold and 10-fold cross-validation are the most commonly used in practice, we decided to use 5-fold cross-validation, as it is computationally less expensive than 10-fold cross-validation. Lastly, since the cross-validation accuracies did not change considerably, we conclude that the model we created is robust and will generalize well to unseen data.

Figure 25: Final cross-validation accuracies of jnn

Prediction accuracy
We used the most common measure, Q3, to determine the prediction accuracy of jnn. Table 5 summarizes the test accuracies obtained when we gave the trained jnn different sequences it had not seen before. Based on these results, jnn can predict the secondary structure of sequences with an average accuracy of 62.6%, which is comparable to what the authors of [1] reported (64.3%).

Conclusion
The final validation accuracy we obtained was 62.1%, which is comparable to what Qian and Sejnowski reported (64.3%). This difference in accuracies could be due to the difference in datasets. Our hyperparameters also differed from the authors': they used a window size of 13, whereas our neural network uses overlapping windows of size 17, and they used 40 nodes in the hidden layer, whereas we had to use 110 nodes. Other hyperparameters, such as the batch size and regularization methods, were not mentioned in the article.

66 50 experiments Table 5: Q 3 accuracy of jnn (trained on CB513 dataset) for each test sequence Test sequence Number of residues Q3C(%) Q3H(%) Q3S(%) Q3(%) VTC1_YEAST VPS55_SCHPO Y1176_CORGB XLF1_SCHPO XENO_XENLA WOX3_ORYSJ XKR3_HUMAN Y1796_SYNY Y2070_CORGB VSTM1_HUMAN Testing jnn on TSP1607 dataset We ran jnn on the TSP1607 dataset to see if we would get similar results as before. However, using the TSP1607 dataset gave us a higher validation accuracy (shown in Figure 26) i.e. jnn gave us a validation accuracy of 70% which is almost 7% higher than what Qian and Sejnowski [1] reported. We did not expect our validation accuracy to be this different from what the authors obtained. We believe this could be due to noise in the CB513 dataset. CB513 had certain residues for which there were no predictions, whereas the TSP1607 dataset did not have such problems. Secondly, the TSP1607 dataset is an easy dataset and has less patterns compared to the CB513 dataset.

Figure 26: Validation accuracy and loss obtained by jnn using the TSP1607 dataset.
Table 6 summarizes the Q3 scores obtained for 10 test sequences after jnn was trained on the TSP1607 dataset. The average accuracy obtained is 75.8%, which is considerably higher than what we got when we trained jnn on the CB513 dataset. Therefore, having less noise in the dataset could explain both the higher validation accuracy and the higher average Q3 score compared to the ones reported. However, this needs to be investigated in more detail to be certain why training our model on the TSP1607 dataset yielded a better validation accuracy.

68 52 experiments Table 6: Q 3 accuracy of jnn (trained on the TSP1607 dataset) for each test sequence Test sequence Number of residues Q3C(%) Q3H(%) Q3S(%) Q3(%) VTC1_YEAST VPS55_SCHPO Y1176_CORGB XLF1_SCHPO XENO_XENLA WOX3_ORYSJ XKR3_HUMAN Y1796_SYNY Y2070_CORGB VSTM1_HUMAN JSNN First, we experimented with different number of nodes in the hidden layer and batch sizes. We carried out these experiments using overlapping windows of sizes 13, 17 and 21. Through this process we were able to find the optimal combination of window size, batch size, and number of hidden nodes after which we investigated the performance of the neural network using different regularization methods i.e. L1 and L2 regularization and/or dropout layer in addition to the optimizers. Next, we experimented with introducing more hidden layers in jsnn Number of nodes and batch size We first explored the best combination of the number of nodes and batch size using overlapping windows of 13, 17 and 21. This experiment is similar to the one carried out in jnn. First, we conducted a global search on batch sizes and number of nodes, then we finetuned our hyperparameters by doing a local search around the hyperparameters found in global search. Figure 27 shows that similarly to jnn, the batch size 100 gave superior results compared to other batch sizes. Moreover, using overlapping windows of size 17 and 21 gave relatively better results than window size 13. We believe that the performance of the neural network using overlapping windows of 13 might have been reduced because information outside the window was not available for prediction. Lastly, the neural network gave peak performance for input windows of 17 and 21 when using 170 nodes in the hidden layer.

Figure 27: Validation accuracies observed using five different batch sizes in jsnn. Window sizes 13, 17 and 21 were considered.

Table 7: Mean validation accuracies for window sizes 13, 17 and 21 after experimenting with the number of nodes and batch size in jsnn.
Window size    Mean validation accuracy (%)

On average, we obtained the best accuracies with window sizes 17 and 21, as shown by Table 7; therefore, we are going to concentrate on these window sizes in our discussion. Next, we did a local search around batch size 100, exploring batch sizes from 50 to 190. Figure 28 indicates that in the case of window size 17, the neural network's performance peaked with a batch size of 90 and 190 nodes in the hidden layer, whereas for overlapping windows of size 21, a batch size of 100 gave the highest validation accuracy with 180 hidden nodes.

Figure 28: Validation accuracy using different batch sizes for window sizes 17 and 21 in jsnn.

Table 8 summarizes the hyperparameters we decided to use to investigate the regularization methods (explained above).

Table 8: Hyperparameters chosen to find the optimal regularization method and optimizer for jsnn
Window size    Batch size    Number of nodes

Regularization and optimizer
To ensure that the neural network neither overfits nor underfits, we explored combinations of regularization methods (L1/L2 and a dropout layer) together with two optimizers, Adam and SGD. Table 23 (in the appendix) summarizes the regularizers and optimizers we investigated using the hyperparameters found above. It shows that the neural network using window size 21, 180 nodes in the hidden layer and a batch size of 100 gave the highest validation accuracy when using the L2 regularizer with a dropout layer and the Adam optimizer. Figure 29 shows that there was a small gap between the training and validation accuracies and a minimal gap between the training and validation losses, which implies good generalization properties. Furthermore, similarly to jnn, when we used SGD with L1 regularization, the test and training accuracies remained constant and the neural network did not learn any patterns in the dataset. The L1 regularized network with the addition of a dropout layer and the Adam optimizer resulted in underfitting, and the L2 regularized network with the Adam optimizer resulted in a wider gap between the training and validation accuracies and losses. Unlike in jnn, the combination of L2 regularization and a dropout layer with a dropout rate of 0.2 gave superior results. We believe that in jnn we had simpler and less noisy data, so soft regularization was able to keep the balance between overfitting and underfitting. In the case of jsnn, however, the data was not as clean: we assumed that the secondary structure of the majority-voted sequence would be the same as the secondary structure of the reference sequence, and we believe that this overly simplistic assumption introduced some variation into the dataset, which is why adding a dropout layer gave relatively better results.

73 6.3 experiments 57 Figure 29: Validation and loss plots for jsnn using L2 regularizer with a dropout layer and Adam optimizer Number of hidden layers Figure 30 shows that adding more hidden layers did not improve the performance of the neural network. With a 2 hidden layer feed forward neural network, the validation accuracy was similar to the one obtained from the one hidden layer neural network, which means that adding an extra hidden layer did not enable the network to learn more patterns in the training set. Moreover, as the number of hidden layers increased, the validation accuracy and training accuracy kept on decreasing. With 6 hidden layers or more, the neural network suffered from vanishing gradient as both the validation and training accuracy stopped changing.

Figure 30: Validation accuracy and test accuracy obtained by iteratively increasing the number of hidden layers in jsnn

Performance of the neural network
Table 9 shows the final architecture of jsnn. We cross-validated our model using k-fold cross-validation with k ∈ {2, 3, ..., 15}.

Table 9: Final architecture of jsnn
Window size    Batch size    Regularization    Optimizer    Number of nodes    Number of hidden layers
21             100           L2 + dropout      Adam         180                1

Figure 31 shows that the performance of our neural network remained roughly the same regardless of the number of folds we used. We got the highest cross-validation accuracy (66.7%) using 14 folds; however, 5 folds and 10 folds gave us accuracies of 66.2% and 66.5%, respectively, which are not considerably different from the 14-fold result. As mentioned previously, 5-fold and 10-fold cross-validation are the most widely used in practice, so we decided to use 5-fold cross-validation, since it is computationally less expensive than 10-fold cross-validation. Lastly, our cross-validation accuracies did not change considerably, which suggests that the model we created is robust and generalizes well to unseen data.

Figure 31: Validation accuracy and test accuracy observed after performing k-fold cross-validation on jsnn

Prediction accuracy
Table 10 summarizes the Q3 scores we obtained when we tested jsnn on sequences that the model had not seen before. We got an average Q3 accuracy of 67.7%, which is 5.1% higher than what we got using jnn. Table 11 summarizes the results we obtained when we ran jnn on the same set of sequences we tested jsnn on; using these sequences, jnn gave us an average prediction accuracy of 63%, which is 4.7% lower than jsnn.

76 60 experiments Table 10: Q 3 accuracy of jsnn (trained by virtue of majority voting) for each test sequence Test sequence Number of residues Q3C(%) Q3H(%) Q3S(%) Q3(%) 1comc-1-DOMAK bmv krca-1-AUTO qbb-3-AUTO add-1-AS mla-2-AS cei-1-GJB bds alkb-1-AS bsdb-1-DOMAK Table 11: Q 3 accuracy of jnn for each test sequence Test Number Sequence of residues Q3C(%) Q3H(%) Q3S(%) Q3(%) 1comc-1-DOMAK bmv krca-1-AUTO qbb-3-AUTO add-1-AS mla-2-AS cei-1-GJB bds alkb-1-AS bsdb-1-DOMAK

77 6.3 experiments 61 Figure 32: Steps followed to find optimal set of hyperparameters for mnn Conclusion The overall performance of jsnn was better than jnn which shows that multiple sequence alignments contain more information, therefore, yield better predictions compared to when a single sequence is fed to a neural network MNN In the case of the cascaded neural network mnn, we started our experiments examining different numbers of nodes in the first and second neural networks, as well as the batch size for overlapping window sizes of 7, 13, 17 and 21. After that, we explored different regularization methods (L1 and L2 norm, a combination of L1 and L2 and the addition of dropout layers with a dropout rate of 0.2) as well as the choice of optimizers (Adam or SGD). Finally, we experimented with the addition of multiple hidden layers (2 to 8) Number of nodes and batch size First, we determined the optimal number of nodes in the first and second neural networks (10, 50, 100, 200, 300, 400, 500 in each) as well as the optimal batch size (10, 50, 100, 200, 300, 400, 500), testing

all combinations for overlapping windows of sizes 7, 13, 17 and 21. On average, we obtained the best validation accuracies with window size 7, as shown in Table 12; therefore, we are going to concentrate on this window size in our discussion. A window of size 7 is quite small compared to the window sizes we found in the literature, whether we look strictly at the basis of our mnn implementation, the Rost and Sander approach presented in [2] using a window size of 13, or other approaches presented in Chapters 3 and 4. We are uncertain about the specifics as to why decreasing the window size seemingly increases the validation accuracy of our model. We suspect that it might be due to the noisy nature of the CB513 dataset, where certain residues do not have predictions, and the multiple sequence alignments might also carry alignment errors. Therefore, it is possible that a smaller window size captures all necessary information and that increasing the window size would introduce noise and contamination overpowering the valuable information content, thus contributing to a deteriorating validation accuracy. Based on this consideration and due to our trust in our experiments, we continue to focus on window size 7. We also observed that the addition of the second, structure-to-structure network indeed brings improvement, as seen in Table 12 (2.98% on average). This increase can be explained by the second network eliminating predictions that are biologically less plausible, such as segments that are too short to form a helix in nature but which the first network predicted as such. We aggregated the raw data according to the number of nodes in the first and second neural networks and the batch size to determine the optimal set of hyperparameters.

Table 12: Mean validation accuracies for window sizes 7, 13, 17 and 21 after experimenting with the number of nodes and batch size in mnn. Validation accuracy 1 indicates the results obtained with only the first neural network and validation accuracy 2 indicates the total accuracy obtained from the entire cascaded system.
Window size    Validation accuracy 1 (%)    Validation accuracy 2 (%)

Figures 33, 34 and 35 show that in the case of window size 7, mnn provides a balanced performance and the tested hyperparameters only cause small changes in terms of the validation accuracy. Because of this observation, we decided to omit the local search step we performed in the case of jnn and jsnn, as we believe that it would not give us considerable improvements.

Figure 33: Validation accuracy using different numbers of nodes in the first neural network of mnn.
Figure 34: Validation accuracy using different numbers of nodes in the second neural network of mnn.

Figure 35: Validation accuracy using different batch sizes in mnn.

We decided to use 10 nodes in the first neural network, 100 nodes in the second neural network and a batch size of 200, as this combination gave us the second highest validation accuracy (65.4%) and a reasonably small gap between the training and validation accuracies and losses, as shown in Figure 36. The best result was given by the combination of 10 and 500 nodes in the first and second neural networks and a batch size of 500; however, this corresponds to a validation accuracy of 65.5%, only a 0.1% improvement over the chosen combination, so we concluded that it seems unnecessary to increase the number of nodes by 400 for such a small gain.

Figure 36: Accuracy and loss plots of mnn using 10 and 100 nodes in the first and second neural network and a batch size of 200.

Regularization and optimizers
In the next step, we explored two optimizers, Adam and SGD, as well as different regularization options - the L1 and L2 norms, a combination of L1 and L2, and the addition of dropout layers with a dropout rate of 0.2. All combinations were tested on a network having 10 and 100 nodes in the first and second neural network, respectively, and a batch size of 200. Table 13 shows that mnn severely underfits when using SGD, and that choosing Adam with the addition of dropout layers gives superior results. We also observed that the network does not learn when utilizing the L1, L2 or L1L2 regularizers, which might be due to the less noisy input profile emerging as a result of the small window size. Therefore, the addition of dropout layers provides enough regularization to keep the balance between underfitting and overfitting, while the more stringent L1, L2 and L1L2 regularizers diminish our model's flexibility.

Table 13: The effect of the choice of regularizer and optimizer on the validation accuracy in mnn.
Regularizer       Optimizer    Validation accuracy (%)
no                SGD          44.0
no                Adam         65.1
Dropout (0.2)     SGD          43.7
Dropout (0.2)     Adam         65.7
L1                SGD          43.7
L1                Adam         43.7
L2                SGD          43.7
L2                Adam         43.7
L1L2              SGD          43.7
L1L2              Adam         43.7

Figure 37 shows that we were able to obtain reasonable accuracy and loss plots using this combination of optimizer and regularizer, and comparing this figure to Figure 36 also shows that the gap between the training and validation accuracy curves narrowed, suggesting an improvement in the generalization properties of the model.
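A rough Keras sketch of the cascaded set-up chosen above is given below: a first sequence-to-structure network with 10 hidden nodes whose per-residue 3-state outputs are re-windowed and fed to a second structure-to-structure network with 100 hidden nodes. The window size of 7, the dropout rate of 0.2, the batch size of 200 and the Adam optimizer follow the text; the number of input units per window position (21 for the sequence network, 3 for the structure network) and the window() helper are assumptions for illustration.

from keras.models import Sequential
from keras.layers import Dense, Dropout

def make_net(hidden_nodes, input_dim):
    net = Sequential()
    net.add(Dense(hidden_nodes, activation='relu', input_dim=input_dim))
    net.add(Dropout(0.2))
    net.add(Dense(3, activation='softmax'))
    net.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
    return net

seq_to_struct = make_net(10, 7 * 21)      # windows over the encoded sequence
struct_to_struct = make_net(100, 7 * 3)   # windows over the first network's predictions

# seq_to_struct.fit(X_windows, Y, batch_size=200, epochs=50)
# P = seq_to_struct.predict(X_windows)           # per-residue 3-state probabilities
# struct_to_struct.fit(window(P, size=7), Y)     # window() is a hypothetical re-windowing helper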

Figure 37: Accuracy and loss plots of mnn using the Adam optimizer and one dropout layer in each network with a dropout rate of 0.2

Number of hidden layers
Using the optimal hyperparameters found so far, we experimented with the addition of more hidden layers (2-8), each having the same parameters as the first one. Figure 38 shows that adding more hidden layers does not improve the performance of mnn - the training accuracy increases slightly when adding 3 or 4 layers, but the validation accuracy keeps decreasing. Based on these results, we decided to keep using one hidden layer.

Figure 38: Training and validation accuracy using different numbers of hidden layers in mnn

Performance of the neural network
Table 14 shows the final architecture of mnn. We cross-validated our model using k-fold cross-validation with k ∈ {2, 3, ..., 15}.

Table 14: Final hyperparameters chosen to be used in mnn.
Window size    Nodes in the first and second network    Batch size    Regularizer                    Optimizer    Number of hidden layers
7              10, 100                                   200           Dropout (dropout rate: 0.2)    Adam         1

Figure 39: Validation accuracy of mnn using different numbers of folds in k-fold cross-validation.
Figure 39 shows that regardless of the number of folds used, the performance of our model remained roughly the same. The highest validation accuracy, 66.5%, was obtained using 3 folds; however, because 5-fold and 10-fold cross-validation are the most widely used in practice, we decided to use 5-fold cross-validation, which gave us a validation accuracy of 66.2%, a result not considerably inferior to the one obtained using 3 folds. As our cross-validation results did not vary substantially, we conclude that our model is robust and generalizes well to new and unseen data.

Table 15: Q3 accuracy of mnn for each test sequence
Test sequence    Number of residues    Q3H (%)    Q3S (%)    Q3C (%)    Q3 (%)
1add-1-AS
alkb-1-AS
bds
bmv
bsdb-1-DOMAK
cei-1-GJB
comc-1-DOMAK
krca-1-AUTO
mla-2-AS
qbb-3-AUTO

Prediction accuracy
We presented mnn with the same test sequences that we used to evaluate the prediction accuracy of jsnn; Table 15 shows the Q3 scores obtained. We got an average Q3 accuracy of 66.5%, which is 1.3% lower than the result we got from jsnn.

Conclusion
The results obtained from mnn and jsnn are comparable, which is expected, as both algorithms incorporate the idea of using multiple sequence alignment data in an attempt to increase the amount of information the neural network can learn about the sequence-structure relationship. Based on our experiments, we conclude that mnn, despite its more complex architecture, is slightly inferior to jsnn in terms of results; however, we do not rule out the possibility that training the networks on a different dataset or with hyperparameters we have not explored may result in mnn surpassing the performance of jsnn.

SNN
First, we explored different regularization choices for the convolutional neural network snn: L1, L2, a combination of L1 and L2, as well as the addition of dropout layers with a dropout rate of 0.2. After finding the best regularization method, we experimented with the number of filters in the first and second convolutional layers (20, 60, 96, 140 and 10, 24, 50, respectively) and the batch size (100, 500, 1000).

In the next step, we examined the effect of different filter sizes in the first and second convolutional layers (2*2, 5*5, 10*10), as well as the optimizers (SGD or Adam) used when compiling the model.

Figure 40: Steps followed to find the optimal set of hyperparameters for snn

Regularization
We decided to start our experiments with the exploration of regularizers because our model was severely overfitting before any regularization was applied. Figure 41 shows that the validation accuracy slightly decreases and the validation loss increases with the epochs, which indicates that the model generalizes poorly to unseen data.

Figure 41: Accuracy and loss plots of snn without regularization
These experiments were carried out using window sizes 13, 17 and 21. We found that window size 21 yielded the best accuracies on average, as Table 16 shows, therefore we are going to report our results for this window size. The results for window sizes 13 and 17 can be seen in Table 24 (in the appendix).

Table 16: Mean validation accuracy for window sizes 13, 17 and 21 after the regularization experiments using snn.
Window size    Mean validation accuracy (%)

The hyperparameters described in [41] were used during this experiment (96 5*5 filters in the first convolutional layer, a 2*2 max-pooling layer, and 24 2*2 filters in the second convolutional layer). We used a batch size of 500 and ran 100 epochs in each case. Table 17 summarizes the regularizers we experimented with and the accuracies we obtained.

Table 17: The effect of different regularization methods on the accuracy of snn
Regularizer              Accuracy (%)    Validation accuracy (%)
No regularization
Dropout (0.2)
L1
L1 + Dropout (0.2)
L2
L2 + Dropout (0.2)
L1L2
L1L2 + Dropout (0.2)

In terms of validation accuracy, using L2 regularization with two dropout layers (dropout rate: 0.2) seems to be the best choice; however, Figure 42 shows that the network is overfitting, as the training accuracy keeps increasing while the validation accuracy remains relatively stagnant after epoch 10.
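For reference, a minimal Keras sketch of the architecture just described (96 5*5 filters, 2*2 max pooling, 24 2*2 filters) is given below, using the L1 regularization that is eventually chosen in this section; the input shape - a window of 21 residues over the 20 PSSM columns - the padding and the 0.01 penalty weight are assumptions for illustration.

from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, Flatten, Dense
from keras.regularizers import l1

model = Sequential()
model.add(Conv2D(96, (5, 5), activation='relu', padding='same',
                 kernel_regularizer=l1(0.01), input_shape=(21, 20, 1)))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Conv2D(24, (2, 2), activation='relu', padding='same',
                 kernel_regularizer=l1(0.01)))
model.add(Flatten())
model.add(Dense(3, activation='softmax'))
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
# model.fit(X_pssm_windows, Y, batch_size=500, epochs=100)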

Figure 42: Accuracy and loss plots of snn with L2 norm regularization and two dropout layers (one in each convolutional layer).
Based on these results, we decided to use the L1 norm regularizer without dropout layers, because this method yields satisfactory accuracies with a very small gap between the training and validation accuracies and losses, as Figure 43 shows, which implies good generalization properties.

Figure 43: Accuracy and loss plots of snn with L1 norm regularization

Number of filters and batch size
We used the L1 regularized neural network to experiment with the number of filters in the convolutional layers and the batch size. We carried out our tests using overlapping windows of 13, 17 and 21; however, we obtained the best accuracies on average with window size 21, as shown by Table 18, therefore we are going to report those results in this section.

Table 18: Mean validation accuracy for window sizes 13, 17 and 21 after the filter number and batch size experiments in snn.
Window size    Mean validation accuracy (%)

The hyperparameters explored in this step were the following: 20, 60, 96 and 140 filters in the first convolutional layer, 10, 24 and 50 filters in the second convolutional layer, and batch sizes of 100, 500 and 1000. All combinations of these hyperparameters were tested. To obtain the optimal set of hyperparameters, we aggregated the raw data according to the batch size and the number of filters in the first and second convolutional layers, and for each we examined the hyperparameter value corresponding to the largest mean validation accuracy. Based on these results, the optimal combination seemed to be a batch size of 500 (Figure 44), 20 filters in the first convolutional layer (Figure 45) and 10 filters in the second convolutional layer (Figure 46). If we look at the raw data for window size 21, as given by Table 25 (in the appendix), however, we see that this is not the combination corresponding to the overall highest validation accuracy, which is given by the combination 96, 10, 1000 (number of filters in layers 1 and 2 and batch size, respectively).

Figure 44: Mean validation accuracy for batch sizes 100, 500 and 1000 in snn.
We looked at the accuracy and loss plots obtained using these hyperparameter combinations and, based on those, we decided to use 96 filters in the first layer, 10 filters in the second layer and a batch size of 1000, as the other combination yielded an accuracy plot in which the training and validation accuracies are rather tangled, suggesting mild overfitting, as shown in Figure 62. Our chosen combination, however, shows that the validation accuracy tracks the training accuracy, as expected, with a sufficiently small gap between the training and validation accuracies (Figure 47).

Figure 45: Mean validation accuracy for the number of filters in the first convolutional layer of snn.
Figure 46: Mean validation accuracy for the number of filters in the second convolutional layer of snn.


STA 4273H: Statistical Machine Learning STA 4273H: Statistical Machine Learning Russ Salakhutdinov Department of Statistics! rsalakhu@utstat.toronto.edu! http://www.utstat.utoronto.ca/~rsalakhu/ Sidney Smith Hall, Room 6002 Lecture 11 Project

More information

Bioinformatics: Secondary Structure Prediction

Bioinformatics: Secondary Structure Prediction Bioinformatics: Secondary Structure Prediction Prof. David Jones d.jones@cs.ucl.ac.uk LMLSTQNPALLKRNIIYWNNVALLWEAGSD The greatest unsolved problem in molecular biology:the Protein Folding Problem? Entries

More information

Artificial Neural Networks

Artificial Neural Networks Introduction ANN in Action Final Observations Application: Poverty Detection Artificial Neural Networks Alvaro J. Riascos Villegas University of los Andes and Quantil July 6 2018 Artificial Neural Networks

More information

An overview of deep learning methods for genomics

An overview of deep learning methods for genomics An overview of deep learning methods for genomics Matthew Ploenzke STAT115/215/BIO/BIST282 Harvard University April 19, 218 1 Snapshot 1. Brief introduction to convolutional neural networks What is deep

More information

Nonlinear Classification

Nonlinear Classification Nonlinear Classification INFO-4604, Applied Machine Learning University of Colorado Boulder October 5-10, 2017 Prof. Michael Paul Linear Classification Most classifiers we ve seen use linear functions

More information

Molecular Modeling lecture 2

Molecular Modeling lecture 2 Molecular Modeling 2018 -- lecture 2 Topics 1. Secondary structure 3. Sequence similarity and homology 2. Secondary structure prediction 4. Where do protein structures come from? X-ray crystallography

More information

Nanobiotechnology. Place: IOP 1 st Meeting Room Time: 9:30-12:00. Reference: Review Papers. Grade: 40% midterm, 60% final report (oral + written)

Nanobiotechnology. Place: IOP 1 st Meeting Room Time: 9:30-12:00. Reference: Review Papers. Grade: 40% midterm, 60% final report (oral + written) Nanobiotechnology Place: IOP 1 st Meeting Room Time: 9:30-12:00 Reference: Review Papers Grade: 40% midterm, 60% final report (oral + written) Midterm: 5/18 Oral Presentation 1. 20 minutes each person

More information

Introduction to Neural Networks

Introduction to Neural Networks CUONG TUAN NGUYEN SEIJI HOTTA MASAKI NAKAGAWA Tokyo University of Agriculture and Technology Copyright by Nguyen, Hotta and Nakagawa 1 Pattern classification Which category of an input? Example: Character

More information

Artificial Neural Network

Artificial Neural Network Artificial Neural Network Contents 2 What is ANN? Biological Neuron Structure of Neuron Types of Neuron Models of Neuron Analogy with human NN Perceptron OCR Multilayer Neural Network Back propagation

More information

CS 1674: Intro to Computer Vision. Final Review. Prof. Adriana Kovashka University of Pittsburgh December 7, 2016

CS 1674: Intro to Computer Vision. Final Review. Prof. Adriana Kovashka University of Pittsburgh December 7, 2016 CS 1674: Intro to Computer Vision Final Review Prof. Adriana Kovashka University of Pittsburgh December 7, 2016 Final info Format: multiple-choice, true/false, fill in the blank, short answers, apply an

More information

#33 - Genomics 11/09/07

#33 - Genomics 11/09/07 BCB 444/544 Required Reading (before lecture) Lecture 33 Mon Nov 5 - Lecture 31 Phylogenetics Parsimony and ML Chp 11 - pp 142 169 Genomics Wed Nov 7 - Lecture 32 Machine Learning Fri Nov 9 - Lecture 33

More information

Protein 8-class Secondary Structure Prediction Using Conditional Neural Fields

Protein 8-class Secondary Structure Prediction Using Conditional Neural Fields 2010 IEEE International Conference on Bioinformatics and Biomedicine Protein 8-class Secondary Structure Prediction Using Conditional Neural Fields Zhiyong Wang, Feng Zhao, Jian Peng, Jinbo Xu* Toyota

More information

<Special Topics in VLSI> Learning for Deep Neural Networks (Back-propagation)

<Special Topics in VLSI> Learning for Deep Neural Networks (Back-propagation) Learning for Deep Neural Networks (Back-propagation) Outline Summary of Previous Standford Lecture Universal Approximation Theorem Inference vs Training Gradient Descent Back-Propagation

More information

Need for Deep Networks Perceptron. Can only model linear functions. Kernel Machines. Non-linearity provided by kernels

Need for Deep Networks Perceptron. Can only model linear functions. Kernel Machines. Non-linearity provided by kernels Need for Deep Networks Perceptron Can only model linear functions Kernel Machines Non-linearity provided by kernels Need to design appropriate kernels (possibly selecting from a set, i.e. kernel learning)

More information

Artificial Neural Networks" and Nonparametric Methods" CMPSCI 383 Nov 17, 2011!

Artificial Neural Networks and Nonparametric Methods CMPSCI 383 Nov 17, 2011! Artificial Neural Networks" and Nonparametric Methods" CMPSCI 383 Nov 17, 2011! 1 Todayʼs lecture" How the brain works (!)! Artificial neural networks! Perceptrons! Multilayer feed-forward networks! Error

More information

Lecture 8: Introduction to Deep Learning: Part 2 (More on backpropagation, and ConvNets)

Lecture 8: Introduction to Deep Learning: Part 2 (More on backpropagation, and ConvNets) COS 402 Machine Learning and Artificial Intelligence Fall 2016 Lecture 8: Introduction to Deep Learning: Part 2 (More on backpropagation, and ConvNets) Sanjeev Arora Elad Hazan Recap: Structure of a deep

More information

Apprentissage, réseaux de neurones et modèles graphiques (RCP209) Neural Networks and Deep Learning

Apprentissage, réseaux de neurones et modèles graphiques (RCP209) Neural Networks and Deep Learning Apprentissage, réseaux de neurones et modèles graphiques (RCP209) Neural Networks and Deep Learning Nicolas Thome Prenom.Nom@cnam.fr http://cedric.cnam.fr/vertigo/cours/ml2/ Département Informatique Conservatoire

More information

CSE 417T: Introduction to Machine Learning. Final Review. Henry Chai 12/4/18

CSE 417T: Introduction to Machine Learning. Final Review. Henry Chai 12/4/18 CSE 417T: Introduction to Machine Learning Final Review Henry Chai 12/4/18 Overfitting Overfitting is fitting the training data more than is warranted Fitting noise rather than signal 2 Estimating! "#$

More information

Machine learning comes from Bayesian decision theory in statistics. There we want to minimize the expected value of the loss function.

Machine learning comes from Bayesian decision theory in statistics. There we want to minimize the expected value of the loss function. Bayesian learning: Machine learning comes from Bayesian decision theory in statistics. There we want to minimize the expected value of the loss function. Let y be the true label and y be the predicted

More information

1-D Predictions. Prediction of local features: Secondary structure & surface exposure

1-D Predictions. Prediction of local features: Secondary structure & surface exposure 1-D Predictions Prediction of local features: Secondary structure & surface exposure 1 Learning Objectives After today s session you should be able to: Explain the meaning and usage of the following local

More information

Giri Narasimhan. CAP 5510: Introduction to Bioinformatics. ECS 254; Phone: x3748

Giri Narasimhan. CAP 5510: Introduction to Bioinformatics. ECS 254; Phone: x3748 CAP 5510: Introduction to Bioinformatics Giri Narasimhan ECS 254; Phone: x3748 giri@cis.fiu.edu www.cis.fiu.edu/~giri/teach/bioinfs07.html 2/15/07 CAP5510 1 EM Algorithm Goal: Find θ, Z that maximize Pr

More information

Machine Learning for Large-Scale Data Analysis and Decision Making A. Neural Networks Week #6

Machine Learning for Large-Scale Data Analysis and Decision Making A. Neural Networks Week #6 Machine Learning for Large-Scale Data Analysis and Decision Making 80-629-17A Neural Networks Week #6 Today Neural Networks A. Modeling B. Fitting C. Deep neural networks Today s material is (adapted)

More information

CSC242: Intro to AI. Lecture 21

CSC242: Intro to AI. Lecture 21 CSC242: Intro to AI Lecture 21 Administrivia Project 4 (homeworks 18 & 19) due Mon Apr 16 11:59PM Posters Apr 24 and 26 You need an idea! You need to present it nicely on 2-wide by 4-high landscape pages

More information

Graphical Models and Bayesian Methods in Bioinformatics: From Structural to Systems Biology

Graphical Models and Bayesian Methods in Bioinformatics: From Structural to Systems Biology Graphical Models and Bayesian Methods in Bioinformatics: From Structural to Systems Biology David L. Wild Keck Graduate Institute of Applied Life Sciences, Claremont, CA, USA October 3, 2005 Outline 1

More information

The Relative Importance of Input Encoding and Learning Methodology on Protein Secondary Structure Prediction

The Relative Importance of Input Encoding and Learning Methodology on Protein Secondary Structure Prediction Georgia State University ScholarWorks @ Georgia State University Computer Science Theses Department of Computer Science 6-9-2006 The Relative Importance of Input Encoding and Learning Methodology on Protein

More information

Protein Secondary Structure Prediction

Protein Secondary Structure Prediction part of Bioinformatik von RNA- und Proteinstrukturen Computational EvoDevo University Leipzig Leipzig, SS 2011 the goal is the prediction of the secondary structure conformation which is local each amino

More information

Deep learning / Ian Goodfellow, Yoshua Bengio and Aaron Courville. - Cambridge, MA ; London, Spis treści

Deep learning / Ian Goodfellow, Yoshua Bengio and Aaron Courville. - Cambridge, MA ; London, Spis treści Deep learning / Ian Goodfellow, Yoshua Bengio and Aaron Courville. - Cambridge, MA ; London, 2017 Spis treści Website Acknowledgments Notation xiii xv xix 1 Introduction 1 1.1 Who Should Read This Book?

More information

Online Videos FERPA. Sign waiver or sit on the sides or in the back. Off camera question time before and after lecture. Questions?

Online Videos FERPA. Sign waiver or sit on the sides or in the back. Off camera question time before and after lecture. Questions? Online Videos FERPA Sign waiver or sit on the sides or in the back Off camera question time before and after lecture Questions? Lecture 1, Slide 1 CS224d Deep NLP Lecture 4: Word Window Classification

More information

Final Examination CS540-2: Introduction to Artificial Intelligence

Final Examination CS540-2: Introduction to Artificial Intelligence Final Examination CS540-2: Introduction to Artificial Intelligence May 9, 2018 LAST NAME: SOLUTIONS FIRST NAME: Directions 1. This exam contains 33 questions worth a total of 100 points 2. Fill in your

More information

Neural Networks. David Rosenberg. July 26, New York University. David Rosenberg (New York University) DS-GA 1003 July 26, / 35

Neural Networks. David Rosenberg. July 26, New York University. David Rosenberg (New York University) DS-GA 1003 July 26, / 35 Neural Networks David Rosenberg New York University July 26, 2017 David Rosenberg (New York University) DS-GA 1003 July 26, 2017 1 / 35 Neural Networks Overview Objectives What are neural networks? How

More information

Motif Prediction in Amino Acid Interaction Networks

Motif Prediction in Amino Acid Interaction Networks Motif Prediction in Amino Acid Interaction Networks Omar GACI and Stefan BALEV Abstract In this paper we represent a protein as a graph where the vertices are amino acids and the edges are interactions

More information

Protein Structure Prediction using String Kernels. Technical Report

Protein Structure Prediction using String Kernels. Technical Report Protein Structure Prediction using String Kernels Technical Report Department of Computer Science and Engineering University of Minnesota 4-192 EECS Building 200 Union Street SE Minneapolis, MN 55455-0159

More information

Protein Structure Analysis and Verification. Course S Basics for Biosystems of the Cell exercise work. Maija Nevala, BIO, 67485U 16.1.

Protein Structure Analysis and Verification. Course S Basics for Biosystems of the Cell exercise work. Maija Nevala, BIO, 67485U 16.1. Protein Structure Analysis and Verification Course S-114.2500 Basics for Biosystems of the Cell exercise work Maija Nevala, BIO, 67485U 16.1.2008 1. Preface When faced with an unknown protein, scientists

More information

Machine Learning. Neural Networks. (slides from Domingos, Pardo, others)

Machine Learning. Neural Networks. (slides from Domingos, Pardo, others) Machine Learning Neural Networks (slides from Domingos, Pardo, others) For this week, Reading Chapter 4: Neural Networks (Mitchell, 1997) See Canvas For subsequent weeks: Scaling Learning Algorithms toward

More information

STA 414/2104: Machine Learning

STA 414/2104: Machine Learning STA 414/2104: Machine Learning Russ Salakhutdinov Department of Computer Science! Department of Statistics! rsalakhu@cs.toronto.edu! http://www.cs.toronto.edu/~rsalakhu/ Lecture 9 Sequential Data So far

More information

Introduction to Convolutional Neural Networks 2018 / 02 / 23

Introduction to Convolutional Neural Networks 2018 / 02 / 23 Introduction to Convolutional Neural Networks 2018 / 02 / 23 Buzzword: CNN Convolutional neural networks (CNN, ConvNet) is a class of deep, feed-forward (not recurrent) artificial neural networks that

More information

Week 10: Homology Modelling (II) - HHpred

Week 10: Homology Modelling (II) - HHpred Week 10: Homology Modelling (II) - HHpred Course: Tools for Structural Biology Fabian Glaser BKU - Technion 1 2 Identify and align related structures by sequence methods is not an easy task All comparative

More information

Neural networks and optimization

Neural networks and optimization Neural networks and optimization Nicolas Le Roux INRIA 8 Nov 2011 Nicolas Le Roux (INRIA) Neural networks and optimization 8 Nov 2011 1 / 80 1 Introduction 2 Linear classifier 3 Convolutional neural networks

More information

From perceptrons to word embeddings. Simon Šuster University of Groningen

From perceptrons to word embeddings. Simon Šuster University of Groningen From perceptrons to word embeddings Simon Šuster University of Groningen Outline A basic computational unit Weighting some input to produce an output: classification Perceptron Classify tweets Written

More information

Deep Learning & Artificial Intelligence WS 2018/2019

Deep Learning & Artificial Intelligence WS 2018/2019 Deep Learning & Artificial Intelligence WS 2018/2019 Linear Regression Model Model Error Function: Squared Error Has no special meaning except it makes gradients look nicer Prediction Ground truth / target

More information

Supporting Information

Supporting Information Supporting Information Convolutional Embedding of Attributed Molecular Graphs for Physical Property Prediction Connor W. Coley a, Regina Barzilay b, William H. Green a, Tommi S. Jaakkola b, Klavs F. Jensen

More information