Short-term water demand forecast based on deep neural network

Guancheng Guo 1, Shuming Liu 2
1,2 School of Environment, Tsinghua University, Beijing, China
2 shumingliu@tsinghua.edu.cn

ABSTRACT

Short-term water demand forecasting is essential for optimal control in water distribution systems (WDS). Current methods (e.g., conventional artificial neural networks) have limited power in practice because changes in water demand are nonlinear. In particular, forecasting at a 15-min time step may not be accurate with conventional models. To tackle this problem, this paper investigates the potential of deep learning in short-term water demand forecasting, developing a gated recurrent unit network (GRUN) model to forecast water demand 15 minutes and 24 hours into the future at a 15-min time step. The performance of GRUN was compared with a conventional artificial neural network (ANN) model. The results show that a deep neural network model such as GRUN outperforms the ANN model for both 15 minute and 24 hour forecasts. These findings can provide more flexible and effective solutions for water demand forecasting.

Keywords: demand forecast; artificial neural network; gated recurrent unit network

1. Introduction

Urban water demand forecasting, whether for the design, planning, operation or management of a water distribution system (WDS), is essential for water utilities all over the world. As the basis of optimal scheduling, water demand forecasting plays an important role in the optimal operation of a WDS. For instance, it helps water companies to make decisions about water allocation, water production, pricing policies, water use restrictions, pump station operation, and pipe network capacity [1]. However, forecasting water demand is a challenging task. Water demand at a given time in the future is usually related to past water demand, current operating conditions, and socioeconomic and meteorological factors such as relative humidity, air temperature, rainfall and pressure [2].

In the context of water demand forecasting, a wide variety of methods have been proposed, which can be broadly classified into traditional methods and learning algorithms. Early works used traditional statistical models such as linear regression models and time series models [3, 4] to address this problem. However, changes in water demand are nonlinear and may not be accurately predicted by linear methods. Learning algorithms belong to the group of nonlinear methods. The use of advanced data analysis, such as machine learning, enables learning algorithm models to achieve high accuracy. The Support Vector Machine (SVM) is a popular machine learning method that has been commonly used in water demand forecasting [5]. The Artificial Neural Network (ANN) and the Extreme Learning Machine (ELM) [6] are other machine learning techniques that have been used to forecast water demand. In recent years, one remarkable and promising example of a learning algorithm is deep learning, which has produced state-of-the-art results in many fields such as sentiment classification and face recognition [7]. So far, however, deep learning methods have seldom been used to forecast water demand.

This paper aims to 1) investigate the potential of deep learning methods in water demand forecasting; and 2) compare their prediction performance with a conventional ANN model. To achieve these objectives, we developed a novel deep neural network, i.e., the gated recurrent unit network (GRUN) model, and a conventional ANN model.
The methodology is described in Section 2. Section 3 evaluates the performance of these models using practical data from a district metering area (DMA).

2. Methodology

2.1 Research flowchart

Figure 1 presents the research outline. Firstly, historical water demand data from a DMA in Changzhou city were collected and features were extracted as the model inputs. In order to simulate a real-life situation, we use two predictive approaches: the first is a 15 minute prediction and the second is a 24 hour prediction with 15-min time steps. For 24 hour forecasts, the output of the previous moment is used as one of the model inputs for the next moment until 96 values have been forecasted. Adopting the same model to predict each value increases the efficiency of the prediction. Model performance was evaluated on the basis of prediction accuracy and model stability.

Figure 1. Research outline.

2.2 GRUN model

Recurrent neural networks (RNN) have shown promising results in some machine learning tasks, especially with the recently introduced gated recurrent unit (GRU), which was proposed by Cho, et al. [8]. It has been successfully applied to long sequences and has been shown to have high processing efficiency [9]. The GRU has a strong ability to deal with nonlinear data, especially for sequence processing. The use of memory modules rather than ordinary hidden units ensures that the gradient does not vanish or explode after a large number of iterations, which overcomes the difficulties encountered in traditional RNN training. This paper uses the GRU as the core to build the GRUN model. Figure 2a illustrates the structure of the GRU. The GRU computation can be briefly described by Eqs (1)-(4) [9]:

r_t = ReLU(W_{rx} x_t + W_{rh} h_{t-1} + b_r)    (1)
z_t = ReLU(W_{xz} x_t + W_{hz} h_{t-1} + b_z)    (2)
H_t = tanh(W_{xH} x_t + W_H (r_t ⊙ h_{t-1}) + b_H)    (3)
h_t = z_t ⊙ h_{t-1} + (1 - z_t) ⊙ H_t    (4)

where r_t and z_t are the reset and update gates, and ⊙ denotes element-wise multiplication. The tanh activation function ensures the output values are between -1 and 1. The ReLU activation function is f(x) = x for x > 0 and f(x) = 0 otherwise. x_t is the input, h_t is the output, and H_t is the candidate output. W_{rx}, W_{rh}, W_{xz}, W_{hz}, W_{xH}, W_H are the related weight matrices, and b_r, b_z, b_H are the related biases.
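To make the computation concrete, the following is a minimal NumPy sketch of one GRU step as defined by Eqs (1)-(4), using the ReLU-gated variant stated above; the parameter container and its names are illustrative, not taken from the paper's code.

import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def gru_step(x_t, h_prev, p):
    # p is a dict of weight matrices and bias vectors; keys mirror Eqs (1)-(4).
    r_t = relu(p["W_rx"] @ x_t + p["W_rh"] @ h_prev + p["b_r"])            # Eq (1): reset gate
    z_t = relu(p["W_xz"] @ x_t + p["W_hz"] @ h_prev + p["b_z"])            # Eq (2): update gate
    H_t = np.tanh(p["W_xH"] @ x_t + p["W_H"] @ (r_t * h_prev) + p["b_H"])  # Eq (3): candidate output
    h_t = z_t * h_prev + (1.0 - z_t) * H_t                                 # Eq (4): new hidden state
    return h_t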

Figure 2b presents the structure of the GRUN model. It is a deep neural network framework with multiple processing layers that learn representations of the data at multiple levels of abstraction. Three GRU layers, representing water demand at different periods of time, make up the first part of the network. The time axis is divided into three fragments, denoting recent time, near time and distant time. The three GRU layers correspond to these three fragments; they produce a memory state for past water demand and establish dependencies between water demands of different periods of time. The three GRU layers are then connected by their output tensors to a merge layer (i.e., a layer that concatenates a list of inputs), which forms the second part of the network. The merge layer integrates the water demand information, assigns different weights according to the importance of water demand for different periods of time, and further deepens these relationships. The third part of the network consists of several dense layers (i.e., regular fully connected layers). These dense layers further enhance the ability of the model to deal with nonlinear data. The fourth part is the output layer, which directly outputs the prediction values.

Figure 2. (a) Illustration of the GRU; (b) Structure of the GRUN model.

2.3 ANN model

Over the past 20 years, artificial neural networks have been increasingly used in water demand prediction. Many studies use ANN models to predict hourly or daily water demand [10]. The most common ANN is the feed-forward neural network with the back-propagation learning algorithm. The ANN model used in this paper has three dense layers (i.e., input layer, hidden layer, output layer), which makes it a conventional neural network. We put the water demand for different periods of time into the ANN model and apply simple nonlinear transformations until the results are satisfactory.
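As a concrete but necessarily speculative illustration of Sections 2.2 and 2.3, the sketch below builds a GRUN-like network and the three-layer ANN baseline in Keras, assuming one GRU layer per time fragment and the node counts and activations reported later in Table 2; the exact wiring of the paper's models is not published, so these details are assumptions.

from tensorflow.keras import layers, models

# One input per time fragment (window lengths follow Section 3.2: i=5, 2j+1=5, 2k+1=5).
recent_in  = layers.Input(shape=(5, 1), name="recent")    # Q_{t-1} .. Q_{t-5}
near_in    = layers.Input(shape=(5, 1), name="near")      # Q_{t-94} .. Q_{t-98}
distant_in = layers.Input(shape=(5, 1), name="distant")   # Q_{t-190} .. Q_{t-194}

# Part 1: three GRU layers, one per fragment (48, 32, 32 units per Table 2).
recent_mem  = layers.GRU(48)(recent_in)
near_mem    = layers.GRU(32)(near_in)
distant_mem = layers.GRU(32)(distant_in)

# Part 2: merge layer concatenating the three memory states.
merged = layers.concatenate([recent_mem, near_mem, distant_mem])

# Part 3: dense stack (64, 32, 16, 8, 4, 2 nodes with ReLU, per Table 2).
x = merged
for units in (64, 32, 16, 8, 4, 2):
    x = layers.Dense(units, activation="relu")(x)

# Part 4: linear output layer producing the predicted demand.
out = layers.Dense(1, activation="linear")(x)

grun = models.Model(inputs=[recent_in, near_in, distant_in], outputs=out)
grun.compile(optimizer="adam", loss="mse")

# The conventional ANN baseline (Section 2.3): three dense layers, 32-8-1 per Table 2.
ann = models.Sequential([
    layers.Input(shape=(15,)),  # the 15 lagged demands listed in Table 1
    layers.Dense(32, activation="tanh"),
    layers.Dense(8, activation="relu"),
    layers.Dense(1, activation="linear"),
])
ann.compile(optimizer="adam", loss="mse")

With this wiring, the three GRU layers, the merge layer and the seven dense layers add up to the eleven network layers cited in the results section.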

3. Case study

3.1 Data description

The data analyzed in this study were collected from Changzhou city. The DMA is in the northeast of Changzhou and has a population of about . It mainly comprises residential water use and also contains some commercial areas. The data range from February 1, 2016, to January 31, 2017, with a sampling interval of 15 minutes. The maximum value is 157 m³ and the mean value is 82 m³. The total dataset contains observations.

3.2 Feature extraction

In order to better explore the characteristics of the water demand time series, the timeline is divided into three fragments: recent time, near time and distant time. The first fragment is recent time. To model recent temporal dependence, we select i 15-min time steps of water demand data that are close to time t of the current day (Q_t). Let [Q_{t-1}, Q_{t-2}, ..., Q_{t-i}] be this recent dependent sequence, where t is the predicted time and i can be selected from between 1 and 12. The second fragment is near time. To model near temporal dependence, we select j 15-min time steps of water demand data on either side of time t of the previous day (Q_{t-96}). Let [Q_{t-96+j}, ..., Q_{t-96}, ..., Q_{t-96-j}] be this near dependent sequence, where t is the predicted time and j can be selected from between 0 and 6. The third fragment is distant time. To model distant temporal dependence, we select k 15-min time steps of water demand data on either side of time t of the day before yesterday (Q_{t-192}). Let [Q_{t-192+k}, ..., Q_{t-192}, ..., Q_{t-192-k}] be this distant dependent sequence, where t is the predicted time and k can be selected from between 0 and 6. A parameter grid search is used to obtain the optimal values of i, j and k according to the minimum mean square error on the validation data. In this case, the optimal value of i is 5, and the optimal value of both j and k is 2. Both the ANN and GRUN models adopt the same features as model inputs, as shown in Table 1 (a code sketch of this windowing follows Section 3.3).

Table 1. Features of model inputs.

ANN model: Q_{t-1}, Q_{t-2}, Q_{t-3}, Q_{t-4}, Q_{t-5}, Q_{t-94}, Q_{t-95}, Q_{t-96}, Q_{t-97}, Q_{t-98}, Q_{t-190}, Q_{t-191}, Q_{t-192}, Q_{t-193}, Q_{t-194}

GRUN model:
Recent sequence (GRU layer): Q_{t-1}, Q_{t-2}, Q_{t-3}, Q_{t-4}, Q_{t-5}
Near sequence (GRU layer): Q_{t-94}, Q_{t-95}, Q_{t-96}, Q_{t-97}, Q_{t-98}
Distant sequence (GRU layer): Q_{t-190}, Q_{t-191}, Q_{t-192}, Q_{t-193}, Q_{t-194}

3.3 Development of prediction models

The dataset is divided into training data (22500 samples), validation data (2500 samples) and testing data (2500 samples). Note that the training data and validation data are randomly selected. This paper used the mini-batch gradient descent method to train the models. The method divides the training data into several batches and updates the parameters batch by batch, so that the randomness of gradient descent in the training process is reduced. The validation data are used to select model parameters and to early-stop the training algorithm for each model based on the minimum validation loss. This efficiently avoids model over-fitting or under-fitting. At the beginning of every training epoch, the training/validation data are shuffled. At the end of each training epoch, the state of the model is evaluated through the loss curves of the training/validation data. The model parameters of each algorithm are optimized through a parameter grid search; the results are shown in Table 2.
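Before turning to the parameter results in Table 2, the following is a hedged Python sketch of the windowing in Section 3.2 and of the recursive 24 hour forecast from Section 2.1; demand is assumed to be a 1-D array of 15-min readings and grun_predict is a placeholder for a trained one-step model.

import numpy as np

I, J, K = 5, 2, 2  # optimal window sizes from the grid search

def extract_features(demand, t):
    # Build the recent/near/distant sequences for predicted time t,
    # ordered as in Table 1 (most recent value first in each window).
    recent  = demand[t - I:t][::-1]                      # Q_{t-1} .. Q_{t-5}
    near    = demand[t - 96 - J:t - 96 + J + 1][::-1]    # Q_{t-94} .. Q_{t-98}
    distant = demand[t - 192 - K:t - 192 + K + 1][::-1]  # Q_{t-190} .. Q_{t-194}
    return recent, near, distant

def forecast_24h(history, grun_predict):
    # Recursive 24 hour forecast (Section 2.1): each of the 96 predictions
    # is appended to the series so it feeds the next step's recent window.
    series = list(history)
    predictions = []
    for _ in range(96):
        t = len(series)  # index of the next value to predict
        feats = extract_features(np.asarray(series, dtype=float), t)
        q_hat = float(grun_predict(feats))  # hypothetical trained-model call
        predictions.append(q_hat)
        series.append(q_hat)
    return predictions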

Table 2. Parameter optimization results.

Parameter          ANN model (dense layers)   GRUN model (GRU layers)   GRUN model (dense layers)
Number of layers   3                          3                         7
Number of nodes    32, 8, 1                   48, 32, 32                64, 32, 16, 8, 4, 2, 1
Learning rate
Activation         tanh, ReLU, Linear         tanh, ReLU                ReLU, Linear
Optimizer          Adam                       Adam
Number of epochs

3.4 Results

Table 3 summarizes the forecasting performance obtained by applying the models to the testing data. The results shown in Table 3 lead to the following observations. Firstly, for the 15 minute prediction, the best forecasting performance was obtained by the GRUN model, which has a high prediction accuracy (e.g., its MAPE value is 2.02%). Secondly, for the 24 hour prediction, the results suggest that more accurate forecasts can be obtained with the GRUN model (e.g., its MAPE value is 4.79%). The GRUN model therefore performs better than the ANN model for both 15 minute and 24 hour predictions.

Table 3. Performance indicators of prediction models (MAE: Mean Absolute Error; RMSE: Root Mean Square Error; NSE: Nash-Sutcliffe Model Efficiency; MAPE: Mean Absolute Percentage Error).

Model              MAE    RMSE    NSE    MAPE (%)
ANN (15 minute)
GRUN (15 minute)                          2.02
ANN (24 hour)
GRUN (24 hour)                            4.79

A more detailed evaluation of the models' performance is given in Figures 3 and 4. For the 15 minute prediction, Figure 3(a) shows the histogram of the relative error for the ANN model: 95% of the forecast relative errors fall within the range of ±6.65%. Figure 3(b) shows the histogram of the relative error for the GRUN model: 95% of the forecast relative errors fall within the range of ±5.48%, i.e., the relative errors are smaller than those of the ANN model. For the 24 hour prediction, Figure 4(a) shows the histogram of the relative error for the ANN model: 95% of the forecast relative errors fall within the range of ±20.17%. Figure 4(b) shows the histogram of the relative error for the GRUN model: 95% of the forecast relative errors fall within the range of ±12.65%. These results show that the bias of relative errors for the GRUN model is lower than for the ANN model, which implies that the GRUN model is more stable for both 15 minute and 24 hour predictions.

The performance differences can be explained by the methods themselves. The GRUN model has eleven network layers with which to make complex nonlinear transformations of the water demand data, which can achieve a high prediction accuracy. The GRU is the key to the GRUN model: it has a memory function that retains information on past water demand and establishes inter-dependencies in water demand for different periods of time. By contrast, the ANN model only has three dense layers with which to make simple transformations, which results in less reliable forecasts.
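For reference, the four indicators reported in Table 3 can be computed as in the short NumPy sketch below, where obs and sim are illustrative names for the observed and predicted demands on the testing data.

import numpy as np

def performance_indicators(obs, sim):
    obs = np.asarray(obs, dtype=float)
    sim = np.asarray(sim, dtype=float)
    err = sim - obs
    mae  = np.mean(np.abs(err))                                      # Mean Absolute Error
    rmse = np.sqrt(np.mean(err ** 2))                                # Root Mean Square Error
    nse  = 1.0 - np.sum(err ** 2) / np.sum((obs - obs.mean()) ** 2)  # Nash-Sutcliffe Efficiency
    mape = 100.0 * np.mean(np.abs(err / obs))                        # Mean Absolute Percentage Error (%)
    return mae, rmse, nse, mape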

Figure 3. Relative errors for the 15 minute prediction.

Figure 4. Relative errors for the 24 hour prediction.

In addition, the computational load is evaluated by two indicators: the Akaike information criterion (AIC) and the computation time spent in model development. As summarized in Table 4, the GRUN model has a much larger AIC than the ANN model, indicating that the GRUN model has a more complex structure and hence a higher computational load. The ANN model also trains faster than the GRUN model. From the perspective of model complexity, the ANN model indeed has some advantages, but its prediction accuracy and model stability are not as good as those of the GRUN model.

Table 4. Computation load on training data.

Model   AIC   Time (s)
ANN
GRUN

4. Conclusions and future work

This study investigates the potential of deep learning in short-term water demand prediction. We developed the GRUN model to forecast water demand 15 minutes and 24 hours into the future and compared it with a conventional ANN model. The conclusions of this work and suggestions for future work are listed below:

1. The deep learning-based method proposed in this study can achieve accurate and reliable water demand predictions for 15 minutes and 24 hours ahead. The GRUN model predicts more accurately and has a lower bias of relative errors than the conventional ANN model.

2. The ANN model has a lower computational load, but does not predict as accurately and is less stable than the GRUN model.

3. Future work should involve further testing of the proposed models on large amounts of real-time monitoring data in different DMAs. Meanwhile, it is necessary to investigate other factors that can affect short-term water demand forecasts.

5. References

[1] M. Herrera, L. Torgo, J. Izquierdo, and R. Pérez-García, "Predictive models for forecasting hourly urban water demand," Journal of Hydrology, vol. 387, no. 1-2, 2010.
[2] E. A. Donkor, T. A. Mazzuchi, R. Soyer, and J. A. Roberson, "Urban water demand forecasting: Review of methods and models," Journal of Water Resources Planning and Management, vol. 140, no. 2, 2014.
[3] J. Bougadis, K. Adamowski, and R. Diduch, "Short-term municipal water demand forecasting," Hydrological Processes, vol. 19, no. 1, 2005.
[4] J. S. Wong, Q. Zhang, and Y. D. Chen, "Statistical modeling of daily urban water consumption in Hong Kong: Trend, changing patterns, and forecast," Water Resources Research, vol. 46, no. 3, 2010.
[5] B. M. Brentan, E. Luvizotto Jr, M. Herrera, J. Izquierdo, and R. Pérez-García, "Hybrid regression model for near real-time urban water demand forecasting," Journal of Computational and Applied Mathematics, vol. 309, 2017.
[6] S. Mouatadid and J. Adamowski, "Using extreme learning machines for short-term urban water demand forecasting," Urban Water Journal, vol. 14, no. 6, 2017.
[7] Y. LeCun, Y. Bengio, and G. Hinton, "Deep learning," Nature, vol. 521, no. 7553, pp. 436-444, 2015.
[8] K. Cho, B. Van Merrienboer, D. Bahdanau, and Y. Bengio, "On the properties of neural machine translation: Encoder-decoder approaches," arXiv preprint arXiv:1409.1259, 2014.
[9] J. Chung, C. Gulcehre, K. H. Cho, and Y. Bengio, "Empirical evaluation of gated recurrent neural networks on sequence modeling," arXiv preprint arXiv:1412.3555, 2014.
[10] M. Romano and Z. Kapelan, "Adaptive water demand forecasting for near real-time management of smart water distribution systems," Environmental Modelling & Software, vol. 60, 2014.
