Control-oriented model learning with a recurrent neural network

1 Control-oriented model learning with a recurrent neural network. M. A. Bucci, O. Semeraro, A. Allauzen, L. Cordier, G. Wisniewski, L. Mathelin. 20 November 2018, APS Atlanta.

2 Kuramoto-Sivashinsky (KS)
u_t = -u_xxxx - u_xx - (1/2) (u^2)_x
It models diffusive instabilities in a flame front. The solution is steady or chaotic; the length L of the domain is the critical parameter.
Numerically solved for L = 22: Fourier spatial discretization (64 points), implicit time marching scheme, dt = π.
Figure: space-time plots of u(x, t) for different domain lengths L.
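For context, a minimal pseudo-spectral sketch of this setup in Python (L = 22, 64 Fourier modes). It uses a first-order semi-implicit step as a stand-in for the implicit time-marching scheme mentioned above; the time step, number of steps and initial condition are illustrative assumptions.

import numpy as np

L, N, dt, nsteps = 22.0, 64, 0.025, 8000

x = L * np.arange(N) / N
k = 2.0 * np.pi * np.fft.fftfreq(N, d=L / N)   # angular wavenumbers
lin = k**2 - k**4                              # linear KS operator in Fourier space

u = 0.1 * np.cos(2.0 * np.pi * x / L) * (1.0 + np.sin(2.0 * np.pi * x / L))
v = np.fft.fft(u)

snapshots = []
for n in range(nsteps):
    nonlin = -0.5j * k * np.fft.fft(np.fft.ifft(v).real ** 2)  # -1/2 d(u^2)/dx
    v = (v + dt * nonlin) / (1.0 - dt * lin)                   # linear part treated implicitly
    if n % 20 == 0:
        snapshots.append(np.fft.ifft(v).real.copy())

U = np.array(snapshots)   # space-time field u(x, t); rows are snapshots in time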

3-4 Model vs model-free
Model-based control: training a model of the dynamics enables Model Predictive Control and/or Opposite Control approaches.
Model-free control: Deep Reinforcement Learning algorithms (DQN, DDQN, DDPG, ...) solve the Bellman equation to maximize the objective function. The solution of the Bellman equation is a necessary and sufficient condition for the optimality of the control policy if and only if the whole phase space is known. A model valid in the whole phase space can be used to explore the phase space effectively.
Example: Kuramoto-Sivashinsky control with DDPG, driving the KS system from equilibrium solution E3 to E2 for L = 22. The critic update behind DDPG is sketched below.
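A minimal Python sketch of the Bellman backup used by the DDPG critic. The target_actor and target_critic networks, the discount factor and the tensor shapes are illustrative assumptions, not details taken from the slides.

import torch

def ddpg_critic_target(reward, next_state, done, target_actor, target_critic, gamma=0.99):
    # Bellman backup: y = r + gamma * (1 - done) * Q'(s', mu'(s'))
    with torch.no_grad():
        next_action = target_actor(next_state)            # mu'(s')
        next_q = target_critic(next_state, next_action)   # Q'(s', mu'(s'))
    return reward + gamma * (1.0 - done) * next_q

# The critic is then regressed onto this target and the actor is updated to
# maximize Q(s, mu(s)); both steps only need sampled transitions, not a model.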

5-8 Long time horizon prediction
Neural network models are extremely powerful for forecasting chaotic dynamics.
[1] Pathak, Jaideep, et al. (2018). [2] Vlachas, Pantelis R., et al. (2018). [3] Pathak, Jaideep, et al. (2017).
Figure: prediction of Kuramoto-Sivashinsky chaotic dynamics. Picture from [1].
Recurrent Neural Network (LSTM) cell, acting on input x_t, hidden state h_t and memory (cell) state c_t:
Forget gate:   F_t = σ(x_t U_f + h_{t-1} W_f)
Update gate:   C̃_t = tanh(x_t U_c + h_{t-1} W_c),  I_t = σ(x_t U_i + h_{t-1} W_i)
Output gate:   O_t = σ(x_t U_o + h_{t-1} W_o)
Memory state:  c_t = F_t ⊙ c_{t-1} + I_t ⊙ C̃_t
Hidden state:  h_t = O_t ⊙ tanh(c_t)
Here ⊙ denotes the element-wise product; one step of these equations is sketched in code below.
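A minimal NumPy sketch of a single LSTM step implementing exactly the gate equations above (no bias terms, as on the slide); the weight shapes and random initialization are illustrative.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, h_prev, c_prev, U, W):
    F = sigmoid(x_t @ U["f"] + h_prev @ W["f"])          # forget gate
    C_tilde = np.tanh(x_t @ U["c"] + h_prev @ W["c"])    # candidate memory
    I = sigmoid(x_t @ U["i"] + h_prev @ W["i"])          # update (input) gate
    O = sigmoid(x_t @ U["o"] + h_prev @ W["o"])          # output gate
    c_t = F * c_prev + I * C_tilde                       # new memory state
    h_t = O * np.tanh(c_t)                               # new hidden state
    return h_t, c_t

# usage with random weights: 64-dimensional input, 256 hidden units
d_in, d_h = 64, 256
rng = np.random.default_rng(0)
U = {g: 0.1 * rng.standard_normal((d_in, d_h)) for g in "fcio"}
W = {g: 0.1 * rng.standard_normal((d_h, d_h)) for g in "fcio"}
h, c = np.zeros(d_h), np.zeros(d_h)
h, c = lstm_step(rng.standard_normal(d_in), h, c, U, W)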

9-16 KS predictability
Input: u_n, c_n. Output: u_{n+1}, c_{n+1}.
Architecture: 2 LSTM layers with 256 neurons, 1 linear layer with 64 neurons (sketched in code below).
Observations (training vs. prediction windows shown in the figures):
- with few data, the NN can predict the solution for a long period;
- prediction from an unseen initial condition fails (the LSTM memory must first be initialized);
- artificial stable solutions might arise;
- spurious correlations can be obtained even with a large dataset if it is collected along poorly chosen trajectories.
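A minimal PyTorch sketch of that architecture. How the input c_n enters the network (here concatenated to u_n) and its dimension are assumptions, not details given on the slide.

import torch
import torch.nn as nn

class KSPredictor(nn.Module):
    # Two stacked LSTM layers with 256 units, then a linear map back to the 64 grid points.
    def __init__(self, n_state=64, n_control=1, hidden=256):
        super().__init__()
        self.lstm = nn.LSTM(input_size=n_state + n_control,
                            hidden_size=hidden, num_layers=2, batch_first=True)
        self.out = nn.Linear(hidden, n_state)

    def forward(self, x_seq, memory=None):
        # x_seq: (batch, time, n_state + n_control); returns one-step-ahead predictions
        y, memory = self.lstm(x_seq, memory)
        return self.out(y), memory

model = KSPredictor()
u_hat, mem = model(torch.randn(8, 100, 65))   # 8 sequences of 100 steps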

17-21 Robust learning: open questions
f is the propagator of the K-S system, approximated by an LSTM neural network architecture:
x_{n+1} = f(x_n),    x̂_{n+1} = LSTM(x_n)
NN training: L = min || x_{n+1} - x̂_{n+1} ||
The neural network training minimizes the distance between the true chaotic trajectory and the predicted one (a sketch of this training loop follows below).
Verify: does the LSTM reproduce f?
- Is this procedure enough to recover a model that is statistically representative of the KS system?
- What is the discriminant information that the NN learns during the training?
- Can we introduce deterministic information in the data to achieve a statistically correct model?
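A minimal PyTorch sketch of that one-step training objective, reusing the KSPredictor-style model sketched earlier; the optimizer, learning rate, number of epochs and data-loader format are illustrative assumptions.

import torch

def train_one_step_model(model, loader, epochs=10, lr=1e-3):
    # Minimize || x_{n+1} - LSTM(x_n) || along the training trajectories.
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = torch.nn.MSELoss()
    for _ in range(epochs):
        for x_seq, x_next in loader:   # x_seq: (batch, T, input dim), x_next: (batch, T, 64)
            pred, _ = model(x_seq)     # one-step-ahead predictions along the sequence
            loss = loss_fn(pred, x_next)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
    return model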

22-23 Theoretical amount of data and ergodic measurement
A chaotic system is well characterized by the correlation dimension (its computation is sketched below). Correlation sum:
C(m, ε) = 2 / [(N - m)(N - m - 1)] Σ_{i=m..N} Σ_{j=i+1..N} Φ(ε - ||X_i - X_j||)
Grassberger-Procaccia (1987): C(m, ε) ~ ε^{D_2}
Minimum amount of data for D_2 to converge:
- N > (D/ε)^{D_2 / 2}   (Eckmann & Ruelle, 1991)
- N > 2 (D_2 + 1)^{D_2}   (Essex, 1991)
- N > [R (2 - Q) / (2 (1 - Q))]^{2 D_2 + 1}   (Baker & Gollub, 1996)
Example: Lorenz system, Ẋ = σ(Y - X), Ẏ = X(ρ - Z) - Y, Ż = XY - βZ, with σ = 10, β = 8/3, ρ = 28; dt = 0.01, embedding dimension m = 5, delay τ = 29, N = ; estimated D_2 = 2.06.
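A minimal NumPy sketch of a delay embedding and the correlation sum. It uses a simplified N(N - 1) pair normalization instead of the (N - m)(N - m - 1) form above; D_2 is then read off as the slope of log C versus log ε in the scaling range.

import numpy as np

def delay_embed(x, m, tau):
    # Delay vectors X_i = (x_i, x_{i+tau}, ..., x_{i+(m-1)tau}) from a scalar series x.
    n = len(x) - (m - 1) * tau
    return np.stack([x[i * tau: i * tau + n] for i in range(m)], axis=1)

def correlation_sum(X, eps):
    # Fraction of pairs of embedded points closer than eps (Heaviside kernel).
    N = len(X)
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    close = d[np.triu_indices(N, k=1)] < eps
    return 2.0 * np.sum(close) / (N * (N - 1))

# Example: slope of log C vs. log eps gives an estimate of D_2.
# eps_values = np.logspace(-1, 1, 20)
# C = [correlation_sum(X, e) for e in eps_values]
# D2 = np.polyfit(np.log(eps_values), np.log(C), 1)[0]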

24-30 Choice of the data
3 datasets to train the LSTM neural network model (their generation is sketched below):
1. one trajectory on the chaotic attractor, N = 
2. nine trajectories randomly initialized on the chaotic attractor, N = 3000
3. nine trajectories leaving from the fixed points, N = 3000
Fixed points of the Lorenz system:
E0 = (0, 0, 0)
E1 = ( √(β(ρ - 1)),  √(β(ρ - 1)), ρ - 1)
E2 = (-√(β(ρ - 1)), -√(β(ρ - 1)), ρ - 1)
Kawahara, G., Uhlmann, M., & Van Veen, L. (2012). The significance of simple invariant solutions in turbulent flows. Annual Review of Fluid Mechanics, 44.
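A minimal Python sketch of how the three Lorenz training sets could be generated. The integrator (forward Euler), transient length, perturbation of the fixed points, the length of dataset (1) and the split of the N = 3000 points across the nine trajectories are all illustrative assumptions.

import numpy as np

sigma, beta, rho, dt = 10.0, 8.0 / 3.0, 28.0, 0.01
rng = np.random.default_rng(0)

def lorenz_step(s):
    x, y, z = s
    return s + dt * np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

def trajectory(s0, n):
    out = np.empty((n, 3))
    s = np.array(s0, dtype=float)
    for i in range(n):
        s = lorenz_step(s)
        out[i] = s
    return out

# (1) one long trajectory on the chaotic attractor (transient discarded; length assumed)
on_attractor = trajectory([1.0, 1.0, 1.0], 2000)[-1]
dataset_1 = trajectory(on_attractor, 27000)

# (2) nine short trajectories randomly initialized on the attractor
n_per = 3000 // 9
dataset_2 = [trajectory(dataset_1[rng.integers(len(dataset_1))], n_per) for _ in range(9)]

# (3) nine trajectories leaving from the (slightly perturbed) fixed points E0, E1, E2
r = np.sqrt(beta * (rho - 1.0))
fixed_points = [np.zeros(3), np.array([r, r, rho - 1.0]), np.array([-r, -r, rho - 1.0])]
dataset_3 = [trajectory(fp + 1e-3 * rng.standard_normal(3), n_per)
             for _ in range(3) for fp in fixed_points]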

31 Learning with different strategies
Computational time: 80% cheaper than the standard strategy.
Correlation dimensions estimated for the three strategies: D_2 = 2.06 ± 0.1, D_2 = 0.13 ± 0.81, D_2 = 2.17 ± 0.09.
Figure: correlation sums C(ε) versus ε and return maps Z_{n+1} versus Z_n for the three strategies.

32 KS results
Training dataset composed of 64 trajectories leaving from each invariant solution of KS with L = 22: 4 equilibrium states plus 2 travelling waves plus their symmetries. Cvitanović, P., Davidchack, R. L., & Siminos, E. (2010).
Figure: KS reference solution, LSTM prediction and error; architecture x_n -> CNN -> LSTM -> CNN -> x_{n+1}.
The instantaneous solution is embedded by a CNN to take into account the periodicity of the spatial solution (a sketch follows below). The resulting model can be used starting from any initial condition without a large loss of the statistical properties of the dynamics.
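A minimal PyTorch sketch of the x_n -> CNN -> LSTM -> CNN -> x_{n+1} idea, where circular padding in the convolutions accounts for the spatial periodicity. Kernel size, number of channels and hidden width are illustrative assumptions, not the authors' exact choices.

import torch
import torch.nn as nn

class PeriodicCNNLSTM(nn.Module):
    def __init__(self, n_x=64, channels=8, hidden=256):
        super().__init__()
        # circular padding = periodic boundary conditions on the 64-point grid
        self.encode = nn.Conv1d(1, channels, kernel_size=5, padding=2, padding_mode="circular")
        self.lstm = nn.LSTM(channels * n_x, hidden, batch_first=True)
        self.fc = nn.Linear(hidden, channels * n_x)
        self.decode = nn.Conv1d(channels, 1, kernel_size=5, padding=2, padding_mode="circular")
        self.n_x, self.channels = n_x, channels

    def forward(self, u_seq, memory=None):
        b, t, n = u_seq.shape
        z = self.encode(u_seq.reshape(b * t, 1, n)).reshape(b, t, -1)   # CNN embedding
        y, memory = self.lstm(z, memory)                                # temporal dynamics
        y = self.fc(y).reshape(b * t, self.channels, self.n_x)
        return self.decode(y).reshape(b, t, n), memory                  # back to u_{n+1}

model = PeriodicCNNLSTM()
u_next, mem = model(torch.randn(4, 50, 64))   # 4 sequences of 50 snapshots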

33 Conclusions
The LSTM architecture is useful for extrapolating dynamics from a chaotic trajectory.
A model trained with just one chaotic trajectory is no longer valid for predicting the dynamics on an unseen chaotic trajectory.
A statistically correct model can be recovered if a physics-informed dataset is used for the training.
The approximation provided by neural networks makes it possible to design control policies for non-linear systems using reinforcement learning.
Acknowledgment: ANR/DGA Flowcon project, ANR-17-ASTR-0022.
