Gated RNN & Sequence Generation. Hung-yi Lee 李宏毅

1 Gated RNN & Sequence Generation Hung-yi Lee 李宏毅

2 Outline RNN with Gated Mechanism Sequence Generation Conditional Sequence Generation Tips for Generation

3 RNN with Gated Mechanism

4 Recurrent Neural Network Given function f: h', y = f(h, x). h and h' are vectors with the same dimension. (Diagram: the network unrolled over time, h0 → f → h1 → f → h2 → f → h3, with inputs x1, x2, x3 and outputs y1, y2, y3.) No matter how long the input/output sequence is, we only need one function f.
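The point of this slide, that a single function f is reused at every time step, can be sketched in a few lines of Python. The toy f, the weight shapes and the random inputs below are illustrative assumptions, not the lecture's code:

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 4 + 3)) * 0.1       # toy parameters, shared across every time step

def f(h, x):
    """A single recurrent function f: (h, x) -> (h', y); h and h' have the same dimension."""
    h_new = np.tanh(W @ np.concatenate([h, x]))
    y = h_new.copy()                        # toy output, just to show the interface
    return h_new, y

h = np.zeros(4)                             # h0
for x in rng.normal(size=(7, 3)):           # a sequence of any length
    h, y = f(h, x)                          # the same f at every time step
```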

5 Deep RNN h', y = f1(h, x); b', c = f2(b, y). (Diagram: two stacked recurrent layers; f1 maps x1, x2, x3 to y1, y2, y3, which f2 then maps to c1, c2, c3.)

6 Bidirectional RNN h', a = f1(h, x); b', c = f2(b, x); y = f3(a, c). (Diagram: f1 runs over x1, x2, x3 in one direction producing a1, a2, a3, f2 runs in the opposite direction producing c1, c2, c3, and f3 combines each ai and ci into yi.)

7 Naïve RNN Given function f: h', y = f(h, x), where h' = σ(W^h h + W^i x) and y = softmax(W^o h'). Bias terms are ignored here.
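A minimal sketch of this naïve RNN step, ignoring biases as the slide does; the helper names and shapes are illustrative assumptions, not the lecture's code:

```python
import numpy as np

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

def softmax(v):
    e = np.exp(v - v.max())
    return e / e.sum()

def naive_rnn_step(h, x, Wh, Wi, Wo):
    """h' = sigma(W^h h + W^i x);  y = softmax(W^o h').  Biases ignored, as on the slide."""
    h_new = sigmoid(Wh @ h + Wi @ x)
    y = softmax(Wo @ h_new)
    return h_new, y

# toy usage: 3-dim input, 4-dim hidden state, 5 output classes
rng = np.random.default_rng(1)
Wh, Wi, Wo = rng.normal(size=(4, 4)), rng.normal(size=(4, 3)), rng.normal(size=(5, 4))
h1, y1 = naive_rnn_step(np.zeros(4), rng.normal(size=3), Wh, Wi, Wo)
```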

8 LSTM (Diagram: a naive RNN cell versus an LSTM cell, which carries both c and h between time steps.) c changes slowly: c_t is c_{t-1} plus something. h changes faster: h_t and h_{t-1} can be very different.

9 z = tanh(W [x_t; h_{t-1}]), z^i = σ(W^i [x_t; h_{t-1}]), z^f = σ(W^f [x_t; h_{t-1}]), z^o = σ(W^o [x_t; h_{t-1}]).

10 With peephole connections: z = tanh(W [x_t; h_{t-1}; c_{t-1}]), where the peephole weights are diagonal; z^f, z^i and z^o are obtained in the same way.

11 c_t = z^f ⊙ c_{t-1} + z^i ⊙ z; h_t = z^o ⊙ tanh(c_t); y_t = σ(W' h_t).
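Slides 9 and 11 together define one LSTM step. A minimal numpy sketch of the basic (no-peephole) version, with assumed weight names and shapes:

```python
import numpy as np

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

def lstm_step(x, h_prev, c_prev, W, Wi, Wf, Wo, Wy):
    """One LSTM step following slides 9 and 11 (no peephole, biases ignored)."""
    xh = np.concatenate([x, h_prev])
    z  = np.tanh(W  @ xh)            # block input
    zi = sigmoid(Wi @ xh)            # input gate
    zf = sigmoid(Wf @ xh)            # forget gate
    zo = sigmoid(Wo @ xh)            # output gate
    c  = zf * c_prev + zi * z        # c_t = z^f * c_{t-1} + z^i * z: c changes slowly
    h  = zo * np.tanh(c)             # h_t = z^o * tanh(c_t): h can change a lot
    y  = sigmoid(Wy @ h)             # y_t = sigma(W' h_t)
    return h, c, y

# toy usage: 3-dim input, 4-dim cell, 2-dim output
rng = np.random.default_rng(0)
d_x, d_h = 3, 4
W, Wi, Wf, Wo = (rng.normal(size=(d_h, d_x + d_h)) for _ in range(4))
Wy = rng.normal(size=(2, d_h))
h1, c1, y1 = lstm_step(rng.normal(size=d_x), np.zeros(d_h), np.zeros(d_h), W, Wi, Wf, Wo, Wy)
```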

12 LSTM (Diagram: two consecutive LSTM time steps unrolled; c_{t-1}, h_{t-1} and x_t produce c_t, h_t and y_t, which feed the next step together with x_{t+1}.)

13 GRU h_t = z ⊙ h_{t-1} + (1 − z) ⊙ h', with reset gate r and update gate z. (Diagram: r gates h_{t-1} when computing the candidate h' from x_t; z blends h_{t-1} and h' into h_t.)
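A hedged sketch of one GRU step consistent with this slide; the exact placement of the reset gate follows the standard GRU formulation, and the weight names and shapes are assumptions:

```python
import numpy as np

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

def gru_step(x, h_prev, Wr, Wz, Wh):
    """One GRU step: the update gate z blends the old state with a candidate h'
    computed from the reset-gated state (biases ignored)."""
    xh = np.concatenate([x, h_prev])
    r = sigmoid(Wr @ xh)                                    # reset gate
    z = sigmoid(Wz @ xh)                                    # update gate
    h_cand = np.tanh(Wh @ np.concatenate([x, r * h_prev]))  # candidate h'
    return z * h_prev + (1.0 - z) * h_cand                  # h_t = z * h_{t-1} + (1 - z) * h'

# toy usage: 3-dim input, 4-dim hidden state
rng = np.random.default_rng(0)
Wr, Wz, Wh = (rng.normal(size=(4, 7)) for _ in range(3))
h1 = gru_step(rng.normal(size=3), np.zeros(4), Wr, Wz, Wh)
```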

14 LSTM: A Search Space Odyssey

15 LSTM: A Search Space Odyssey Standard LSTM works well. Simplify the LSTM: couple the input and forget gates, remove the peephole connections. The forget gate is critical for performance. The output gate activation function is critical.

16 An Empirical Exploration of Recurrent Network Architectures LSTM-f/i/o: removing the forget/input/output gate; LSTM-b: large bias. Importance: forget > input > output. A large bias for the forget gate is helpful.

17 An Empirical Exploration of Recurrent Network Architectures

18 Neural Architecture Search with Reinforcement Learning (Figure: the standard LSTM cell next to the cell found by reinforcement learning.)

19 Sequence Generation

20 Generation Sentences are composed of characters/words. Generate a character/word at each time step with an RNN: h', y = f(h, x). y is the distribution over tokens (e.g. 你, 我, 他, 是, 很); sample from the distribution to generate a token. x is the token generated at the last time step, represented by 1-of-N encoding.

21 Generation y^1: P(w | <BOS>), y^2: P(w | <BOS>, 床), y^3: P(w | <BOS>, 床, 前). Sentences are composed of characters/words. Generate a character/word at each time step with an RNN: sample 床, 前, 明, ... from y^1, y^2, y^3, feeding each sampled token back in as the next input (x^1 = <BOS>, x^2 = 床, x^3 = 前), until <EOS> is generated.
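The sampling loop on slides 20-21 can be sketched as follows; step is any recurrent function returning (new state, distribution over tokens), and the toy model in the usage is an assumption, only there to make the sketch runnable:

```python
import numpy as np

def generate(step, h0, bos_id, eos_id, vocab_size, max_len=50, rng=None):
    """Sample a sequence token by token: feed the previously generated token back in
    (as a 1-of-N vector) until <EOS> comes out.  step(h, x) -> (h', y) returns the
    new state and a distribution y over the vocabulary."""
    rng = rng or np.random.default_rng()
    tokens, h, prev = [], h0, bos_id
    for _ in range(max_len):
        x = np.eye(vocab_size)[prev]            # 1-of-N encoding of the last token
        h, y = step(h, x)                       # y approximates P(w | tokens so far)
        prev = int(rng.choice(vocab_size, p=y)) # sample, do not just take the argmax
        if prev == eos_id:                      # stop once <EOS> is generated
            break
        tokens.append(prev)
    return tokens

# toy usage with a random "model", just to make the sketch runnable
rng = np.random.default_rng(0)
V, H = 6, 4
Wh, Wx, Wo = rng.normal(size=(H, H)), rng.normal(size=(H, V)), rng.normal(size=(V, H))
def toy_step(h, x):
    h2 = np.tanh(Wh @ h + Wx @ x)
    logits = Wo @ h2
    p = np.exp(logits - logits.max())
    return h2, p / p.sum()
print(generate(toy_step, np.zeros(H), bos_id=0, eos_id=1, vocab_size=V, rng=rng))
```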

22 Generation Training: minimize the cross-entropy between each output distribution and the reference token. Training data: 春眠不覺曉. The inputs are <BOS>, 春, 眠, ... and the targets are 春, 眠, 不, ...
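A matching sketch of the training objective on this slide: feed the reference tokens as inputs and sum the per-step cross-entropy. Any step(h, x) -> (h', y) such as the one in the previous sketch works; the uniform "model" below is only for illustration:

```python
import numpy as np

def teacher_forced_loss(step, h0, reference, bos_id, vocab_size):
    """Training as on this slide: the *reference* tokens are fed as inputs, and the loss
    is the summed cross-entropy of each reference token under the predicted distributions."""
    h, prev, loss = h0, bos_id, 0.0
    for target in reference:                  # e.g. the characters of 春眠不覺曉, then <EOS>
        x = np.eye(vocab_size)[prev]
        h, y = step(h, x)                     # y: predicted distribution over tokens
        loss += -np.log(y[target] + 1e-12)    # cross-entropy at this step
        prev = target                         # feed the reference, not the model's sample
    return loss

# toy usage with a "model" that always predicts a uniform distribution
uniform_step = lambda h, x: (h, np.full(6, 1.0 / 6))
print(teacher_forced_loss(uniform_step, np.zeros(4), reference=[2, 3, 4, 1], bos_id=0, vocab_size=6))
```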

23 Generation Images are composed of pixels. Generate a pixel at each time step with an RNN: consider an image as a sentence of pixel colours, e.g. blue red yellow gray, train the RNN on these sentences, and sample colours (red, blue, ...) one at a time starting from <BOS>.

24 Generation Images are composed of pixels. (Examples: generated 3 x 3 images.)

25 Conditional Sequence Generation

26 Conditional Generation We don't want to simply generate some random sentences. Generate sentences based on conditions. Caption generation: given an image as the condition, generate "A young girl is dancing." Chat-bot: given the condition "Hello", generate "Hello. Nice to see you."

27 Conditional Generation Represent the input condition as a vector, and use the vector as the input of the RNN generator. Image caption generation: a CNN encodes the input image into a vector, and the RNN generates a caption such as "A woman ..." ending with "." (period), starting from <BOS>.

28 Conditional Generation Sequence-to-sequence learning: represent the input condition as a vector, and use the vector as the input of the RNN generator. E.g. machine translation / chat-bot: the encoder compresses 機 器 學 習 into a vector carrying the information of the whole sentence, and the decoder generates "machine learning ." (ending with the period). The encoder and decoder are jointly trained.

29 Conditional Generation M: Hello / U: Hi / M: Hi — we need to consider longer context during chatting. Serban, Iulian V., Alessandro Sordoni, Yoshua Bengio, Aaron Courville, and Joelle Pineau, 2015, "Building End-To-End Dialogue Systems Using Generative Hierarchical Neural Network Models."

30 Dynamic Conditional Generation (Diagram: the encoder states h1–h4 for 機 器 學 習 are summarised into a different context vector c1, c2, ... at each decoding step, and the decoder generates "machine learning ." from them.)

31 Dynamic Conditional Generation Encoder / Decoder: instead of a single vector carrying the information of the whole sentence, the decoder receives a different context vector c1, c2, c3 at each step while generating "machine learning ." from 機 器 學 習.

32 Machine Translation Attention-based model: a match function compares the decoder state z0 with each encoder state h1–h4 (for 機 器 學 習) to produce α_0^1, and so on. What is match? You design it yourself, e.g. the cosine similarity of z and h, a small NN whose input is z and h and whose output is a scalar, or α = h^T W z. Its parameters are jointly learned with the other parts of the network.
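The three match options mentioned on the slide can be written down directly; all parameters (W, W1, w2) would be learned jointly with the rest of the network, and the names and shapes here are assumptions:

```python
import numpy as np

def match_cosine(z, h):
    """Cosine similarity of z and h (only applicable when they have the same dimension)."""
    return float(h @ z / (np.linalg.norm(h) * np.linalg.norm(z) + 1e-12))

def match_bilinear(z, h, W):
    """alpha = h^T W z."""
    return float(h @ W @ z)

def match_small_nn(z, h, W1, w2):
    """A small network whose input is z and h and whose output is a scalar."""
    return float(w2 @ np.tanh(W1 @ np.concatenate([h, z])))

# toy usage of the bilinear form with an 8-dim h and a 6-dim z
rng = np.random.default_rng(0)
print(match_bilinear(rng.normal(size=6), rng.normal(size=8), rng.normal(size=(8, 6))))
```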

33 Machine Translation Attention-based model: the scores α_0^1 ... α_0^4 are passed through a softmax to get normalised weights, and c^0 = Σ_i α̂_0^i h^i (e.g. 0.5 h^1 + 0.5 h^2) becomes the decoder input used to generate "machine".

34 Machine Translation Attention-based model: the new decoder state z1 is matched against h1–h4 again to produce α_1^1, and so on, while c^0 and "machine" feed the decoder.

35 Machine Translation Attention-based model: a softmax over α_1^1 ... α_1^4 gives the normalised weights, and c^1 = Σ_i α̂_1^i h^i (e.g. 0.5 h^3 + 0.5 h^4) is used to generate "learning".
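Putting slides 32-35 together, one attention step scores every encoder state, normalises with a softmax and takes the weighted sum. A sketch with assumed shapes:

```python
import numpy as np

def attention_context(z, H, match):
    """Score every encoder state h^i against the decoder state z, normalise with a
    softmax, and return c = sum_i alpha_hat^i h^i together with the weights."""
    scores = np.array([match(z, h) for h in H])     # alpha^1 ... alpha^N
    alpha_hat = np.exp(scores - scores.max())
    alpha_hat /= alpha_hat.sum()                    # softmax
    return alpha_hat @ H, alpha_hat                 # weighted sum of the h^i

# toy usage with the bilinear match alpha = h^T W z; h^1..h^4 stand in for 機/器/學/習
rng = np.random.default_rng(0)
H = rng.normal(size=(4, 8))
W = rng.normal(size=(8, 6))
c0, weights = attention_context(rng.normal(size=6), H, lambda z, h: float(h @ W @ z))
```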

36 Machine Translation Attention-based model: the same process repeats until <EOS> is generated.

37 Speech Recognition William Chan, Navdeep Jaitly, Quoc V. Le, Oriol Vinyals, "Listen, Attend and Spell", ICASSP, 2016

38 Image Caption Generation A CNN produces a vector for each image region (one per filter position); the decoder state z0 is matched against each region vector, giving attention weights such as 0.7.

39 Image Caption Generation The weighted sum of the region vectors is fed to the decoder to generate Word 1, producing the next state z1.

40 Image Caption Generation The new state z1 attends over the region vectors again, and the new weighted sum is used to generate Word 2, producing z2.

41 Image Caption Generation Kelvin Xu, Jimmy Ba, Ryan Kiros, Kyunghyun Cho, Aaron Courville, Ruslan Salakhutdinov, Richard Zemel, Yoshua Bengio, "Show, Attend and Tell: Neural Image Caption Generation with Visual Attention", ICML, 2015

42 Image Caption Generation Kelvin Xu, Jimmy Ba, Ryan Kiros, Kyunghyun Cho, Aaron Courville, Ruslan Salakhutdinov, Richard Zemel, Yoshua Bengio, "Show, Attend and Tell: Neural Image Caption Generation with Visual Attention", ICML, 2015

43 Li Yao, Atousa Torabi, Kyunghyun Cho, Nicolas Ballas, Christopher Pal, Hugo Larochelle, Aaron Courville, "Describing Videos by Exploiting Temporal Structure", ICCV, 2015

44 Question Answering Given a document and a query, output an answer. bAbI: the answer is a word. SQuAD: the answer is a sequence of words (in the input document). MS MARCO: the answer is a sequence of words. MovieQA: multiple-choice questions (output a number).

45 Memory Network The query q is matched against each sentence vector x^1 ... x^N of the document to get attention weights α_1 ... α_N; the extracted information Σ_{n=1}^N α_n x^n is fed to a DNN that produces the answer. Sentence-to-vector conversion can be jointly trained. Sainbayar Sukhbaatar, Arthur Szlam, Jason Weston, Rob Fergus, "End-To-End Memory Networks", NIPS, 2015

46 Memory Network In the jointly learned version, the attention is still computed by matching q against x^1 ... x^N, but the extracted information is Σ_{n=1}^N α_n h^n over a second, jointly learned set of sentence vectors h^1 ... h^N; the attention and extraction can be repeated (hopping) before the DNN produces the answer.
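A sketch of one memory-network hop as described on slides 45-46; the query update between hops ("hopping") shown here is one common choice rather than the lecture's exact formulation, and all shapes are assumptions:

```python
import numpy as np

def memory_hop(q, X, H):
    """One hop: attention alpha_n = softmax(x_n . q) over the document sentences,
    extracted information = sum_n alpha_n h_n (X and H may be the same matrix)."""
    scores = X @ q
    alpha = np.exp(scores - scores.max())
    alpha /= alpha.sum()
    return alpha @ H

# toy usage: 5 sentences embedded in 8 dimensions; a second hop refines the query
rng = np.random.default_rng(0)
X, H, q = rng.normal(size=(5, 8)), rng.normal(size=(5, 8)), rng.normal(size=8)
info = memory_hop(q, X, H)
info = memory_hop(q + info, X, H)   # adding the extracted information back to the query is one common hop rule
# a DNN would then map the extracted information (plus q) to the answer
```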

47 Memory Network (Diagram: compute attention and extract information from the document, then compute attention and extract information again on top of it, before a DNN outputs the answer a from the query q.)

48 Reading Comprehension End-To-End Memory Networks. S. Sukhbaatar, A. Szlam, J. Weston, R. Fergus. NIPS, 2015. (Figure: the position of the reading head across hops.) Keras has an example: babi_memnn.py

49 Wei Fang, Juei-Yang Hsu, Hung-yi Lee, Lin-Shan Lee, "Hierarchical Attention Model for Improved Machine Comprehension of Spoken Content", SLT, 2016

50 Neural Turing Machine (von Neumann architecture) A Neural Turing Machine not only reads from memory, but also modifies the memory through attention.

51 Neural Turing Machine Retrieval process: r^0 = Σ_i α̂_0^i m_0^i, where m_0^1 ... m_0^4 are the memory cells and α̂_0^1 ... α̂_0^4 the attention weights; r^0 and x^1 feed the controller f, which produces y^1.

52 Neural Turing Machine The controller also outputs k^1, e^1 and a^1. The new attention is α̃_1^i = cos(m_0^i, k^1), normalised by a softmax to get α̂_1^i.

53 Neural Turing Machine Memory update (element-wise): m_1^i = m_0^i − α̂_1^i e^1 ⊙ m_0^i + α̂_1^i a^1.
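Slides 51-53 describe reading, addressing and writing. A compact numpy sketch, simplified to content-based addressing only, with the key, erase and add vectors k, e and a given rather than produced by a controller:

```python
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def ntm_read(M, alpha):
    """r = sum_i alpha_hat^i m^i."""
    return alpha @ M

def ntm_address(M, k):
    """alpha_tilde^i = cos(m^i, k), then softmax."""
    scores = np.array([cosine(m, k) for m in M])
    alpha = np.exp(scores - scores.max())
    return alpha / alpha.sum()

def ntm_write(M, alpha, e, a):
    """m_new^i = m^i - alpha_hat^i * e (element-wise) m^i + alpha_hat^i * a."""
    return M - alpha[:, None] * (e[None, :] * M) + alpha[:, None] * a[None, :]

# toy usage: 4 memory slots of width 6; k, e, a would normally come from the controller
rng = np.random.default_rng(0)
M = rng.normal(size=(4, 6))
alpha = ntm_address(M, k=rng.normal(size=6))
r = ntm_read(M, alpha)
M = ntm_write(M, alpha, e=rng.uniform(size=6), a=rng.normal(size=6))
```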

54 Neural Turing Machine (Diagram: the process unrolled over time; at each step the controller f reads r^{t-1} and x^t, emits y^t, and updates the attention α_t^1 ... α_t^4 and memory m_t^1 ... m_t^4.)

55 Neural Turing Machine Wei-Jen Ko, Bo-Hsiang Tseng, Hung-yi Lee, "Recurrent Neural Network based Language Modeling with Controllable External Memory", ICASSP, 2017

56 Stack RNN Armand Joulin, Tomas Mikolov, "Inferring Algorithmic Patterns with Stack-Augmented Recurrent Nets", arXiv preprint, 2015. At each step the network f outputs y^t plus weights for Push, Pop and Nothing (e.g. x0.7, x0.2, x0.1) together with the information to store; the new stack is the weighted combination of the three resulting stacks.
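A sketch of the soft stack update suggested by the slide, simplified to a scalar value per stack slot; the mixture weights 0.7 / 0.2 / 0.1 are the ones shown on the slide, everything else is an assumption:

```python
import numpy as np

def stack_update(stack, push_val, p_push, p_pop, p_nothing):
    """Differentiable stack as a soft mixture of three candidate stacks:
    push (new value on top, bottom element falls off), pop (top removed), nothing."""
    pushed = np.concatenate([[push_val], stack])[:len(stack)]
    popped = np.concatenate([stack[1:], [0.0]])
    return p_push * pushed + p_pop * popped + p_nothing * stack

# toy usage with the weights x0.7 / x0.2 / x0.1 from the slide
s = np.zeros(5)
s = stack_update(s, push_val=1.0, p_push=0.7, p_pop=0.2, p_nothing=0.1)
```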

57 Tips for Generation

58 Attention Kelvin Xu, Jimmy Ba, Ryan Kiros, Kyunghyun Cho, Aaron Courville, Ruslan Salakhutdinov, Richard Zemel, Yoshua Bengio, "Show, Attend and Tell: Neural Image Caption Generation with Visual Attention", ICML, 2015. Bad attention: the weights α_t^i (input component i at generation time t) keep concentrating on the same component, e.g. both w_2 and w_4 attend to the woman, so the caption mentions "woman" twice and never mentions the cooking. Good attention: each input component gets approximately the same total attention weight over the whole generation, e.g. via the regularization term Σ_i (τ − Σ_t α_t^i)^2 (for each component i, sum over the generation steps t).
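The regularization term on this slide is easy to compute once the attention weights of a whole generation are collected into a matrix; a sketch, with the array layout as an assumption:

```python
import numpy as np

def attention_regularizer(alpha, tau=1.0):
    """alpha[t, i]: attention on input component i at generation step t.
    Penalises components whose total attention over the whole generation
    deviates from tau:  sum_i (tau - sum_t alpha[t, i])**2."""
    totals = alpha.sum(axis=0)              # total attention per component, over all steps
    return float(((tau - totals) ** 2).sum())

# toy usage: 4 generated words attending over 4 image regions
alpha = np.full((4, 4), 0.25)
print(attention_regularizer(alpha))         # 0.0: every component gets the same total attention
```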

59 Mismatch between Train and Test Training: minimize the cross-entropy of each component, C = Σ_t C_t, with the reference tokens (and the condition) fed in, starting from <BOS>.

60 Mismatch between Train and Test Generation: we do not know the reference. Testing: the inputs are the outputs of the previous time step. Training: the inputs are the reference tokens. This mismatch is called exposure bias.

61 One step wrong may make everything wrong, because such mistakes are never explored during training (一步錯, 步步錯: one wrong step, and every step after it goes wrong).

62 Modifying the Training Process? If we feed the model's own outputs back in when we try to decrease the loss for both steps 1 and 2, training is matched to testing, but in practice it is hard to train in this way.

63 Scheduled Sampling At each time step, the next input is taken either from the model's own output or from the reference, chosen at random according to a schedule.
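A sketch of the sampling decision at each training step; eps and its decay schedule are hyperparameters, and the stand-in model below is only there to make the snippet runnable:

```python
import numpy as np

def scheduled_input(reference_token, model_token, eps, rng):
    """With probability eps feed the reference token, otherwise feed the model's own
    sample; eps is annealed from 1 towards 0 over training (the 'schedule')."""
    return reference_token if rng.random() < eps else model_token

# inside a training loop (sketch): choose each next input by coin flip
rng = np.random.default_rng(0)
reference = [3, 1, 4, 1, 5]
eps = 0.75                                   # decays as training progresses
prev = 0                                     # <BOS>
for ref_tok in reference:
    model_tok = int(rng.integers(0, 10))     # stand-in for sampling from the model's y_t
    prev = scheduled_input(ref_tok, model_tok, eps, rng)
```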

64 Scheduled Sampling Caption generation on MSCOCO, comparing BLEU-4, METEOR and CIDEr for three settings: always from reference, always from model, and scheduled sampling. Samy Bengio, Oriol Vinyals, Navdeep Jaitly, Noam Shazeer, "Scheduled Sampling for Sequence Prediction with Recurrent Neural Networks", arXiv preprint, 2015

65 Beam Search The green path has the higher score, but it is not possible to check all the paths.

66 Beam Search Keep the several best paths at each step. Beam size = 2.

67 Beam Search The size of the beam is 3 in this example.
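A compact sketch of beam search as described on slides 65-67; step(h, token) -> (h', log_probs) is an assumed interface, and the toy "model" in the usage is a fixed random table:

```python
import numpy as np

def beam_search(step, h0, bos_id, eos_id, beam_size=3, max_len=20):
    """Keep the `beam_size` highest-scoring partial paths at every step instead of
    committing to a single greedy choice."""
    beams = [(0.0, [bos_id], h0, False)]                  # (log prob, tokens, state, finished)
    for _ in range(max_len):
        candidates = []
        for score, toks, h, done in beams:
            if done:                                      # finished paths just carry over
                candidates.append((score, toks, h, True))
                continue
            h2, logp = step(h, toks[-1])
            for w in np.argsort(logp)[-beam_size:]:       # expand only the top choices
                candidates.append((score + logp[w], toks + [int(w)], h2, int(w) == eos_id))
        beams = sorted(candidates, key=lambda b: b[0], reverse=True)[:beam_size]
        if all(b[3] for b in beams):
            break
    return beams[0][1]                                    # the best path kept

# toy usage: a fixed random table of next-token log-probabilities per previous token
rng = np.random.default_rng(0)
table = np.log(rng.dirichlet(np.ones(5), size=5))
print(beam_search(lambda h, t: (h, table[t]), h0=None, bos_id=0, eos_id=4))
```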

68 Better Idea? (Diagram: decoding trees starting from <BOS>; consistent outputs such as "I am" and "You are" versus mixed-up outputs such as "I are" and "You am"; "you are" gets a high score.)

69 Object level v.s. Component level Minimizing the error defined at the component level is not equivalent to improving the generated object. Reference: "The dog is running fast"; C = Σ_t C_t is the cross-entropy of each step, and candidate outputs such as "cat a a a", "The dog is is fast" and "The dog is running fast" show the gap. Optimize an object-level criterion instead of the component-level cross-entropy. Object-level criterion: R(y, ŷ), where y is the generated utterance and ŷ is the ground truth. Gradient descent?

70 Reinforcement learning? Start with observation s_1, take action a_1 (right) and obtain reward r_1 = 0; observe s_2, take action a_2 (fire, killing an alien) and obtain reward r_2 = 5; observe s_3, and so on.

71 Reinforcement learning? Each generated word is an action (the action set is the vocabulary), the condition and the words generated so far are the observation, intermediate rewards are r = 0, and the final reward is R(generated sentence, reference); the action we take influences the observation in the next step. Start from <BOS>. Marc'Aurelio Ranzato, Sumit Chopra, Michael Auli, Wojciech Zaremba, "Sequence Level Training with Recurrent Neural Networks", ICLR, 2016

72 Concluding Remarks RNN with Gated Mechanism Sequence Generation Conditional Sequence Generation Tips for Generation
