Gated RNN & Sequence Generation. Hung-yi Lee 李宏毅


Outline: RNN with Gated Mechanism; Sequence Generation; Conditional Sequence Generation; Tips for Generation.

RNN with Gated Mechanism

Recurrent Neural Network. Given a function f: (h', y) = f(h, x), where h and h' are vectors with the same dimension. Unrolling over time: (h_1, y_1) = f(h_0, x_1), (h_2, y_2) = f(h_1, x_2), (h_3, y_3) = f(h_2, x_3). No matter how long the input/output sequence is, we only need the one function f.

Deep RNN. Stack two cells: (h', y) = f1(h, x) for the lower layer and (b', c) = f2(b, y) for the upper layer, so the outputs y_1, y_2, y_3 of the first layer become the inputs of the second, which produces c_1, c_2, c_3.

Bidirectional RNN. A forward RNN (h', a) = f1(h, x) reads the input left to right, a backward RNN (b', c) = f2(b, x) reads it right to left, and a third function y = f3(a, c) combines the two hidden states at each time step to produce y_1, y_2, y_3.

Naïve RNN. Given f: (h', y) = f(h, x), the cell computes h' = σ(W_h h + W_i x) and y = softmax(W_o h') (bias terms are ignored here).
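As a concrete illustration of the naïve RNN cell above, here is a minimal NumPy sketch; the hidden size, input dimension, and random weights are assumptions used only for the example, not values from the slides.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def naive_rnn_cell(h, x, W_h, W_i, W_o):
    """One step of the naive RNN: h' = sigma(W_h h + W_i x), y = softmax(W_o h')."""
    h_new = sigmoid(W_h @ h + W_i @ x)   # next hidden state
    y = softmax(W_o @ h_new)             # distribution over output tokens
    return h_new, y

# Unroll the same function f over an input sequence x_1, x_2, x_3.
hidden, vocab, in_dim = 8, 5, 3
rng = np.random.default_rng(0)
W_h = rng.normal(size=(hidden, hidden))
W_i = rng.normal(size=(hidden, in_dim))
W_o = rng.normal(size=(vocab, hidden))
h = np.zeros(hidden)                     # h_0
for x in rng.normal(size=(3, in_dim)):   # x_1, x_2, x_3
    h, y = naive_rnn_cell(h, x, W_h, W_i, W_o)
```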

LSTM. Compared with the naïve RNN, which only passes h_{t-1} to the next step, the LSTM carries two states: c_{t-1} and h_{t-1}. The cell state c changes slowly (c_t is c_{t-1} plus something), while the hidden state h changes faster (h_t and h_{t-1} can be very different).

The four gate vectors are computed from the concatenation of x_t and h_{t-1}: z = tanh(W [x_t; h_{t-1}]), z_i = σ(W_i [x_t; h_{t-1}]), z_f = σ(W_f [x_t; h_{t-1}]), z_o = σ(W_o [x_t; h_{t-1}]).

With peephole connections, c_{t-1} is appended to the input as well: z = tanh(W [x_t; h_{t-1}; c_{t-1}]), and z_i, z_f, z_o are obtained in the same way; the peephole weights are usually diagonal.

The cell then updates: c_t = z_f ⊙ c_{t-1} + z_i ⊙ z, h_t = z_o ⊙ tanh(c_t), y_t = σ(W' h_t), where ⊙ is element-wise multiplication.

LSTM unrolled over two time steps: the same gate computation repeats, passing (c_t, h_t) from step t to step t+1 to produce y_t and y_{t+1}.
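The LSTM equations above translate into a short NumPy sketch; the concatenation [x_t; h_{t-1}], the dimensions, and the random weights are assumptions for this illustration only, and peepholes are omitted.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_cell(x_t, h_prev, c_prev, W, W_i, W_f, W_o):
    """One LSTM step with v = [x_t; h_{t-1}]: z = tanh(W v), z_i/z_f/z_o = sigma(W_* v)."""
    v = np.concatenate([x_t, h_prev])
    z   = np.tanh(W @ v)        # candidate content
    z_i = sigmoid(W_i @ v)      # input gate
    z_f = sigmoid(W_f @ v)      # forget gate
    z_o = sigmoid(W_o @ v)      # output gate
    c_t = z_f * c_prev + z_i * z   # cell state changes slowly
    h_t = z_o * np.tanh(c_t)       # hidden state can change quickly
    return h_t, c_t

# Example usage with assumed dimensions.
dim_x, dim_h = 3, 4
rng = np.random.default_rng(0)
W, W_i, W_f, W_o = (rng.normal(size=(dim_h, dim_x + dim_h)) for _ in range(4))
h, c = np.zeros(dim_h), np.zeros(dim_h)
h, c = lstm_cell(rng.normal(size=dim_x), h, c, W, W_i, W_f, W_o)
```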

GRU. Two gates, reset r and update z, are computed from x_t and h_{t-1}; a candidate state h' is computed from x_t and the reset-gated h_{t-1}; the new state interpolates between the old state and the candidate: h_t = z ⊙ h_{t-1} + (1 − z) ⊙ h'.
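A matching GRU sketch in the same style; again the [x_t; h_{t-1}] concatenation and weight shapes are assumptions for illustration.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_cell(x_t, h_prev, W_r, W_z, W_h):
    """One GRU step: h_t = z * h_{t-1} + (1 - z) * h'."""
    v = np.concatenate([x_t, h_prev])
    r = sigmoid(W_r @ v)                                        # reset gate
    z = sigmoid(W_z @ v)                                        # update gate
    h_cand = np.tanh(W_h @ np.concatenate([x_t, r * h_prev]))   # candidate state h'
    return z * h_prev + (1.0 - z) * h_cand
```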

LSTM: A Search Space Odyssey

LSTM: A Search Space Odyssey. Findings: the standard LSTM works well; simplified LSTMs (coupling the input and forget gates, removing peepholes) perform comparably; the forget gate is critical for performance; the output gate's activation function is critical.

An Empirical Exploration of Recurrent Network Architectures. LSTM-f/i/o: remove the forget/input/output gate; LSTM-b: add a large bias. Gate importance: forget > input > output. A large bias for the forget gate is helpful.

An Empirical Exploration of Recurrent Network Architectures

Neural Architecture Search with Reinforcement Learning: an LSTM-like cell found by reinforcement learning.

Sequence Generation

Generation. Sentences are composed of characters/words, and an RNN generates one character/word at each time step: (h', y) = f(h, x), where x is the token generated at the previous time step (1-of-N encoding, e.g. over 你/我/他/是/很) and y is a distribution over tokens, from which the next token is sampled.

Generation. At each step the RNN outputs a distribution conditioned on what has been generated so far: y_1 = P(w | <BOS>), y_2 = P(w | <BOS>, 床), y_3 = P(w | <BOS>, 床, 前). Tokens (e.g. 床, 前, 明) are sampled one at a time, each sampled token being fed back as the next input, until <EOS> is generated.
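A minimal sketch of this sampling loop; the step function, the special-token indices, and the maximum length are assumptions for illustration, not part of the slides.

```python
import numpy as np

def generate(step_fn, h0, bos_vec, eos_id, vocab_size, max_len=50, rng=None):
    """Sample tokens one by one; each sampled token is fed back as the next input."""
    rng = rng or np.random.default_rng()
    h, x, tokens = h0, bos_vec, []
    for _ in range(max_len):
        h, y = step_fn(h, x)                 # y: distribution over the vocabulary
        token = int(rng.choice(vocab_size, p=y))  # sample the next character/word
        if token == eos_id:                  # stop when <EOS> is generated
            break
        tokens.append(token)
        x = np.eye(vocab_size)[token]        # 1-of-N encoding of the generated token
    return tokens
```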

Training. Given training data such as 春眠不覺曉, the model is fed the reference tokens (<BOS>, 春, 眠, ...) as inputs and trained to minimize the cross-entropy between each output distribution y_t and the next reference token (春, 眠, 不, ...).
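Teacher-forced training can be sketched as the following loss; the assumption that token id 0 is <BOS> and the generic step function are placeholders for illustration.

```python
import numpy as np

def sequence_cross_entropy(step_fn, h0, reference_ids, vocab_size):
    """Feed <BOS> plus the reference tokens as inputs; sum -log P(next reference token)."""
    onehot = np.eye(vocab_size)
    inputs = [onehot[0]] + [onehot[t] for t in reference_ids[:-1]]  # id 0 assumed to be <BOS>
    h, loss = h0, 0.0
    for x, target in zip(inputs, reference_ids):
        h, y = step_fn(h, x)                 # predicted distribution at this step
        loss += -np.log(y[target] + 1e-12)   # cross-entropy against the reference token
    return loss
```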

Generation. Images are composed of pixels, so an RNN can generate one pixel at a time: treat each image as a "sentence" of pixel colors (e.g. blue red yellow gray), train the RNN on these sentences, and sample colors (red, blue, green, ...) step by step.

Generation. Example with 3 × 3 images: each image is a sequence of nine pixels.

Conditional Sequence Generation

Conditional Generation. We don't want to simply generate random sentences; we want to generate sentences based on conditions. Caption generation: given an image as the condition, output "A young girl is dancing." Chat-bot: given "Hello" as the condition, output "Hello. Nice to see you."

Conditional Generation. Represent the input condition as a vector and feed that vector into the RNN generator. Image caption generation: a CNN encodes the input image into a vector, and the RNN decoder, starting from <BOS>, generates the caption word by word (e.g. "A woman ..." ending with a period).

Conditional Generation: sequence-to-sequence learning. Represent the input condition as a vector and feed it into the RNN generator, e.g. for machine translation or chat-bots. An encoder RNN reads 機器學習 and summarizes the information of the whole sentence into a vector; a decoder RNN generates "machine learning ."; the encoder and decoder are jointly trained.
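A bare-bones encoder-decoder sketch of this idea, using greedy decoding for simplicity; the step functions, token ids, and maximum length are assumptions for illustration.

```python
import numpy as np

def encode(step_fn, h0, source_seq):
    """Run the encoder RNN over the source sentence and keep its final state."""
    h = h0
    for x in source_seq:
        h, _ = step_fn(h, x)
    return h                                   # "information of the whole sentence"

def decode_greedy(step_fn, enc_state, bos_vec, eos_id, vocab_size, max_len=50):
    """Start the decoder from the encoder state and pick the most likely token each step."""
    h, x, out = enc_state, bos_vec, []
    for _ in range(max_len):
        h, y = step_fn(h, x)
        token = int(np.argmax(y))
        if token == eos_id:
            break
        out.append(token)
        x = np.eye(vocab_size)[token]
    return out
```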

Conditional Generation. A chat-bot needs to consider longer context during chatting: the exchange "M: Hello / U: Hi" should be answered differently from "M: Hi / M: Hello / U: Hi". https://www.youtube.com/watch?v=e2mpomyqjw4 Serban, Iulian V., Alessandro Sordoni, Yoshua Bengio, Aaron Courville, and Joelle Pineau, 2015, "Building End-To-End Dialogue Systems Using Generative Hierarchical Neural Network Models."

Dynamic Conditional Generation. Instead of a single fixed vector, the decoder receives a different context vector c_1, c_2, c_3, ... at each step, computed from the encoder states h_1 ... h_4 of 機器學習, while generating "machine learning .".

Dynamic Conditional Generation: encoder-decoder diagram in which the step-wise context vectors c_1, c_2, c_3 replace the single "information of the whole sentence" vector.

Machine Translation: attention-based model. Each encoder state h_1 ... h_4 (for 機器學習) is compared with the decoder state z_0 by a match function to produce a score α_0^i. The match function can be designed by yourself: the cosine similarity of z and h, a small NN whose input is z and h and whose output is a scalar, or α = h^T W z; its parameters are jointly learned with the rest of the network.

Machine Translation: attention-based model. The scores are passed through a softmax to give attention weights (e.g. 0.5, 0.5, 0.0, 0.0), and the context vector is c_0 = Σ_i α̂_0^i h_i = 0.5 h_1 + 0.5 h_2. c_0 is the decoder input that produces z_1 and the first output word "machine".

Machine Translation: attention-based model. With the new decoder state z_1, the match function is applied again to h_1 ... h_4 to obtain the next scores α_1^i.

Machine Translation: attention-based model. After the softmax (weights 0.0, 0.0, 0.5, 0.5), the next context vector is c_1 = Σ_i α̂_1^i h_i = 0.5 h_3 + 0.5 h_4, which feeds z_2 and produces the next word "learning".

Machine Translation: attention-based model. The same process (match, softmax, weighted sum, decode) repeats until <EOS> is generated.
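One decoder step of this attention mechanism can be sketched as follows, using the α = h^T W z match from the slides; the shapes and the way the context is consumed by the decoder are assumptions for illustration.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def attention_context(z, H, W):
    """match: score_i = h_i^T W z; softmax over scores; context c = sum_i alpha_i h_i."""
    scores = np.array([h @ W @ z for h in H])   # one score per encoder state
    alpha = softmax(scores)                     # attention weights, e.g. 0.5, 0.5, 0.0, 0.0
    c = alpha @ np.stack(H)                     # weighted sum of encoder states
    return c, alpha

# At every decoder step: compute c_t from the current decoder state z_t, feed c_t
# into the decoder to obtain z_{t+1} and the next output word, and repeat until <EOS>.
```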

Speech Recognition. William Chan, Navdeep Jaitly, Quoc V. Le, Oriol Vinyals, Listen, Attend and Spell, ICASSP, 2016.

Image Caption Generation. A CNN produces a vector for each image region (one per filter position); the decoder state z_0 is matched against each region vector to obtain attention scores (e.g. 0.7).

Image Caption Generation. The weighted sum of the region vectors (weights e.g. 0.7, 0.1, 0.1, 0.1, 0.0, 0.0) is fed to the decoder, which updates its state to z_1 and generates Word 1.

Image Caption Generation. With the new state z_1, the attention is recomputed (e.g. 0.0, 0.8, 0.2, 0.0, 0.0, 0.0), and the new weighted sum is used to generate Word 2 and state z_2.

Image Caption Generation Kelvin Xu, Jimmy a, Ryan Kiros, Kyunghyun Cho, aron Courville, Ruslan Salakhutdinov, Richard Zemel, Yoshua engio, Show, ttend and Tell: Neural Image Caption Generation with Visual ttention, ICML, 2015

Image Caption Generation Kelvin Xu, Jimmy a, Ryan Kiros, Kyunghyun Cho, aron Courville, Ruslan Salakhutdinov, Richard Zemel, Yoshua engio, Show, ttend and Tell: Neural Image Caption Generation with Visual ttention, ICML, 2015

Li Yao, Atousa Torabi, Kyunghyun Cho, Nicolas Ballas, Christopher Pal, Hugo Larochelle, Aaron Courville, Describing Videos by Exploiting Temporal Structure, ICCV, 2015.

Question Answering. Given a document and a query, output an answer. bAbI: the answer is a word, https://research.fb.com/downloads/babi/. SQuAD: the answer is a sequence of words taken from the input document, https://rajpurkar.github.io/SQuAD-explorer/. MS MARCO: the answer is a sequence of words, http://www.msmarco.org. MovieQA: multiple-choice questions (output a number), http://movieqa.cs.toronto.edu/home/.

Memory Network. The document is stored as sentence vectors x_1 ... x_N; the query q is matched against each x_n to get attention weights α_n; the extracted information Σ_{n=1}^{N} α_n x_n is fed to a DNN that produces the answer. The sentence-to-vector encoding can be jointly trained. Sainbayar Sukhbaatar, Arthur Szlam, Jason Weston, Rob Fergus, End-To-End Memory Networks, NIPS, 2015.

Memory Network (with separate encodings). The sentences are encoded twice: x_1 ... x_N are matched against the query q to compute α_1 ... α_N, while a second, jointly learned encoding h_1 ... h_N is used for the extracted information Σ_{n=1}^{N} α_n h_n, which the DNN turns into the answer; the process can be repeated (hopping).

Memory Network: hopping. Compute attention and extract information, feed the result back as the new query, compute attention and extract information again, and after several hops a DNN produces the answer a from the query q.
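A single hop can be sketched as below; adding the extracted vector to the query to form the next query is one common choice (as in End-To-End Memory Networks), and the dot-product match is an assumption for this illustration.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def memory_hop(q, X, H):
    """One hop: match the query against x_1..x_N, extract sum_n alpha_n h_n, update the query."""
    alpha = softmax(np.array([q @ x for x in X]))   # attention over the document sentences
    extracted = alpha @ np.stack(H)                 # extracted information
    return q + extracted                            # new query for the next hop

# After several hops, a DNN maps the final query vector to the answer.
```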

Reading Comprehension. End-To-End Memory Networks, S. Sukhbaatar, A. Szlam, J. Weston, R. Fergus, NIPS, 2015. The attention weights show the position of the reading head at each hop. Keras has an example: https://github.com/fchollet/keras/blob/master/examples/babi_memnn.py

Wei Fang, Juei-Yang Hsu, Hung-yi Lee, Lin-Shan Lee, "Hierarchical Attention Model for Improved Machine Comprehension of Spoken Content", SLT, 2016.

Neural Turing Machine. Analogous to the von Neumann architecture: a Neural Turing Machine not only reads from memory but also modifies the memory through attention. https://www.quora.com/how-does-the-von-neumann-architecture-provide-flexibility-for-program-development

Neural Turing Machine: retrieval process. The memory consists of blocks m_0^1 ... m_0^4 with attention weights α_0^1 ... α_0^4; the read vector r_0 = Σ_i α_0^i m_0^i is fed, together with x_1, into f (with initial state h_0) to produce y_1.

Neural Turing Machine. Besides y_1, f also outputs a key k_1, an erase vector e_1, and an add vector a_1. The new attention is obtained by matching the key against each memory block, α_1^i = cos(m_0^i, k_1), followed by a softmax.

Neural Turing Machine: writing. Each memory block is updated as m_1^i = m_0^i − α_1^i e_1 ⊙ m_0^i + α_1^i a_1 (element-wise; the entries of e_1 lie between 0 and 1), so attended blocks are partially erased and then have new content added.

Neural Turing Machine unrolled over time: at each step the controller f reads r_{t-1} from memory, consumes x_t, emits y_t, and updates the attention and the memory (m_0 → m_1 → m_2, α_0 → α_1 → α_2).
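The read, addressing, and write equations from the slides can be sketched as below; cosine addressing plus softmax follows the simplified form shown here, and the memory layout (one row per block) is an assumption for the example.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)

def ntm_read(M, alpha):
    """r = sum_i alpha_i m_i (M has one memory block per row)."""
    return alpha @ M

def ntm_address(M, k):
    """alpha_i proportional to cos(m_i, k), normalized with a softmax."""
    return softmax(np.array([cosine(m, k) for m in M]))

def ntm_write(M, alpha, e, a):
    """m_i <- m_i - alpha_i * e * m_i + alpha_i * a (element-wise; entries of e in [0, 1])."""
    return np.stack([m - al * e * m + al * a for m, al in zip(M, alpha)])
```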

Neural Turing Machine. Wei-Jen Ko, Bo-Hsiang Tseng, Hung-yi Lee, Recurrent Neural Network based Language Modeling with Controllable External Memory, ICASSP, 2017.

Stack RNN. Armand Joulin, Tomas Mikolov, Inferring Algorithmic Patterns with Stack-Augmented Recurrent Nets, arXiv preprint, 2015. At each step f outputs y_t, the information to store, and a distribution over the actions Push, Pop, Nothing (e.g. 0.7, 0.2, 0.1); the new stack is the corresponding weighted combination of the pushed, popped, and unchanged stacks (×0.7, ×0.2, ×0.1).
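A sketch of this soft stack update; representing the stack as a fixed-depth array of vectors is an assumption made for this illustration.

```python
import numpy as np

def soft_stack_update(stack, info, p_push, p_pop, p_noop):
    """New stack = p_push * push(info) + p_pop * pop() + p_noop * stack."""
    pushed = np.vstack([info, stack[:-1]])                       # push: info on top, bottom falls off
    popped = np.vstack([stack[1:], np.zeros_like(stack[:1])])    # pop: shift everything up
    return p_push * pushed + p_pop * popped + p_noop * stack

# Example: a stack of 4 slots holding 3-dimensional vectors.
stack = np.zeros((4, 3))
stack = soft_stack_update(stack, np.ones(3), 0.7, 0.2, 0.1)
```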

Tips for Generation

Attention weights α_t^i are indexed by input component i and generation time t. Kelvin Xu, Jimmy Ba, Ryan Kiros, Kyunghyun Cho, Aaron Courville, Ruslan Salakhutdinov, Richard Zemel, Yoshua Bengio, Show, Attend and Tell: Neural Image Caption Generation with Visual Attention, ICML, 2015. Bad attention: the same region (woman) dominates at w_2 and w_4, so the caption mentions "woman" twice and never mentions "cooking". Good attention: each input component receives approximately the same total attention over the generation, e.g. by adding the regularization term Σ_i (τ − Σ_t α_t^i)², where the inner sum is over the generation and the outer sum is over the input components.
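The regularization term above is easy to compute directly; the indexing convention (rows = generation steps, columns = input components) is an assumption for this sketch.

```python
import numpy as np

def attention_regularizer(alpha, tau=1.0):
    """sum_i (tau - sum_t alpha[t, i])**2, with alpha indexed as [generation step t, component i]."""
    total_per_component = alpha.sum(axis=0)      # sum over the generation for each component
    return float(((tau - total_per_component) ** 2).sum())

# Example: 4 generation steps attending over 4 input components.
alpha = np.full((4, 4), 0.25)
print(attention_regularizer(alpha, tau=1.0))     # 0.0 -> perfectly even attention
```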

Mismatch between Train and Test. Training: the total loss is C = Σ_t C_t, the cross-entropy of each output against the reference; the decoder starts from the condition and <BOS>, and the reference is fed as the input at every step.

Mismatch between Train and Test. Generation (testing): we do not know the reference, so the inputs are the model's own outputs from the previous time step, whereas during training the inputs are the reference tokens. This is called exposure bias.

One step wrong may lead to totally wrong results, because the model never explores its own mistakes during training (一步錯, 步步錯: one wrong step, and every following step goes wrong).

Modifying the Training Process? If we feed the model's own outputs during training and try to decrease the loss for both steps 1 and 2, training is matched to testing, but in practice it is hard to train in this way.

Scheduled Sampling. During training, the input at each decoding step is taken either from the reference or from the model's own output, chosen at random, with the schedule gradually shifting from the reference toward the model.

Scheduled Sampling: caption generation on MSCOCO (BLEU-4 / METEOR / CIDEr). Always from reference: 28.8 / 24.2 / 89.5. Always from model: 11.2 / 15.7 / 49.7. Scheduled Sampling: 30.6 / 24.3 / 92.1. Samy Bengio, Oriol Vinyals, Navdeep Jaitly, Noam Shazeer, Scheduled Sampling for Sequence Prediction with Recurrent Neural Networks, arXiv preprint, 2015.
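The per-step coin flip of scheduled sampling is a few lines of code; the decay function shown is the inverse-sigmoid schedule described in the paper, and the parameter k is an assumed example value.

```python
import numpy as np

def next_input(reference_token, model_token, p_reference, rng):
    """Scheduled sampling: flip a coin at every step to choose the next decoder input."""
    return reference_token if rng.random() < p_reference else model_token

def inverse_sigmoid_decay(step, k=1000.0):
    """p_reference = k / (k + exp(step / k)); starts near 1 and decays over training."""
    return k / (k + np.exp(step / k))
```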

Beam Search. Greedy decoding is not necessarily the best: the path that starts with probability 0.4 but continues with 0.9 has a higher overall score than the path that starts with 0.6, yet it is not possible to check all the paths exhaustively.

Beam Search. Keep the several best paths at each step (beam size = 2 in this example).

Beam Search. The beam size is 3 in this example. https://github.com/tensorflow/tensorflow/issues/654#issuecomment-169009989
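A compact beam search sketch in the same plain-NumPy style; the step function, token ids, scoring by summed log-probability, and the expansion strategy are assumptions for illustration, not the exact procedure from the slides.

```python
import numpy as np

def beam_search(step_fn, h0, bos_vec, eos_id, vocab_size, beam_size=3, max_len=20):
    """Keep the beam_size best partial sequences (by log-probability) at every step."""
    onehot = np.eye(vocab_size)
    beams = [(0.0, [], h0, bos_vec)]                 # (log-prob, tokens, state, next input)
    finished = []
    for _ in range(max_len):
        candidates = []
        for logp, tokens, h, x in beams:
            h_new, y = step_fn(h, x)                 # y: distribution over the vocabulary
            for tok in np.argsort(y)[-beam_size:]:   # expand only the top-scoring tokens
                cand = (logp + np.log(y[tok] + 1e-12), tokens + [int(tok)], h_new, onehot[tok])
                (finished if tok == eos_id else candidates).append(cand)
        if not candidates:
            break
        beams = sorted(candidates, key=lambda b: b[0], reverse=True)[:beam_size]
    finished += beams
    return max(finished, key=lambda b: b[0])[1]      # tokens of the best-scoring hypothesis
```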

A Better Idea? Both "I am" and "You are" have high scores, but mixing the two hypotheses can produce "I are" or "You am".

Object level vs. component level. Minimizing an error defined at the component level is not equivalent to improving the generated objects. With reference "The dog is running fast" and loss C = Σ_t C_t (the cross-entropy at each step), generated sentences such as "cat a a a", "The dog is is fast", and "The dog is running fast" are judged token by token rather than as whole sentences. Instead, optimize an object-level criterion R(y, ŷ), where y is the generated utterance and ŷ the ground truth; but R is not differentiable with respect to the network outputs, so plain gradient descent cannot be applied directly.

Reinforcement learning? Example from playing a video game: start with observation s_1, take action a_1: right, obtain reward r_1 = 0; see observation s_2, take action a_2: fire (kill an alien), obtain reward r_2 = 5; see observation s_3, and so on.

Reinforcement learning? Treat generation as an RL problem: the action set is the vocabulary, the observation is what has been generated so far (starting from <BOS>), the action taken influences the observation at the next step, the intermediate rewards are r = 0, and the final reward is R(generated sentence, reference). Marc'Aurelio Ranzato, Sumit Chopra, Michael Auli, Wojciech Zaremba, Sequence Level Training with Recurrent Neural Networks, ICLR, 2016.
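One way to optimize a sequence-level reward is the policy-gradient (REINFORCE) surrogate loss, roughly in the spirit of this line of work; the sketch below is a simplified assumption (plain NumPy, a generic reward_fn such as BLEU against the reference, no baseline), and in practice the gradient of this surrogate would be taken with an autodiff framework.

```python
import numpy as np

def reinforce_loss(step_fn, h0, bos_vec, eos_id, vocab_size, reward_fn, rng, max_len=30):
    """Sample a sentence, then weight the sum of -log-probabilities by its reward R(y, reference)."""
    h, x, tokens, log_probs = h0, bos_vec, [], []
    for _ in range(max_len):
        h, y = step_fn(h, x)
        tok = int(rng.choice(vocab_size, p=y))       # action = next token
        log_probs.append(np.log(y[tok] + 1e-12))
        if tok == eos_id:
            break
        tokens.append(tok)
        x = np.eye(vocab_size)[tok]
    R = reward_fn(tokens)                            # sequence-level reward, e.g. BLEU vs. reference
    return -R * sum(log_probs)                       # minimizing this pushes up high-reward sequences
```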

Concluding Remarks: RNN with Gated Mechanism; Sequence Generation; Conditional Sequence Generation; Tips for Generation.