Statistical NLP Spring 2010, Lecture 16: Word Alignment / HW2: PNP Classification


Statistical NLP Spring 2010
Lecture 16: Word Alignment
Dan Klein, UC Berkeley

HW2: PNP Classification
Overall: good work! Top results:
  88.1: Matthew Can (word/phrase pre/suffixes)
  88.1: Kurtis Heimerl (positional scaling)
  88.1: Henry Milner (word/phrase length, word/phrase shapes)
  88.2: James Ide (regularization search, dictionary, rhymes)
  88.5: Michael Li (words and chars, pre/suffixes, lengths)
  97.0: Nick Boyd (IMDB gazetteer, search result features)
Best generated items:
  Nick's drug: Hycodan
  Sergey's drug: Waterbabies Expectorant

Machine Translation: Examples

Corpus-Based MT
Modeling correspondences between languages
Sentence-aligned parallel corpus:
  Yo lo haré mañana  ->  I will do it tomorrow
  Hasta pronto       ->  See you soon
  Hasta pronto       ->  See you around
Machine translation system:
  Yo lo haré pronto  ->  Model of translation  ->  I will do it soon / I will do it around / See you tomorrow

Levels of Transfer

Phrasal / Syntactic MT: Examples

MT: Evaluation
Human evaluations: subjective measures, fluency/adequacy
Automatic measures: n-gram match to references
  NIST measure: n-gram recall (worked poorly)
  BLEU: n-gram precision (no one really likes it, but everyone uses it)
BLEU:
  P1 = unigram precision
  P2, P3, P4 = bi-, tri-, and 4-gram precision
  Weighted geometric mean of P1-P4
  Brevity penalty (why?)
  Somewhat hard to game

Automatic Metrics Work (?)
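To make the BLEU recipe above concrete, here is a minimal sentence-level sketch in Python, assuming a single reference, uniform weights over the 1-4 gram precisions, and crude smoothing of zero counts; real implementations also clip against multiple references and aggregate at the corpus level, and the function names here are only illustrative.

import math
from collections import Counter

def ngrams(tokens, n):
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def bleu(candidate, reference, max_n=4):
    # Geometric mean of clipped 1..4-gram precisions, times a brevity penalty.
    log_prec = 0.0
    for n in range(1, max_n + 1):
        cand, ref = ngrams(candidate, n), ngrams(reference, n)
        clipped = sum(min(c, ref[g]) for g, c in cand.items())  # clip counts by the reference
        total = max(sum(cand.values()), 1)
        log_prec += math.log(max(clipped, 1e-9) / total)        # crude smoothing of zeros
    geo_mean = math.exp(log_prec / max_n)
    # Brevity penalty: n-gram precision alone rewards dropping words,
    # so candidates shorter than the reference are penalized.
    bp = 1.0 if len(candidate) >= len(reference) else math.exp(1 - len(reference) / len(candidate))
    return bp * geo_mean

print(bleu("I will do it soon".split(), "I will do it tomorrow".split()))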

Today
The components of a simple MT system:
  You already know about the LM
  Word-alignment based TMs: IBM Models 1 and 2, HMM model
  A simple decoder
Next few classes:
  More complex word-level and phrase-level TMs
  Tree-to-tree and tree-to-string TMs
  More sophisticated decoders

Word Alignment
x: What is the anticipated cost of collecting fees under the new proposal?
   En vertu des nouvelles propositions, quel est le coût prévu de perception des droits?
z: What is the anticipated cost of collecting fees under the new proposal?
   En vertu de les nouvelles propositions, quel est le coût prévu de perception de les droits?

Word Alignment

Unsupervised Word Alignment
Input: a bitext, i.e. pairs of translated sentences
  nous acceptons votre opinion .
  we accept your view .
Output: alignments, i.e. pairs of translated words
When words have unique sources, we can represent the alignment as a (forward) alignment function a from French to English positions.

1-to-Many Alignments

Many-to-Many Alignments

IBM Model 1 (Brown 93)
Alignments: a hidden vector called an alignment specifies which English source word is responsible for each French target word.

IBM Models 1/2
  E: 1 Thank  2 you  3 ,  4 I  5 shall  6 do  7 so  8 gladly  9 .
  A: 1 3 7 6 8 8 8 8 9
  F: Gracias , lo haré de muy buen grado .
Model Parameters
  Emissions:   P( F1 = Gracias | EA1 = Thank )
  Transitions: P( A2 = 3 )
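As a concrete reading of these parameters, here is a minimal Python sketch of the Model 1 joint probability P(F, A | E): each French position independently chooses an English source (including a NULL word at index 0) uniformly and then emits its French word from the translation table. The toy table t and the function name are illustrative, not the lecture's code.

def model1_score(french, english, alignment, t):
    # P(F, A | E) under IBM Model 1: uniform choice of English source
    # position for each French word, times the emission probability.
    src = ["NULL"] + english                       # index 0 is the NULL word
    prob = 1.0
    for f, a in zip(french, alignment):
        prob *= (1.0 / len(src)) * t[(f, src[a])]  # uniform alignment * emission t(f | e)
    return prob

# toy translation table with made-up numbers, just for illustration
t = {("gracias", "thank"): 0.7, ("lo", "it"): 0.5}
print(model1_score(["gracias", "lo"], ["thank", "you", "it"], [1, 3], t))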

Evaluating TMs
How do we measure the quality of a word-to-word model?
Method 1: use it in an end-to-end translation system
  Hard to measure translation quality
  Option: human judges
  Option: reference translations (NIST, BLEU)
  Option: combinations (HTER)
  Actually, no one uses word-to-word models alone as TMs
Method 2: measure the quality of the alignments produced
  Easy to measure
  Hard to know what the gold alignments should be
  Often does not correlate well with translation quality (like perplexity in LMs)

Alignment Error Rate
  AER(A; S, P) = 1 - ( |A ∩ S| + |A ∩ P| ) / ( |A| + |S| )
  where S = sure alignments, P = possible alignments, A = predicted alignments
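The definition above translates directly into code; this small sketch (with made-up alignment links) assumes alignments are represented as sets of (English position, French position) pairs and that the sure links are a subset of the possible links.

def aer(predicted, sure, possible):
    # Alignment Error Rate (Och & Ney): lower is better.
    a, s, p = set(predicted), set(sure), set(possible)
    return 1.0 - (len(a & s) + len(a & p)) / (len(a) + len(s))

# hypothetical links, only to show the arithmetic
sure = {(1, 1), (2, 2)}
possible = sure | {(3, 2)}
predicted = {(1, 1), (3, 2), (4, 5)}
print(aer(predicted, sure, possible))  # 1 - (1 + 2) / (3 + 2) = 0.4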

Problems with Model 1
There's a reason they designed Models 2-5!
Problems: alignments jump around, align everything to rare words
Experimental setup:
  Training data: 1.1M sentences of French-English text, Canadian Hansards
  Evaluation metric: Alignment Error Rate (AER)
  Evaluation data: 447 hand-aligned sentences

Intersected Model 1
Post-intersection: standard practice to train models in each direction, then intersect their predictions [Och and Ney, 03]
Second model is basically a filter on the first:
  Precision jumps, recall drops
  End up not guessing hard alignments

  Model           P/R     AER
  Model 1 E->F    82/58   30.6
  Model 1 F->E    85/58   28.7
  Model 1 AND     96/46   34.8
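A sketch of the post-intersection step, assuming each directional model's output has already been converted into a set of (English position, French position) links; the AND rows in these tables are exactly this set intersection.

def intersect_alignments(ef_links, fe_links):
    # Keep only the links that both directional models predict.
    # Precision rises and recall drops, as in the Model 1 AND row.
    return set(ef_links) & set(fe_links)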

Joint Training?
Overall:
  Similar high precision to post-intersection
  But recall is much higher: more confident about positing non-null alignments

  Model           P/R     AER
  Model 1 E->F    82/58   30.6
  Model 1 F->E    85/58   28.7
  Model 1 AND     96/46   34.8
  Model 1 INT     93/69   19.5

Monotonic Translation
  Japan shaken by two new quakes
  Le Japon secoué par deux nouveaux séismes

Local Order Change
  Japan is at the junction of four tectonic plates
  Le Japon est au confluent de quatre plaques tectoniques

IBM Model 2
Alignments tend to the diagonal (broadly at least)
Other schemes for biasing alignments towards the diagonal:
  Relative vs. absolute alignment
  Asymmetric distances
  Learning a full multinomial over distances
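To illustrate the diagonal bias in the simplest possible way, the sketch below scores English positions for a given French position by their distance from the diagonal of the alignment grid and normalizes; IBM Model 2 itself learns a full multinomial P(a_j = i | j, m, n) rather than using a hand-built shape, so treat this function and its strength parameter as illustrative stand-ins.

import math

def diagonal_prior(j, french_len, english_len, strength=4.0):
    # Score each English position i for French position j by how far (i, j)
    # lies from the diagonal, then normalize into a distribution over i.
    scores = [math.exp(-strength * abs(i / english_len - j / french_len))
              for i in range(1, english_len + 1)]
    z = sum(scores)
    return [s / z for s in scores]

print(diagonal_prior(j=2, french_len=8, english_len=6))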

EM for Models 1/2
Model 1 Parameters:
  Translation probabilities (1+2)
  Distortion parameters (2 only)
Start with uniform, including ...
For each sentence:
  For each French position j:
    Calculate the posterior over English positions (or just use the best single alignment)
    Increment the count of word f_j with word e_i by these amounts
  Also re-estimate distortion probabilities for Model 2
Iterate until convergence

Example
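Here is a minimal, self-contained sketch of that loop for Model 1 only (no NULL word and no Model 2 distortion parameters), using fractional counts from the per-position posterior; the variable names and the toy bitext are illustrative.

from collections import defaultdict

def train_model1(bitext, iterations=10):
    # bitext: list of (french_tokens, english_tokens) pairs.
    f_vocab = {f for fs, _ in bitext for f in fs}
    t = defaultdict(lambda: 1.0 / len(f_vocab))        # start with uniform t(f | e)
    for _ in range(iterations):
        counts, totals = defaultdict(float), defaultdict(float)
        for fs, es in bitext:                          # E-step
            for f in fs:                               # each French position j
                norm = sum(t[(f, e)] for e in es)
                for e in es:
                    p = t[(f, e)] / norm               # posterior that e_i generated f_j
                    counts[(f, e)] += p                # increment fractional counts
                    totals[e] += p
        for (f, e), c in counts.items():               # M-step: renormalize counts
            t[(f, e)] = c / totals[e]
    return t

bitext = [("nous acceptons votre opinion".split(), "we accept your view".split()),
          ("votre opinion".split(), "your view".split())]
t = train_model1(bitext)
print(round(t[("votre", "your")], 3), round(t[("opinion", "view")], 3))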

Phrase Movement
  On Tuesday Nov. 4, earthquakes rocked Japan once again
  Des tremblements de terre ont à nouveau touché le Japon jeudi 4 novembre.

The HMM Model
  E: 1 Thank  2 you  3 ,  4 I  5 shall  6 do  7 so  8 gladly  9 .
  A: 1 3 7 6 8 8 8 8 9
  F: Gracias , lo haré de muy buen grado .
Model Parameters
  Emissions:   P( F1 = Gracias | EA1 = Thank )
  Transitions: P( A2 = 3 | A1 = 1 )

The HMM Model
Model 2 preferred global monotonicity
We want local monotonicity: most jumps are small
HMM model (Vogel 96)
  [figure: distribution over jump sizes -2, -1, 0, 1, 2, 3]
Re-estimate using the forward-backward algorithm
Handling nulls requires some care
What are we still missing?

HMM Examples
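One way to read the HMM transition table is sketched below, under the assumption that the probability of moving to English position i depends only on the jump size relative to the previous position (the Vogel-style parameterization); the toy jump weights are made up, and a real implementation re-estimates them with forward-backward along with the emissions.

def hmm_transition(prev_i, english_len, jump_weight):
    # P(a_j = i | a_{j-1} = prev_i): weight each English position i by a
    # learned score for the jump size i - prev_i, then normalize over the
    # positions available in this sentence.
    weights = [jump_weight.get(i - prev_i, 1e-6) for i in range(1, english_len + 1)]
    z = sum(weights)
    return {i: w / z for i, w in zip(range(1, english_len + 1), weights)}

# illustrative jump table favoring small forward jumps (local monotonicity)
jump_weight = {-1: 0.5, 0: 1.0, 1: 4.0, 2: 1.0, 3: 0.3}
print(hmm_transition(prev_i=3, english_len=6, jump_weight=jump_weight))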

AER for HMMs
  Model          AER
  Model 1 INT    19.5
  HMM E->F       11.4
  HMM F->E       10.8
  HMM AND         7.1
  HMM INT         4.7
  GIZA M4 AND     6.9

IBM Models 3/4/5
  Mary did not slap the green witch
    n(3 | slap)
  Mary not slap slap slap the green witch
    P(NULL)
  Mary not slap slap slap NULL the green witch
    t(la | the)
  Mary no daba una bofetada a la verde bruja
    d(j | i)
  Mary no daba una bofetada a la bruja verde
  [from Al-Onaizan and Knight, 1998]

Examples: Translation and Fertility

Example: Idioms
  he is nodding
  il hoche la tête

Example: Morphology [Och and Ney 03]

Some Results

Decoding
In these word-to-word models:
  Finding best alignments is easy
  Finding translations is hard (why?)

Bag Generation (Decoding)
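The first point is easy to see in code: under Model 1 the best alignment factors over French positions, so each word just picks its highest-scoring English source independently. The (f, e) -> probability table format matches the EM sketch earlier; the helper name is illustrative.

def best_alignment_model1(french, english, t):
    # Each French word independently picks the English word (or NULL at
    # index 0) with the highest translation probability t(f | e).
    src = ["NULL"] + english
    return [max(range(len(src)), key=lambda i: t.get((f, src[i]), 0.0))
            for f in french]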

Bag Generation as a TSP
Imagine bag generation with a bigram LM:
  Words are nodes
  Edge weights are P(w' | w)
  Valid sentences are Hamiltonian paths
Not the best news for word-based MT!
  (example bag: { it, is, not, clear, . })

IBM Decoding as a TSP
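To see why this search is hard, here is a brute-force sketch that scores every ordering of a small bag under a bigram LM and keeps the best one; the running time is factorial in the bag size, which is the TSP-flavored difficulty. The toy log-probability table and the function names are made up for illustration.

import itertools, math

def best_order(bag, bigram_logprob):
    # Try every permutation of the bag and keep the highest-scoring one
    # under the bigram LM; exponential in len(bag).
    best, best_score = None, -math.inf
    for perm in itertools.permutations(bag):
        score = sum(bigram_logprob(prev, w) for prev, w in zip(("<s>",) + perm, perm))
        if score > best_score:
            best, best_score = perm, score
    return best, best_score

# toy bigram log-probabilities; unseen pairs get a flat penalty
table = {("<s>", "it"): -0.5, ("it", "is"): -0.3, ("is", "not"): -0.7,
         ("not", "clear"): -0.6, ("clear", "."): -0.4}
print(best_order(["not", "it", "clear", "is", "."], lambda p, w: table.get((p, w), -5.0)))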

Decoding, Anyway
Simplest possible decoder:
  Enumerate sentences, score each with TM and LM
Greedy decoding:
  Assign each French word its most likely English translation
  Operators:
    Change a translation
    Insert a word into the English (zero-fertile French)
    Remove a word from the English (null-generated French)
    Swap two adjacent English words
  Do hill-climbing (or annealing)

Greedy Decoding
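A minimal hill-climbing sketch using only the swap operator, assuming a score(hypothesis) function that combines the TM and LM; a fuller greedy decoder would also apply the change/insert/remove operators, and annealing would sometimes accept worsening moves.

def greedy_decode(initial, score):
    # Hill-climbing: repeatedly apply an improving adjacent swap until
    # no swap raises the combined TM + LM score.
    current, current_score = list(initial), score(initial)
    improved = True
    while improved:
        improved = False
        for i in range(len(current) - 1):
            cand = current[:i] + [current[i + 1], current[i]] + current[i + 2:]
            cand_score = score(cand)
            if cand_score > current_score:
                current, current_score, improved = cand, cand_score, True
    return current, current_score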

Stack Decoding
Stack decoding:
  Beam search
  Usually A* estimates for completion cost
  One stack per candidate sentence length
Other methods:
  Dynamic programming decoders possible if we make assumptions about the set of allowable permutations
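A compact sketch of the stack/beam idea, with hypotheses grouped by how many source words they cover (the slide's variant groups by candidate sentence length instead; the pruning logic is the same) and each stack pruned to a fixed beam. It assumes an extend(hypothesis) callback that yields new hypotheses carrying .covered and .score fields, and it omits the A*-style completion-cost estimate a real decoder would add before comparing hypotheses.

import heapq

def stack_decode(src_len, initial, extend, beam_size=10):
    # One stack per number of covered source words; expand only the
    # beam_size best hypotheses in each stack.
    stacks = [[] for _ in range(src_len + 1)]
    stacks[0] = [initial]
    for n in range(src_len):
        for hyp in heapq.nlargest(beam_size, stacks[n], key=lambda h: h.score):
            for new in extend(hyp):
                stacks[new.covered].append(new)
    return max(stacks[src_len], key=lambda h: h.score)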