
Statistical NLP Spring 2010
Lecture 17: Word / Phrase MT
Dan Klein, UC Berkeley

Corpus-Based MT
Modeling correspondences between languages.
Sentence-aligned parallel corpus:
  Yo lo haré mañana    I will do it tomorrow
  Hasta pronto         See you soon
  Hasta pronto         See you around
Machine translation system (model of translation), candidate outputs for "Yo lo haré pronto":
  I will do it soon / I will do it around / See you tomorrow

Unsupervised Word Alignment
Input: a bitext, i.e. pairs of translated sentences:
  nous acceptons votre opinion .
  we accept your view .
Output: alignments, i.e. pairs of translated words.
When words have unique sources, we can represent the alignment as a (forward) alignment function a from French positions to English positions.

Alignment Error Rate
With S the sure alignments, P the possible alignments (S a subset of P), and A the predicted alignments:
  AER(A; S, P) = 1 - (|A ∩ S| + |A ∩ P|) / (|A| + |S|)
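As a concrete reading of that definition, here is a minimal sketch (alignments represented as sets of (French position, English position) pairs; the example link sets are invented):

    # Minimal AER sketch: alignments are sets of (j, i) position pairs.
    # S (sure) and P (possible) come from hand annotation, with S a subset of P.
    def aer(predicted, sure, possible):
        a_s = len(predicted & sure)
        a_p = len(predicted & possible)
        return 1.0 - (a_s + a_p) / (len(predicted) + len(sure))

    # Invented example: 3 predicted links, 2 sure links, 3 possible links.
    sure = {(0, 0), (1, 1)}
    possible = sure | {(2, 1)}
    predicted = {(0, 0), (1, 1), (2, 2)}
    print(aer(predicted, sure, possible))   # 0.2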

IBM Models 1/2
  E:  Thank(1) you(2) ,(3) I(4) shall(5) do(6) so(7) gladly(8) .(9)
  A:  1 3 7 6 8 8 8 8 9
  F:  Gracias(1) ,(2) lo(3) haré(4) de(5) muy(6) buen(7) grado(8) .(9)
Model parameters:
  Emissions:   P(F1 = Gracias | E_A1 = Thank)
  Transitions: P(A2 = 3)

Problems with Model 1
There's a reason they designed Models 2-5! Problems: alignments jump around, and everything tends to align to rare words.
Experimental setup:
  Training data: 1.1M sentences of French-English text (Canadian Hansards)
  Evaluation metric: Alignment Error Rate (AER)
  Evaluation data: 447 hand-aligned sentences
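To make the parameterization concrete (not from the slides): in Model 1 each French position picks its English position uniformly, so the translation probability marginalizes to a product of averaged emission scores. A toy sketch with an invented t table and a hypothetical helper name:

    # IBM Model 1: P(F | E) = prod_j [ (1/(l+1)) * sum_i t(f_j | e_i) ],
    # with a uniform choice over English positions (including a NULL word),
    # ignoring the sentence-length term.
    def model1_prob(french, english, t):
        english = ["<null>"] + english
        prob = 1.0
        for f in french:
            prob *= sum(t.get((f, e), 1e-9) for e in english) / len(english)
        return prob

    # Invented toy translation table t(f | e).
    t = {("gracias", "thank"): 0.9, ("gracias", "you"): 0.05}
    print(model1_prob(["gracias"], ["thank", "you"], t))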

Intersected Model 1
Post-intersection: it is standard practice to train models in each direction, then intersect their predictions [Och and Ney, 03]. The second model is basically a filter on the first: precision jumps, recall drops, and we end up not guessing the hard alignments.

  Model          P/R     AER
  Model 1 E->F   82/58   30.6
  Model 1 F->E   85/58   28.7
  Model 1 AND    96/46   34.8

Joint Training?
Overall: similar high precision to post-intersection, but recall is much higher; the jointly trained models are more confident about positing non-null alignments.

  Model          P/R     AER
  Model 1 E->F   82/58   30.6
  Model 1 F->E   85/58   28.7
  Model 1 AND    96/46   34.8
  Model 1 INT    93/69   19.5
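A sketch of the post-intersection step itself (invented link sets): flip one direction into the other's coordinate system and keep only the links both models agree on, which is the AND row in the tables above.

    # Intersect directional predictions: E->F links are flipped to
    # (french_pos, english_pos) so both directions live in the same space.
    def intersect(e2f_links, f2e_links):
        flipped = {(j, i) for (i, j) in e2f_links}
        return flipped & f2e_links

    e2f = {(0, 0), (1, 2), (2, 1)}   # (english_pos, french_pos)
    f2e = {(0, 0), (1, 2), (3, 3)}   # (french_pos, english_pos)
    print(intersect(e2f, f2e))       # {(0, 0), (1, 2)} (set order may vary)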

Monotonic Translation
  Japan shaken by two new quakes
  Le Japon secoué par deux nouveaux séismes

Local Order Change
  Japan is at the junction of four tectonic plates
  Le Japon est au confluent de quatre plaques tectoniques

IBM Model 2
Alignments tend to the diagonal (broadly, at least).
Other schemes for biasing alignments towards the diagonal:
  - relative vs. absolute alignment position
  - asymmetric distances
  - learning a full multinomial over distances

EM for Models 1/2
Model parameters: translation probabilities (Models 1+2) and distortion parameters (Model 2 only).
  - Start with uniform parameters.
  - For each sentence:
      - For each French position j:
          - Calculate the posterior over English positions (or just use the best single alignment).
          - Increment the count of word f_j with word e_i by these amounts.
      - Also re-estimate the distortion probabilities (Model 2).
  - Iterate until convergence.
(A code sketch of this recipe follows below.)
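Here is that recipe as a minimal sketch for Model 1 only (no distortion parameters, no NULL word, made-up two-sentence corpus): the E-step computes each French word's posterior over English positions and accumulates fractional counts, and the M-step renormalizes.

    from collections import defaultdict

    # Toy parallel corpus: (french_words, english_words) pairs (invented).
    corpus = [(["la", "maison"], ["the", "house"]),
              (["la", "fleur"],  ["the", "flower"])]

    # Start with uniform t(f | e).
    f_vocab = {f for fs, _ in corpus for f in fs}
    t = defaultdict(lambda: 1.0 / len(f_vocab))

    for _ in range(10):                       # iterate until convergence
        count = defaultdict(float)            # fractional counts c(f, e)
        total = defaultdict(float)            # c(e)
        for fs, es in corpus:
            for f in fs:
                # Posterior over English positions for this French word.
                z = sum(t[(f, e)] for e in es)
                for e in es:
                    p = t[(f, e)] / z
                    count[(f, e)] += p
                    total[e] += p
        for (f, e), c in count.items():       # M-step: renormalize
            t[(f, e)] = c / total[e]

    print(t[("maison", "house")], t[("maison", "the")])

After a few iterations the mass concentrates on the right pairs (e.g. maison-house), which is exactly the behavior the slide describes for the count-and-normalize loop.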

Example: Model 2 Helps

Phrase Movement
  On Tuesday Nov. 4, earthquakes rocked Japan once again
  Des tremblements de terre ont à nouveau touché le Japon jeudi 4 novembre .

The HMM Model
  E:  Thank(1) you(2) ,(3) I(4) shall(5) do(6) so(7) gladly(8) .(9)
  A:  1 3 7 6 8 8 8 8 9
  F:  Gracias(1) ,(2) lo(3) haré(4) de(5) muy(6) buen(7) grado(8) .(9)
Model parameters:
  Emissions:   P(F1 = Gracias | E_A1 = Thank)
  Transitions: P(A2 = 3 | A1 = 1)

The HMM Model
Model 2 preferred global monotonicity; we want local monotonicity: most jumps are small (the transition distribution is over jump sizes such as -2, -1, 0, 1, 2, 3).
HMM model (Vogel 96):
  - Re-estimate using the forward-backward algorithm.
  - Handling nulls requires some care.
  - What are we still missing?
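A tiny sketch of what local monotonicity buys: the transitions are parameterized by the jump between consecutive alignment positions, so a single small table of jump probabilities (counts invented here, no NULL handling) is shared across all positions.

    # HMM alignment transitions: P(a_j = i | a_{j-1} = i') depends only on
    # the jump i - i'.  Invented jump counts, peaked at small forward jumps.
    jump_counts = {-2: 1, -1: 3, 0: 10, 1: 20, 2: 5, 3: 2}
    total = sum(jump_counts.values())
    jump_prob = {d: c / total for d, c in jump_counts.items()}

    def transition(i_prev, i):
        # Probability of moving from English position i_prev to position i.
        return jump_prob.get(i - i_prev, 1e-6)

    print(transition(3, 4))   # a +1 jump: the most likely move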

HMM Examples

AER for HMMs
  Model          AER
  Model 1 INT    19.5
  HMM E->F       11.4
  HMM F->E       10.8
  HMM AND         7.1
  HMM INT         4.7
  GIZA M4 AND     6.9

IBM Models 3/4/5
  Mary did not slap the green witch
    |  n(3 | slap)        (fertility)
  Mary not slap slap slap the green witch
    |  P(NULL)            (null insertion)
  Mary not slap slap slap NULL the green witch
    |  t(la | the)        (translation)
  Mary no daba una bofetada a la verde bruja
    |  d(j | i)           (distortion)
  Mary no daba una bofetada a la bruja verde
  [from Al-Onaizan and Knight, 1998]

Examples: Translation and Fertility

Example: Idioms
  he is nodding
  il hoche la tête

Example: Morphology

Some Results
[Och and Ney 03]

Decoding
In these word-to-word models:
  - Finding the best alignments is easy.
  - Finding translations is hard (why?)

Bag Generation (Decoding)

Bag Generation as a TSP
Imagine bag generation with a bigram LM:
  - Words are nodes.
  - Edge weights are P(w' | w).
  - Valid sentences are Hamiltonian paths.
Not the best news for word-based MT!
  (example bag: not, it, clear, is, .)
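To make the TSP analogy concrete, here is a brute-force sketch (invented bigram scores) that searches all permutations of a small bag under a bigram LM; the factorial search space is exactly why this is not the best news for word-based decoding.

    from itertools import permutations

    # Invented bigram log-probabilities; anything unseen gets a low score.
    bigram = {("<s>", "it"): -1.0, ("it", "is"): -0.5, ("is", "not"): -0.7,
              ("not", "clear"): -0.6, ("clear", "."): -0.4}

    def score(words):
        lp = 0.0
        for prev, w in zip(("<s>",) + words, words):
            lp += bigram.get((prev, w), -10.0)
        return lp

    bag = ["not", "it", "clear", "is", "."]
    best = max(permutations(bag), key=score)   # O(n!) -- the TSP-like part
    print(best)                                # ('it', 'is', 'not', 'clear', '.')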

IBM Decoding as a TSP

Greedy Decoding

Stack Decoding
Stack decoding:
  - Beam search.
  - Usually A* estimates for the completion cost.
  - One stack per candidate sentence length.
Other methods: dynamic programming decoders are possible if we make assumptions about the set of allowable permutations.
(A minimal beam-stack sketch follows below.)
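A heavily reduced sketch of the stack idea, under assumptions not in the slides: monotone word-for-word translation, stacks indexed by the number of covered source words rather than candidate sentence length, no completion-cost estimate, and invented scores.

    # Toy word-to-word translation options with log-prob costs (invented).
    options = {"yo": [("i", -0.1)], "lo": [("it", -0.2)],
               "haré": [("will do", -0.3), ("do", -0.9)]}
    source = ["yo", "lo", "haré"]
    BEAM = 2

    # stacks[k] holds (score, partial_output) hypotheses covering k source words.
    stacks = [[] for _ in range(len(source) + 1)]
    stacks[0] = [(0.0, "")]
    for k, src_word in enumerate(source):
        for score, prefix in stacks[k]:
            for out, cost in options[src_word]:
                stacks[k + 1].append((score + cost, (prefix + " " + out).strip()))
        # Prune: keep only the BEAM highest-scoring hypotheses in each stack.
        stacks[k + 1] = sorted(stacks[k + 1], reverse=True)[:BEAM]

    print(max(stacks[-1]))   # best complete hypothesis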

Phrase-Based Systems
  Pipeline: sentence-aligned corpus  ->  word alignments  ->  phrase table (translation model)
  Example phrase table entries:
    cat        ->  chat        0.9
    the cat    ->  le chat     0.8
    dog        ->  chien       0.8
    house      ->  maison      0.6
    my house   ->  ma maison   0.9
    language   ->  langue      0.9

Phrase-Based Decoding
  这 7 人中包括来自法国和俄罗斯的宇航员 .
  (the 7 people include astronauts coming from France and Russia .)
Decoder design is important: [Koehn et al. 03]

The Pharaoh Model
[Koehn et al, 2003]
  Segmentation, translation, distortion.

The Pharaoh Model
Where do we get these counts?
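As a rough illustration of how the three pieces combine, here is a sketch of scoring one fixed segmentation with made-up phrase translation probabilities and a simple distortion penalty of the form alpha^|start_i - end_{i-1} - 1| (the form used in Pharaoh-style systems); the derivation, the numbers, and the helper name are all invented.

    # Score one derivation: a list of (src_start, src_end, translation_prob)
    # for each target phrase in the order it is produced.
    ALPHA = 0.75   # distortion base (invented value)

    def score_derivation(phrases):
        prob, prev_end = 1.0, 0
        for start, end, phi in phrases:
            prob *= phi * ALPHA ** abs(start - prev_end - 1)
            prev_end = end
        return prob

    # Invented derivation: source positions are 1-based and inclusive.
    derivation = [(1, 2, 0.8),   # monotone phrase, no distortion cost
                  (4, 4, 0.6),   # skips position 3: small jump penalty
                  (3, 3, 0.5)]   # jumps back: larger penalty
    print(score_derivation(derivation))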

Phrase Weights

Phrase Scoring
(Example alignment grid: "cats like fresh fish ." / "les chats aiment le poisson frais .")
Learning the phrase weights directly has been tried, several times: [Marcu and Wong, 02], [DeNero et al, 06], and others. It seems not to work well, for a variety of partially understood reasons. Main issue: big chunks get all the weight, and obvious priors don't help. Though, see [DeNero et al 08].

Phrase Size
Phrases do help, but they don't need to be long. Why should this be?
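Since the weights are instead read off alignments, here is a compressed sketch of the usual heuristic (toy sentence pair from the grid above, no real corpus, no NULL handling): extract every phrase pair consistent with the word alignment, then estimate phrase weights by relative frequency over many such extractions.

    from collections import Counter

    def extract_phrases(links, e_len, f_len, max_len=3):
        # All phrase pairs consistent with the alignment links (i, j):
        # no link may connect an inside word to an outside word.
        pairs = []
        for e1 in range(e_len):
            for e2 in range(e1, min(e1 + max_len, e_len)):
                for f1 in range(f_len):
                    for f2 in range(f1, min(f1 + max_len, f_len)):
                        inside = [(i, j) for i, j in links
                                  if e1 <= i <= e2 and f1 <= j <= f2]
                        crossing = [(i, j) for i, j in links
                                    if (e1 <= i <= e2) != (f1 <= j <= f2)]
                        if inside and not crossing:
                            pairs.append(((e1, e2), (f1, f2)))
        return pairs

    # "cats like fresh fish ." / "les chats aiment le poisson frais ."
    links = {(0, 1), (1, 2), (2, 5), (3, 4), (4, 6)}   # (english_pos, french_pos)
    pairs = extract_phrases(links, e_len=5, f_len=7)
    counts = Counter(pairs)          # over a real corpus: relative frequencies
    print(len(pairs), pairs[:3])     # phi(f | e) = count(f, e) / count(e)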

Lexical Weighting
(A sketch of the usual formulation appears below.)

The Pharaoh Decoder
Probabilities at each step include the LM and the TM.
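For the Lexical Weighting slide above: the common formulation scores a phrase pair with word-level translation probabilities, averaging w(f_j | e_i) over the English words each foreign word is aligned to inside the phrase (or using NULL if it is unaligned). A sketch with made-up w values and a hypothetical helper:

    # Lexical weight of a phrase pair given its internal word alignment.
    def lex_weight(f_words, e_words, links, w):
        total = 1.0
        for j, f in enumerate(f_words):
            aligned = [i for (i, jj) in links if jj == j]
            if aligned:
                total *= sum(w.get((f, e_words[i]), 1e-9) for i in aligned) / len(aligned)
            else:
                total *= w.get((f, "<null>"), 1e-9)
        return total

    # Invented word translation table w(f | e).
    w = {("chat", "cat"): 0.9, ("le", "the"): 0.7}
    print(lex_weight(["le", "chat"], ["the", "cat"], {(0, 0), (1, 1)}, w))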

Hypothesis Lattices

Pruning
Problem: easy partial analyses are cheaper.
  Solution 1: use beams per foreign subset.
  Solution 2: estimate forward costs (A*-like).
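A small illustration of Solution 2 (made-up numbers, hypothetical names): rank hypotheses by their current score plus a precomputed optimistic estimate of the cost of the still-untranslated source words, so easy partial analyses no longer look artificially cheap.

    # future_cost[j] = estimated best log-prob of translating source words j..end;
    # in a real decoder this is precomputed from the cheapest options, invented here.
    future_cost = [-3.0, -2.1, -1.2, -0.4, 0.0]

    def priority(hyp_score, covered_up_to):
        # Rank hypotheses by actual score plus optimistic completion estimate.
        return hyp_score + future_cost[covered_up_to]

    easy_first = priority(-0.5, 1)    # translated one easy word
    hard_first = priority(-1.4, 3)    # translated three hard words
    print(easy_first, hard_first)     # hard_first ranks higher: -1.8 vs -2.6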

WSD?
Remember when we discussed WSD? Word-based MT systems rarely have a WSD step. Why not?