IWSLT 2008 - Papers

Total: 7 papers

#1 Improving statistical machine translation by paraphrasing the training data.

Authors: Francis Bond ; Eric Nichols ; Darren Scott Appling ; Michael Paul

Large amounts of training data are essential for training statistical machine translation systems. In this paper we show how training data can be expanded by paraphrasing one side. The new data are produced by parsing and then generating with a precise HPSG-based grammar, which yields sentences with the same meaning but minor variations in lexical choice and word order. In experiments with Japanese and English, we show consistent gains on the Tanaka Corpus, with less consistent improvement on the IWSLT 2005 evaluation data.
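
Below is a minimal sketch of the corpus-expansion step described in this abstract: each paraphrase of one side of the bitext is paired with the unchanged translation on the other side. The `paraphrase()` function is only a placeholder for a parse-then-generate pipeline with a precision HPSG grammar; the grammar machinery itself is not reproduced here.

```python
# Corpus expansion by paraphrasing one side of a parallel corpus.
# paraphrase() is a placeholder, NOT the authors' HPSG parse/generate setup.

def paraphrase(sentence: str) -> list[str]:
    """Return meaning-preserving rewrites of `sentence` (placeholder)."""
    # A real system would parse to a semantic representation and generate
    # alternative realizations; returning [] keeps the sketch runnable.
    return []

def expand_corpus(bitext: list[tuple[str, str]]) -> list[tuple[str, str]]:
    """Add (paraphrased source, original target) pairs to the training data."""
    expanded = list(bitext)
    for src, tgt in bitext:
        for new_src in paraphrase(src):
            expanded.append((new_src, tgt))
    return expanded
```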

#2 Evaluating productivity gains of hybrid ASR-MT systems for translation dictation.

Authors: Alain Désilets ; Marta Stojanovic ; Jean-François Lapointe ; Rick Rose ; Aarthi Reddy

This paper is about Translation Dictation with ASR, that is, the use of Automatic Speech Recognition (ASR) by human translators to dictate translations. We are particularly interested in the productivity gains this could provide over conventional keyboard input, and in ways such gains might be increased through a combination of ASR and Statistical Machine Translation (SMT). In this hybrid technology, the source-language text is presented to both the human translator and an SMT system. The latter produces N-best translation hypotheses, which are then used to fine-tune the ASR language model and vocabulary towards utterances that are probable translations of the source sentences. We conducted an ergonomic experiment with eight professional translators dictating into French, using a top-of-the-line off-the-shelf ASR system (Dragon NaturallySpeaking 8). We found that the ASR system had an average Word Error Rate (WER) of 11.7 percent, and that translation with this system did not provide statistically significant productivity increases over keyboard input when following the manufacturer-recommended procedure for error correction. However, we found indications that, even in its current imperfect state, French ASR might be beneficial to translators who are already used to dictation (either with ASR or a dictaphone), although more focused experiments are needed to confirm this. We also found that dictation with an ASR system whose WER is 4 percent or less would have resulted in statistically significant (p less than 0.6) productivity gains on the order of 25.1 percent to 44.9 percent in Translated Words Per Minute. We also evaluated the extent to which the limited manufacturer-provided Domain Adaptation features could be used to positively bias the ASR using SMT hypotheses. We found that the relative gains in WER were much lower than those reported in the literature for tighter integration of SMT with ASR, pointing to the advantages of tight integration approaches and the need for more research in that area.
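
The core of the hybrid setup, biasing the recognizer towards the SMT N-best hypotheses for the current source sentence, can be sketched roughly as follows. This simplified example adapts only a unigram language model by linear interpolation; the unigram restriction and the interpolation weight `lam` are assumptions of the sketch, not the configuration evaluated in the paper.

```python
# Bias a background unigram LM towards words seen in SMT N-best hypotheses.
# Real systems adapt full n-gram LMs and the recognizer vocabulary.
from collections import Counter

def nbest_unigram(nbest: list[str]) -> dict[str, float]:
    """Relative word frequencies over the N-best translation hypotheses."""
    counts = Counter(w for hyp in nbest for w in hyp.split())
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

def biased_unigram(background: dict[str, float],
                   nbest: list[str],
                   lam: float = 0.3) -> dict[str, float]:
    """P'(w) = (1 - lam) * P_background(w) + lam * P_nbest(w)."""
    adapt = nbest_unigram(nbest)
    vocab = set(background) | set(adapt)
    return {w: (1 - lam) * background.get(w, 0.0) + lam * adapt.get(w, 0.0)
            for w in vocab}
```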

#3 Rapid development of an English/Farsi speech-to-speech translation system.

Authors: C.-L. Kao ; S. Saleem ; R. Prasad ; F. Choi ; P. Natarajan ; David Stallard ; K. Krstovski ; M. Kamali

Significant advances have been achieved in Speech-to-Speech (S2S) translation systems in recent years. However, rapid configuration of S2S systems for low-resource language pairs and domains remains a challenging problem due to the lack of human-translated bilingual training data. In this paper, we report on an effort to port our existing English/Iraqi S2S system to the English/Farsi language pair in just 90 days, using only a small amount of training data. This effort included developing acoustic models for Farsi, domain-relevant language models for English and Farsi, and translation models for English-to-Farsi and Farsi-to-English. As part of this work, we developed two novel techniques for expanding the training data: the reuse of data from other language pairs and the directed collection of new data. In an independent evaluation, the resulting system achieved the highest performance of all systems.

#4 Simultaneous German-English lecture translation.

Authors: Muntsin Kolss ; Matthias Wölfel ; Florian Kraft ; Jan Niehues ; Matthias Paulik ; Alex Waibel

In an increasingly globalized world, situations in which people of different native tongues have to communicate with each other are becoming more and more frequent. In many such situations, human interpreters are prohibitively expensive or simply not available. Automatic spoken language translation (SLT), as a cost-effective solution to this dilemma, has received increased attention in recent years. For a broad range of applications, including live SLT of lectures and oral presentations, these automatic systems should ideally operate in real time and with low latency. Large and highly specialized vocabularies as well as strong variations in speaking style – ranging from read speech to free presentations marked by spontaneous speech phenomena – make simultaneous SLT of lectures a challenging task. This paper presents our progress in building a simultaneous German-English lecture translation system. We emphasize some of the challenges that are particular to this language pair and propose solutions to some of the problems encountered.

#5 Investigations on large-scale lightly-supervised training for statistical machine translation.

Author: Holger Schwenk

Sentence-aligned bilingual texts are a crucial resource for building statistical machine translation (SMT) systems. In this paper we propose to apply lightly-supervised training to produce additional parallel data. The idea is to translate large amounts of monolingual data (up to 275M words) with an SMT system and to use the automatic translations as additional training data. Results are reported for translation from French into English. We consider two setups: in the first, the initial SMT system is trained with only a very limited amount of human-produced translations; in the second, more than 100 million words are available. In both conditions, lightly-supervised training achieves significant improvements of the BLEU score.
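
The data flow of lightly-supervised training, as described in this abstract, is simple to outline. In the sketch below, `train_smt()` and `translate()` are placeholders standing in for a full SMT toolkit (for example a Moses-style pipeline); only the self-training loop is shown, not the actual system used in the paper.

```python
# Lightly-supervised training: translate monolingual source text with the
# current system and add the automatic translations as parallel data.

def train_smt(bitext):
    """Placeholder: train a phrase-based SMT system on `bitext`."""
    return {"num_sentence_pairs": len(bitext)}  # stand-in for a real model

def translate(system, src):
    """Placeholder: decode `src` with `system`."""
    return src  # identity "translation" keeps the sketch runnable

def lightly_supervised(seed_bitext, monolingual_src, rounds=1):
    """Retrain on seed data plus synthetic (source, machine translation) pairs."""
    system = train_smt(seed_bitext)
    for _ in range(rounds):
        synthetic = [(src, translate(system, src)) for src in monolingual_src]
        system = train_smt(seed_bitext + synthetic)
    return system
```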

#6 Analysing soft syntax features and heuristics for hierarchical phrase based machine translation.

Authors: David Vilar ; Daniel Stein ; Hermann Ney

As in phrase-based machine translation, hierarchical systems produce a large number of phrases, most of which are presumably useless for the actual translation. For the hierarchical case, however, the number of extracted rules is an order of magnitude larger. In this paper, we investigate several soft constraints on the extraction of hierarchical phrases and whether they help, as additional scores in decoding, to prune unneeded phrases. We show which methods help most.
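
As an illustration of what a "soft" syntactic constraint can look like, the sketch below scores a hierarchical rule by how many of its gap spans coincide with constituents of a source-side parse, leaving the decision to the log-linear model instead of filtering rules outright. The span-match feature and its inputs are assumptions of this sketch, not the exact features analysed in the paper.

```python
# A soft syntax score: fraction of a rule's gap spans that match parse
# constituents; used as an extra decoder feature rather than a hard filter.

def syntax_feature(gap_spans: list[tuple[int, int]],
                   constituent_spans: set[tuple[int, int]]) -> float:
    """Return the fraction of gap spans matching source constituents."""
    if not gap_spans:
        return 1.0  # rules without gaps are unconstrained
    matches = sum(1 for span in gap_spans if span in constituent_spans)
    return matches / len(gap_spans)

if __name__ == "__main__":
    constituents = {(0, 2), (3, 5), (0, 5)}
    print(syntax_feature([(0, 2), (2, 4)], constituents))  # -> 0.5
```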

#7 Improvements in dynamic programming beam search for phrase-based statistical machine translation.

Authors: Richard Zens ; Hermann Ney

Search is a central component of any statistical machine translation system. We describe the search for phrase-based SMT in detail and show its importance for achieving good translation quality. We introduce an explicit distinction between reordering and lexical hypotheses and organize the pruning accordingly. We show that for the large Chinese-English NIST task a small number of lexical alternatives is already sufficient, whereas a large number of reordering hypotheses is required to achieve good translation quality. The resulting system compares favorably with the current state-of-the-art; in particular, we perform a comparison with cube pruning as well as with Moses.
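
The pruning organisation described here can be pictured as two nested beams: one over coverage sets (the reordering state) and one over the lexical hypotheses kept within each coverage set. The data structure and scoring below are illustrative assumptions for a sketch, not the authors' implementation.

```python
# Two-level pruning: keep the best `beam_cov` coverage sets (reordering
# pruning) and, within each, the best `beam_lex` hypotheses (lexical pruning).
from dataclasses import dataclass

@dataclass
class Hypothesis:
    coverage: frozenset   # source positions translated so far
    history: tuple        # target words produced so far (lexical state)
    score: float

def prune(hyps: list[Hypothesis], beam_cov: int, beam_lex: int) -> list[Hypothesis]:
    """Prune hypotheses of the same cardinality with separate beams."""
    # Group hypotheses by their coverage set.
    groups: dict[frozenset, list[Hypothesis]] = {}
    for h in hyps:
        groups.setdefault(h.coverage, []).append(h)
    # Reordering pruning: keep only the top-scoring coverage sets.
    best = sorted(groups.items(),
                  key=lambda kv: max(h.score for h in kv[1]),
                  reverse=True)[:beam_cov]
    # Lexical pruning: within each kept coverage set, keep the best hypotheses.
    kept: list[Hypothesis] = []
    for _, group in best:
        kept.extend(sorted(group, key=lambda h: h.score, reverse=True)[:beam_lex])
    return kept
```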