Language-independent hybrid MT with PRESEMT

Year: 2013
Type of Publication: In Proceedings
Book title: Proceedings of the Second Workshop on Hybrid Approaches to Translation (HyTra)
Pages: 123-130
Address: Sofia, Bulgaria
Organization: HYTRA-2013 Workshop [held within the ACL 2013 Conference]
Month: August 8
ISBN: 978-1-937284-53-4
Abstract: The present article focuses on improving the performance of a hybrid Machine Translation (MT) system, namely PRESEMT. The PRESEMT methodology is readily portable to new language pairs and allows the creation of MT systems with minimal reliance on expensive resources. PRESEMT is phrase-based and uses a small parallel corpus from which to extract structural transformations from the source language (SL) to the target language (TL). The TL language model, by contrast, is extracted from large monolingual corpora. This article examines the task of maximising the amount of information extracted from a very limited parallel corpus. Hence, emphasis is placed on the module that learns to segment arbitrary SL input text into phrases by extrapolating information from a limited-size parsed TL text, thereby removing the need for an SL parser. An established method based on Conditional Random Fields (CRF) is compared here to a much simpler template-matching algorithm to determine the most suitable approach for extracting an accurate model. Experimental results indicate that for a limited-size training set, template-matching generates a superior model, leading to higher-quality translations.
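To make the contrast between the two segmentation approaches concrete, the following is a minimal, hypothetical sketch of a template-matching phrase segmenter: it greedily matches the longest known POS-tag template at each position of an input tag sequence. The template inventory, tag names, and fallback behaviour here are illustrative assumptions, not the actual PRESEMT implementation.

```python
# Illustrative sketch only: greedy longest-match template segmentation.
# The templates (POS-tag tuples mapped to phrase labels) stand in for
# patterns learned from a small parsed corpus; all names are hypothetical.

def segment(tags, templates, max_len=4):
    """Split a POS-tag sequence into phrases by longest template match."""
    phrases = []
    i = 0
    while i < len(tags):
        # Try the longest window first, shrinking until a template matches.
        for n in range(min(max_len, len(tags) - i), 0, -1):
            window = tuple(tags[i:i + n])
            if window in templates:
                phrases.append((templates[window], window))
                i += n
                break
        else:
            # No template matched: emit a single-tag phrase as a fallback.
            phrases.append(("X", (tags[i],)))
            i += 1
    return phrases

# A toy template inventory (assumed, for illustration).
templates = {
    ("DT", "JJ", "NN"): "NP",
    ("DT", "NN"): "NP",
    ("VBZ",): "VP",
    ("IN", "DT", "NN"): "PP",
}

print(segment(["DT", "JJ", "NN", "VBZ", "IN", "DT", "NN"], templates))
```

A CRF-based segmenter would instead label each tag with begin/inside-phrase tags learned from weighted features; the abstract's finding is that with very little training data, the simpler direct-matching scheme above can yield the more accurate model.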