Summary: In this paper we report on an experiment to gather quality analyses from several evaluators, with a view to identifying problems and reaching consensus on (machine) translation from English to Portuguese. We begin by showing how this project fits into a larger framework of evaluation campaigns for Portuguese, and argue for the need to amass consensual (or at least compatible) opinions. We describe the tools developed (Metra, Boomerang, and TrAva) and explain the experiment, its results, its shortcomings, and the lessons learned. We then present CorTA, a corpus of evaluated translations (English originals with several automatic translations into Portuguese), and offer some remarks on how to use it for translation evaluation.