A proposed model for compensating MT-supported translation
Oct 16, 2011
The latest ToolKit newsletter by Jost Zetzsche contains a particularly interesting selection of free and premium content, the latter including what is probably the best overview I've seen yet of the innovations in Kilgray's memoQ 5. However, one of the most intriguing sections was the review of MemSource, in which a model for evaluating and compensating translation content produced with MT assistance was mentioned:
" ... what's the new paradigm that is being proposed (and used) by this tool? It's how machine-translated matches are optionally analyzed and potentially charged. If, they say, we can evaluate TM matches by the source (a perfect match is an identical source and a fuzzy match a similar source), we should look at MT matches by the target. What? Well, if there has been no change between a machine-translated target segment and a final target segment, it should be viewed as a perfect MT match and charged accordingly. If the changes are minor, it should be viewed as a fuzzy match, etc."
Up to now, the only really plausible proposals I've seen for compensating MT post-editing or MT-assisted translation have involved hourly work; those I've seen for piecework (word, page or line rates) have been exploitative at best. However, should one engage in the dubious practice of applying machine translation for language services, this paradigm is the most reasonable I have seen yet for assessing the "quality" of the output as seen by the degree of modification required. If for a particular purpose the MT output requires nearly a complete rewrite, it would essentially be paid as a completely new translation from scratch. If the MT output is in fact of somewhat useful quality, this provides a good quantitative means of assessing that and figuring the charges.
I will not say that I support drinking from the poisoned well of machine translation output and affecting the quality of one's other work in the same way that editing monkey work and reading trashy texts might do, but if one were to engage in such a foolish endeavor and join the lemmings surging toward the special interests' "future", this is at least an economic model worth discussing.
Of course, those who favor fixed rates for simple-minded budget planning will still want their straight word rates based on possibly untenable assumptions of quality. But the MemSource paradigm, which could be adopted easily by others or implemented with simple quantitative comparison tools (possibly even looking at an overall percentage change in structure, which might include rearrangements of blocks of text), is, as I see it, the first reasonable, practical suggestion for skipping over the nonsense of up-front "quality metrics", getting to work and letting the chips fall where they probably should.
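The kind of simple quantitative comparison mentioned above could be sketched roughly as follows. This is only an illustration of the target-side idea, not MemSource's actual algorithm: it scores an MT segment by the edit distance between the raw MT output and the post-edited final target, expressed as a percentage. All function names are invented for this sketch.

```python
def levenshtein(a: str, b: str) -> int:
    """Minimum number of single-character edits turning a into b."""
    if len(a) < len(b):
        a, b = b, a
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

def mt_match_percent(mt_output: str, final_target: str) -> int:
    """Score an MT segment by how little post-editing it needed:
    100 = untouched ("perfect" MT match), lower = heavier rewriting."""
    longest = max(len(mt_output), len(final_target))
    if longest == 0:
        return 100
    return round(100 * (1 - levenshtein(mt_output, final_target) / longest))
```

A segment the translator left unchanged scores 100 and would be charged like a perfect match; a near-total rewrite falls toward 0 and would be billed as new translation. A real scheme would presumably need word- or block-level comparison to credit rearranged text fairly, as the paragraph above suggests.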