When technology guru Jost Zetzsche (Twitter: @jermobot) recently reminded the world of the relatively new memoQ plug-in for itranslate4.eu, I thought I would test and blog it for fun if not profit. Alas! The key validation for the free demo (10,000 characters) I "purchased" isn't working for me, so I decided to have fun another way until that gets sorted out. I plugged two sentences into their engine, which offers translations from six commercial MT tools, then I put the same two sentences into Google's and Microsoft's MT engines. One of these sentences is fairly simple in structure, the other less so. I won't comment on the results much; I leave judgment to experts and fools to make as they choose.
The results layouts for Google & Bing were modified for better viewing. I think there are plug-ins for all these engines in a number of tools if anyone is daft enough to really want to use them productively (thus unprofessionally ignoring many clients' confidentiality requirements, quite apart from other possible issues).
PROMT actually got it right and idiomatic. Should I be impressed? Repeat tests with other simple sentences showed that accuracy is a real crapshoot for all six engines, though True Believers in MT can serve the machine by suggesting improvements. I did my part by submitting improvements for sentences related to the ongoing Vatican scandals about abuse by priests.
Go Google! Nice to know there are so many ProZ translators depending on this great engine :-)
Full points to Big Bill & Co. this time. Two out of eight got it right. Hmmmmmmmm.
Now for something a wee bit more complex, taken from the Wikipedia entry on photovoltaics...
Our friendZ at Google say:
Now Big BillZ BoyZ get their chance:
I use the Dragon if I feel the need for speed. I can dictate fresh faster than I can analyze and fix even the simplest "close calls" for correctness.
Of course, sentence-by-sentence, results will differ in reliability with every engine. But isn't it simply better to use the most reliable engine of all: BAT* with a good translator?
A recent LinkedIn discussion went on at length about quality metrics and other blah blah for MT, and there are various discussions of "best practice". Given enough beer or other intoxicating substances, I'm not averse to engaging in such discussions, and the ten-year-old in me who still likes to fiddle with things IT does take an academic interest in the subject of MT and productivity. But if I want real work and real quality (and yes, Renato, Kirtee, et alia, there is such a thing, though its definition will vary by domain and be ever disputed... try telling a customer struggling to understand a user manual that it doesn't exist, or watch the faces in an audience as a translated speech is delivered in various versions, or send your marketing brochure for the UK to Bangladesh or Russia because rates are lower), I will find it with methods that stay far away from MT. Let the competition do otherwise. PLEASE.
* "brain-assisted translation", a concept once common in the days before the IT devolution of the profession and still favored by a few high-earning dinosaurs
Mar 10, 2012
Just for fun: 8 MT engines for DE>EN compared
Posted by Kevin Lossner at 12:20 PM
Labels: BAT, Bing, Google, human factor, iTranslate4eu, Kilgray, MemoQ, MT, MT post-editing, plug-in, quality
It's an interesting exercise that you do here, but there's a flaw: you are using either non-customizable generic engines (Google, Bing) or commercial software out of the box without customization. That would be similar to testing a general bilingual dictionary against a domain-specific web site: of course plenty of terms are going to be missing.
The current trend in MT these days is to customize engines not on a domain basis but on a client basis (e.g. a medical equipment company X wouldn't use a generic MT engine for the medical domain, but a specific one for that company, customized with its preferred terminology). It's only when testing customized engines that you can tell what the opportunities and threats are. I heard of a translator who combined a customized Systran engine with Trados and Dragon. Now that was light speed. Of course, he had to invest some time in customizing the Systran dictionaries (and in training Dragon, by the same token), but for long-term clients it pays off.
This is fun, but while you may prove to yourself that generic MT sucks, I don't think you succeed in proving that MT in general is pointless. (Well to me anyway)
Business- and professionally focused MT will tune the engine for a specific purpose. When this is done, it is possible to get better output from the MT engine. The end effect is not much different from working with fuzzy matches in a TM: good MT systems produce high-level "TM matches" on average.
I have just worked with an EN>PT system that produces 85%-level matches on average for all new segments. (This also means that 20% of the segments can be characterized as 100% matches.) This kind of system does help to get work done faster: 3X faster in this case, even though it is still necessary to make sure the translation is good and accurate.
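The "fuzzy match" analogy above can be made concrete. CAT tools generally score segment similarity with some edit-distance-based percentage; the exact formula varies by tool and is proprietary, so the following is only a minimal sketch of the general idea (Levenshtein distance normalized by the longer segment), not any particular tool's scoring:

```python
def fuzzy_match(a: str, b: str) -> int:
    """Rough edit-distance-based similarity percentage between two
    segments, in the spirit of a CAT tool's fuzzy match score."""
    m, n = len(a), len(b)
    # Levenshtein distance via dynamic programming, one row at a time.
    prev = list(range(n + 1))
    for i in range(1, m + 1):
        curr = [i] + [0] * n
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            curr[j] = min(prev[j] + 1,        # deletion
                          curr[j - 1] + 1,    # insertion
                          prev[j - 1] + cost) # substitution / match
        prev = curr
    dist = prev[n]
    longest = max(m, n) or 1  # avoid division by zero on empty input
    return round(100 * (1 - dist / longest))

# Identical segments score 100; a one-word difference in a short
# sentence already drops the score well below that.
print(fuzzy_match("the quick brown fox", "the quick brown cat"))
```

By this crude measure, a post-editor facing "85% matches on average" is doing roughly the same kind of repair work as a translator cleaning up mid-range TM fuzzy matches, which is the commenter's point.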
MT is just another computer tool that can be useful, or not, depending on the specifics.
For those willing to go beyond the free generic engines, there can be a payoff even for professional use, and even for serious business product user documentation.
This approach is particularly useful for highly repetitive business content like user documentation, and as Ruben points out, it works best when it is not only domain-focused but also client-focused.
I really enjoyed your post, thank you! :)