started inauspiciously for me, with a TomTom navigation system determined to keep me from the day planned at Lisbon's New University to discuss SDL Trados Studio and its place in the translation technology ecosphere. The fourth GPS location almost proved a charm; I hiked the last kilometer on an arthritic foot, swearing furiously that this was my last visit to the Big City, and found the lecture hall at last, an hour and a half late, just after Paul Filkin's presentation of the SDL OpenExchange, an underused but rather interesting and helpful resource center with plug-ins and other resources to help SDL Trados Studio victims bridge the gap between the out-of-the-box configuration and what particular users or workflows might require. There are a lot of good things to be found there - the memoQ XLIFF definition and the Glossary Converter are my particular favorites. Paul talked about many interesting things, I was told, and there is even a plug-in created for SDL Trados Studio by a major governmental organization with functionality much like memoQ's LiveDocs (discussed afterward, but not shown in the talk I missed). In the course of the day, Paul also disclosed an exciting new feature for SDL Trados Studio which many users have missed in the latest memoQ release, memoQ 2014 R2 (see the video at the end).
I arrived just in time for the highlight of the day: the demonstration of Portuguese speech recognition by David Hardisty and two of his master's students, Isabel Rocha and Joana Bernardo. Speech recognition is perhaps one of the most interesting, useful and exciting technologies applied to translation today, but its use is limited by the languages available, which are not so many in the popular Dragon Naturally Speaking application from Nuance. Portuguese is curiously absent from the current offerings despite its far more important role in the world than minor languages like German or French.
Professor Hardisty led off with an overview of the equipment and software used and recommended (slides available here); the solution for Portuguese uses the integrated voice recognition features of the Macintosh operating system, and with Parallels Desktop 10 for Mac it can also drive Windows applications such as SDL Trados Studio and memoQ. Nuance provides the voice recognition technology to Apple, and Brazilian and European Portuguese are among the languages supplied to Apple which are not part of Nuance's commercial consumer products (Dragon Naturally Speaking and Dragon Dictate).
The Apple web site states:
Dictation lets you talk where you would type — and it now works in over 40 languages. So you can reply to an email, search the web or write a report using just your voice. Navigate to any text field, activate Dictation, then say what you want to write. Dictation converts your words into text. OS X Yosemite also adds more than 50 editing and formatting commands to Dictation. So you can turn on Dictation and tell your Mac to bold a paragraph, delete a sentence or replace a word. You can also use Automator workflows to create your own Dictation commands.
Portuguese was among the languages added with OS X Yosemite.
Ms. Bernardo began her demonstration by showing her typing speed, which is somewhat less than optimal due to the effects of cerebral palsy. I was told this had led to some difficulties during a professional internship, where her typing speed was not sufficient to keep up with the company's expectations for translation output. However, I saw for myself how the integrated speech recognition features enable her to lay down text in a word processor or translation environment tool as quickly as or faster than most of us can type. In Portuguese - a language I had thought unavailable for such work by my colleagues who translate into it.
A week before, I had visited Professor Hardisty's evening class, where, after my lecture on interoperability for CAT tools, Ms. Rocha showed me how she works with Portuguese speech recognition as I do: in "mixed mode", a fluid work style combining dictation, typing and pointing devices. She said her own work is not much faster than when she types, but that the physical and mental strain is far less and the quality of her translation tends to be better, because she is more focused on the text. That greater concentration on words, meaning and good communication matches my own experience, but I don't entirely believe her about the speed - I don't think she has actually measured her throughput. My observation, after the evening class and again at the event with SDL, was that she dictates about as fast as I do, and when I have a need for speed, dictation can take me to triple my typing rate or more per hour.
In any case, I am very excited that speech recognition is now available to a wider circle of professionals, and with integrated dictation features in the upcoming Windows 10 (a free upgrade for Windows 8 users), I expect the situation will only improve. I cannot emphasize enough the importance of this technology for improving the ergonomics of our work. It does more than level the field for gifted colleagues like Joana Bernardo, who can now bring her linguistic skills and subject knowledge to bear at a working speed on par with other professionals - or faster. For someone like me, who often works with pain and numbness in the hands from strain injuries, or for all the rest of you banging away happily on keyboards, with an addiction to pain meds perhaps in your future, speech recognition offers a better prospect. Some are perhaps put off by the unhelpful, boastful emphasis of others on high output - though anyone familiar with speech recognition knows it is faster and better than any process involving human revision of the computer-generated linguistic sausage of human-assisted machine pseudo-translation (HAMPsTr) editing - but it's really about working better and doing better work with better personal health. It's not about silly "Hendzel Units".
It has been pointed out a few times that Mac dictation and other speech recognition implementations lack the full range of command features found in an application like Dragon Naturally Speaking. That is really irrelevant. The most efficient speech recognition users I know do not rely much on voice commands for menu options and the like. I don't bother with that stuff at all; instead I work very comfortably with a mix of voice, keyboard and mouse, as I learned from a colleague who can knock off over 8,000 words of top-quality translation in a short, restful day before taking the afternoon off to play with her cats, or go shopping and spend some of the six-figure translation income she had even before learning to charge better rates.
Professor Hardisty also gave me a useful surprise in his talk - a well-articulated suggestion for a much more productive way to integrate machine translation in translation workflows:
[Image: David Hardisty's "pre-editing" approach for MpT output]
The approach he suggested is actually one of the techniques I use with multiple TM matches in the working translation grid where I dictate: look at a match or TM suggestion displayed in a second pane, cherry-pick any useful phrases or sentence fragments, and simply speak them along with selected term suggestions from glossaries, getting it right the first time - faster than post-editing. This works, and works much better than the sort of nonsense pushed too often into university curricula these days by greedy technotwits and Linguistic Sausage Purveyors, who in their desire for better margins and their general disrespect of human service providers and employees fail to understand that good people, well treated and empowered with the right tools, will beat the software and hardware of "MT" and its hamsterized process extensions every time. Hardisty's approach is the most credible suggestion I have seen yet for a possibly useful application of machine pseudo-translation in good work. Don't dump the MpT sewage directly into the target text stream, as so many do while they inevitably and ignorantly diminish the achievable output quality.
After the lunch break, Paul Filkin gave an excellent Q&A clinic on Trados Studio features, showing solutions for challenges faced by users at all levels. It's always a pleasure to see him bring his encyclopedic knowledge of that difficult environment to bear in poised, useful ways that make the tools almost seem easy to work with. I've sent many people to Paul and his team for help over the years, and none has been disappointed, according to the feedback I've heard. The Trados Studio "clinic" at Universidade Nova reminded me why.
Finally, in the last hour of the day, I presented my perspective on how the SDL Trados Studio suite can integrate usefully into teamwork involving colleagues and customers with other technology. Over the years, as a user of Déjà Vu and later memoQ as my primary tool, the Trados suite has often made my work easier and significantly improved my earnings, for example with the excellent output management options for terminology in SDL Trados MultiTerm.
I spoke about the different levels of information exchange in interoperable translation workflows. I have done so often in the past from a memoQ perspective, but on this day I took the SDL Trados angle and showed very specifically, using screenshots from the latest build of SDL Trados Studio 2014, how this software can integrate beautifully and reliably as the hub or a spoke in the wheel of work collaboration.
The examples I presented involved specifics of interoperability with memoQ or OmegaT, but they apply to any good, professional tool. (Please note that Across is neither good nor a professional translation tool.) Those present also left with interoperability knowledge that, as far as I know, no others in the field of translation have: a simple way to access all the data in a memoQ Handoff package for translation in other environments like SDL Trados Studio, including how to move bilingual LiveDocs content easily into the other tool's translation memory.
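The trick rests on the fact that a memoQ handoff package is, under the hood, an ordinary ZIP archive, so its contents can be inspected and extracted with standard tools. As a rough illustration (the file and folder names below are hypothetical stand-ins, not memoQ's exact internal layout, which varies by version), a few lines of Python suffice:

```python
import zipfile

# In real use this would be an actual memoQ handoff package (*.mqout);
# here we build a stand-in ZIP with an illustrative layout so the
# example is self-contained. The names are hypothetical.
package = "SampleHandoff.mqout"
with zipfile.ZipFile(package, "w") as z:
    z.writestr("project.mqres", "<project/>")
    z.writestr("docs/report.mqxliff", "<xliff/>")

# Because the package is a plain ZIP archive, the standard library
# can list everything inside it and unpack it for use in another tool.
with zipfile.ZipFile(package) as z:
    names = z.namelist()                      # every file in the package
    z.extractall("SampleHandoff_unpacked")    # bilingual files, TMs, etc.

print(names)
```

The extracted bilingual files (memoQ's XLIFF flavor) can then be opened in another environment such as SDL Trados Studio, which is the essence of the approach described above.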
Working in a single translation environment for the actual translation is ergonomically critical: it allows full focus on producing good content of the best linguistic character and subject presentation, without the time- and quality-killing distractions of "CAT hopping" - switching between environments such as SDL Trados Studio, memoQ, Wordfast, memSource and others.
Busy translators who learn the principles of interoperability and how to move work in and out of their sole translation tool (while using competing tools for other tasks at which those tools excel, such as preparing certain project types or extracting and outputting terminology) will very likely see a bigger increase in earnings over the next decade than they could achieve through price increases. On those rare occasions where it is desirable to use a different tool, or to cope with the stress of changing from one tool to another, harmonizing customizable features such as keyboard shortcuts can be very helpful.
I ended my talk with a demonstration of how translation files (SDLXLIFF) and project packages (SDLPPX) from SDL Trados Studio can be brought easily into memoQ for translation in that ergonomic environment, with all the TMs and terminology resources, returning exactly the content required in an SDLRPX file. Throughout the presentation there was some discussion of where SDL and its competitors can and should strive to go beyond the current and occasionally dubious levels of "compatibility" for even better collaboration between professionals and customers in the future.
One of the attendees, Steve Dyson, also published an interesting summary of the day on his blog.