Showing posts with label Trados.

Oct 8, 2016

SDL Trados Roadshow in Lisbon on November 16th!

Next month on Wednesday, November 16th, the SDL roadshow featuring the latest release of SDL Trados Studio will be coming to Lisbon, Portugal. The all-day event is free of charge, but registration is required.

A full afternoon of training on the SDL Trados Studio translation environment is included in the day. Even if you live and work in a country other than Portugal, this is an excellent opportunity to be briefed on one of the leading technologies for efficient translation work and then take a very long weekend to enjoy Europe's capital of cuisine and culture.

See you there?

Aug 6, 2016

Approaching memSource Cloud


It has been interesting to see the behavior of my codornizes (quail) since I moved them from the confines of a rabbit hutch in a stall at my old quinta (farm) to the fenced, outdoor enclosures in the shade of a Quercus suber grove. In the hutch, they were fearful creatures, panicking each time I opened their prison to give water and food or to collect eggs. Their diet was also rather miserable; the German hunters who first introduced me to these birds for training very authoritatively told me that they ate "only wheat", and I scarcely felt bold enough to offer them anything different like cracked corn or rice. In the concentration camp-like conditions in which they lived, they also developed a serious case of mites and lost a lot of feathers. I thought about slaughtering and eating them as an act of mercy.

Then last spring I moved to a new place with a friend, who built a large enclosure for my goats and chickens. She didn't know about the quail. I brought them one day and hastily improvised an enclosure for them with a large circle of wire fence around a tree, because I was afraid the goats might trample them. There was far more space in this area than they had before, and real, dry dirt for taking dust baths. Soon the mite infestations improved (even before regular dunks in pyrethrin solution began), and the behavior of the birds began to change. They became less nervous, though sometimes when someone approached the enclosure they flew straight up in panic as quail sometimes do and bloodied themselves on the wire.

A few months later I built a much larger enclosure for a mother hen and her chicks to keep them out from under trampling feet or from wandering through the chain link fence of the enclosure into the hungry mouths of six dogs who watched the birds most of the day like Trump fans with a case of beer and an NFL game on the TV. The quail were moved in with the chickens as an afterthought. With nine square meters of sheltered space, the three little birds underwent further transformations, becoming much calmer, never flying in panic and allowing themselves to be approached and picked up with relative ease. They also exhibited a taste for quite a variety of foods, including fresh fruit and weeds such as purslane. Most astonishing of all, they began to lay eggs regularly in an overturned flower pot with a bit of dried grass. Nowhere else. All the reading I've done on quail on the Internet tells me that quail are stupid birds who drop their eggs anywhere, do not maintain nests and seem to have no maternal instincts whatsoever. I am beginning to doubt all that.

At various times in my life I have heard many statements made about the cultural proclivities of various ethnic minorities, but these assertions usually fail to take into account historical background and circumstances of poverty and prejudice, choosing instead to blame victims. In cases where I have seen people of this background offered the same opportunities I take for granted or far less than my cultural privilege has afforded me, I cannot see any result which would offer itself for objective negative commentary.

There are a lot of ignorant assumptions and assertions made about the class of digital sharecroppers known as translators. Some of the most offensive ones are heard from the linguistic equivalents of plantation owners, some of whom have long years of caring for these hapless, technophobic, unreliable "autistics" who simply could not survive without the patriarchal hand of their agencies.

Fortunately, technology continues to evolve in ways which make it ever easier to take up the White Man's Burden and extract value from these finicky, "artistic" human translation resources. The best of breed in this sense could make old King Leopold II envious with the civilization they have brought to us savage translators.

On many occasions, I have advocated the use of various server-based or shared online solutions for coordinating translation work with others. And I will continue to do so wherever that makes sense to me. However, I have observed a number of persistent, dangerous assumptions and practices which reduce or even eliminate the value to be obtained from this approach. It's not a matter of the platform per se, usually, unless it is Across to bear, but too often over the past decade, I have seen how the acquisition of a translation memory management server such as memoQ or memSource or a project management tool such as Plunet, OTM or home-rolled solutions has led to a serious deterioration in the business practices of an enterprise as they put their faith more in technology and less in the people who remain as cogs in their business engines.

As the emphasis has shifted more and more to technologies remote to the sharecroppers actually working the fields of words, a naive belief has established itself as the firm faith of many otherwise rational persons. This is expressed in many ways –  sometimes as a pronouncement that browser-based tools are truly the future of translation, often in the dubious, self-serving utterances of bottom-feeding brokers and tool vendors who proclaim the primacy of machine pseudo-translation while hiding behind the fig leaf argument that we need such things to master the mass of data now being generated. It is fortunate for them perhaps that this leaf is opaque enough to hide their true linguistic and intellectual potency from public view.

A related error which I see too often is the failure to distinguish between the convenience of process and project managers and the optimum environment for translating professionals. I don't think this mistake is malicious or deliberately ignores the real factors for optimal work as a wordworker; it's simply damned hard much of the time to understand the needs of someone in a different role. I could say the same for translators not understanding the needs of project managers or even translation consumers, and in fact I often do.

So indeed, the best tool for a project manager or a corporate process coordinator might not be the best tool for the results these people desire from their translators. Fortunately, this is usually a situation where, with a little understanding and testing, both sides can win and work with what works best for them. The mechanism to achieve this is often referred to by the nerdy term "interoperability".

Riccardo Schiaffino, an Italian translator and team leader based in the US, recently published a few articles (trouble and memoQ interoperability) about memSource, a cloud-based tool whose popularity among translation agencies and corporate or public entities with large translation needs continues to grow. High-octane translators like Riccardo and others have trouble sometimes understanding why these parties would choose a tool with such great technical limitations compared to some market leaders like SDL or memoQ, but the simplicity of getting started and the convenience of infrastructure managed elsewhere on secure, high-performance servers with sufficient capacity available for peak use is an understandably powerful draw.

And the support team of memSource and the tool's developers are noted for their competence and responsiveness, which is equal in weight to a fat basket full of sexy technical options.

So I will not argue against the use of memSource by agencies and organizational users whose technical needs are not particularly complex and who do not have concerns about a tool almost entirely dependent on reliable, high bandwidth internet connectivity at all times to fulfill its key promises. In fact, it's a good and easy place to start for many, perhaps more so than the rival memoQ Cloud at present, which suffers sometimes from limited capacities (at the same data center used by memSource and others!) during peak use. Unlike the barbed-wire, unstable and unfriendly solution Across, which has achieved some popularity in its native Germany and elsewhere through sales tactics relying on fear, uncertainty and doubt regarding illusionary or delusional data security, memSource works, works well, and the data are portable elsewhere if a company or individual makes another choice some day.

But damn... it's just not very efficient for professional work, especially not for those of us who have amassed considerable personal work resources and become habituated to other tools like SDL Trados Studio, Déjà Vu or memoQ as a carpenter is to his time- and work-tested favorite tools. Trading one of these for the memSource desktop editor or, God forbid, the browser-based translation interface feels worse than being forced to do carpentry with cheap Chinese tools cast from dodgy pot metal. Riccardo mentions a few of the disadvantages, and I could fill pages with a catalog of others. But compared to some other primitive tools, it's not so bad, and for those with little or no good experience with leading translation environment tools, it may seem perfectly OK. You don't miss a myriad of filtering options to edit text or sophisticated QA features if you are still amazed that a "translation memory" can spit out a sentence you translated once-upon-a-time if something similar shows up six months later.

And as mentioned, memSource - or some other tool - may indeed be the best solution on the project management side. So what's a professional translator to do if an interesting project is on offer but that platform is unavoidable? Riccardo's tips on how to process the MXLIFF files from memSource in memoQ offer part of a good solution which would work almost equally well in most other leading tools these days. One additional bit is needed in the memoQ Regex Tagger filter to handle the other tag type (dual curly brackets) in memSource, but otherwise the advice given will allow safe translation of the memSource files in other environments. I can even change the segmentation in memoQ if, as usual, the project manager has failed to create appropriate segmentation rules in memSource to account for some of the odd stuff one often sees in legal or financial texts, and this does not damage or change the segmentation seen later when the working file is returned to memSource.
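Just to make the idea of the extra tagging pattern concrete, here is a rough sketch in Python - not the actual memoQ Regex Tagger configuration, and the tag forms shown ({1} and {{2}}) are only my assumptions, which you should check against your own MXLIFF files before relying on them:

```python
import re

# Hypothetical memSource-style inline tags for illustration:
# single curly ({1}) and double curly ({{2}}) placeholders.
SINGLE_CURLY = re.compile(r"\{\d+\}")
DOUBLE_CURLY = re.compile(r"\{\{\d+\}\}")

def find_tags(segment: str) -> list[str]:
    """Return all tag-like tokens so they can be protected as inline tags."""
    # Match double-curly tags first so {{2}} is not also picked up as {2}.
    tags = DOUBLE_CURLY.findall(segment)
    stripped = DOUBLE_CURLY.sub(" ", segment)
    tags += SINGLE_CURLY.findall(stripped)
    return tags

if __name__ == "__main__":
    sample = "Press {1} to continue; see {{2}} for details."
    print(find_tags(sample))  # ['{{2}}', '{1}']
```

In the Regex Tagger itself, the practical equivalent is simply adding the double-curly expression alongside the single-curly pattern from Riccardo's tip so that both placeholder types are converted to protected tags before translation.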

Even concerns about the "lack" of access to shared online resources in memSource if an MXLIFF is translated elsewhere are easily addressed. A few useful things for this include:

  • pretranslation of the memSource files to put matches into the target before transferring to other environments,
  • leaving the browser-based or desktop editor for memSource open in the background for online term base or TM look-ups, and
  • occasionally exporting and synchronizing the MXLIFF in memSource to make the data available to team members working in parallel on a large project - this takes just a minute or two and allows one as much time as needed for polishing text in the other environment.

The last tip is particularly helpful to calm the nerves of project managers who are like mother hens on a nest of eggs which they fear might in fact be hand grenades and who panic if they don't see "progress" on their project servers days before anything is due. One can show them "progress" every twenty minutes or so without much ado if so inclined.

I am past the point where I recommend any translation memory management server in particular for agency and corporate processes. There are advantages to each (except Across, where these are actually hallucinations) and disadvantages, and where I see real problems, it is seldom due to the choice of platform but rather the lack of training and process knowledge by those responsible for the processes. The bright and shining prospects of a translation server are easily sold with a slick tongue, but without an honest analysis and recommendation of needs for initial and ongoing staff training these too often end up being bright and shining lies. I think very often of a favorite German customer who invested heavily in such a system four or five years ago and has not managed one single successful project with the system in all that time. This makes me sick to think of the waste of resources and possibilities.

So on the project management and process ownership side, memSource may be a great choice. Certainly some of my clients think so, and the improvements in their business often back this belief up. And for those who work with gangs of indigent, migrant or sharecropping translators whose marginal existences make the investment in professional resources like SDL Trados Studio or memoQ seem difficult or undesirable, it may be all that is needed by anyone.

The good news for those who depend on the efficiency of a favored tool, however, is that with a few simple steps, we need not compromise and can get full value from our better desktop tools while supporting interesting projects based in memSource. So each side of the translation project can work with what works best for them, without loss, compromise, risk or recriminations.

And the translating quail who start out in a dark box with a stunting lack of possibilities can look forward to the real possibilities of work liberation in a larger environment richer in healthy possibilities and rewards.
 

Aug 1, 2016

Corpus Linguistics and AntConc in the 2016 US Presidential Contest

Professor Laurence Anthony's AntConc concordancing software remains my favorite tool for analyzing the word content of text collections for my professional translation purposes. Although a free tool, it offers some important functionality beyond what I can get from the integrated term extraction and concordancing means in my translation environment tools, particularly SDL MultiTerm Extract and memoQ. AntConc is my first recommendation to my friends who teach at university and want to introduce their students to practical corpus linguistics and to my clients in industry who need to produce useful glossaries which cover the most frequently discussed things in their range of products and services.

That is not to say that its features are the most wide-ranging, but in addition to dead-simple incorporation of stopword lists (still a problem for most memoQ users), AntConc (like many other academic concordancers) offers excellent facilities for studying collocations, those words which occur together in important contexts. For years I have begged that this useful feature be added to the tools for professional translators, because it is a great aid in studying the proper language of a particular field or subject matter, and although the memoQ concordance can in fact search for multiple terms at once so that one forms a visual impression of their co-occurrence in text, it lacks the simple precision of AntConc for specifying the proximity range of the words found together in a sentence.
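For anyone curious about what that proximity range actually means in practice, here is a small illustrative sketch in Python - not AntConc itself, and the corpus file name, node word and tiny stopword list are invented for the example - of the basic idea behind a collocation search: count the words that appear within N tokens of the word you are studying, minus the noise words.

```python
from collections import Counter
import re

def collocates(text, node, window=5, stopwords=frozenset()):
    """Count words occurring within `window` tokens of each occurrence of `node`."""
    tokens = re.findall(r"[\w'-]+", text.lower())
    node = node.lower()
    counts = Counter()
    for i, tok in enumerate(tokens):
        if tok != node:
            continue
        neighbourhood = tokens[max(0, i - window): i] + tokens[i + 1: i + 1 + window]
        counts.update(w for w in neighbourhood if w not in stopwords and w != node)
    return counts

if __name__ == "__main__":
    corpus = open("corpus.txt", encoding="utf-8").read()   # any plain-text corpus
    stop = {"the", "a", "an", "of", "and", "to", "in"}     # minimal stopword list for illustration
    for word, freq in collocates(corpus, "contract", window=4, stopwords=stop).most_common(20):
        print(f"{freq:5d}  {word}")
```

AntConc does all of this (and much more) interactively, of course; the point is simply that the adjustable window is the whole trick, and it is exactly what the concordances in most translation environment tools still lack.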

In one form or another, tools for analyzing the frequency of words and the contexts in which they occur have been a part of my life for a very long time. And yet it did not occur to me to use them as a means of studying the many words that are part of the many political and social debates taking place in the countries that concern me. One can get a quick impression with fun word cloud pictures (such as those in this post, created from the convention speeches of The Orange One and The Infamous HRC using a free online tool). But AntConc lets you go deeper and achieve a greater understanding of how language is used to influence our thoughts and discussions.

Katelyn Guichelaar and Kristin Du Mez have done just that in an interesting article titled "Donald Trump and Hillary Clinton, By Their Words", which offers some revealing insights into the psychology and public postures of the two candidates. No spoilers here – go read the article and enjoy. Then think about the professionally and personally relevant ways in which you might use the practical tools of corpus linguistics.


Oct 29, 2015

Revised target document workflows in SDL Trados Studio 2015 vs. memoQ

Yesterday I had an unexpected opportunity to see the new SDL implementation of the feature Kilgray introduced to memoQ two years ago, in which a revised target document (or some portion thereof) is re-imported to a translation project for purposes of updating the translation memory. Since my involvement with the concept and specification of this feature in memoQ, I have been expecting the competition to follow suit, since in principle at least, this is a useful feature which nearly everyone can use in several common scenarios.

The way in which SDL Trados Studio 2015 handles project updates with edited target documents appears very different from what memoQ does, so one might easily think that the functions themselves are different. And this is one of those rare instances where I have to give SDL credit for a smoother, more streamlined procedure less likely to cause confusion and frustration for users.

The positive difference starts with the choice of terminology in the command interface. SDL refers to a "target document" rather than a "monolingual document" - I think this is less ambiguous and potentially confusing to an average user. The fact that these updates are perhaps not supported for bilingual formats in memoQ is one of those nerdy details which will not interest most people, especially given that there is a stable, established update process for project updates using bilingual documents.

When the reviewed file to import is selected, the user has the option to go to the aligner and correct possible matching errors for the revised target document (desirable if, for example, edits might cause the segmentation to change), but the default is to go straight back to the working window for translation and editing, with the changes already shown in tracked changes mode. Very nice.

In memoQ, the trip through the aligner is mandatory, but for simple changes, this is usually not needed, so I like the fact that Studio 2015 offers this as an option. And in memoQ, several extra steps are needed to show the changes in tracked mode (redlined markup), with confusing traps in the interface along the way. In a recent blog post, I described how Kilgray's emphasis on commands and terms relevant only to server projects, with the usual tracked changes options a translator would want buried under the "Custom" command, causes many users to conclude that tracked changes simply do not work in memoQ, which is not true at all. You just have to run the evil interface gauntlet to get there.

Does this mean I think everyone should dump memoQ and start using SDL Trados Studio 2015? Heck no. There are many processes involved in successful translation work, and switching from one tool to another based on a single feature or just a few features is not particularly clever, no matter which way you go. (Except for "away from Across", which is always a good idea.) I am very pleased and encouraged by SDL's different approach to this feature, because it shows once again the importance of competition and different approaches to a problem. Ultimately, ergonomics and user experience should determine the further development of a feature. In my opinion, memoQ usually has the edge here, but not always, and this is a case where improvements to this innovative feature which first appeared in memoQ could very well be inspired by SDL.

Mar 18, 2015

memoQ&A in Porto - good people, great bagels!


Last night from 6:00 to 9:30 I enjoyed a "memoQ&A Evening" at the Porto Bagel Café as a reward for surviving the long bus ride to Porto/Gaia from Évora to attend the JABA Partner Summit. About 25 local colleagues attended to hear my not-as-short-as-promised presentation and discuss approaches to memoQ and other translation technologies as our working tools. The evening was part of the Translators in Residence initiative and a good start to my second visit to the area after my whirlwind tour last month to investigate venues for teaching events. Many thanks to the sponsors, the International Association of Professional Translators and Interpreters and Chip7 of Évora, for providing the funding and tools (an excellent LCD projector - thank you, Carlos!) to do this.



I very much appreciate IAPTI's commitment to the professional education and continuing development of my good colleagues in Portugal, particularly in difficult economic times when many find it difficult to attend translators' events in faraway places. The evening was free for all attendees, who only had to pay for whatever they drank (great coffee - I had my usual galão) and ate (the best bagels in Portugal!).

After an initial hour of snacks, coffee and chat, the evening began with a discussion of the game-changing implications of speech recognition technologies for our working lives. Not only is it now possible for colleagues to use high-quality speech recognition on desktop computers and laptops in languages such as Hungarian and Portuguese, which are not currently supported by Dragon NaturallySpeaking (using, for example, the integrated recognition tools in the Mac Yosemite OS, as demonstrated with SDL Trados Studio and memoQ in Lisbon the day that SDL conquered Portuguese translation), smartphones are part of the game now too. Since picking up an older iPhone model (4S) for a few hundred euros about a month ago, I have had excellent results testing it with English, German, Russian and Portuguese and e-mailing texts to myself with just a few taps on the phone's screen. Once transferred as an e-mail, the text is then aligned in a CAT tool such as memoQ and subjected to tagging, QA and other procedures of the usual virtual translation working environments.

The use of memoQ and other CAT tools for single-language original authoring and text revision was also discussed. This flexible workflow extends the relevance of translation environment tools well beyond the usual limits within which translators and translation companies live and operate and offers interesting prospects for collaboration and re-use of creative resources. This topic will also be covered next week in a lecture and workshop at Universidade de Évora and in an eCPD webinar on June 2, 2015.

Interoperability is another important topic for translators; I discussed different ways in which I use SDL Trados Studio and other tools to prepare projects to work in memoQ and vice versa, as well as my highly profitable use of SDL MultiTerm to enhance customer loyalty and my professional image with this terminology management system's excellent output management features.

Other tips and tricks in the memoQ&A included the untapped potential of LiveDocs, tracked changes and row histories in memoQ, dealing with embedded objects, graphics and transcription, PDF 3-ways and new tricks for nasty and/or illegible image PDFs, versioning and a concept for transforming translation memory concordancing into something much, much more useful and less prone to errors in editing and translation.

Copies of the slides from the evening's presentation are available here. It is, however, merely a palimpsest of the evening.

Many thanks also to colleague and translation tools teacher Felix do Carmo for kindly chauffeuring me around town and for the interesting tour of the training and production facilities at his company, TIPS.


Feb 21, 2015

CAT tools re-imagined - an approach to authoring and editing


I am often asked about the monolingual editing workflows I have used for some 15 years now to improve texts which were written originally in English, not created by translation from another language. And I have discussed various corpus linguistics approaches, such as using corpora to learn the language of a new specialty, or the NIFTY method often presented by colleague Juliette Scott.

However, on a recent blitz tour of northern Portugal to test the fuel performance of the diesel wheels which may take me to the BP15 and memoQfest conferences in Zagreb and Budapest respectively later this year, I stopped off in Vila Real to meet a couple of veterinarians, one of whom is also a translator. During a lunch chat with typically excellent Portuguese cuisine, the subject of corpus research as an aid for authoring a review paper came up. I began to explain my (not so unusual) methods of editing an existing document when I was asked how the tools of translation technology might be applied to authoring original content.

The other translator at the table said, "It's a shame that I cannot use my translation memories to look things up while I write", and I replied that of course he could do this, for example with the memoQ TM Search Tool or similar solutions from other providers. And then he said, "And what about my term bases and LiveDocs corpora?", and I said I would sleep on it and get back to him. In the days that followed, other friends (coincidentally also veterinarians) asked my advice about editing the English of the Ph.D. theses and other works they will author in English as non-native speakers of that language. One of them noted that it would be "nice" if she could refer to corrections made by various persons and compare them more easily. I said I would sleep on that one too.

A few days after that the pain in my hands and feet from repetitive strain injuries and arthritis was unbearable, aggravated by a rope burn accident while stopping an attack on sheep by my over-eager hunting dog and by driving over 1000 km in a day. I doubled down on the pain meds, made a big jug of toxically potent sangria and otherwise ensured that I was comfortably numb and could enjoy a night of solid sleep.

It was not meant to be. Two hours later I woke up, stone sober, with a song in my head and the solution to the problem of my Portuguese friends writing in English and Tiago wanting to author his work in memoQ for the convenience of using its filters to review content. Since then the concept has continued to evolve and improve as others suggest ways of accommodating their writing or language learning needs.


After about a week of testing I scheduled one of my "huddle" presentation classes, an intimate TeamViewer training session to discuss the approach and elicit new ideas for adapting it better to the needs of monolingual authors. The recording of that session is available for download by clicking on the image of the title slide at the top of this post. (The free TeamViewer software is needed to watch the TVS file downloaded; double-click it, and the 67-minute lecture and Q&A will play.)

I'm currently building Moodle courses which provide more details and templates for this approach to authoring and editing, and it will be incorporated in parts of the many talks and workshops planned this year.

I am aware that SDL killed their authoring product, the Author Assistant, and that Acrolinx offers interesting tools in this area, as do others. But I'm usually hesitant to recommend commercial tools in an academic environment, because their often rapid pace of development (such as we see with memoQ) can play serious havoc with teaching plans and threaten the stability of an instructional program, which is usually best focused on concepts and not on fast-changing details. So I actually started out my work and testing of this idea using the Open Source tool OmegaT, the features of which are more limited but also more stable in most cases than the commercial solutions from SDL, Kilgray and others. But as I worked, I noticed that my greater familiarity with memoQ's features made it an advantageous platform for developing an approach, which in principle works with almost every translation environment tool.

Part of my motivation in creating this presentation was to encourage improvements in the transcription features available in some translation environments. But the more I work with this idea, the more possibilities I see for extending the reach of translation technology into source text authoring and making all the resources needed for help available in better ways. I hope that you may see some possibilities for your own work or learning needs and can contribute these to the discussion.

Jan 25, 2015

SDL conquers translation at Universidade Nova in Lisbon


The day started inauspiciously for me, with a TomTom navigation system determined to keep me from the day planned at Lisbon's New University to discuss SDL Trados Studio and its place in the translation technology ecosphere. When the fourth GPS location almost proved a charm, and I hiked the last kilometer on an arthritic foot, swearing furiously that this was my last visit to the Big City, I found the lecture hall at last, an hour and a half late, and managed to arrive just after Paul Filkin's presentation of the SDL OpenExchange, an underused, but rather interesting and helpful resource center for plug-ins and other resources for SDL Trados Studio victims to bridge the gap between its out-of-the-box configurations and what particular users or workflows might require. There are a lot of good things to be found there - the memoQ XLIFF definition and Glossary Converter are my particular favorites. Paul talked about many interesting things, I was told, and there is even a plug-in created for SDL Trados Studio by a major governmental organization which has functionality much like memoQ's LiveDocs (discussed afterward but not shown in the talk I missed, however). In the course of the day, Paul also disclosed an exciting new feature for SDL Trados Studio which many memoQ users have been missing in the latest version, memoQ 2014 R2 (see the video at the end).

I arrived just in time for the highlight of the day, the demonstration of Portuguese speech recognition by David Hardisty and two of his master's students, Isabel Rocha and Joana Bernardo. Speech recognition is perhaps one of the most interesting, useful and exciting technologies applied to translation today, but its application is limited to the languages available, which are not so many in the popular Dragon NaturallySpeaking software from Nuance. Portuguese is curiously absent from the current offerings despite its far more important role in the world than minor languages like German or French.

Professor Hardisty led off with an overview of the equipment and software used and recommended (slides available here); the solution for Portuguese uses the integrated voice recognition features of the Macintosh operating system. With Parallels Desktop 10 for Mac it can be used for Windows applications such as SDL Trados Studio and memoQ as well. Nuance provides the voice recognition technology to Apple, and Brazilian and European Portuguese are among the languages provided to Apple which are not part of Nuance's commercial products for consumers (Dragon NaturallySpeaking and Dragon Dictate).

Information from the Apple web site states that
Dictation lets you talk where you would type — and it now works in over 40 languages. So you can reply to an email, search the web or write a report using just your voice. Navigate to any text field, activate Dictation, then say what you want to write. Dictation converts your words into text. OS X Yosemite also adds more than 50 editing and formatting commands to Dictation. So you can turn on Dictation and tell your Mac to bold a paragraph, delete a sentence or replace a word. You can also use Automator workflows to create your own Dictation commands.
Portuguese was among the languages added with OS X Yosemite.

Ms. Bernardo began her demonstration by showing her typing speed - somewhat less than optimal due to the effects of disability from cerebral palsy. I was told that this had led to some difficulties during a professional internship, where her typing speed was not sufficient to keep up with the expectations for translation output in the company. However, I saw for myself how the integrated speech recognition features enable her to lay down text in a word processor or translation environment tool as quickly as or faster than most of us can type. In Portuguese, a language I had thought not available for work by my colleagues translating into that language.

A week before I had visited Professor Hardisty's evening class, where after my lecture on interoperability for CAT tools, Ms. Rocha had shown me how she works with Portuguese speech recognition as I do, in "mixed mode" using a fluid work style of dictation, typing, and pointing technology. She said that her own work is not much faster than when she types, but that the physical and mental strain of the work is far less than when she types and the quality of her translation tends to be better, because she is more focused on the text. This greater concentration on words, meaning and good communication matches my own experience, but I don't necessarily believe her about the speed. I don't think she has actually measured her throughput. My observation after the evening class and again at the event with SDL was that she works as fast as I do with dictation, and when I have a need for speed that can go to triple my typing rate or more per hour.

In any case, I am very excited that speech recognition is now available to a wider circle of professionals, and with integrated dictation features in the upcoming Windows 10 (a free upgrade for Windows 8 users), I expect this situation will only improve. I cannot emphasize enough the importance of this technology for improving the ergonomics of our work. It's more than just leveling the field for gifted colleagues like Joana Bernardo, who can now bring to bear her linguistic skills and subject knowledge at a working speed on par with other professionals - or faster - but for someone like me who often works with pain and numbness in the hands from strain injuries, or all the rest of you banging away happily on keyboards, with an addiction to pain meds in your future perhaps, speech recognition offers a better future. Some are perhaps put off by the unhelpful, boastful emphasis of others on high output, but anyone familiar with speech recognition and human-assisted machine pseudo-translation (HAMPsTr) editing knows that dictation is faster and better than any process involving human revision of computer-generated linguistic sausage. It's really about working better and doing better work with better personal health. It's not about silly "Hendzel Units".

It has been pointed out a few times that Mac dictation or other speech recognition implementations lack the full range of command features found in an application like Dragon NaturallySpeaking. That's really irrelevant. The most efficient speech recognition users I know do not use a lot of voice-controlled commands for menu options, etc. I don't bother with that stuff at all but work instead very comfortably with a mix of voice, keyboard and mouse as I learned from a colleague who can knock off over 8,000 words of top-quality translation per short, restful day before taking the afternoon off to play with her cats or go shopping and spend some of that 6-figure translation income that she had even before learning to charge better rates.

Professor Hardisty also gave me a useful surprise in his talk - a well-articulated suggestion for a much more productive way to integrate machine translation in translation workflows:
David Hardisty's "pre-editing" approach for MpT output
The approach he suggested is actually one of the techniques I use with multiple TM matches in the working translation grid where I dictate - look at a match or TM suggestion displayed in a second pane, cherry-pick any useful phrases or sentence fragments and simply speak them along with selected term suggestions from glossaries, etc., and do it right the first time, faster than post-editing. This does work, much better than the sort of nonsense pushed too often into university curricula now by the greedy technotwits and Linguistic Sausage Purveyors, who in their desire for better margins and general disrespect of human service providers and employees fail to understand that good people, well-treated and empowered with the right tools, will beat the software and hardware of "MT" and its hamsterized process extensions every time. Hardisty's approach is the most credible suggestion I have seen yet for a possibly useful application of machine pseudo-translation in good work. Don't dump the MpT sewage directly into the target text stream like so many do as they inevitably and ignorantly diminish the level of achievable output quality.

After the lunch break, Paul Filkin gave an excellent Q&A clinic on Trados Studio features, showing solutions for challenges faced by users at all levels. It's always a pleasure to see him bring his encyclopedic knowledge of that difficult environment to bear in poised, useful ways to make it almost seem easy to work with the tools. I've sent many people to Paul and his team for help over the years, and none have been disappointed according to the feedback I have heard. The Trados Studio "clinic" at Universidade Nova reminded me why.

Finally, in the last hour of the day, I presented my perspective on how the SDL Trados Studio suite can integrate usefully in teamwork involving colleagues and customers with other technology, and how over the years, as a user of Déjà Vu and later memoQ as my primary tool, the Trados suite has often made my work easier and significantly improved my earnings, for example with the excellent output management options for terminology in SDL Trados MultiTerm.


I spoke about the different levels of information exchange in interoperable translation workflows. I have done so often in the past from a memoQ perspective, but on this day I took the SDL Trados angle and showed very specifically, using screenshots from the latest build of SDL Trados Studio 2014, how this software can integrate beautifully and reliably as the hub or a spoke in the wheel of work collaboration.

The examples I presented involved specifics of interoperability with memoQ or OmegaT, but they work with any good, professional tool. (Please note that Across is neither good nor a professional translation tool.) Those present also left with interoperability knowledge that no others in the field of translation have as far as I know - a simple way to access all the data in a memoQ Handoff package for translation in other environments like SDL Trados Studio, including how to move bilingual LiveDocs content easily into the other tool's translation memory.


Working in a single translation environment for actual translation is ergonomically critical to productivity and full focus on producing good content of the best linguistic character and subject presentation without the time- and quality-killing distractions of "CAT hopping", switching between environments such as SDL Trados Studio, memoQ, Wordfast, memSource, etc. Busy translators who learn the principles of interoperability and how to move the work in and out of their sole translation tool (using competitive tools for other tasks at which they may excel, such as preparing certain project types, extracting or outputting terminology, etc.) will very likely see a bigger increase in earnings than they could achieve through price increases in the next decade. On those rare occasions where it might be desirable to use a different tool or to cope with the stress of change from one tool to another, harmonization of customizable features such as keyboard shortcuts can be very helpful.

I ended my talk with a demonstration of how translation files (SDLXLIFF) and project packages (SDLPPX) from SDL Trados Studio can be brought easily into memoQ for translation in that ergonomic environment, with all the TMs and terminology resources, returning exactly the content required in an SDLRPX file. Throughout the presentation there was some discussion of where SDL and its competitors can and should strive to go beyond the current and occasionally dubious levels of "compatibility" for even better collaboration between professionals and customers in the future.

One of the attendees, Steve Dyson, also published an interesting summary of the day on his blog.


Dec 23, 2014

SDL Trados Studio in Lisbon – January 22, 2015 – possibilities, experience and expectations

SDL TRADOS - PAST EXPERIENCES,
PRESENT POSSIBILITIES AND FUTURE EXPECTATIONS
Thursday 22 January 2015, Auditório 2, piso 3, Torre B, FCSH

The Faculty of Social and Human Sciences of the New University of Lisbon is pleased to invite you to a day of talks on SDL Trados, with special guest speaker Paul Filkin, Director of Client Communities for SDL Language Solutions.

Event schedule (subject to updates)
10:00 Welcome 
10:15 – 11:30 “SDL Open Exchange” with Paul Filkin 
11:30 – 13:00 “Voice Recognition (Portuguese and English) and CAT Tools for translation and PEMT” with David Hardisty, Isabel Rocha and Joana Bernardo. This session will show how to use voice recognition to post-edit machine translation, how voice recognition is available in Portuguese, and how it can be used to cope with physical disability (such as cerebral palsy). 
LUNCH !!!
14:00 – 15:40 “SDL Trados Clinic” with Paul Filkin. In this session Paul will invite the audience to raise common issues they have with Trados, and cover other common issues and solutions. 
16:00 – 17:00 “SDL Trados and Interoperability with other CAT Tools” with Kevin Lossner. This session will present various ways to use SDL Trados to work successfully with those who have other translation environment tools such as memoQ, OmegaT, etc.

Presenters

Paul Filkin has worked for SDL Language Solutions since 2006. His main focus is evangelizing and helping users of SDL technology get the most out of their investment, and he is often found on social media providing advice wherever needed. His blog “Multifarious” describes some of the practical challenges for translators and translation companies and how to resolve these with SDL technology in the mix.

Kevin Lossner has a healthy skepticism of translation technologies where it is not clear if they serve their users or the other way around. His blog “Translation Tribulations” discusses SDL Trados Studio, memoQ and more and is the ultimate authority on chocolate chip cookies in Évora, Portugal.

David Hardisty has taught Translation Tools at FCSH/UNL to undergraduate and postgraduate students since the inception of the Translation Programmes at FCSH. He has also worked with Technology in the teaching of ELT and in the last 25 years has co-authored five books published by Oxford University Press.

Isabel Rocha is a Spanish to Portuguese translator with thirty years of experience. She has completed the curriculum component of the Masters in Translation at Lisbon University and is preparing to write her thesis.

Joana Bernardo is a Masters Student at FCSH/UNL. She has also studied Languages, Literatures and Cultures at the same university. She became interested in Translation after a summer internship at a subtitling company.

All professionals and students with an interest in modern translation technologies and working methods are welcome to attend this free event. For more information, please contact: David Hardisty [david1610.dh (at) gmail.com]  or just surprise us with your smiling face at the door.

Where the heck is this?


Click this graphic to see the location using Google Maps!


Jun 14, 2014

Be Strong. Buy SDL Trados Studio today!


I've been testing the new memoQ 2014, thinking of all the great features I can and probably should describe, even promised to describe when Kilgray kindly gave me and other bloggers early access to a pre-release version. But, alas, I have a soft heart and cannot help but think of the despair SDL stockholders and employees must feel, the anguish of Studio users when the reality of their plight sinks in and they read the latest news about that apostate tool, memoQ, which just works most of the time instead of making you work for it.

The quote above from a long-suffering member of the SDL Trados beta test team is indicative of the emotions and experience of many veteran Trados users, and that colleague and so many others are fleeing to higher ground, hoping to find a place on Kilgray's software ark before the floods of Big, Dirty Data wash over the land and drown the #sdlinnovators as they desperately tease performance from their "vertical MT". And shocking revelations of translation practices in Maidenhead, UK, as reported by a neighbor and reformed SDL employee with a direct view of the SDL parking lot there, have led many to fear the Days of Revelation for memoQ 2014.

And I think of the SDL consultants and translation IT consultants in Germany who were only following orders when they helped their customers implement Across, and the despair they must feel at the thought of the many memoQ trials now being conducted in their country.

I think of all the things I could show and tell about the new project templates and automation in memoQ 2014, and all the hearts and minds that will break at the realization, the weeping and gnashing of teeth over wasted hours, years... and lives in the shadowed margins of struggling existence, gnawing on classic Wordfast crusts.

Workflow interoperability and general ease of use from Kilgray are threatening diversity in translation technology as the flocks of sheeple move to greener software pastures and eat their fill of easier language service provision and take a greater share of profits better devoted to those kinder, gentler providers of linguistic sausage.

So please, gentle people, do your part, show mercy and invest in endangered translation tools, which seek only to take us down that primrose path, paved with good intentions, to relieve the undue burdens of buyers who sacrifice far too much of their capital on the altar of survival for undeserving freelance translators. Support the Smartling crusade, offer your Maidenhead to SDL, keep your head in the XTM cloud as you practice your Fluency for languages, enjoy that nagging sense of Dejà Vu, source your memory and surf the storm-TAUS'd Big Wave and Google in awe at automatic translation on the active commodity market.

Friedrich Nietzsche wrote: "Was dich nicht umbringt macht dich nur stärker" ("what does not kill you only makes you stronger"). Be strong. Don't take the easy way out of Translation Tribulations with memoQ 2014. Get your SDL Trados Studio license here instead.



Jun 4, 2014

OmegaT’s Growing Place in the Language Services Industry

Guest post by John Moran

As both a translator and a software developer, I have much respect for the sophistication of the well-known proprietary standalone CAT tools like memoQ, Trados, DejaVu and Wordfast. I started with Trados 2.0 and have seen it evolve over the years. To greater and lesser extents these software publishers do a reasonable job at remaining interoperable and innovating on behalf of their main customers - us translators. Kudos in particular to Kilgray for using interoperability standards to topple the once mighty Trados from its monopolistic throne and forcing SDL to improve their famously shoddy customer support. Rotten tomatoes to Across for being a non-interoperable island and having a CAT tool that is unpopular with most (but curiously not all) of the freelance translators I work with in Transpiral.

But this piece is about OmegaT. Unlike some of the other participants in the OmegaT project, I became involved with OmegaT for purely selfish reasons. I am currently in the hopefully final stage of a Ph.D. in computer science with an Irish research institute called the Centre for Next Generation Localisation (www.cngl.ie). I wanted to gather activity data from translators working in a CAT tool for my research in a manner similar to a translation process research tool called TransLog. My first thought was to do this in Trados as that was the tool I knew best as a translator, but Trados’ Application Programming Interface did not let me communicate with the editor.

Thus, I was forced to look for an open-source CAT tool. After looking at a few alternatives like the excellent Virtaal editor and a really buggy Japanese one called Benten I decided on OmegaT. 

Aside from the fact that it was programmed in Java, a language I have worked with for about ten years as a freelance programmer, it had most of the features I was used to working with in Trados. I felt it must be reliable if translators are downloading it 4000 times every month. That was in 2010. Four years later that number is about to reach 10,000. Even if most of those downloads are updates, it should be a worrying trend for the proprietary CAT tools. Considering SDL report having 135,000 paid Trados licenses in total, that is a significant number.

Having downloaded the code, I added a logging feature to it called instrumentation (the “i” in iOmegaT) and programmed a small replayer prototype. Imagine pressing a record button in Trados and later replaying the mechanical act of crafting the translation as a video, character-by-character or segment-by-segment, and you will get the picture. So far we use the XML it generates mainly to measure the impact of machine translation on translation speed relative to not having MT. Funnily enough, when I developed it I assumed it would show me that MT was bunk. I was wrong. It can aid productivity, and my bias was caused by the fact that I had never worked with useful trained MT. My dreams of standing ovations at translator association meetings turned to dust.
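For readers who wonder what such instrumentation amounts to under the hood, here is a deliberately simplified sketch - in Python rather than the Java of the actual iOmegaT code, with invented element names and timings - of the kind of time-stamped, per-segment record that makes speed comparisons with and without MT possible.

```python
import time
import xml.etree.ElementTree as ET

class SegmentLogger:
    """Minimal sketch of CAT-tool instrumentation: record when editing of a
    segment starts and ends, plus whether an MT suggestion was shown, so that
    words per hour with and without MT can be compared afterwards."""

    def __init__(self):
        self.root = ET.Element("session", started=str(int(time.time())))

    def log_segment(self, seg_id, source, target, t_start, t_end, mt_shown):
        ET.SubElement(self.root, "segment", {
            "id": seg_id,
            "words": str(len(source.split())),
            "seconds": f"{t_end - t_start:.1f}",
            "mt": "yes" if mt_shown else "no",
        }).text = target

    def save(self, path):
        ET.ElementTree(self.root).write(path, encoding="utf-8", xml_declaration=True)

# Usage: the editor would supply real timings; these values are invented.
log = SegmentLogger()
log.log_segment("s1", "Das ist ein Test.", "This is a test.", 0.0, 14.2, mt_shown=True)
log.log_segment("s2", "Noch ein Satz.", "Another sentence.", 14.2, 41.7, mt_shown=False)
log.save("session-log.xml")
```

Replaying is then just a matter of reading those events back in order, and the analysis side divides words by hours, grouped by whether MT was shown for the segment.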

If I can’t beat MT I might as well join it. About a year and a half ago, using a government research commercialization feasibility grant, I was joined by my friend Christian Saam on the iOmegaT project. We studied computational linguistics in Ireland and Germany on opposite sides of an Erasmus exchange programme, so we share a deep interest in language technology and a common vocabulary. We set about turning the software I developed in collaboration with Welocalize into a commercial data analysis application for large companies that use MT to reduce their translation costs.

However, MT post-editing is just one use case. We hope to be able to use the same technique to measure the impact of predictive typing and Automatic Speech Recognition on translators. I believe these technologies are more interesting to most translators as they impose less on word order.

At this point I should point out that CNGL is a really big research project with over 150 paid  researchers in areas like speech and language technology. Localization is big business in Ireland. My idea is to funnel less commercially sensitive translator user activity data securely, legally, transparently and, in most cases anonymously from translators using instrumented CAT tools into a research environment to develop and, most importantly, test algorithms to help improve translation productivity. Someone once called it telemetry for offline CAT tools. My hope is that though translation companies take NDAs very seriously, it is also a fact that many modern content types like User Generated Content and technical support responses appear on websites almost as soon as they are written in the source language, so a controlled but automated data flow may be feasible. In the future it may also be possible to test algorithms for technologies like predictive typing without uploading any linguistic data from a working translator’s PC. Our bet is that researchers are data-tropic. If we build it they will come.

We have good cause to be optimistic. Welocalize, our industrial partner, is an enlightened kind of large translation company. They have a tendency to want to break down the walls of walled gardens. Many companies don’t trust anything that is free, but they know the dynamics of open-source. They had developed a complex but powerful open-source translation management system called GlobalSight, and its timing was propitious.

It was released around the same time SDL announced they were mothballing their SDL TMS system to replace it with the newly acquired Idiom WorldServer (now SDL WorldServer). This panicked a number of corporate translation buyers, who suddenly realized how deeply networked their translation department was via its web services and how strategically important the SDL TMS system was. As the song goes, "you don’t know what you’ve got till it’s gone" – or, in this case, nearly gone.

SDL ultimately reversed the decision to mothball TMS and began to reinvest in its development, but that came too late for some corporates, who migrated en masse to GlobalSight. It is now one of the most implemented translation management systems in the world in technology companies and Fortune 500s. A lot of people think open-source is for hippies, but for large companies open-source can be an easy sell. They can afford engineering support, department managers won’t be caught with their pants down if the company doing the development ceases to exist, and most importantly their reliance on SDL’s famously expensive professional services division is reduced to zero. If they need a new web service, they can program it themselves. GlobalSight is now used by many companies, both customers of Welocalize and companies like Intel who are not. Across should pay heed. At a C-Suite level corporates don’t like risk.

However, GlobalSight had a weakness. Unlike Idiom WorldServer it didn’t have its own free CAT tool. Translators had a choice of download formats and could use Trados, but Trados licenses are expensive and many translators are slow to upgrade. Smart big companies like to have as much technical control of their supply chain as possible, so Welocalize were on the lookout for a good open-source CAT tool. OpenTM2 was a runner for a while but it proved unsuitable. In 2012 they began an integration effort to make OmegaT compatible with GlobalSight. When I worked with Welocalize as an intern I saw wireframes for an XLIFF editor on the wall, but work had not yet started. Armed with data from our productivity tests and Didier Briel, the OmegaT project manager, who was in Dublin to give a talk on OmegaT, I made the case for integrating OmegaT with GlobalSight. It was a lucky guess. Two years later it works smoothly and both applications benefit from each other.

What did I have to gain from this? Data.

So why this blog? Next week I plan to present our instrumentation work at the LocWorld tradeshow and I want Kilgray to pay heed. OmegaT is a threat to their memoQ Translator Pro sales and that threat is not going to reduce with time. Christian and I have implemented a sexy prototype of a two-column working grid, and we can do the same trick importing SDL packages with OmegaT as they do with memoQ. Other large LSPs are beginning to take note of OmegaT and GlobalSight.

However, I am a fan of memoQ, and even though the poison pill has been watered down to homeopathic levels, I also like Kilgray’s style. The translator community has nothing to gain if a developer of a good CAT tool suffers poor sales. This reduces manpower for new and innovative features. Segment-level A/B testing using time data is a neat trick. The recent editing time feature is a step in the right direction, but it could be so much better. The problem is that CAT tools waste inordinate amounts of translator time, and the recent trend towards CAT tools connected to servers makes that even worse. Slow servers that are based on request-response protocols instead of synchronization protocols, slow fuzzy matches, bad MT, bad predictive typing suggestions, hours wasted fixing automatic QA to catch a few double spaces. These are the problems I want to see fixed using instrumentation and independent reporting.

So here is my point in the second person singular. Kilgray – I know you read this blog. Listen! Implement instrumentation and support it as a standard. You can use the web platform Language Terminal to report on the data or do it in memoQ directly. On our side, we plan to implement an offline application and web-application that lets translators analyse that data by manually importing it so they can see exactly how much they earn per hour for each client in any CAT tools that implement that standard. €10 says Trados will be last. A wise man once said you get the behavior you incentivize, and the per-word pricing model incentivizes agencies to not give a damn about how much a translator earns per hour. The important thing is to keep the choice about sharing translation speed data with the translator but let them share it with clients if they want to.  Web-based CAT tools don’t give them that choice, so play to your strengths. Instrumentation is a powerful form of telemetry and software QA.
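To make that last idea less abstract: given the per-job time and word-count records such instrumentation could export (the client names, rates and figures below are entirely invented), the per-client hourly earnings report translators rarely see is only a few lines of arithmetic, sketched here in Python.

```python
from collections import defaultdict

# Invented job records of the kind instrumentation could supply:
# (client, words translated, rate per word in EUR, seconds actually worked)
jobs = [
    ("AgencyA", 2400, 0.09, 3.5 * 3600),
    ("AgencyA", 1100, 0.09, 1.8 * 3600),
    ("DirectClientB", 1600, 0.15, 2.2 * 3600),
]

totals = defaultdict(lambda: [0.0, 0.0])   # client -> [earnings, hours]
for client, words, rate, seconds in jobs:
    totals[client][0] += words * rate
    totals[client][1] += seconds / 3600

for client, (earned, hours) in sorted(totals.items()):
    print(f"{client:15s} {earned:8.2f} EUR in {hours:5.2f} h  ->  {earned / hours:6.2f} EUR/h")
```

The hard part is not the arithmetic, of course, but getting honest, automatically captured time data out of the CAT tools in the first place - which is exactly the point of the instrumentation standard proposed above.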

So to summarize: OmegaT’s place in the language services industry is to keep proprietary CAT tool publishers on their toes!


*******


See also the CNGL interview with Mr. Moran....

Jan 1, 2014

The 2013 translation environment tools survey

From mid-October until the end of 2013, I placed two small survey questions at the top of the blog page and publicized these in a variety of user forums. The questions were similar to two posed in 2010, because I was interested to see how things might have changed. This is, of course, an informal survey with a number of points in its "methodology" wide open to criticism, though its results are certainly more reliable than anything one can expect from the Common Sense Advisory :-) My personal interest here was to get an idea of the background readers here might have with various translation environment tools, because it is useful to know this when preparing posts on various subjects. Here is a quick graphic comparison of the 2010 and 2013 results:

Responses to the question about the number of translation environment tools were very similar in both cases. About half use only one, with between 25 and 30% of respondents using a second tool and increasingly small numbers going beyond that. The question posed covered preparation, translation and checking in projects, so some respondents using multiple tools may be translating and maintaining terminologies and translation memories in only one tool. I am encouraged by this result, as it means that despite changes in the distribution of particular tools, users are exercising good ergonomic sense and predominantly sticking to one for their main work. Everyone benefits from this: translators generally work more efficiently without tool hopping, and more effort is focused on what clients need - a good translation.

In 2010, half the respondents cited the use of some version of "SDL Trados" (more details on this were provided in a later survey); the next highest responses at just under 20% were for Déjà Vu and memoQ. Three and a half years later, Atril's share of users appears to have declined considerably, and the use of memoQ appears to be about on par with SDL Trados Studio. OmegaT, an excellent free and Open Source translation support tool capable of working with translation formats from the leading tools, appears to be doing better than many of the commercial tools in the survey, which should not surprise anyone familiar with that software.


Across continues to be a loser in every way. Despite massive efforts in the low end of the market to promote this incompatible Teutonic travesty and the availability of the client software free of charge to its victims (translators), no real progress has been made in the Drang nach Marktanteil. One would expect that a good solution supported by a competent professional development team and a marketing budget, available free to translators, would easily beat the low-profile OmegaT. And I am sure that this is the case. The case simply doesn't apply to Across, which drives some of the most technically competent translators I know completely berserk. The fact that OmegaT is about twice as popular despite its volunteer development and total lack of marketing budget speaks volumes.

More important than any of the individual figures for translation support tools are some of the implications for interoperable workflows that the numbers reveal. Most of the tools listed support XLIFF, so if you use a tool capable of exporting and reimporting translation content as XLIFF, developing an interoperable workflow for translation and review that will work with the majority of tools will probably not be that difficult. An XLIFF file from SDL Trados Studio or memoQ is usually a no-brainer for translation in Déjà Vu, OmegaT, CafeTran or Fluency, for example, and any concerns can be checked quickly with a "roundtrip test" using pseudotranslation or by simply copying the source text to the target.
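
For anyone who has never done such a roundtrip check, here is a rough sketch of the "copy source to target" variant, assuming a plain XLIFF 1.2 file; real SDLXLIFF or memoQ XLIFF files carry extra namespaces and metadata, so treat this as an illustration rather than a production tool.

```python
# Rough sketch of a "copy source to target" roundtrip test for plain XLIFF 1.2.
import copy
import xml.etree.ElementTree as ET

XLIFF_NS = "urn:oasis:names:tc:xliff:document:1.2"
NS = {"x": XLIFF_NS}
ET.register_namespace("", XLIFF_NS)  # keep the default namespace on export

def copy_source_to_target(in_path, out_path):
    tree = ET.parse(in_path)
    for unit in tree.getroot().iter(f"{{{XLIFF_NS}}}trans-unit"):
        source = unit.find("x:source", NS)
        target = unit.find("x:target", NS)
        if target is None:
            target = ET.SubElement(unit, f"{{{XLIFF_NS}}}target")
        # Copy the text and any inline tags from source into target.
        target.text = source.text
        for child in list(target):
            target.remove(child)
        for child in source:
            target.append(copy.deepcopy(child))
    tree.write(out_path, encoding="utf-8", xml_declaration=True)

copy_source_to_target("job.xlf", "job_roundtrip.xlf")
```

If the "translated" file reimports cleanly into the originating tool, the workflow is probably safe; if not, better to find out before the deadline.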

While individual tools have largely improved in their mutual compatibility and ability to share translation and resource data, there is legitimate continuing concern about the increased use of translation servers by translation agencies and by corporations with volume needs who manage their own translation processes. Jost Zetzsche and I have expressed concerns in the past regarding the lack of compatibility between server platforms and various clients, though with the appropriate use of exchange formats this can still be overcome.

The greatest challenge I have seen with server-based work is that the people creating and "managing" projects on these servers often lack a basic understanding of the processes involved, so the skills of translators competent with a particular client tool may be effectively nullified by an incompetently prepared job. I experienced this myself recently, when segmentation, termbase rights and even the source language were set wrong on the server, and the project manager had no idea how to correct the situation. Things worked out in the end, however, because I had a playbook of strategies to apply in such a case. Ultimately, better training and a good understanding of the interfaces to the processes our partners use can get us past most problems.

Nov 30, 2013

The state of the upgrade: memoQ 2013 R2

The memoQ 2013 release started off on the wrong foot with me in many ways. I was deeply disappointed by the features that were previewed in Budapest at the last memoQfest, and I was even less happy after I saw what a hash had been made of one of the features I use most: comments. In fact, I wrote a rather annoyed blog post about that not long after the release. There was a lot of talk about "game-changing innovation", but frankly I really could not see it. My translating colleagues asked me if it was worth it to upgrade, and aside from my usual warnings about the need to wait for at least 2 or 3 months after any release for it to mature and stabilize, I just could not find any compelling arguments for a freelance translator to move from the stable, excellent 6.2 version to the rather dodgy 6.5 version, or "memoQ 2013" as it was rechristened.

Almost on the usual schedule, however, the bugs were largely sorted out two months later, the initial mistakes in the comment feature redesign were fixed properly, and I no longer saw the memoQ 2013 release in the same dim light. I could actually see some benefits for my freelance colleagues in upgrading to that version and no actual harm in doing so. And as I got to know the fuzzy term matching feature better and saw how it helped me deal with typo-laden source documents or the usual spelling chaos of German technical writers, I began to see some very compelling value in memoQ 2013 for translators.

Most of the "game changers" talked about in May actually arrived a month ago with Release 2 of memoQ 2013. I did my best to lower expectations for this release, not because I think it is crap, but because I think this is one of the best CAT tool version upgrades I have seen in 13 years, and I knew it would need the usual time to mature. I think by the end of the year this version will have so much to offer that I would rather not have people stressing over the small stuff that I am confident will be fixed well.

However, I decided to live dangerously, and I switched over all my production work to use this version even before the official release. The first few weeks were not fun with all the little quirks I discovered and duly reported, but I encountered nothing data-destroying or really shocking, mostly just housekeeping details like somebody forgetting to vacuum the rug after gutting the whole house and giving it a nice remodel.

One month after the official release, memoQ 2013 R2 is far more reliable than I remember any memoQ version being one month after release. There has been steady refinement in its features, and I continue to discover hidden gems that I sometimes suspect most of the Kilgray team aren't even aware of yet, because so much was added and changed, but not in a way that disrupted older work processes. I have a long shopping list of refinements that I think should be made to new features like the TM search tool (which has only actually worked on my system since the release of the 6.8.5 build about a week ago) or that ground-breaking monolingual review feature (which will probably be the next big CAT feature to be copied), but even the new features I consider rather immature are already looking pretty damned good. I can't guarantee that this release can be trusted for all your work right now (though it seems pretty good to me), but since it can be safely installed in parallel with older versions, I definitely recommend taking a look and joining the conversation on refinements still needed. I think Kilgray has been very responsive to user feedback in this round, and I can't say I am anything but encouraged by what I have seen in the last month.

One very exciting change for me in the current build (6.8.6) is that the rather risky non-optional export of target text comments with DOCX files has been sorted out very nicely. The solution seems a little strange to me right now, but it's a great step forward with some excellent possibilities.


When I saw those "severity levels" added to the commenting features in memoQ 2013 (6.5), I had very little good to say about them. I still don't think much of how they are named and wish I could choose my own labels, but now I can only applaud their usefulness. Why? Because the addition of the five checkboxes shown above has given me the control I want over which comments are included in an exported translation of a DOCX file. I can cleanly separate the comments which are notes to myself from those for my project partners and the comments for my customers. This is very helpful.

I do think it is odd that this control was placed at Tools > Options > Miscellaneous > Translation when the comment exports (as far as I know) only affect DOCX files, but if there are plans to extend this feature to other exported formats, then this makes sense. I would like to see similar filtering controls for the ordinary view filters (on that last tab where comment and tag filtering criteria can be specified) and for comment inclusion in a bilingual RTF export. Either of these would be an enormous help to my frequent work processes, because I use a lot of comments intended for different people, and sorting these out cleanly can be laborious.

In recent weeks I have been working on the new edition of my memoQ tips book and taking a very close look at "corners" of the software that I suspect very few have the time or inclination to look in. And I've had days when it really felt like Christmas had come early: one discovery after another of nice little refinements, lots of incremental improvements, which taken together add up to what I feel is a lot of value. I'm writing way too many private thank-yous to some of the people at Kilgray for what I see as excellent new directions, even if I am inclined to argue over some of the details.

Since the release of memoQ 6.2 and its follow-ups with the bilingual text/Excel filter, there has been such a steady flow of useful improvements to help individual translators work better that those who claim that all the effort of development has been spent catering to the corporate sausage-making interests of the low-paying cattle call crowd simply haven't been paying attention. Or they have been confused by Kilgray's occasionally appalling failure to organize their messages properly for different interest groups. If you're talking to a big group of freelance translators and start discussing "great server features to monitor your translators' productivity", don't expect blown kisses and showers of rose petals. Sometimes it's obvious that the makers of the tool don't always understand the importance of what they have created for our work. Well, why should they? We're the ones doing it. But I tell you, right now there is a lot more gold for individual translators in the memoQ mine than anyone realizes. That goes for me too. I am surprised by fat new nuggets I find almost every week.

Do I care that so much effort is spent on developing cutting-edge project management features for memoQ translation servers, even ones that I think can be abused in some pretty awful ways by some companies whose business practices I detest? Well yes, I do... I think it's great. Besides, I can actually come up with nice uses for those awful features. You can do a lot of things with a cutting edge: chop up a tasty salad... or the local nursery school. Blame the fool, not the tool.

Kilgray has avoided the disastrous errors committed by Atril in the last decade, when a misjudged market focus and a failure to secure the maintenance revenue needed to fix and develop features steadily eroded the ability of its loyal users to cope with a changing market. There was nearly a complete failure to compete for the business of translation agencies and corporate and government translation departments. And the solutions that prevailed in those quarters were mostly rather awful. I watched whole departments at Siemens traumatized by the disastrous Trados Teamworks, which made a number of those in the translation team of the medical products division look forward to retirement.

Kilgray has steadily built its business in the markets ignored by Atril a decade ago, and in doing so has secured its future far better and ensured the funding of a truly remarkable series of improvements in the four and a half years I have been using memoQ. And now... when I look at the features of the recent SDL Trados 2014 release, I see mostly good things that I have known from other tools for a long time, nice to have really, but as I stifle a yawn I wonder whether it all really has to be so complex, since I'm not depending on SDL consulting or training to pay my bills. And then I get back to memoQ and keep getting rocked by the "wow factor" as I find useful new things while trying to concentrate and get a job done. memoQ 2013 R2 is one of the worst offenders I've seen in a long time for its very real threats to make my work a lot easier and more fun!

Nov 22, 2013

memoQuickie: keyboard shortcuts for migrants (updated)

(PM - Pro - 2013R2 - 2013 - 6.2 - 6.0 - 5.0)

You can adapt memoQ keyboard shortcuts to your personal preferences or make them ergonomically compatible with other translation environment tools you use frequently, for better productivity and a reduced risk of errors.


Although keyboard shortcuts can be managed in the Resource Console, it is more useful to do so under Tools > Options… > Keyboard shortcuts, because that is the only place where a given set of keyboard shortcuts can be selected for use. Marking the checkbox for a list in the dialog shown above will make it the active one.

Look carefully at the keyboard shortcuts available in memoQ. Not all of these commands are found in menus (for example, the shortcut for quick search with selected text in a translation grid, Ctrl+Shift+F by default). To examine a set of keyboard shortcuts, select it and click Edit to show the list.


To change a keyboard shortcut, select the value in the Shortcut key column of the editing dialog and press the new key combination.

Oct 13, 2013

Games agencies play, part 2: "word counts"

Last week a colleague called me up, very worried because her word count of a rather tricky and somewhat long chemical text differed from the translation agency's count by more than 10%. I had only recently introduced her to that company and was vastly relieved to have competent backup for a recent flood of chemical manufacturing procedures, so the thought that this might escalate into a serious misunderstanding put a sick feeling in my gut.

Fortunately, I had noticed some similar issues recently and had a conversation with the project manager involved about the unusual issues with her customer's texts and some of the technical challenges we face in overcoming legacy trash (format trash in this case, not content, thank God) and making fair and accurate estimates of the work involved, not only for fair compensation but also to plan some increasingly stressful schedules.

I discovered in that chat that the PMs at the agency were "in transition" with their working tools: although they had had SDL Trados Studio around for years, it was only being used about half the time for analysis and costing; the other half of the time the now-discontinued SDL Trados 2007 was used.

I spat out my coffee in surprise. Well, I shouldn't have been surprised.

The old Trados tool generally gives much lower word counts, especially for the kinds of scientific texts I often do, with a good portion of dates and numbers to be dealt with. In addition to that, there are considerable differences in "leverage" (presumed matches from a translation memory, which in the case of the customer mentioned above are often useless and incorrect because of bad segmentation and massive crap in the TM from 10 years of failure to define appropriate segmentation exceptions). And then there are the tags, which are another matter as well: I love three or four words embedded in 20 or so tags in a segment. Whoever thinks something like that should be charged at a word rate with a count of 3 or 4 is a fool or a fiend or both.
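
Just to illustrate how much the counting rules matter, here is a toy comparison I made up; it is not how Trados or any other tool actually counts, but it shows what happens to a typical sentence from a chemical procedure when standalone numbers and dates are or are not counted as words.

```python
# Toy illustration: the same sentence counted with and without number tokens.
import re

text = "Heat 250 ml of the solution to 80 °C on 12.03.2013 and stir for 30 min."
tokens = text.split()
with_numbers = len(tokens)
without_numbers = len([t for t in tokens if not re.fullmatch(r"[\d.,:/%°]+", t)])
print(with_numbers, "words if number tokens count,", without_numbers, "if they don't")
# 16 words versus 12: a 25% difference before fuzzy matching even enters the picture.
```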

But mostly these are just matters of ignorance and/or reluctance to understand the problem and consider it in costing and compensation.

Paul Filkin of SDL has an excellent presentation which I saw last year at TM Europe in Warsaw in which he showed systematic differences in text counts between tools. I suspect that information is available in some form somewhere, because it's also important for individuals and companies using Trados or other tools to understand just how pointless and arbitrary this focus on word counts actually is. (So far I've avoided bringing up the problem of graphics and embedded objects so frequently found in certain document types and how few of the tools in common use are able to count the text in these, much less the effort to access, translate and re-integrate that text. I've talked about that enough on other occasions, so not now.)

So what's the agency game here? Well, in the case of my friend's concern, no more than an unconsidered resort to the wrong tool by a project manager under pressure and in a hurry, and once they talked about it, it became clear that matters would most likely get sorted out to nobody's disadvantage. Word counts, and the tools chosen to make those counts, can have a huge impact on translator compensation. This can be exploited systematically by unscrupulous agencies to screw their service providers thoroughly, and I suppose there are a few out there besides Pam the Evil PM in the Mox comics who plot such moves carefully.

However, I think it's usually a matter of ignorance, where a bit of education is all that is needed. Sometimes it's fear: I have had some silly skirts tell me that they are aware of the problem, but that quoting with accurate methods would inflate job costs to a level their price-sensitive customers cannot accept. Usually, though, this means that this person or the people responsible for sales in her organization lack the communication skills to deal maturely with clients and help them understand what is reasonable and sustainable for a good business relationship. I seldom argue with such people. I note them on the list of Linguistic Sausage Producers and cross them off the list of viable partners for work, and when I hear later how they are circling ever nearer to that drain to the sewers, I might offer a sad smile of understanding, but I have nothing more to give.

An agency that offers piece-rate quotations but does not even try to estimate the "pieces" and their relationship to the time required very likely does not have a sustainable business model. But that is probably no more unsustainable than all the panting bilge one sees from the acolytes in the MT temple who don't realize they are brought into the rituals to be relieved of their cash and goods by a greedy IT priesthood eager for another great scam to live off, like the old Y2K scare.

What do word counts matter when words will be free or nearly so, given to us by Machines of Ever Loving Grace in LQA-blessed near-perfection, requiring just a bit of post-editing time to be fit for purpose?

Ah, time. That's really the crux of the problem, isn't it? How much time will something take? Proper project management, in which the inputs are measured and assessed correctly, is critical to understanding this, regardless of whatever piece rates may or may not be applied. An agency owner recently mentioned a job in which he had to "translate" date formats into something like 14 different local flavors. He pointed out, quite correctly, that any word count, even an accurate one, was meaningless there. (And he revealed himself as a user of the old Trados by saying that the word count was "zero" anyway, which brings us back to the stupid logic of SDL Trados that began this discourse.)
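
As a tiny illustration of why such a job registers as "zero words" by word-count logic but still takes real effort, here is a sketch of rendering one date in a handful of target formats; the locales and patterns below are illustrative guesses on my part, not vetted locale data.

```python
# Illustrative only: one date, several local conventions, zero "new words".
from datetime import date

PATTERNS = {
    "en-US": "%m/%d/%Y",   # 10/13/2013
    "de-DE": "%d.%m.%Y",   # 13.10.2013
    "pt-PT": "%d/%m/%Y",   # 13/10/2013
    "ja-JP": "%Y/%m/%d",   # 2013/10/13
}

d = date(2013, 10, 13)
for locale, pattern in PATTERNS.items():
    print(locale, d.strftime(pattern))
```

Multiply that by 14 locales, each with conventions to research and verify, and the time involved is anything but zero.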

I'm not an advocate of billing strictly by time. Yes, attorneys do that, but it's not really a viable model for all services all the time anyway, and one could fill a library with volumes of true tales about the abuse of the billable hour by law firms. Sometimes hourly rates make sense; sometimes the value, an intangible requiring some judgment and risk to estimate, matters more.

Time, value or meaningless commodity units (words, lines, pages or pounds of sausage): these will surely still be sources of consideration and dispute in the translation profession long after we are all dead. Until then, it really does pay to become more aware of current practice and its implications and to remain alert so that it does not work to your disadvantage, even if the other parties are not deliberately playing a game.