Oct 28, 2013

Want a revolution? Try memoQ 2013 Release 2.

OK, so I'm exaggerating a bit. And even though the new version of memoQ was officially released today by Kilgray, it really is still beta software. But damned good beta. I expect this version of memoQ to add more of interest to individual translators than any other version I've seen up to now. There are still lots of t's to cross and i's to dot, but there is great promise, and it's worth having a look now at the future of memoQ.

I'm not talking about changes to the memoQ Server. There are lots of those in this version, and for a change many of them actually seem to be helpful to translators working on the server, less focused on slicing and stuffing linguistic sausage faster than many of the features introduced in the 6.x server versions were. The rollout webinar with István Lengyel and Florian Sachse of Kilgray showed enough of why memoQ Server users should be pleased. But they could have filled the hour and three quarters with nothing but presentations of new or improved functions for the rest of us and still not run out of material. Since I still have a project to finish tonight, I'll just hit a few of the highlights that I'll probably return to later as the features stabilize and are truly ready for productive work.

Language recognition
memoQ now intelligently recognizes the language(s) of the source text. This is a small convenience in setting up projects perhaps, but for those occasions when a source language has many passages in another language or more than one other language, these other language segments can be identified automatically, copied source to target and locked. I can think of more than a few patent dispute translations where this would have been helpful.
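Conceptually, this kind of per-segment language detection can be sketched with something as simple as stopword overlap. The toy below is my own illustration of the principle, certainly not Kilgray's implementation, which will be much more sophisticated:

```python
# A naive per-segment language guesser based on stopword overlap.
# Toy illustration only - memoQ's real detector is certainly smarter.

STOPWORDS = {
    "en": {"the", "and", "of", "to", "in", "is", "that", "for"},
    "de": {"der", "die", "das", "und", "nicht", "ist", "ein", "zu"},
    "fr": {"le", "la", "les", "et", "des", "est", "une", "dans"},
}

def guess_language(segment: str) -> str:
    words = set(segment.lower().replace(".", "").replace(",", "").split())
    scores = {lang: len(words & sw) for lang, sw in STOPWORDS.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "unknown"

segments = [
    "Die Ansprüche sind nicht neu und daher ungültig.",
    "The claims of the patent are not novel.",
]
# in a German-source project, the English passage would be the
# candidate for copy-source-to-target and locking
foreign = [s for s in segments if guess_language(s) != "de"]
```

In a patent dispute text, the segments identified as "foreign" this way are exactly the ones that could be copied source-to-target and locked automatically.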

Startup Wizard
A new feature under the Help menu gives a quick, friendly guided tour of important settings that are often overlooked because they are hard to find for new users and many experienced ones alike. This is actually one of my favorite new features and possibly the best help I've seen yet for making a better start with the software.

Better Microsoft Word spelling integration
Custom dictionaries can now be imported from Microsoft Word with greater ease. Users can now also choose Microsoft Word for dynamic marking of possible spelling errors (unknown words). This is a good thing for those of us who hate Hunspell. Oh, and those pesky doubled words are caught now.

More stuff with Microsoft Word...
like exporting tracked changes between translation versions to a DOCX file (sans formatting, I think), exporting target comments to a DOCX file (alas! in writing the specification Kilgray failed to consider that one might want to select which comments get exported or perhaps suppress all the comments, but I'm told this will be remedied quickly), font substitution in DOCX files (this was a major WTF feature for me, but if I understood correctly, there is some way I can use this to protect text formatted a certain way, such as code in a programming guide - if that's true, this is cool) and...

the TM lookup tool,
an external application which runs alongside Microsoft Word or any other environment and allows you to look up text copied to the Clipboard in selected memoQ TMs. Too bad they didn't include termbases in this new feature. Yet.

New filters and processes
like direct import of InDesign files with a preview using the free online Language Terminal integration, Adobe InCopy and some file formats that must be pretty damned geeky because I've never heard of them.

Why am I excited about
a plain text view, which is about as exciting as lukewarm, unspiced pea soup? Well, because its absence has been driving me nuts for years now. It's in this version.

Meanwhile, back at the termbase
great things are happening with new import options that are still a wee bit buggy but will get very good very soon. Until now memoQ could import terms only as TMX and delimited text. New options include Excel (at last!), MultiTerm XML and TBX. It was child's play for me to tweak a couple of TermStar MARTIF exports from STAR Transit so those terms would import, because TBX is a dialect of MARTIF and STAR's MARTIF is very close to TBX. Extra effort? About two minutes of search and replace, so I'm hoping Kilgray will go the extra five yards and touch this import option down.
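For the curious, the sort of two-minute tweak involved looks something like this. The element and attribute strings below are illustrative assumptions about what a TermStar export might contain, not a verified recipe; inspect your own export to see what actually needs changing:

```python
# A sketch of the kind of search-and-replace that can coax a TermStar
# MARTIF export into a TBX importer. The strings below are hypothetical
# examples of the sort of differences involved, not a tested recipe.

def martif_to_tbx(xml_text: str) -> str:
    replacements = [
        # hypothetical: relabel the dialect declared on the root element
        ('<martif type="MARTIF"', '<martif type="TBX"'),
        # hypothetical: point the DOCTYPE at a TBX core-structure DTD
        ('SYSTEM "martif.dtd"', 'SYSTEM "TBXcoreStructV02.dtd"'),
    ]
    for old, new in replacements:
        xml_text = xml_text.replace(old, new)
    return xml_text

converted = martif_to_tbx('<martif type="MARTIF" xml:lang="en"></martif>')
```

The same job can of course be done in any text editor's search-and-replace dialog; the point is only how small the gap between the two dialects can be.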

The addition of the MultiTerm XML import option means that memoQ users can now round-trip data from memoQ to partners using SDL MultiTerm and back for termbase updates. Unfortunately, at the moment the only metadata transferred in the import is the definition field, but efforts are in progress to support at least the MultiTerm fields that memoQ exports to XML with Kilgray's own definition. That was simply forgotten at specification time (oops). But still, this will be serious headache relief for those of us who work in teams with SDL Trados users and want to share terminology in the most effective ways.

Is that all?
No. This new version of memoQ is like a very messy Christmas where one can easily lose the overview of what's under the tree with all the wrapping paper and bits of ribbon cluttering the floor. As it gets cleaned up, we'll all notice a good bit more, and I suspect that Santa's Hungarian and German helpers will be slipping a few more things under the tree that they might forget themselves until some user trips over them. There has been so much effort put into consolidation and improvement of existing features that it's simply too much to keep track of. I've made a list and checked it more than twice and still find things to add. But I'll end with another look at something I've already blogged about, that groundbreaking

Monolingual Project and TM Update
with edited files in any target format. It still has a lot of little quirks, especially with some formats, but here I expect a lot of improvements. I've made a little demonstration video and put it on YouTube; it shows the reimport of edited translations to update the translated file and the TM in memoQ, and it shows two different ways to look at tracked changes before revealing the dark secret of Row History Recovery, which I don't think Kilgray realized was possible. Well, dammit, they should have made it a feature with a button anyway.

(View this in full screen mode by clicking the icon at the lower right of the video window.) 

Oh yes, and one more cool little thing about this release that I forgot to mention...

... the quickstart shortcut to creating memoQ projects
in the context menu by right-clicking on a file. I'm not much into single-file projects any more and prefer to use "container" projects for customers or categories instead, but it's still a nice little addition that can save time once in a while:

Oct 27, 2013

A tale for Halloween, perfectly horrifying!

The night of terror began with a puzzling tweet in the afternoon:

I clicked the link and read the latest on Susan Bernofsky's Translationista blog, which gave an update on some of her recent work. An upcoming release of her translation of Kafka's The Metamorphosis was mentioned; that caught my eye since I had reread it in German very recently, and I find the variations in its translation quite interesting. I made a note to look at her version when it comes out in January.

Her blog post also mentioned last week's release of her translation of Die schwarze Spinne by a 19th-century Swiss pastor writing under the name Jeremias Gotthelf. The Black Spider? I grew up in a basement bedroom well stocked with black widow spiders, so the title had a certain creepy, nostalgic fascination for me. I was unaware of the high regard in which this novella is held by so many, but the description of the tale on the blog and in the Wikipedia articles I read intrigued me, so I bought a Kindle copy of the new translation. For good measure, I grabbed a copy of the original tale in German and treated myself to an atmospheric introduction to the story with the LibriVox audio recording by what sounds like an old guy with a Swiss German accent, like the grandfather who relates this moralistic tale of mortal terror.

The German audio recording was a bit fatiguing and gave me a claustrophobic feeling with its heavy diet of adjectives and too-familiar village custom. I began to have flashbacks to the Brandenburg hellhole I escaped from earlier this year and the suffocating customs of its denizens. With some desperation, I abandoned my plan to finish the entire work in German before starting Bernofsky's translation, so with a little twitch of guilt, I grabbed my bag and headed off to the cantinho for dinner in a quiet corner with my Kindle. Her translation started off with very much of a period feel, over-rich with its double serving of adjectives and long sentences that reminded me of my first encounters with John Stuart Mill in the tenth grade. I began to get the same claustrophobic feeling I had from the German reading; yes, I was back in Oberkrämer in Brandenburg, and that's enough horror for one evening, thank you.

But gradually, without realizing the art with which her well-crafted English drew me into the Swiss Calvinist spirit of the tale, I was caught in a well-paced story that kept my interest and made me wonder whether I would enjoy the original as much in some parts. And so I was drawn, unwitting, into the open jaws of Evil, which closed slowly about my torso and squeezed the breath out of me, leaving me gasping more than once and failing to notice that the liter of sangria had gone too fast before I ordered more to quench the burning horror unfolding. The walk home was too long, and the way could not be lit well enough.

At home I paused for a while, centered my mind by translating a deadly dull document with terms and conditions for purchase, went on a safer bug hunt in the latest beta version of memoQ and then, feeling that the house was much too dark, I screwed up my courage and lay down to sleep... well, no, to read just a bit more, because those jaws were still closed around me, and the several dull pains about my sternum and spine made me wonder if my heart and bones would last to the end of the tale. Don't be so dramatic, I thought, and I wasn't, really; the real drama was before my eyes, transfixing me in terror as I wished the dogs would lie heavier on my legs and chest and distract me from the dark corners of the room I could not see, because my eyes were on the shadows in the book and what waited so terribly in them.

This is a damned good translation. Maybe. Let me put it this way: I hope the original tale can live up to what I read tonight. But I'm not going to make the mistake of finding that out in the dark again.

Oct 24, 2013

The Next Big CAT Feature To Copy?

Thanks to the persistent disbelief of some users - particularly cranky financial and legal translators who don't understand the challenges of programming - that it really is "impossible" to enable a project or TM update based on an edited monolingual target text, Kilgray has decided to just do it and make this feature available to the masses in the next release, memoQ 2013 R2. It's not a perfect solution, but it's a great start, and when the rest of the specification is implemented later, it will be really, really good.

Here's an example of the first step reimporting a short text in which every sentence except the first was rewritten and rearranged (click the pic to see it full-sized):

memoQ monolingual document import and alignment for translation memory update

This monolingual alignment and the matches it assigned were totally automated - no adjustments by me. When applied to the translation document in memoQ, it doesn't change the order of the translation, but it does make updating the translation memory much easier. I can also use the tracked changes feature for versioning to look at the edits in more detail, and the view can be filtered to show the changes in a large document more clearly to ensure that nothing was missed.

What good is this? Well, so far Kilgray is rather fixated on the idea that one might receive edited documents from a proofreader or a client at some later date and import the changes to the project to update the TM. Maybe, but in many cases this won't really happen in my workflow. When I finish a project, I now very often send the documents to a LiveDocs corpus, where they make a marvelous pseudo-TM (if sometimes a bit slow) and an excellent context reference for concordance searches. I then delete the documents from the project, because I use projects as "containers" for repeated business, so unless some day the monolingual updates are made available for documents in LiveDocs, I may often not be able to take advantage of this feature. One could, of course, apply the same principle of monolingual alignment to a translation memory or even a TMX file, and I am sure somebody will do that before long if it doesn't already exist in some flaky academic freeware for supernerds somewhere.
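The principle of such a monolingual update is simple enough to sketch: match each sentence of the edited target text against the stored target segments by similarity, and update the winning entries. The toy below is my own guess at the general mechanism, certainly not Kilgray's algorithm:

```python
import difflib

# Toy monolingual "TM update" alignment: pair each edited target
# sentence with the most similar target segment already stored, so the
# matched entries could then be updated. General principle only.

tm_targets = [
    "The device must be cleaned weekly.",
    "Wear protective gloves at all times.",
    "Dispose of the solvent according to local regulations.",
]

edited = [
    "The device has to be cleaned every week.",
    "Dispose of the solvent in accordance with local regulations.",
]

def align(edited_sentences, tm_sentences, threshold=0.6):
    pairs = []
    for sent in edited_sentences:
        scored = [(difflib.SequenceMatcher(None, sent, tm).ratio(), tm)
                  for tm in tm_sentences]
        best_ratio, best_tm = max(scored)
        if best_ratio >= threshold:
            pairs.append((best_tm, sent))  # stored target -> edited text
    return pairs

matches = align(edited, tm_targets)
```

The threshold is what keeps genuinely new or deleted sentences from being force-matched to something; picking it well is presumably a large part of what makes a real implementation hard.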

So why am I so excited about this feature? Because I already use it every day. It saves me time and puts a big smile on my face. When I get ready to deliver a translation, the last step for me is to look at it in the source application - Microsoft Word, PowerPoint, etc. There I make last-minute adjustments, change words, combine sentences, split sentences, delete things, etc. A lot of these changes never make it back to the TM, because that maintenance can be a major pain in the backside, especially when there's a lot happening on my desk and in my e-mail inbox. This feature is a big step toward reducing that stress.

Kilgray isn't done with this by any means. Plans for source text merges to allow combined sentences in an edit to be handled easily have not been implemented yet, but I'm told that will follow no later than the next version. I hope so. The current beta version is also a bit dodgy with many of the formats I've tested so far besides DOCX and TXT, and changes of target text file type have been forbidden in the current version, as have any edits except segmentation adjustment and linking during the monolingual alignment. I plan a lot more testing to understand the limits of the current implementation, and I expect there will be many improvements in this area. But this is an excellent start.

When the specification was developed, Kilgray was unaware that something like this was already available from SDL in one of that company's pricey "regulatory" licenses (I wonder if the idea came from watching the video of the public argument about this option at memoQfest two years ago - I know that remarks about the SDL OpenExchange were followed closely). However, SDL has taken no great advantage of this to offer such a feature to a wider user base yet, so I have no idea how good it is. But mark my words - in a few years, this innovation will be one of those things that users of any good CAT tool will take for granted!

Oct 22, 2013

Complex dictionaries in memoQ LiveDocs

One never knows when a good idea might come up. This one isn't particularly original; in fact, it's probably bleedin' obvious to the memoQ LiveDocs cognoscenti. I think it's drifted in and out of my mind a few times, but I never gave it much heed until a friend contacted me shortly before midnight with a slightly urgent question about what to do with an "XML terminology" that a client had sent. It turned out to be an SDL MultiTerm XML export, but without a definition file. She wanted the data conveniently available in memoQ. Oh, crap, I thought. This could be a long night. Kilgray has added such capabilities to its qTerm server application, but the Fußvolk (the rank and file) who use memoQ desktop versions don't have that option right now. And I shelved my XSLT efforts for this some time ago because nobody seemed seriously interested.

But then she said something about a "Word file". It turned out that the client had made one of those nice RTF dictionary exports that MultiTerm can produce and which was also the target of my XSLT work a year ago. This was exactly what I planned to make for her if the XML proved to be loaded with synonyms and term metadata. It was.

And then... I thought... why not just throw this in LiveDocs as a "monolingual" document? And thus a nice way to make complex glossary data available without importing it into a termbase was (re)born. Of course stuff like this has been going on for ages with Archivarius and other search tools. But not so much in an integrated way with CAT tools. Here's a quick visual tour of the process and the end result:

Here's the RTF file and a peek at the financial term data it contains. Not a chance I can parse that beast for a termbase! So I picked a LiveDocs corpus and clicked Import document and chose the RTF file:

I lied and said it was "German". Well, that's partly true and in this case, the end justifies the means.

A few minutes later, this dictionary was available in an ordinary concordance search, its entire content indexed as "source" text. To see the context, I right-click on the concordance hit to open the document saved in the LiveDocs corpus.

And here it is. I can do further searches within the document using Ctrl+F (Find). The English definition can be copied from here if I feel like doing that.

Now I know what I should do with that huge trilingual fire safety dictionary that's been kicking around my reference folders for the past 10 years... once again, LiveDocs made my day.

Interoperability ad absurdum

I saw this in one of my mailing list digests this afternoon: "pptx.itd.sdlxliff - does not import to DVX2". My first thought was "My God, why should it? Serves you right!" as my mind boggled at the thought of someone taking a PowerPoint file through the venerable SDLX application to some version of Trados Studio and then on to DVX2. Whither next? Why not Wordfast Classic just for fun, with a detour through a few different versions of memoQ and a pass through TagEditor just for laughs? Somewhere in all of this, text presumably needs translation, but with all the myriad MT integrations along the way, that will surely happen to the satisfaction of the "translation as a utility" gurus at TAUS.

Seriously, though, that slightly disturbing process chain evident from the file-extension-too-far is just one example of many of the odd surprises out there in the world of real projects, and of why even in my most ungenerous moments I won't expect any provider to cover every inconceivable scenario perfectly - only to offer sufficient interface support so that somewhere, somehow, however improbably, a suitable tool can be added to the process to get the job done.

What's the most absurd process chain you've experienced? This message from the Déjà Vu list seriously tempts me to see how long and ridiculous a "supply chain" I might be able to simulate with a moderately complex file and still run the translation back through the chain to get an acceptable target file. We could have a contest and call it the Rube Goldberg Memorial Translation Relay. The diagrams alone would be worth a round of beer.

Oct 18, 2013

Computer-aided translation tool survey

A bit over three years ago I conducted a small survey to get an idea of which working tools were popular among visitors to this blog and how widespread the use of multiple tools is. A lot has changed in the meantime, and another look at current patterns of use could be interesting.

The two survey questions will be found in the left margin at the top until the end of this year. If your work habits change between now and then, you can return and change your answers. Let's try to get the best statistical sample we can - spread the word!

UPDATE: The results are here:

Oct 16, 2013

Small caveats for memoQ fuzzy term matching

In the months since it was introduced this year, the terminology fuzzy match feature of memoQ has proved to be a great help in my work. The authors of the German texts I translate are sometimes particularly challenged with respect to spelling, and I might find the same source word spelled five or six different ways in a text, with some or all of the variations repeated frequently: Scheidungsurteil, Schaidungsurteil, Scheidungurteil, Scheidungs-Urteil, Scheidung Urteil and so on. It can be a real nuisance trying to keep terms in the translated text consistent when the source text is out of control this way, and often I've made frustrated searches for a term I know I put in the termbase, only to find that it was spelled a little differently. And then changes between plural and singular forms can contribute to the difficulties of consistency, particularly in large texts with the terms thinly sown.

For cases such as these, the fuzzy term matches have been enormously helpful. I no longer have to make a catalog of crappy spelling and enjoy my bit of Schadenfreude as I share the hard-won terminology with the client in a pretty PDF dictionary that proudly displays all the misspelled variants in the source mapped to a single clean target term. Now I can maintain cleaner termbases that may actually be useful for reversed application (with German as the target language, not loaded with garbage spelling just to catch the matches with German as the source language).

But there's a dark side to this too. In some cases, I am misled by how fuzzy term matches are highlighted. An example of this can be seen here:

Look at the highlighted fuzzy term match in segment 3. The prefix un- is a negative, so in this case we're talking about fake raccoon skin underwear. Not the real thing. The problem is compounded in the QA term check:

The translation in this case is actually correct, but it is flagged as an error because of the fuzzy match. I'm not actually sure there is anything to be done about this except perhaps to identify troublesome cases like this and change the term entries to custom matching with appropriate wildcards or some other setting less likely to report a false error or overlook a real one. However, I think it might help visual checking if the blue highlighting for fuzzy matches could be set not to extend to prefixes or to endings which go beyond the length of the term entry. Of course I do not know what the implications of this may be for other languages, so changes of any kind require careful thought.
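The underlying trap is easy to demonstrate: to a pure similarity score, a negating prefix is just a small edit, no worse than the misspellings the fuzziness is meant to catch. A toy illustration of the mechanism (not memoQ's actual matching algorithm):

```python
import difflib

# To a similarity score, the negating prefix "un-" is a smaller "error"
# than a typo - which is exactly why a fuzzy term matcher can highlight
# the opposite of the term you stored. Toy mechanism only.

def fuzzy_score(term: str, word: str) -> float:
    return difflib.SequenceMatcher(None, term.lower(), word.lower()).ratio()

term = "echt"         # German: "genuine"
typo = "ehct"         # the kind of misspelling fuzziness should catch
negation = "unecht"   # "fake" - nearly the same letters, opposite meaning

# the negated form actually scores HIGHER than the typo here
score_typo = fuzzy_score(term, typo)          # 0.75
score_negation = fuzzy_score(term, negation)  # 0.8
```

Any matcher built purely on string distance will share this blind spot; catching it takes either linguistic knowledge of prefixes or the kind of per-entry custom settings mentioned above.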

Right after I posted this, I was contacted by a friend who had the same frustration with the misleading matches with some financial terms. This translator said that even adding the correct term for the mismatch did not correct the problem, and the proper match would not be displayed. That sounded very strange to me, so I had to have a look. I added the "fake raccoon underwear":

But my translation results pane showed both matches. What really bothered me, however, was that the worse match still took precedence for insertion as the tool tip indicates:

Oops. This doesn't change my positive opinion of the fuzzy matching for terms. It's still extremely helpful and overall helps me maintain better term consistency. But there are some things in the current version (6.5.15) which need a little tuning - like this goofy precedence problem - and even after any bugs are fixed, there will still be a few inherent risks of which one may need to be aware and for which some particular QA strategies may need to be considered.

Oct 13, 2013

Games agencies play, part 2: "word counts"

Last week a colleague called me up, very worried because her count of a rather tricky and somewhat long chemical text differed from the translation agency's count by more than 10%. I had only recently introduced her to that company and was vastly relieved to have competent backup for a recent flood of chemical manufacturing procedures, so the thought that this might escalate into a serious misunderstanding put a sick feeling in my gut.

Fortunately, I had noticed some similar issues recently and had a conversation with the project manager involved about the unusual issues for her customer's texts and some of the technical challenges we face in overcoming legacy trash (format trash in this case, not content thank God) and making fair and accurate estimates of the work involved not only for fair compensation but also to plan some increasingly stressful schedules.

I discovered in the chat that the PMs at the agency were "in transition" with their working tools, and although they had SDL Trados Studio around for years, it was only being used about half the time for analysis and costing; the other half of the time the now-discontinued SDL Trados 2007 was used.

I spit my coffee in surprise. Well, I shouldn't have been surprised.

The old Trados tool generally gives much lower word counts, especially for the kinds of scientific texts I often do, with a good portion of dates and numbers to be dealt with. In addition to that, there are considerable differences in "leverage" (presumed matches from a translation memory, which in the case of the customer mentioned above are often useless and incorrect because of bad segmentation and massive crap accumulated in the TM over 10 years of failure to define appropriate segmentation exceptions). And then the tags are another matter as well: I love three or four words embedded in 20 or so tags in a segment. Whoever thinks something like that should be charged at a word rate with a count of 3 or 4 is a fool or a fiend or both.
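The scale of such differences is easy to illustrate on a numbers-heavy scientific sentence. The two counting policies below are my own simplified stand-ins for the kind of rules involved; they are not the actual algorithms of any SDL product:

```python
import re

# Two word-count policies applied to the same scientific sentence.
# Illustrative stand-ins only, not the actual SDL counting algorithms.

text = "Add 25 ml of 0.1 M HCl at 20 °C on 12.03.2013 and stir for 30 min."

def count_all_tokens(s: str) -> int:
    # policy A: every whitespace-delimited token is a "word"
    return len(s.split())

def count_alpha_tokens(s: str) -> int:
    # policy B: only tokens containing at least one letter count
    return len([t for t in s.split() if re.search(r"[A-Za-z]", t)])

count_a = count_all_tokens(text)    # 17
count_b = count_alpha_tokens(text)  # 12 - nearly 30% fewer "words"
```

Two perfectly defensible-sounding rules, and the counts for the same sentence differ by almost a third - which is roughly the order of discrepancy that set off my colleague's alarm.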

But mostly these are just matters of ignorance and/or reluctance to understand the problem and consider it in costing and compensation.

Paul Filkin of SDL has an excellent presentation which I saw last year at TM Europe in Warsaw in which he showed systematic differences in text counts between tools. I suspect that information is available in some form somewhere, because it's also important for individuals and companies using Trados or other tools to understand just how pointless and arbitrary this focus on word counts actually is. (So far I've avoided bringing up the problem of graphics and embedded objects so frequently found in certain document types and how few of the tools in common use are able to count the text in these, much less the effort to access, translate and re-integrate that text. I've talked about that enough on other occasions, so not now.)

So what's the agency game here? Well, in the case of my friend's concern, no more than an unconsidered resort to the wrong tool by a project manager under pressure and in a hurry, and once they talked about it, it became clear that matters would most likely get sorted out to nobody's disadvantage. Word counts, and the tools chosen to make those counts, can have a huge impact on translator compensation. This can be exploited systematically by unscrupulous agencies to screw their service providers thoroughly, and I suppose there are a few out there beside Pam the Evil PM in the Mox comics who plot such moves carefully.

However, I think it's usually a matter of ignorance, where a bit of education is all that is needed. Sometimes it's fear: I have heard some silly people tell me that they are aware of the problem but that quoting with accurate methods would inflate job costs to a level their price-sensitive customers cannot accept. Usually, though, this means that this person or those in her organization responsible for sales lack the communication skills to deal maturely with clients and help them understand what is reasonable and sustainable for a good business relationship. I seldom argue with such people. I note them on the list of Linguistic Sausage Producers and cross them off the list of viable partners for work, and when I hear later how they are circling ever nearer to that drain to the sewers I might offer a sad smile of understanding, but I have nothing more to give.

An agency that offers piece-rate quotation but does not even try to estimate the "pieces" and their relationship to the time required very likely does not have a sustainable business model. But it is probably no less sustainable than all the panting bilge one sees from the acolytes in the MT temple who don't realize they have been brought into the rituals to be relieved of their cash and goods by a greedy IT priesthood eager for another great scam to live off, like the old Y2K scare.

What do word counts matter when words will be free or nearly so, given to us by Machines of Ever Loving Grace in LQA-blessed near-perfection, requiring just a bit of post-editing time to be fit for purpose?

Ah, time. That's really the crux of the problem, isn't it? How much time will something take? Proper project management in which the inputs are measured and assessed correctly is critical to understand this regardless of whatever piece rates may or may not be applied. An agency owner recently mentioned a job he had to "translate" date formats into something like 14 different local flavors. He pointed out, quite correctly, that any word count, even an accurate one, was meaningless there. (And he revealed himself as a user of the old Trados by saying that the word count was "zero" anyway, which brings us back to the stupid logic of SDL Trados which began this discourse.)

I'm not an advocate of billing strictly by time. Yes, attorneys do that, but it's not really a viable model for all services all the time anyway, and one could fill a library with volumes of true tales about the abuse of the billable hour by law firms. Sometimes hourly rates make sense; sometimes the value, an intangible requiring some judgment and risk to estimate, matters more.

Time, value or meaningless commodity units (words, lines, pages or pounds of sausage): these will surely still be sources of consideration and dispute in the translation profession long after we are all dead. Until then, it really does pay to become more aware of current practice and its implications and remain alert so that it does not work to your disadvantage, even if the other parties are not deliberately playing a game.

Oct 6, 2013

OmegaT workshop in Holten (NL) November 18th!

OmegaT is free software, but time is money, and to use this excellent Open Source tool effectively and enjoy its many benefits, expert guidance can be enormously helpful. This software holds its own with commercial leaders such as SDL Trados Studio and memoQ in many respects and surpasses them in some cases (for example, in its ability to read embedded objects, including charts, in the Microsoft Office document formats 2007, 2010 and 2013).

On November 17 and 18, the Stridonium Holten Lectures will feature Marc Prior in a workshop for OmegaT, a professional computer-assisted translation environment originally developed by Keith Godfrey and currently maintained and extended by a team led by Didier Briel. It is available in nearly 30 languages and includes:
• fuzzy matching
• match propagation
• simultaneous processing of multiple-file projects
• simultaneous use of multiple translation memories
• user glossaries with recognition of inflected forms
Document file formats include:
• Microsoft Office: Word, Excel, PowerPoint (DOCX, XLSX, PPTX, etc.)
• other translation tool formats such as TMX, TTX, TXML, XLIFF & SDLXLIFF
• Open Document formats (LibreOffice, etc.)
• MediaWiki (Wikipedia)
• plain text
... and around 30 other file types as well as
• Unicode (UTF-8) support for non-Latin alphabets
• support for right-to-left languages (Hebrew, Arabic, etc.)
• an integral spelling checker
• MT integration with Google Translate
Marc has been a technical translator since 1988, working primarily from German to English. In 2002, he joined Keith Godfrey, the original author of OmegaT, to launch the program as an open-source (free) project. Since then, he has been involved in the project in various capacities, including 
• project co-ordination
• authoring of manuals
• localization co-ordination
• website management and
• programming of auxiliary tools
He is a frequent source of advice and support on OmegaT user forums, has contributed to the knowledge of the user community in many other online venues (including this blog) and has spoken on OmegaT at events in Germany and Belgium. He also introduced a module on computer-assisted translation tools, based on OmegaT, to the Professional Support Group (PSG) of the UK's Institute of Translation and Interpreting (ITI). He currently lives in Gelsenkirchen in the German federal state of North Rhine-Westphalia.

The day will start with a conceptual overview of OmegaT, followed by a session demonstrating a sample project and responding to questions from participants.

After lunch, the first afternoon session will present some extensions and advanced functions of OmegaT.

In the final session of the day, Marc will discuss “drawbacks that aren’t”, answer questions and debunk myths (appropriately entitled “Myths, FAQs and Workarounds”).

The workshop fee is €250 (€225 for Stridonium members), which includes a room at the venue for those arriving Sunday night, who can enjoy an optional networking dinner and get a fresh start at the teaching sessions the next day. The number of rooms included in the workshop fee is limited, so book early!

The workshop is designed for translators and language project managers interested in the many possibilities for using this Open Source tool in their work.

Further information and updates can be found on the Stridonium events page, which also includes a button link for registration and payment ("Register for the Holten Lectures 3") below the course description.

You can also follow @stridonium on Twitter and watch the hashtags #strido and #Holten for announcements.

CPD points have been applied for with Bureau BTV in the Netherlands.

Previous Stridonium workshops in Holten have included corpus linguistics with the NIFTY method last June (participant's report here) and the recent teamwork day, which presented ideas for overcoming distance in collaboration and for using free and Open Source technologies as alternatives or supplements to more restrictive, proprietary commercial server solutions (participant's report here).

Future event plans include legal English for the insurance sector with a UK attorney and a series of three workshops on legal English for contracts (April 28, 2014), legal drafting (May 19, 2014) and commercial law (June 2, 2014) with attorney Stuart Bugg. These events would also interest practicing attorneys and others involved in the drafting and revision of contracts.

How to get there:
From Deventer (A1)

- Take the A1 towards Hengelo/Enschede

- Exit 26: Lochem/Holten

- Turn left for Raalte, follow the signs for Holterberg

- Go straight ahead over the roundabout, turn right after the viaduct and left at the T junction

- Turn left at the roundabout and after 50 m take a right turn for Holterberg

- After approx 1 km turn right (at yellow building)

From Enschede/Hengelo (A1)

- A1 towards Deventer/Apeldoorn/Amsterdam

- Exit 27: Holten/Markelo

- Continue through the center of Holten, take the Holterberg exit at the roundabout and after 50 m take a right turn for Holterberg

- After approx 1 km turn right (at yellow building)

By train

- A 10-minute walk from the station (Beukenlaantje)

 - Let the organizers know when you arrive and either they or hotel staff will collect you!

Oct 5, 2013

Two years with an e-book reader

[Image: Amazon Kindle. Author: NotFromUtrecht (see link); the file is licensed under the Creative Commons Attribution-Share Alike 3.0 Unported license.]
Nearly two years ago I acquired my first e-book reader, an Amazon Kindle like the one shown here. I had various thoughts of using it professionally but was in any case delighted with the fact that I could read text without eyestrain on it, even without my reading glasses. Some colleagues shared their experiences, and one was kind enough to mention Calibre, which I use periodically to convert file formats for better use in e-book readers or other media.

So what's the score after two years? On the professional side not so hot, because other distractions have prevented me from exploring all the possibilities of converting reference data for use on my reader. It's possible, but I'm still tweaking the technology to get exactly what I want with formatted, searchable RTF and HTML from terminology exports from my many CAT tool termbases. I could do that all along without much trouble using SDL Trados MultiTerm and various XSLT scripts, but I went down the rabbit hole of trying to make these solutions more accessible to colleagues who don't like a lot of technical fiddling, and though I think the problems are solved, I haven't had time to share most of the solutions or implement them on a large scale myself.
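As a rough illustration of the kind of conversion involved (this is a minimal sketch, not my actual MultiTerm/XSLT workflow; the two-column layout and terms are assumptions), a tab-delimited terminology export can be turned into a simple HTML glossary, which a tool like Calibre can then convert for an e-book reader:

```python
import html

# Assumed layout: each row of the export holds a source term
# and its target-language equivalent.
def terms_to_html(rows, title="Glossary"):
    lines = [
        "<!DOCTYPE html>",
        "<html><head><meta charset='utf-8'>",
        f"<title>{html.escape(title)}</title></head><body>",
        f"<h1>{html.escape(title)}</h1>",
        "<table border='1'>",
    ]
    for source, target in rows:
        lines.append(
            f"<tr><td>{html.escape(source)}</td>"
            f"<td>{html.escape(target)}</td></tr>"
        )
    lines += ["</table>", "</body></html>"]
    return "\n".join(lines)

# Example with an in-memory glossary; a real export would be read
# with csv.reader(open('terms.txt', encoding='utf-8'), delimiter='\t').
glossary = [("Vertrag", "contract"), ("Kündigung", "termination")]
print(terms_to_html(glossary, title="DE-EN legal terms"))
```

The resulting file is plain HTML, so it remains searchable on the reader and can be restyled without touching the terminology data.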

I do read literature related to the translation profession with some frequency. Found in Translation by Jost Zetzsche and Nataly Kelly gave me many pleasant, entertaining hours in its Kindle version; attempts to read texts in PDF format by others have been less successful because of display issues, and the current version of my own book of memoQ tips is not a happy experience on a small black-and-white e-book reader. That has me thinking about what information might work in formats for e-book readers and smartphones, and it has also been one of the motivations for my recent experimentation with short video tutorials on YouTube. Not only should we consider the current trends in media such as e-book readers, tablets, smartphones and whatnot for our own professional learning and teaching needs, but also how our clients and prospects may use these media to create content which we might be asked to translate. This has already begun to happen with me in a small way, and those projects were possible only because of things I learned in my teaching experiments shortly before.

I also copy web pages into text or HTML files "to go" when I want to read up on a subject in the park while my dogs play or in a local café somewhere. My reader has a web browser, but many sites are difficult to view in a way that is friendly to a smaller screen. It's easier to grab what I want in separate files and organize these into a "collection" I can refer to easily later.
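The stripping-down step can be sketched with nothing but the Python standard library (a minimal illustration, not the tool I actually use; a real page would be fetched with urllib and would need encoding handling):

```python
from html.parser import HTMLParser

# Collect the visible text of a page, skipping script and style
# content, so it can be saved as a plain-text file for offline reading.
class TextExtractor(HTMLParser):
    def __init__(self):
        super().__init__()
        self.parts = []
        self._skip = 0  # depth inside <script>/<style>

    def handle_starttag(self, tag, attrs):
        if tag in ("script", "style"):
            self._skip += 1

    def handle_endtag(self, tag):
        if tag in ("script", "style") and self._skip:
            self._skip -= 1

    def handle_data(self, data):
        if not self._skip and data.strip():
            self.parts.append(data.strip())

def page_text(html_source):
    parser = TextExtractor()
    parser.feed(html_source)
    return "\n".join(parser.parts)

sample = "<html><head><style>p{}</style></head><body><p>Reading in the park</p></body></html>"
print(page_text(sample))  # → Reading in the park
```

The plain-text output reflows cleanly on a small e-reader screen, which is exactly what the original pages often fail to do.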

I never have done any proofreading or review with my Kindle, though I have used texts on it to translate manually (in a separate notebook) on occasion. However, that's not really compatible with most of the texts I work on.

What I have done most with my e-book reader is carry a growing library of world literature with me, familiar and unfamiliar, old works and some new. I still hear some people talk about how they could not imagine reading without the heft of the book and the feel of the paper pages turned by their fingers. I'm just as caught up in the sensuality of a dusty old library as any other obsessive bibliophile, but the heft and feel don't mean much when accumulated nerve damage means that the book is more a source of pain than pleasure after ten minutes in your hands, and my once excellent eyesight has now decided that its term is served and I can find my own way with small type and lousy lighting conditions: there, the e-book reader is a gift of great value.

Most important to me, however, are the words. The finest binding, gold-edged pages and elegant type mean nothing if the words mean nothing. Words of beauty and power are worth straining to read in weathered stone inscriptions, on crumbled clay tablets written before the founding of Rome or on crumbling acid-paper pages in books forgotten in an attic. How much better then to have these same words in a legible format on your reader in minutes after a short search in an online database and a quick download or a purchase and transfer.

The Velveteen Rabbit had the same nursery magic on the Kindle in the cantinho last night as it would on the delicate old pages of the original edition, but I didn't have to worry about spilling my sangria on it. In the two years since I received my Kindle I have re-read many books that were lost as my library of thousands was slowly dispersed in my many relocations. Hundreds of new books from classic literature in two languages have come to me, go with me in my small, black volume with its Cloud-based backup, and this library will likely not be lost again wherever I go and no matter how lightly I travel. 

Oct 1, 2013

RIP Yahoogroups :-(

No, the popular free Yahoogroups have not been eliminated. Yet.

However, the latest "improvements" introduced by the idiots at Yahoo responsible for such things may have killed the usefulness of this feature for many people.

Those who use or subscribe to these groups - which cover a range of subjects far beyond translation - will understand what I mean if they have had occasion to use the web interface lately. The new web design introduced a few weeks ago is radically different and disorienting, and I must now resort to e-mail functions if I want to contribute to a discussion. Many others seem to have similar issues.

Note to designers and developers: if it ain't broke, maybe you should keep your hands off it. Something might look dated but still be far more serviceable than your "clever" new design ideas.

I see this all as part of a trend to undermine years of progress in human/computer interface design. In recent years, programmers have been moving the close boxes on dialogs to unexpected places, perhaps to overcome well-trained user impulses to dismiss them before they are read. Often there is a commercial interest behind such a change. Or it might be pure, stupid ego. I am reminded of an arrogant SOB at a former employer of mine in Germany, a very intelligent software developer who had created an interesting document management module but decided one day that print functions belong under the Edit menu. WTF? He was completely deaf to all protests that the File menu was the standard, expected place for such things and felt that as a demigod he had the right to dictate to thousands of others that they must bow to his superior will.

This is not a useful way to work. We programmers are not gods, though at times I'm not sure that all of us are human. (Most are in the ways that count, fortunately.)

When I think of the great contributions these online groups have made to our translation profession, providing support communities for the software we use, for our discussions of language, the payment practices of clients and much more, I am very saddened to think that this might all be lost now because of the stupid redesign by Yahoo. But Yahoo is a troubled company and may be on the way out, so perhaps we need to think about ways of preserving these information archives if they are worth keeping and look to new fora for the future. Google+? Maybe. But despite all the enthusiasm of Guy Kawasaki and the interesting things I am discovering in that integrated platform, I do not yet see it as offering structured discussion groups in quite the same useful way. I hope I'm missing something obvious.

Apparently Yahoo is receiving many thousands of complaints, access for those with disabilities no longer works, and there are many new problems with technical features and spam. But why should that matter to demigods? "Neo", as the new mess is called, is here to stay according to Yahoo.

There are many discussions of the problem out there. Most I find depressing. Don't bother. 

Time to vote with our feet?

Translation philosopher and controversialist Miguel Llorens

It's just a bit over a year now since our profession lost Miguel Llorens, a young man of extraordinary insight and wit, who offered some of the most effective challenges I know to some of the stupidity threatening the integrity of our work. I can only imagine how greatly his absence must continue to be felt by his family and friends, and I am reminded very, very often in the present time of his contributions to debates that continue.

Now a colleague has kindly collected Miguel's public writings in English and Spanish and the associated commentary and made these available in various free e-book formats. Details can be found here on the Méteme blog.