Sep 29, 2012

We are the wind beneath their wings

I've been translating now for some 35 years, though for the first two decades and a bit more I did so to support my personal interests and research rather than to pay my way. But I've kept a close eye on many developments in the profession since my student days, when I dabbled in Russian to German translation and a bit of Sumerian, and later, when I was married to a sometime translator, I watched as technology began to have a greater impact on the profession. I can even plead guilty to having introduced modern computer-aided translation technology to an unsuspecting corporate translation department, which was dissolved about a year later as the company realized that outsourcing kept the translators on their toes and prevented some too-frequent abuses.

There are many advantages to having an in-house translation team. Those who are part of one or who have one in their organization can likely cite these better than I can. One of the greatest advantages I see is continuity: the translators and editors know the products and the people who create them and usually use terminology more appropriately and represent product characteristics more accurately. Let's not talk about the horrors of L2 translation (into one's second, third or fourth language) imposed by management that often does not understand linguistic competence and how best to use it in the corporate interest.

Some understand these matters, at least in principle, and rely on experts to guide and support them. For a very long time, translation agencies have been part of this network of experts. There are good ones and bad ones, and I have known very many good ones, and I continue to support some of these as a translator and consultant. But the raw truth is that most of the time, using a translation agency is a poor substitute for the relationship needed to sustain or even enhance one's foreign language communication. And many of my experiences in the past few years lead me to the conclusion that even the best agencies as they are now operated are not in their clients' best interests.

I think that Once Upon A Time the translation business was relatively straightforward and did not challenge the business skills of many translators and agency operators much. There were - and continue to be - great inefficiencies, but in the manure of that inefficiency a good amount of healthy expertise flourished and nourished those who called on our profession for assistance. Expectations were perhaps often lower (at least with regard to schedules) and I imagine they were often exceeded. Perhaps not; old hands can enlighten me.

Now, however, one sees a commoditization of our "product" - expression in other languages and cultural contexts - and a widening gap between the linguistic "haves" and "have nots". Experienced agencies, whose management has usually long since ceased to live from their personal labor of words, have jettisoned in-house translation competence and rely on a juggled network of external language service providers (often referred to as freelancers) to do the work, while young, inexperienced clerks shuffle schedules, quote projects by outdated and/or inaccurate means, and off-load anything requiring technical competence to other "departments" or to the external translators themselves.

It's a dirty secret that the pool of truly good translators and revisers is very limited in many specialties, even in the most common language pairs. Or at least the pool from which dozens or hundreds or even thousands of agencies draw is very small. Many of us have been asked to revise our own work by an agency which was not contracted for the original translation. Conversations with agency owners and product managers in organizations with high standards reveal time and again that they use the same list of a half dozen or fewer people for their best work involving law and science. So even if you, the corporate translation consumer or private individual with translation needs, choose the best agencies around, there is absolutely no guarantee that they will get the best translators for you. In fact, very often your work will be auctioned off on public sites like human exports from Africa used to be about two centuries ago, or it is pandered in mass mailings and given to "first response linguists" who might be good at some things but not at yours.

What can be done about this? A lot actually. Most of us wouldn't think of relying on streetwalkers for our social support and personal health, and building a relationship with linguists who will loyally and competently provide you with the best service you can hope for is easier than you might imagine. It starts by finding and connecting with the right people. And understanding the fair value for the services you require.

The mangled words in the picture above were photographed on the wall of an exhibitor's booth at the recent Security trade fair in Essen. I had my camera with me and, as I walked the floor with my copywriting partner, I could have taken similarly appalling photos at nearly every display in nine full exhibition halls. Some of these were the result of in-house "expertise"; many were the unhappy result of being serviced by agencies and their outcall "translators". Imagine the damage to corporate images!

There is a reason why premium freelance language service providers like Chris Durban have long since passed cent-per-word benchmarks for piece-rate translation that many of the better agencies consider unachievable, and why others I know charge on a schedule similar to a good attorney's. Smart companies and individuals understand that good communication is critical to surviving a difficult market.


After the day at the trade fair, I talked to an agency friend who told me how happy his end client (an exhibitor there) was with recent work I had provided - how much better the service was than anything the company had received before. Small missteps, like giving earlier projects to a cheaper, less experienced translator after I had helped to acquire the client in the first place (and been promised the business into English), were forgotten. And I imagine that future relations with this valuable manufacturer of high-quality safety equipment will soar as they should with a competent team. Later I spoke to a freelance copywriter and told her how, in the current market, I often picture many good translation agencies as biplanes, trailing a bit of smoke and losing altitude, coming close to the trees or to crashing into a hillside. Then they are borne up suddenly by an updraft: a competent translation or revision from a skilled linguist with an understanding of the field and a sensitivity to the psychological nuances of words. They rise and fly safely for a while until once again that saving wind is needed. She thought about that image, and about her future, and told me, "This draught of air isn't drafting any more."

Sep 28, 2012

Miguel Llorens RIP

I just learned that we have lost a colleague I very much looked forward to meeting next week for the first time. Miguel Llorens passed away about two weeks ago. His blog had been silent since mid-August and nothing had been seen from him on Twitter since the beginning of this month. Quite unusual for a man who readily and frequently shared his insight and wit on many relevant controversies in our profession.

We who are his peers in the profession can never know the loss of this man as his family and friends surely do, but I saw him as a man of integrity who did not hesitate to call out naked emperors and other charlatans, and he often did so with a razor wit and a depth of analysis that I cannot match on my best days. He entertained me, he informed me, and he often inspired me. He was a man worth knowing.

Farewell, Miguel. Vaya con Dios.

Sep 16, 2012

Equivalent Rates in Translation Billing

This article appeared in its original form in September 2008 on an online translation portal. It has been moved here for better maintenance of the content and links. The original text has been modified slightly and the link to the quotation tool updated. The rate equivalence calculator is part of the Sodrat Suite for Translation Productivity. An older post on this topic includes links to other tools for calculating rate equivalency.


*****

Billing questions are seen very frequently in the forums of popular translator portals or professional association sites. Often these concern whether translations should be billed by the word or agglomerated words (thousands or hundreds), lines, pages or other units in the source or target text. The answers to these questions reveal a wide range of approaches, which are often dictated by local convention, habit or simple fear. Often it is claimed that a particular method "does not make sense" because of compound word structures or other issues (which is frequently claimed for my language pair, German to English, for example). These claims are interesting, because other successful translators with the same language combinations often view things very differently. Who is right?

Before answering that question, let’s ask what sort of an answer can be trusted most. Would you stake your business on a “gut feeling” recommendation of another translator or translators when many others argue just as vehemently without evidence? Or would you feel more secure using a mysterious online calculator on an agency web site which purports to be based on a large body of text in the respective languages? Or would you prefer to see hard numbers based on your own translations that you have carried out in a number of different fields for various customers? Which approach do you think would give you the best basis for making your business decisions and protecting yourself against getting shortchanged in pricing?

I prefer to deal with real data from my own work. It not only reflects what I have been translating but also what I am likely to be translating in the next year or two. If I make a radical change I can quickly check and make sure that my pricing model is still “safe”.

To answer my own pricing questions, I created a spreadsheet in Microsoft Excel, which allows me to enter the actual data from individual projects and see what the relationship between target and source text pricing would be for words, lines and pages. You can have a copy of this spreadsheet to use yourself by downloading it here.

Because the true measure of price appropriateness is reflected in what one actually earns for the time spent working, a tracking calculator for the hourly earnings on each job was included in that spreadsheet to serve as a "control", but here I would like to focus on the relationship between different unit costing approaches and how to “adjust” prices to the units your prospect or customer is most comfortable with.

When I entered actual data from two types of translation work that I do (one specialized, the other a general category), I discovered some interesting things. I set up my spreadsheet to calculate the relative standard deviations (RSD) of the data, and what I found was that these were generally under 5%. What does this really mean? It means that if the data for your translation jobs is normally distributed, about 68% of the time the “actual” price relationship between two methods will fall in a “band” of plus or minus one RSD around the average, 95% of the time the actual relationship will be plus or minus two RSDs and 99% of the time it will be plus or minus 2.5 RSDs.

Still confused? Here’s a specific example from the downloadable spreadsheet:

If I want to earn € 0.15 per target word for a chemistry translation but the customer I am dealing with wants me to bill in source lines (allowing a “fixed price” to be calculated in advance – line rates are also common in Germany), I enter my desired rate in the little calculator table in cell B20. In cell E20 I see that I need to charge about € 1.27 per source line of 55 characters. How reliable is that figure? The relative standard deviation of the source line to target word ratio is just under 5% (some versions of the calculating spreadsheet in circulation do not include this figure, but it is accurate). This means that 99% of the time if you use this pricing, the “worst” real target word rate you achieve will be about 13.1 euro cents and the “best” you’ll do will be about 16.9 euro cents. If you work consistently with this pricing strategy your average earnings will be 15 euro cents per word. In many cases the bandwidth of variation will be much narrower than the example I have presented here. Compile your own data and see.
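
For anyone who prefers to see the arithmetic spelled out rather than buried in spreadsheet cells, here is a minimal script sketch of the same calculation. All the figures in it are invented sample values, not data from my spreadsheet; substitute your own averages and RSD.

' Rate equivalence sketch: convert a desired target word rate into an
' equivalent rate per source line of 55 characters, using averages from
' your own job data. All figures below are invented for illustration.

targetWordRate = 0.15         ' desired earnings per target word (EUR)
avgTargetWords = 2500         ' average target words per job (sample value)
avgSourceLines = 295          ' average 55-character source lines per job (sample value)
rsd = 0.05                    ' relative standard deviation of the ratio (about 5%)

' equivalent line rate = word rate x (target words / source lines)
lineRate = targetWordRate * (avgTargetWords / avgSourceLines)

' roughly 99% of jobs should fall within +/- 2.5 RSD of the average
worstWordRate = targetWordRate * (1 - 2.5 * rsd)
bestWordRate  = targetWordRate * (1 + 2.5 * rsd)

WScript.Echo "Equivalent source line rate: EUR " & FormatNumber(lineRate, 2)
WScript.Echo "Effective word rate band: EUR " & FormatNumber(worstWordRate, 3) & _
             " to EUR " & FormatNumber(bestWordRate, 3)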

“But what about the exception?” the fearful translator might say. One indicator of trouble for my language pair, if I am using a source word pricing strategy, might be an unusually low source word to source line ratio, which would indicate the likely presence of very long compound words in German. What do you do in a case like this? Raise your price if you feel like it. Use real data to show that this text really is different and must be priced differently from other work. Not everyone will agree with this idea, but you may have more success with it than you would expect. The important point here is that, by tracking actual data from your own work, you have a much clearer understanding of when rates may need adjusting.
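
As a small illustration of that indicator, the fragment below compares a new text's source-words-per-line ratio against a historical average and flags a suspiciously low value. The sample figures and the 15% threshold are arbitrary choices for illustration, not recommendations.

' Compound-word check sketch: a words-per-line ratio well below your
' historical average suggests long German compounds. Sample values only.

historicalWordsPerLine = 7.5  ' average source words per 55-character line (sample value)
newTextWords = 1200           ' source word count of the text being quoted
newTextLines = 200            ' 55-character line count of the same text

newRatio = newTextWords / newTextLines

If newRatio < historicalWordsPerLine * 0.85 Then
    WScript.Echo "Ratio " & FormatNumber(newRatio, 2) & " is unusually low - " & _
                 "likely long compounds; consider adjusting the word rate."
Else
    WScript.Echo "Ratio " & FormatNumber(newRatio, 2) & " is in the normal range."
End If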

When you examine your own data you will find that the actual variation in earnings between the calculation methods presented is small in most cases, at least for European languages. If this is not the case for your language, then you will have hard numbers to use in your quotation. Negotiations based on fact often work better, though all this can be greatly outweighed or offset by psychological factors.

By using the rate equivalence spreadsheet or creating your own similar tool, you can navigate the hazards of various quotation methods with greater confidence, quickly determining equivalent rates in the units expected by prospects and customers. This will ensure that you reach your average earning targets and achieve the same average hourly earnings as with your familiar unit pricing. You’ll know how much you have to raise your word price or how much margin you have to reduce it if you are asked to quote by source word instead of target word or vice versa. Now get down to business.

The Sodrat Suite: delimited text to MultiTerm

The growing library of tools in the Sodrat Suite for Translation Productivity now includes a handy drag & drop script sample for converting simple tab-delimited terminology lists into data which can be imported directly into the generations of (SDL) Trados MultiTerm with which we've been blessed for more than half a decade.

Many people rightly fear and loathe the MultiTerm Convert program from SDL, and despite many well-written tutorials for its use, intelligent, competent adult translators have become all-too-frequent callers on the suicide hotline in Maidenhead, UK.

Thus I've cast my lot with members of an Open Source rescue team dedicated to squeezing a little gain for the victims of all this pain and prescribing appropriate remedies for what ails so many of us by developing the Sodrat Software Suite. The solutions here are quick, but they aren't half as dirty as what some pay good money for.

The script below is deliberately unoptimized. It represents less work than drinking a cup of strong, hot coffee on a cold and clammy autumn morning. Anyone who feels like improving on this thing and making it more robust and useful is encouraged to do so. It was written quickly to cover what I believe is the most common case for this type of data conversion. An 80 or 90% solution is 100% satisfactory in most cases. Copy the script from below, put it in a text file and change the extension to VBS, or get the tool, a readme file and a bit of test data by clicking the icon link above.

To run the conversion, just put your tab-delimited text file in the folder with the VBS script and then drag it onto the script's icon. The MultiTerm XML import file will be created in the same folder, using the name of the original terminology file as the basis of its name.
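
To make the data formats concrete, the first block below is a tiny invented glossary in the tab-delimited form the script expects: a header line with the language names, then one term pair per line, with the columns separated by tabs. The second block sketches the kind of entry the script writes for each row. The exact language names and attribute values your MultiTerm termbase accepts depend on how it is defined, so treat this as an illustration and adjust the header accordingly.

English	German
torque wrench	Drehmomentschlüssel
safety valve	Sicherheitsventil

<conceptGrp>
	<languageGrp>
		<language lang="English" type="English"/>
		<termGrp>
			<term>torque wrench</term>
		</termGrp>
	</languageGrp>
	<languageGrp>
		<language lang="German" type="German"/>
		<termGrp>
			<term>Drehmomentschlüssel</term>
		</termGrp>
	</languageGrp>
</conceptGrp>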

Drag & Drop Script for Converting Tab-delimited
Bilingual Data to MultiTerm XML

ForReading = 1
Set objArgs = WScript.Arguments
inFile = objArgs(0) ' name of the file dropped on the script

Set objFSO = CreateObject("Scripting.FileSystemObject")
Set objFile = objFSO.OpenTextFile(inFile, ForReading)

' read first line for language field names
strLine = objFile.ReadLine
arrFields = Split(strLine, chr(9))

outText = "          "UTF-16" & chr(34) & "?>" & chr(13) & "" & chr(13)   
   
Do Until objFile.AtEndOfStream
 strLine = objFile.ReadLine
 if StrLine <> "" then
  arrTerms = Split(strLine, vbTab)
   
  outText = outText & "" & chr(13)
      for i = 0 to (UBound(arrTerms) )
        outText = outText & chr(9) & "" & chr(13) & chr(9) & chr (9) _
                   & "" & chr(13)
        ' write the term
        outText = outText & chr(9) & chr (9) & chr (9) & "" & _
               arrTerms(i) & "
" & chr(13) & chr(9) & "
" & chr(13)
      next
  outText = outText & "
" & chr(13)
 end if
Loop

outText = outText & "
"
objFile.Close
outFile = inFile & "-MultiTerm.xml"

' second param is overwrite, third is unicode
Set objFile = objFSO.CreateTextFile(outFile,1,1)
objFile.Write outText
objFile.Close


A new look for MultiTerm XML data in memoQ & Trados

Recently I got back to testing suggestions made last year for improving the quality and usability of terminology data from memoQ and Trados MultiTerm. With a bit of a refresher for rusty XSLT skills and brilliant help with sorting challenges from Stefan Gentz of Tracom, things are now looking quite promising. Here's a first look at early results for HTML conversions of term data in MultiTerm XML format:


This approach, perhaps including other useful conversions, will be included in the chapter on "possibly useful scripts and macros" in the tutorial guide memoQ 6 in Quick Steps, which will be released very soon with over 200 pages of productivity suggestions for the latest version of Kilgray's desktop technology.

Sep 11, 2012

memoQ for Trados Studio users

Guest post by Jayne Fox

Why would a Trados Studio user be interested in memoQ? For me, I was intrigued by what I’d heard about improved matches from the translation memory.

About a year ago I started a big project that involved updating a whole lot of text that I’d previously translated. The problem was that the text formatting had got a bit mangled so I was only getting about a 25% match rate with my translation memory.

I had a hunch that memoQ would find a lot more matches with its translation memory-driven segmentation. I gave it a try and I was right – the match rate went up to a massive 45%. Here are some of the things I’ve learned as a Trados Studio user who dabbles with memoQ.

Starting out with memoQ 
Firstly, there are different editions of memoQ to suit different needs. I use Translator Pro, but there’s also a Project Manager version and a memoQ 4 free edition that you can try. There’s more information on the different editions of memoQ here.

You can download a handy Quick Start Guide to memoQ from the Kilgray website.

When you start out with memoQ, you’ll need to export any existing translation memories to TMX format and import them into memoQ. You’ll also need to import any termbases, which can be done via a delimited (e.g. CSV) format.

There are good instructions on setting up a project in memoQ here. It’s very easy to create a new translation memory or termbase when setting up a project, and you don’t need a separate terminology application as it’s all built in.

One thing to note about memoQ projects is that, unlike Trados Studio, it doesn’t use project templates. An easy way to get around this is to set up a project for a particular client, and just add or remove files from this project.

Some things are pleasantly easy in memoQ:
  • Inserting matches or terms into the translation: these are displayed in a numbered list of suggestions and you can insert any of them using CTRL+the suggestion number. 
  • Inserting tags: CTRL+F9 inserts the next inline tag from the source segment. (The keyboard shortcuts are configurable so you can change them to whatever you’re used to.) 
  • Joining and splitting segments: use CTRL+J to join segments and CTRL+T to split them. (It’s great to have the freedom to do this!) 
LiveDocs is another great thing about memoQ. You can add source and target versions of reference documents here and align them for use during translation. You can also add monolingual reference documents, media files and previously translated documents.

However, some things don’t work quite like Trados Studio:
  • You can review your translation and set the segment status to reviewed, but to do this you need to set yourself up as a project manager for the particular project. This is a bit easier in Studio. 
  • The Translator Pro edition doesn’t have a sign-off status like Trados Studio. To get around this you can change the reviewed status back to translated and then review the translation again. 
  • If your source document is a PDF you can translate the text in memoQ, but it’s handled without formatting, i.e. it’s just text. So you may need to add the formatting, or use OCR software. Studio takes a different approach and can handle formatted PDFs reasonably well, as long as they’re not too complex and don’t require OCR. 
  • Lastly, track changes works a bit differently in memoQ. You can compare versions at any stage, but it’s less intuitive than the MS Word-style change tracking in Studio. 
Overall, I’ve found memoQ easy to use and great for translation memory matches; it’s a stable, reliable tool that’s a pleasure to translate with.



About the author: Jayne Fox is a German to English translator for business and IT. She has a background in science, training, technical writing and management and has worked as a translator since 1996. She blogs at Between Translations, and you can follow her on Facebook or Twitter.

Sep 7, 2012

Translation tech newsletter deal until September 9th!


Jost Zetzsche's Tool Box Newsletter has been one of my professional information staples almost as long as I have been a professional translator. The wide range of information - reviews, technical advice and other tips - has guided me in many important decisions on how best to use technology in my work.

This source is also about as neutral as you will find in the field. Though Jost has his own personal preferences for working tools, he does an excellent job of looking at all the options objectively and reporting strengths and weaknesses as well as can be expected of a mere mortal trying to keep an overview of very dynamic technology.

The newsletter comes in two editions: 
  • The Premium one has everything and is packed with technical tips and tricks of the trade. It also comes with access to the whole archive of newsletters going back to 2008. 
  • The Basic edition of the Tool Box Newsletter is a FREE subscription that includes fewer articles and features than the premium edition but is still an extremely useful publication.
Those who subscribe to the Premium edition by September 9, 2012 can decide for themselves what to pay. The usual cost of an annual subscription is USD 25.00 - and worth several times that.

If you want to keep abreast of the latest developments and intrigues in the translation "industry" and benefit from insightful commentary, take advantage of Jost's promotion. This newsletter also makes an excellent gift to clients and colleagues; I gave a subscription to a favorite client earlier this year. 

Sep 6, 2012

TM-Europe 2012: managing translation, not memories


Last year I attended the annual international translation management conference in Warsaw (TM-Europe) for the first time. Other than a guest post on the XLIFF symposium, the event wasn't mentioned on this blog: shortly afterward my dog nearly died of babesiosis from a tick bite, and I had other things to think about for a long while, so many other tidbits, such as My Dinner with Andrä (well, lunch, actually), never made it to publication at the end of 2011.

Despite the silence, there was much worth telling about the event in Poland. I went on a lark because I had never been there before and wanted to see a few of the attendees I knew, and I was greatly surprised by what was possibly one of the most personally interesting conferences for translation I have ever attended. Despite the word "management" in its title, the presentations and discussions were very relevant to my freelance business as well as to agency and corporate attendees.

One of the highlights of the conference for me wasn't a presentation, but a conversation with one of the great creative developers for translation tools, Daniel Benito of Atril, whose insights and ideas on terminology mining still have my mind spinning with fascinated speculation on their feasibility and potential. Yves Champollion, the creator of WordFast - a product I don't like much at all - gave a presentation so fascinating that I actually found myself testing that suite of products again and developing interoperability solutions for agencies in my consulting clientele. The quantitative research on MT post-editing presented by Indra Samite was excellent, though I thought the conclusions overly optimistic and unsupported by the data, which rather spoke for itself. Paul Filkin's presentation on SDL Trados Studio 2011 was absolutely superb and helped me to understand an environment with ergonomics that often leave me baffled. The networking opportunities were outstanding, with leading experts, developers, professional organization heads, corporate language specialists, translation agency personnel, freelancers, consultants and others from around the world mingling and sharing ideas in a personal atmosphere.

The 2012 conference has the future of translation and localization as its theme. But don't expect the future vision of dark, Satanic post-editing mills propagated by the Common Nonsense Advisory and TAUS acolytes. Like last year, machine translation is part of the program, but here there will be an open discussion about whether its role is not being oversold just a wee bit, with MT advocates trading polite, professional blows with skeptics such as Miguel Llorens who see the future as having more of a human component. Freelance transcreator and consultant Chris Durban will speak on the mass market versus the premium one and why you should care about the difference for your future.

Mark Childress of SAP will speak on the future of terminology management. Given his deep expertise in that field, I would go to Warsaw on a bicycle to hear him. Fortunately, my carpool arrangements in a colleague's van newly converted to LPG promise to be more fun and comfortable. And my dog hates riding piggyback on the bike.

Have a look at the event program for 2012 (with information still being added). It has a good balance of presenters from all sides of translation and localization and promises once again to be an excellent forum for exchanging ideas to do our business better.

Early bird conference registration closes tomorrow (September 7th - not the 5th as stated on the web page... the organizers do an excellent program, but the written pre-conference communication this year has been a little confusing sometimes). However, registration is possible until the event itself as far as I know.

The conference will be held from October 4th to 5th; on October 3rd there will be a pre-conference workshop on project management and an evening welcome reception sponsored by Kilgray.

Fun with the memoQ 6 Client API!

The release of Build 55 of memoQ version 6 included a client application programming interface (API) for the first time. It is currently available only in the project manager edition of memoQ, which is really a shame, but I look at this as a good start nonetheless. I have wanted this API for years, and a mere 6 months before it was released Kilgray's chief developer swore that it would never happen, because the work involved was monumental and the effort had no perceived payoff. Well, events unanticipated on that cold February night in Budapest changed that perception, and this is an excellent start for a great tool that was never supposed to happen. The current scope is pretty much limited to project preparation, analysis and TM manipulation, but even there much can be found to simplify the lives of some translators and corporate users with automation.

A simple snippet of script code for exporting a TM to TMX has been circulating for a while now as a VBA macro to run from Microsoft Word, for example. Personally, I object to running something in MS Word that has nothing to do with that program, so I recoded it as an executable script and added a few extra tweaks:
tmFolder = InputBox("Which TM should be exported?")
if tmFolder <> "" then

' The path where all my memoQ TMs are stored
standardTMpath = "C:\ProgramData\MemoQ\Translation Memories\"

'build absolute paths
outputTMXfile =  ".\" & tmFolder & "_" & date() & ".tmx"
tmFolder = standardTMpath & tmFolder

Set fact = CreateObject("MemoQ.ClientService.ServiceFactoryScripting")
Set tmService = fact.CreateTMService
Set createTMRes = tmService.ExportToTMX(tmFolder, outputTMXfile)
if createTMRes.Success = False then
   MsgBox createTMRes.ShortErrorMessage
else
   MsgBox "The TM was exported."
end if

end if
Just copy that script into a text file, rename the extension to *.vbs and you have a double-clickable script to export a TM without opening memoQ. The TMX export is placed in the same folder where the script is executed and tagged with the date of the export.

Encouraged by this little test, I went on to tackle one of my pet peeves: the lack of multi-file import capabilities in memoQ TMs. Trados Studio has no problem importing a folder full of TMX files to a TM in one go, but with memoQ one must import each TMX file - painfully - one at a time. The pain is felt quite severely if, for example, you are a former OmegaT user with a legacy of 300+ TMX files from your old projects.

So I wrote another little script which allows me to drag and drop any number of TMX files onto its icon and have them all import to the specified TM. This is a rather crude example for just one set of hard-coded sublanguages (DE-DE and EN-US). The API currently does not allow sublanguages to be ignored for the import. Adapt this to use your relevant sublanguages if you like:
'
' memoQ TMX import macro
' drag & drop TMX files onto the script icon
'
tmFolder = InputBox("To which TM should the TMX file(s) be imported?")
If tmFolder <> "" Then

' The path where all my memoQ TMs are stored
standardTMpath = "E:\Working databases\MemoQ\TMs\"

'build absolute path
tmFolder = standardTMpath & tmFolder

' Create the ServiceFactoryScripting object and TM service
Set objSFS = CreateObject("MemoQ.ClientService.ServiceFactoryScripting")
Set svcTM = objSFS.CreateTMService

' Set import options parameters
Set objImportOptions = CreateObject("MemoQ.ClientService.TMImportOptionsScripting")
objImportOptions.TMXSourceLanguageCode = "ger-de" 
objImportOptions.TMXTargetLanguageCode = "eng-us"  
objImportOptions.TradosImportOptimization = False
objImportOptions.DefaultValues = Null
objImportOptions.DefaultsOverrideInput = False

  Set objArgs = WScript.Arguments
  For I = 0 To objArgs.Count - 1
    tmxfile = objArgs(I)
    logFileName = tmFolder & "_" & date() & "_" & "importlog." & I & ".txt"

    Set returnvalue = svcTM.ImportFromTMX(tmFolder, tmxfile, objImportOptions, logFileName)
    If returnvalue.Success = False Then
        MsgBox returnvalue.ShortErrorMessage
    Else
        MsgBox "No errors in the import of " & tmxfile & ". See the log file at: " & logFileName
    End If
  Next

end if

Since I once wasted a full day importing a big load of TMX files to memoQ (before I got the bright idea to use The Other Tool as an intermediate step), I was so delighted to get this script working that I just kept making new test TMs and running it long past the point where there was anything left to prove. It's just a thrill to know that consolidating my TMs is now much, much simpler!

These are just a few of the myriad simplifications of one's processes that are possible with the new API. I expect most of its applications will be in "real" programs written by real programmers in "real" languages, not wimpy, half-baked scripts like the ones I've thrown together and shown here. Still, this API has potential benefits for a great number of ordinary memoQ users, I think - especially if little productivity tips like these are shared. So I do hope that, at some point, access to the client API is expanded to include the memoQ Translator Pro edition!

Sep 4, 2012

memoQ 6.0.55: The Great Leap Forward with a Client API

Yesterday in the Yahoogroups forum, Kilgray's COO quietly announced the release of a new build of memoQ, which contains some very significant additions and improvements.
Important: memoQ 6.0.55 released 
Mon Sep 3, 2012 12:12 pm (PDT). Posted by: "Istvan Lengyel" 

Hi All, 

Sorry for the long silence since the previous memoQ build - we had something in the making. memoQ 6.0.55 was uploaded to our website today, you can download and install it. This build now supports 64-bit installation, however, we have an issue with AutoUpdate which we may or may not be able to solve (it's third-party software), so for the time being you have to install 6.0.55 yourself from the website, and it may remain so in the future if we can't fix this. Therefore AutoUpdate is not available for an indefinite amount of time. Besides numerous bugfixes, there is new functionality added: 
  • - the long-awaited SDLXLIFF filter
  • - a client-side API - only for users of the project manager edition (hello Paul :)), 
  • - on the server side, the possibility to use FirstAccept from content-connected projects. 
We did not release this as a new version as the number of features does not qualify for a full new upgrade. I hope it will meet your expectations. 

István 
There were actually many other improvements: a great number of fixes to bugs in the concordance and LiveDocs, which had been driving me nuts. Also, the performance for importing very large XLIFF files (think EU DGT scale!) was improved by an order of magnitude, though to see the full benefit one needs a 64-bit operating system and lots of RAM.

The SDLXLIFF filter should be helpful to the many memoQ users who translate files created in SDL Trados Studio. It has been possible to read this format since it first appeared using the standard XLIFF filter, but this new filter offers better results.

I am particularly excited by the possibilities offered by the new client application programming interface (API). This will enable certain functions of memoQ to be run from other applications, even when memoQ is not active. Available features include analysis and TM functions; I've seen three simple lines of macro code in Microsoft Word that will export a memoQ TM to TMX, for example. I think this will lead to many interesting extensions of memoQ functionality and automation. Note that the API is only available in the Project Manager edition, but the additional cost of that version is less than I paid for my Déjà Vu X Workgroup upgrade years ago, and for those who work with multiple target languages or who need additional features for outsourcing and collaboration, an upgrade to memoQ Project Manager makes sense anyway.

And who knows? With rumors of SaaS resources for memoQ on the horizon, I can imagine more reasons to upgrade from memoQ Translator Pro.

Addendum: The documentation for the Client API is found at C:\ProgramData\MemoQ\SDK

Polish colleague Marek Pawelec also commented in the Yahoogroups list:
I'm happy to report that in version 6.0.55 you can import .sdlxliff files with mapping segment states without any hassle (see SDLXLIFF tab in filter settings) and if appropriate option is selected (States tab, Map memoQ states to XLIFF states on export, select SDLXLIFF in Source drop-down), memoQ states are properly mapped back to Studio states.

memoQ 6 desktop: working with other memoQ users



The best methods for memoQ desktop editions to work with other memoQ users are influenced by the versions of the software you and others use. If these versions are compatible, project information can be shared fully, including previews and status settings for the translation. Otherwise, many of the compromises of working with other translation environments apply.

Bilingual exchange files





Bilingual exchange files are generated via Project home > Translations > Export bilingual. If the other person also uses memoQ 6, the best and "friendliest" option is to create a memoQ XLIFF and include the skeleton and preview. The "skeleton" allows target files to be created. For earlier versions of memoQ, a simple XLIFF with the extension changed to XLF or one of the other bilingual formats will do. In memoQ 6, bilinguals are imported using the Import command (and are recognized automatically); in earlier versions, the Import/update bilingual command is used. The tags will always be respected.

If the bilingual DOC format is used for exchange, the finished work must be exported via Export bilingual. Other export commands produce monolingual target documents. The least complicated format to use for users of memoQ 4.2 to 5.0 is the two-column RTF.

Version 6 TMs and termbases are fully compatible with Version 5 of memoQ, and data can be exchanged with all versions via TMX and delimited formats.

Project backups

A fully configured project with all settings, translation files, TMs, termbases and corpora can be sent by creating a backup. On the memoQ Dashboard, select the project and click Backup selected. Warning: backup files can be very large, so you might want to detach very big TMs first, for example. And of course this requires the same version of memoQ to be used.

Handoff packages (PM version only)

If translations have been assigned by name in Project home > Translations, handoff packages for translators and reviewers, including necessary resources, can be created after running a check on Project home > Overview > General > Handoff checks. This, too, requires the same version of memoQ.

Another point to consider: memoQ versions 5 and 6 can co-exist on the same desktop computer, so if you need to continue working with clients who have the version 5 server, for example, there is little to stand in the way of upgrading to version 6. The only real difficulty might arise if you want to attach corpora you have migrated; this may require restoring the LiveDocs corpora from the backup of the old version or creating a new one.