Showing posts with label workflow. Show all posts

Aug 5, 2021

Workflow Wednesday: Getting started with memoQ templates

 

Recorded Aug 11, 2021

It has been more than seven years since memoQ introduced project templates, and although templates are the default method of project creation when the New Project icon is clicked on the Project ribbon, most users stick with the examples provided and venture little beyond them, or they use the old Project Wizard and avoid templates altogether. It took me some years to really get my head around the use of project templates in memoQ, and the fully configured sample templates included with the installation, built to specifications seldom aligned with my needs, were not particularly helpful.

When I finally did understand how templates could revolutionize my productivity in local and online projects, I responded to help requests from some LSP consulting clients by providing fully configured templates to address all the problems they listed with the often complex needs of their high-volume clients. And to my surprise, most of these configurations went unused. The project managers were simply overwhelmed. As I had been for nearly six years.

And then a colleague's request for help with a filter for a package type not included in memoQ's standard configuration opened my eyes to the importance of simplicity. I had to use a template for that particular challenge, and that template did exactly one thing: it allowed easy import of GLP packages full of TXLF files.

A weekend of training with project managers from a local LSP showed that this approach could clear up the confusion so often caused by an immediate confrontation with "kitchen sink" templates as an introduction. When the team members each shared the "just one thing" they wished for to make their work easier and saw how simply that one thing could be accomplished, they quickly understood the value of templates and were soon able to build more sophisticated ones as their confidence grew and they dared tread just a bit farther. Step. By. Step.

So this webinar took a different approach to templates than you have probably seen so far, emphasizing simplicity and simple needs as a foundation for robust processes and automation. I had no intention of talking about all the myriad options for configuration and automation, though some of these were discussed in the Q&A. This talk is for people who are confused by templates. Who think they aren't really of any use for what they do. Or who are even scared stiff of them. So enjoy the recording (best viewed on YouTube, where you can take advantage of the time-coded table of contents).

Jan 6, 2019

A voice-activated recorder for iOS


Some years ago I picked up an Olympus hand-held digital recorder which served me well for some translation work as well as for evidentiary purposes during assaults by a drunken psychotic. I also used it on many occasions to record ideas for projects and other things while out and about.

But juggling several electronic devices at once has never been easy for me, and I kept losing the little recorder in jacket pockets, suitcases, desk drawers, etc. I still have it, but it's been some weeks since I could tell you where.

I use the Voice Memos app on my iPhone, but for me its continuous recording feature is inconvenient, and I don't like pausing and resuming the recording frequently. It's too distracting. The Olympus device and an old tape recorder I used decades ago both had a convenient voice activation feature which avoided excessive dead space.

Being the technosaurus that I am, it took a long time to realize that there is probably an app for that these days. And indeed there is. Several in fact. I downloaded the iOS app depicted above, and in the days ahead I'll be using it for a few translation projects to evaluate what I consider to be an improved version of the three-step translation workflow I demonstrated a few years ago in a remote conference lecture in Buenos Aires using the now-defunct Dragon Dictation app from Nuance. Stay tuned.

May 21, 2018

Best Practices in Translation Technology: summer course in Lisbon July 16-21

As it does every year, the summer school at Universidade Nova de Lisboa is offering quite a variety of inexpensive, excellent intensive courses, including some for the practice of translation. This year includes a reprise of last year's Best Practices in Translation Technology from July 16th to 21st, with some different topics and approaches.

Centre for English, Translation and Anglo-Portuguese Studies

The course will be taught by the same team as last year – yours truly, Marco Neves and David Hardisty – and cover the following areas:
  • Good translation workflows.
  • Using voice recognition in translation.
  • Using machine translation in a humane, intelligent way.
  • Using checklists to improve communication in translation.
  • Using glossaries, bilingual texts and other references in multiplatform environments.
  • Good practices for using terminology and reference texts in the target language.
  • Planning and creating lists for auto-translation rules and the basics of regular expressions for filters.

Some knowledge of the memoQ translation environment and translation experience are required.

The course is offered in the evening from 6 pm to 10 pm Monday (July 16th) through Friday (July 20th), with a Saturday (July 21st) session for review and exams from 9 am to 2 pm. This allows free days to explore Lisbon and the surrounding region and get to know Portugal and its culture.

Tuition costs for the general public are €130 for the 25 hours of instruction. The university certainly can't be accused of price-gouging :-) Summer course registration instructions are here (currently available only in Portuguese; I'm not sure if/when an English version will be available, but the instructors can be contacted for assistance if necessary).

Two other courses offered this summer at Uni Nova with similar schedules and cost are: Introduction to memoQ (taught by David and Marco – a good place to get a solid grounding in memoQ prior to the Best Practices course) from July 9–14, 2018 and Translation Project Management Tools from September 3–8, 2018.

All courses are taught in English and Portuguese in a mix suitable for the participants in the individual courses.

May 31, 2015

Authoring and Editing with memoQ (webinar)

Last February I described my initial work with translation tools as environments for authoring and editing documents in a single language. Some people have been doing this quietly for a while; occasionally I would hear puzzled comments from a trainer who had held a class on SDL Trados Studio, OmegaT or memoQ which had been attended by a technical writer or someone with other professional writing interests not related to translation. But to my knowledge there has been no systematic approach to this.

Some weeks later I began to discuss and present some new possibilities for speech recognition in 38 languages which go well beyond the limitations of Dragon NaturallySpeaking for automated speech transcription in the eight languages for which it is available. These possibilities include a number of mobile solutions which are quickly gaining traction among translators and other professional writers.

On Tuesday, June 2nd (two days from now), I will be presenting a one-hour introduction to "MemoQ for Single-language Authoring and Editing" in the eCPD Webinar series. The registration page is here.

This presentation will be an update of the talk I gave earlier this year which discussed CAT tools in general as authoring and editing tools. Although any tool works in principle (and even a user of SDL Trados Studio, for example, can probably draw enough ideas from the upcoming eCPD talk to make good use of the approach), memoQ has some particular advantages, not the least due to its corpus-handling features in LiveDocs and its superior predictive typing facilities, including "Muses" (which are like SDL's AutoSuggest with more flexibility and without the onerously high data quantity requirements).

The presentation will include an overview of some of the latest advances in speech recognition in 38 languages for ergonomically superior writing by automated transcription as well as discussions of version management and dictation workflows which can be applied for greater ease in editing monolingual documents or even translations, including post-editing of machine pseudo-translation (PEMpT by the "Hardisty Method"). I've been fairly quiet on this blog in recent months due to conference organization and travels and the considerable time put into researching improved work ergonomics for translation, writing and editing processes. (In fact I didn't even find time to blog the memoQ Day on April 22nd in Lisbon yet!) Elements of all these efforts, which have sparked no little interest at recent conferences and workshops I have presented at in Europe, will be part of Tuesday's talk, which will include Q&A afterward to explore the interests of those participating.

So if you are a translator involved in a lot of revision or editing work (bilingual or monolingual), a technical writer or other professional writing in a single language for publication, or someone working on a thesis or authoring for other purposes, the eCPD presentation may help you to do this with better organized resources and greater efficiency. As one friend of mine who wrote a thesis just before I developed this approach put it, with this she would at least have been able to keep track of the feedback on her work from its five or so reviewers without going completely nuts.

Feb 21, 2015

CAT tools re-imagined - an approach to authoring and editing


I am often asked about the monolingual editing workflows I have used for some 15 years now to improve texts which were written originally in English, not created by translation from another language. And I have discussed various corpus linguistics approaches, such as using corpora to learn the language of a new specialty, or the NIFTY method often presented by my colleague Juliette Scott.

However, on a recent blitz tour of northern Portugal to test the fuel performance of the diesel wheels which may take me to the BP15 and memoQfest conferences in Zagreb and Budapest respectively later this year, I stopped off in Vila Real to meet a couple of veterinarians, one of whom is also a translator. During a lunch chat over typically excellent Portuguese cuisine, the subject of corpus research as an aid for authoring a review paper came up. I began to explain my (not so unusual) methods of editing an existing document when I was asked how the tools of translation technology might be applied to authoring original content.

The other translator at the table said, "It's a shame that I cannot use my translation memories to look things up while I write", and I replied that of course he could do this, for example with the memoQ TM Search Tool or similar solutions from other providers. And then he said, "And what about my term bases and LiveDocs corpora?", and I said I would sleep on it and get back to him. In the days that followed, other friends (coincidentally also veterinarians) asked my advice about editing the English of the Ph.D. theses and other works they will author in English as non-native speakers of that language. One of them noted that it would be "nice" if she could refer to corrections made by various persons and compare them more easily. I said I would sleep on that one too.

A few days after that the pain in my hands and feet from repetitive strain injuries and arthritis was unbearable, aggravated by a rope burn accident while stopping an attack on sheep by my over-eager hunting dog and by driving over 1000 km in a day. I doubled down on the pain meds, made a big jug of toxically potent sangria and otherwise ensured that I was comfortably numb and could enjoy a night of solid sleep.

It was not meant to be. Two hours later I woke up, stone sober, with a song in my head and the solution to the problem of my Portuguese friends writing in English and Tiago wanting to author his work in memoQ for the convenience of using its filters to review content. Since then the concept has continued to evolve and improve as others suggest ways of accommodating their writing or language learning needs.


After about a week of testing I scheduled one of my "huddle" presentation classes, an intimate TeamViewer training session to discuss the approach and elicit new ideas for adapting it better to the needs of monolingual authors. The recording of that session is available for download by clicking on the image of the title slide at the top of this post. (The free TeamViewer software is needed to watch the TVS file downloaded; double-click it, and the 67-minute lecture and Q&A will play.)

I'm currently building Moodle courses which provide more details and templates for this approach to authoring and editing, and it will be incorporated in parts of the many talks and workshops planned this year.

I am aware that SDL killed their authoring product, the Author Assistant, and that Acrolinx offers interesting tools in this area, as do others. But I'm usually hesitant to recommend commercial tools in an academic environment, because their often rapid pace of development (such as we see with memoQ) can play serious havoc with teaching plans and threaten the stability of an instructional program, which is usually best focused on concepts and not on fast-changing details. So I actually started out my work and testing of this idea using the Open Source tool OmegaT, the features of which are more limited but also more stable in most cases than the commercial solutions from SDL, Kilgray and others. But as I worked, I noticed that my greater familiarity with memoQ's features made it an advantageous platform for developing an approach, which in principle works with almost every translation environment tool.

Part of my motivation in creating this presentation was to encourage improvements in the transcription features available in some translation environments. But the more I work with this idea, the more possibilities I see for extending the reach of translation technology into source text authoring and making all the resources needed for help available in better ways. I hope that you may see some possibilities for your own work or learning needs and can contribute these to the discussion.

Nov 25, 2014

My first project with memoQ 2014 Release 2

When I got my first look at the test version of the upcoming memoQ release, memoQ 2014 R2, I argued with Kilgray that it ought to be called memoQ 2015 instead, not only because the year 2014 is almost over, but because this software represents a major break with the old interface design. Kilgray likes to point out that there are not so many new features being introduced here - perhaps a mere "dozen" give or take a bit - but just one of these - the new ribbon interface - has its own 50-page manual. Meu deus.

On the whole I am coming to view the rapid pace of development for some CAT tools in a rather negative light. I rather like the current memoQ 2014 release, but I am not even close to coming to grips with the 70+ new features introduced earlier this year (a number which has probably grown to 100+, depending on how you want to count them). I think back to my experiences as a corporate systems consultant in the archiving and document management sector, and with a state department of transportation before that, where many thousands of networked workstations and other systems had to be managed for maximum productivity and minimum disruption. It took me a while back then to understand why, after many months of thorough testing at enormous cost, an upgrade for something like Internet Explorer was permitted only up to the version two whole numbers behind the current one. I eventually learned that these big organizations were not so dumb after all: being on the "leading edge" is too often the same as being on the "bleeding edge", which can have considerable, unanticipated costs.

This is the reason why for years I have advised my clients and colleagues not to consider new versions of any tool for routine production use until several months at least past its release date, and to use "roundtrip testing" in every instance to ensure that a technically usable result can be obtained from every project. Ultimately, Kilgray and others are going to have to determine whether constantly stirring the feature pot in a way that too often makes established workflows obsolete is in the best interests of their clientele and market future. Despite all the trendy talk of the benefits of disruption and "creative destruction" I am unconvinced that this is the case.

However, I do see very good reasons for the major changes to the interface in memoQ 2014 Release 2, and I think that new features like the limited sharing of online translation memories and termbases (with an open API in the future to allow access by other tools, I'm told) are an excellent intermediate stage for those who aren't quite ready to move up to a team server solution like memoQ Cloud or the greater access capacity of the full memoQ Server license but who still need realtime data sharing for projects with a partner from time to time.

Kilgray's blog has a good post describing the basic shift in the logic of the environment from cataloging commands as one might in a library or inventory system to organization by the normal sequence of work. This makes a lot of sense, and this is also the way I teach new users to use the software, with small sets of features organized according to the sequence of typical work.



After spending about a week just staring at the new ribbons, I decided to do my first small, low-risk commercial project with the test version. Everything went quite well, but I had to fight a sense of disorientation as I kept looking for commands in the lower area of the screen, which is now free for viewing more files and file information in a project. In some cases, I had to get used to clicking on the little arrows under icons rather than the icons themselves. Nothing I needed was difficult to locate, but as a longtime user of memoQ with many ingrained habits, I will take some time getting used to this, after which my work will probably proceed even more smoothly. In any case, it is clear to me that new users will find their way more quickly with this new, workflow-based interface.

This impression of greater ease for new users was reinforced by remarks from a colleague in my office a few days ago. Her professional background prior to her activities in translation was as an educational psychologist and adult education teacher, and when I began to complain about how awful the new ribbons were and how uncomfortable I felt with them, she patiently explained how I had it all wrong and why the new design was much more logical and easier to use. Years ago I teased SDL Trados users who bitched at first about the change from the nasty old over/under TWB interface to the tabular working environment of SDL Trados Studio only to become enthusiasts later when they realized how much their workflows had improved; I fear that I will also become a just target for such teasing.

I don't want to admit it, really, but I am already beginning to like those awful ribbons, which are perhaps rather useful after all. And if I really don't want to look at them, they can be hidden with just a click, leaving me with even more working space on my screen. So all right, I'll say it. Reluctantly. Good job, Kilgray.

Now who is going to re-do all the screenshots and videos for my tutorials?

Jul 14, 2014

My translated document won't export! A 3-step preventive solution

A file export failure in memoQ
How many times have I heard that? Experienced it myself in various CAT tools? No idea. Lots.

About three years ago I published an article on the "pseudotranslation" feature of memoQ, which had been introduced in version 5. This was a feature I had made good use of years before in Passolo (before SDL did its Packman thing with the good company) to determine whether all text in software to be localized was accessible to the translation environment. In that article and on many other occasions, I have discussed the idea of roundtrip testing files to be sure they can be translated and then transformed afterward back into the desired formats. Often I just refer to this as "roundtripping".
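To make the idea concrete, here is a minimal sketch of what a pseudotranslation pass does. The character mapping and the padding factor are my own inventions for illustration, not memoQ's actual algorithm:

```python
# Minimal pseudotranslation sketch: swap letters for accented
# look-alikes so that any text which is NOT transformed in the
# exported file was never exposed to the translation environment.
ACCENTED = str.maketrans("aeiouyAEIOUYcn", "àéîõüýÀÉÎÕÜÝçñ")

def pseudotranslate(text: str, pad: str = "~") -> str:
    """Return an accented, slightly lengthened copy of the text,
    mimicking the kind of output a pseudotranslation feature
    produces to reveal hidden text and length-expansion problems."""
    return text.translate(ACCENTED) + pad * max(1, len(text) // 10)
```

Export a pseudotranslated target file, open it in the original application, and any string still in plain, unaccented form is text the filter never reached.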

Roundtripping is very simple. Anyone can do it, and it generally takes about a minute, sometimes a bit more, often less. And it more or less guarantees that your plans to translate a file and get a technically usable result will succeed. Roundtripping can be done with any respectable CAT tool and possibly with some of the ones that aren't.

Here's how it goes:
  1. Import the document(s) you intend to translate into your translation environment (Wordfast Pro, OmegaT, SDL Trados Studio, memoQ, etc.).
  2. Copy the entire source text exactly - including all tag structures - to the fields for target text. (This is actually a pain in the ass in OmegaT currently because even its developers don't understand how that somewhat hidden, idiotically command-line based function works. But for pretty much everyone else it's a piece of cake.)
  3. Export the target text document (which of course is exactly the same as the source text) from the translation environment and ensure that it opens properly in its relevant application.
If there are problems in Step 3, then your source document is either corrupted (very likely) or the working environment screwed up the document on import. If the file type involved is one you work on regularly, you can be pretty sure that the problem lies with the original document and has nothing at all to do with your translation tool.
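For bilingual XLIFF files, Step 2 can even be scripted outside any CAT tool. This Python sketch (the function name is my own; real CAT packages usually wrap their XLIFF in additional layers) fills each trans-unit's target with an exact copy of its source, inline tags and all:

```python
import copy
import xml.etree.ElementTree as ET

XLIFF_NS = "urn:oasis:names:tc:xliff:document:1.2"
ET.register_namespace("", XLIFF_NS)  # keep the default namespace on output

def copy_source_to_target(in_path: str, out_path: str) -> None:
    """Fill every <target> with an exact copy of its <source>,
    inline tags included - the 'copy source to target' step of a
    roundtrip test, done with the standard library only."""
    tree = ET.parse(in_path)
    ns = {"x": XLIFF_NS}
    for unit in tree.getroot().iter(f"{{{XLIFF_NS}}}trans-unit"):
        source = unit.find("x:source", ns)
        if source is None:
            continue
        old_target = unit.find("x:target", ns)
        if old_target is not None:
            unit.remove(old_target)
        target = copy.deepcopy(source)   # keep text and inline tags intact
        target.tag = f"{{{XLIFF_NS}}}target"
        unit.append(target)
    tree.write(out_path, encoding="utf-8", xml_declaration=True)
```

Exporting the resulting file from the translation environment (Step 3) then shows whether the whole chain produces a usable document.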

Corrupted documents occur with some frequency when PDFs are converted to editable formats such as RTF, DOC or DOCX, particularly by persons without a proper understanding of the best procedures for doing so. Even top-end tools like OmniPage or ABBYY FineReader sometimes create documents with hidden flaws in their file structure, which might open in a word processor but which go to Hell once imported into a CAT tool. Table structures used to be particularly vulnerable to corruption and probably still are.

So a smart outsourcer or translator roundtrips files before the actual translation starts to avoid last-minute panics and missed deadlines. It's fast, free insurance.

But what can you do if the corrupt file is all you have?
Sometimes nothing except go back to the client and ask for a new file. But I have also noticed rather often that corruption can be avoided by zipping file attachments to e-mail, and it seems that the corruption of unprotected files often occurs when these files are downloaded from the mail server. So if you can, try another copy of the attached file off your e-mail server.
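Zipping attachments before sending needs no special tools; here is a quick sketch with Python's standard library (the helper name is my own invention):

```python
import zipfile
from pathlib import Path

def zip_for_mail(paths, archive_path):
    """Pack files into a ZIP before attaching them to e-mail, so that
    mail servers and clients cannot quietly rewrite their contents in
    transit; the CRC check on unpacking also reveals any damage."""
    with zipfile.ZipFile(archive_path, "w", zipfile.ZIP_DEFLATED) as zf:
        for p in map(Path, paths):
            zf.write(p, arcname=p.name)  # store by file name only
    return archive_path
```

A recipient who can unpack the archive without a CRC error knows the files arrived exactly as sent.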

In the case of Microsoft Office files (for Word, Excel and PowerPoint), re-saving the file in a different format causes its structure to be reworked by the application and often repaired. Sometimes that corrupt DOCX file can simply be re-saved as DOCX and all will be well with a roundtrip in your CAT tool, but if that doesn't help, saving the DOCX as RTF and then re-saving that RTF file once again as DOCX will effect the necessary "repairs" to the file and ensure that you can get a usable result.

Why not just translate that RTF file if it's OK? I prefer not to, because if there are tags present, these may be represented differently in some working environments (such as memoQ, which shows very different tagging in RTF and DOCX), and this messes up my matching a bit and obscures the tag function as well, forcing me to look at a printout or PDF too often to see what the markup is about. 

Get it right the first time
The files I work on often have unusual abbreviations which affect segmentation (and require me to update my rules), or I join and split segments while I work on a complex patent or legal pleading in order to make the work go better. This takes time. And if I discover at the end of a three-day job with 10,000+ words that my translation will not export to a target file, then I can look forward to a lot of extra time recreating my desired segmentation, especially if I was lazy and did not update the segmentation rules. While features like memoQ's "TM-driven segmentation" can overcome this somewhat, there are limits, and those limits are exceeded in cases where I might join 7 or more segments because the source language segmentation rules were seriously suboptimal.

So take a minute. Or two. And roundtrip those documents before you start translating or send the job out for someone else to do!

Jul 8, 2014

Time-saving tip: the DELETE key!


As one who clings stubbornly to the notion of value in a "personal touch" for business communication, I found merit in this colleague's suggestion that using the name of your target might indeed be important. In this age of HAMPsTr'ized workflows in the Bulk Market Bog inhabited by Luigi & Co., when flexibility for bottom-feeding agencies means suggesting yoga lessons so translators can bend over farther, I do agree that if you ask people to work for peanuts you should at least ask them using their actual names.

I do, however, appreciate very much the consideration shown in many of these inquiries, which ask me to quote my best rate. As a memory aid, I have put the highest rate I have been paid onto a Post-It note on my auxiliary monitor to remind myself that I can certainly do better than that. So each time one of these silly requests comes along now, I up the ante and double down... or just hit DELETE and spend my time instead on the numerous serious business partners in the queue.