Showing posts with label video tutorial.

Jun 14, 2024

Substack in #xl8 & #l10n

As related in my last post, I've been testing the Substack platform as an alternative means for distributing and managing the kind of information I've published for many years now on this blog, professional sites and social media, course authoring platforms, YouTube, Xitter, Mastodon and probably a few other channels I've forgotten. And I'm quite surprised to find that any expectations I had have been exceeded at this point. On the whole I can do a better job of creating accessible tutorials and other reference information there, and there is far less effort involved with maintenance, style sheet fiddling and whatnot.

All my various projects there can be found here on my profile page.

Other plans include The Diary and Letters of Charles Berry Senior, a US Civil War veteran from Yorkshire, England, who participated in Sherman's march to the sea and who is my great-great-grandfather. Some of his records exist in very deteriorated form in a university archive in the United States and can be found online, but much is missing, and I am in possession of the complete transcript prepared by one of his daughters more than 100 years ago, when the family feared the information would be lost to the forces of material decay. I'll be preparing clean text from her handwritten record (the typescript done by a cousin about 50 years ago was lost and probably contained more errors anyway) and probably an audio reading. There's some hard stuff there, as well as some surprises and interesting lessons in how our world has changed in the past 160 years.

And at some point I'll probably share some of my culinary obsessions as well as what life was like traveling on the other side of the Iron Curtain sans papers or in dingy Paris bookshops and refugee hotels 40 years ago and more.

This past week, I've done a big blitz on memoQ LiveDocs, for which there are still another dozen or so drafts to be finished, and a lot of stuff from my CAT tools resource online class from last year should be appearing there in updated form in due course. There isn't a lot of translation-related activity that I've found on Substack, at least not for the technical side of things, but a lot of historians and authors I follow are very present there, so I'm hoping the better half of my translation technologist friends will join the Substack party at some point.

Most of my new text, video and teaching content will appear on those Substack channels. It's simply far easier to manage, and there you won't have the same RSS headaches you might have here. And Google Blogger's damned editor just keeps accumulating bugs that I don't have to cope with in Substack. And don't get me started on bloody Wordpress!

I hope to see you in my Substack channels soon! I think there will be something like a memoQ QA course there before the summer is over....

Oct 18, 2023

An Unfiltered Look at memoQ Filters (webinar, 19 October 2023, 15:00 CET)


 

This presentation and discussion covered some of the challenges and opportunities to improve memoQ project workflows through correct filter choice and design. There are many different aspects to filters in memoQ, and the right choices for a given translatable file or project are not always clear, or different options may offer particular advantages in your situation.

Cascading filters - an important feature for dealing with complex source texts - are also part of the talk, not just the basics but also examples of going beyond what visible memoQ features allow, to do "the impossible". This session is part of the weekly open office hours for the course "memoQuickies Resource Camp", but everyone is welcome to attend these talks regardless of enrollment status. Those interested in full access to all the course resources and teaching may enroll until the end of January 2024.

To join sessions for the October and November office hours, register here.

After registering, you will receive a confirmation email containing information about joining the meeting. 

Here is an edited recording of the October 19th session, with a time-coded index available on YouTube in the Description field:

Aug 5, 2021

Workflow Wednesday: Getting started with memoQ templates

 

Recorded Aug 11, 2021

It has been more than seven years since memoQ introduced project templates, and although the default method of project creation uses templates when the New Project icon is clicked on the Project ribbon, most users stick with the examples provided, venturing little beyond them, or they use the old Project Wizard and avoid templates altogether. It took me some years to really get my head around the use of project templates in memoQ, and the fully configured sample templates included with the installation, built to specifications that were seldom aligned with my needs, were not particularly helpful.

When I finally did understand how templates could revolutionize my productivity in local and online projects, I responded to help requests from some LSP consulting clients by providing fully configured templates to address all the problems they listed with the often complex needs of their high-volume clients. And to my surprise, most of these configurations went unused. The project managers were simply overwhelmed. As I had been for nearly six years.

And then a colleague's request to help with a filter for a package type not included in memoQ's standard configuration opened my eyes to the importance of simplicity. I had to use a template for that particular challenge, and that template allowed easy import of GLP packages full of TXLF files and did nothing else special.

A weekend of training with project managers from a local LSP showed that this approach could clear up the confusion often caused by immediate confrontation with "kitchen sink" templates as an introduction. When the team shared their desire for "just one thing" to make their work easier and saw how simply that one thing could be accomplished, they quickly understood the value of templates and were soon able to build more sophisticated ones as their confidence grew and they dared tread just a bit farther. Step. By. Step. 

So this webinar took a different approach to templates than you have probably seen so far, emphasizing simplicity and simple needs as a foundation for robust processes and automation. I had no intention of talking about all the myriad options for configuration and automation, though some of these were discussed in the Q&A. This talk is for people who are confused by templates. Who think they aren't really of any use for what they do. Or who are even scared stiff of them. So enjoy the recording (best viewed on YouTube, where you can take advantage of the time-coded table of contents).

Sep 26, 2019

10 Tips to Term Base Mastery in memoQ! (online course)

Note: the pilot phase for this training course has passed, free enrollment has been closed, and the content is being revised and expanded for re-release soon... available courses can be seen at my online teaching site: https://transtrib-tech.teachable.com/
In the past few years I have done a number of long webinars in English and German to help translators and those involved in translation processes using the memoQ environment work more effectively with terminology. These are available on my YouTube channel (subscribe!), and I think all of them have extensive hotlinked indexes to enable viewers to skip to exactly the parts that are relevant to them. A playlist of the terminology tutorial videos in English is available here.

I've also written quite a few blog posts - big and small - teaching various aspects of terminology handling for translation with or without memoQ. These can be found with the search function on the left side of this blog or using the rather sumptuous keyword list.

But sometimes just a few little things can get you rather far, rather quickly toward the goal of using terminology more effectively in memoQ, and it isn't always easy to find those tidbits in the hours of video or the mass of blog posts (now approaching 1000). So I'm trying a new teaching format, inspired in part by my old memoQuickie blog posts and past tutorial books. I have created a free course using the Teachable platform, which I find easier to use than Moodle (I have a server on my domain that I use for mentoring projects), Udemy and other tools I've looked at over the years.

This new course - "memoQuickies: On Better Terms with memoQ! 10 Tips toward Term Base Mastery" - is currently designed to give you one tip on using memoQ term bases or related functions each day for 10 days. Much of the content is currently shared as an e-mail message, but all the released content can be viewed in the online course at any time, and some tips may have additional information or resources, such as videos or relevant links, practice files, quality assurance profiles or custom keyboard settings you can import to your memoQ installation.

These are the tips (in sequence) that are part of this first course version:
  1. Setting Default Term Bases for New Terms
  2. Importing and Exporting Terms in Microsoft Excel Files
  3. Getting a Grip on Term Entry Properties in memoQ
  4. "Fixing" Term Base Default Properties
  5. Changing the Properties of Many Term Entries in a Term Base
  6. Sharing and Updating Term Bases with Google Sheets
  7. Sending New Terms to Only a Specific Ranked Term Base
  8. Succeeding with Term QA
  9. Fixing Terminology in a Translation Memory
  10. Mining Words with memoQ
There is also a recorded summary webinar that goes over the 10 tips and provides additional information.
I have a number of courses already developed (which may or may not be publicly visible, depending on when you read this) and others under development in which I try to tie together the many learning resources available for various professional translation technology subjects. I think this approach may offer the most flexibility and the best likelihood of success in communicating necessary skills and knowledge to an audience wider than I can serve with the hours available for consulting and training in my often too busy days.

I would also like to thank the professional colleagues and clients who have provided so much (often unsolicited) support to enable me to focus more on helping translators, other translation project participants and translation consumers work more effectively and reduce the frustrations too often experienced with technology.


Dec 29, 2018

memoQ Terminology Extraction and Management

Recent versions of memoQ (8.4+) have seen quite a few significant improvements in recording and managing terminology in translation and review projects. These include:
  • Easier inclusion of context examples for use (though this means that term information like source should be placed in the definition field so it is not accidentally lost)
  • Microsoft Excel import/export capabilities which include forbidden terminology marking with red text - very handy for term review workflows with colleagues and clients!
  • Improved stopword list management generally, and the inclusion of new basic stopword lists for Spanish, Hungarian, Portuguese and Russian
  • Prefix merging and hiding for extracted terms
  • Improved features for graphics in term entries - more formats and better portability
Since the introduction of direct keyboard shortcuts for writing to the first nine ranked term bases in a memoQ project (as part of the keyboard shortcuts overhaul in version 7.8), memoQ has offered perhaps the most powerful and flexible integrated term management capabilities of any translation environment despite some persistent shortcomings in its somewhat dated and rigid term model. But although I appreciate the ability of some other tools to create customized data structures that may better reflect sophisticated needs, nothing I have seen beats the ease of use and simple power of memoQ-managed terminology in practical, everyday project use.

An important part of that use throughout my nearly two decades of activity as a commercial translator has been the ability to examine collections of documents - including but not limited to those I am supposed to translate - to identify significant subject matter terminology in order to clarify these expressions with clients or coordinate their consistent translations with members of a project team. The introduction of the terminology extraction features in memoQ version 5 long ago was a significant boost to my personal productivity, but that prototype module remained unimproved for quite a long time, posing significant usability barriers for the average user.

Within the past year, those barriers have largely fallen, though sometimes in ways that may not be immediately obvious. And now practical examples to make the exploration of terminology more accessible to everyone have good ground in which to take root. So in two recent webinars, I shared my approach - in German and in English - to how I apply terminology extraction in various client projects or to assist colleagues. The German talk included some of the general advice on term management in memoQ which I shared in my talk last spring, Getting on Better Terms with memoQ. That talk included a discussion of term extraction (aka "term mining"), but more details are available here:


Due to unforeseen circumstances, I didn't make it to the office (where my notes were) to deliver the talk, so I forgot to show the convenience of access to the memoQ concordance search of translation memories and LiveDocs corpora during term extraction, which often greatly facilitates the identification of possible translations for a term candidate in an extraction session. This was covered in the German talk.

All my recent webinar recordings - and shorter videos, like playing multiple term bases in memoQ to best advantage - are best viewed directly on YouTube rather than in the embedded frames on my blog pages. This is because all of them since earlier in 2018 include time indexes that make it easier to navigate the content and review specific points rather than listen to long stretches of video and search a long time to find some little thing. This is really quite a simple thing to do, as I pointed out in a blog post earlier this year, and it's really a shame that more of the often useful video content produced by individuals, associations and commercial companies to help translators is not indexed this way to make it more useful for learning.

There is still work to be done to improve term management and extraction in memoQ, of course. Some low-hanging fruit here might be expanded access to the memoQ web search feature in the term extraction module as well as in others; this need can, of course, be covered very well by excellent third-party tools such as Michael Farrell's IntelliWebSearch. And the memoQ Concordance search is long overdue for an overhaul to allow proper filtering of concordance hits (by source, metadata, etc.), more targeted exploration of collocation proximities and more. But my observations of the progress made by the memoQ planning and development team in the past year give me confidence that many good things are ahead, and perhaps not so far away.

Sep 11, 2018

Adding time codes to YouTube videos

For years now, I have advocated the use of tables of contents for long instructional videos, recorded webinars and suchlike. I saw these in a few instances, but it was never clear how the indices were produced, so I suggested merely writing a list of relevant points and their play times and scrolling manually. Understandably, not many adopted this suggestion.


Then I discovered that my video editor (Camtasia) could create tables of contents for a video automatically when creating a local file, an upload to YouTube or other exports if timeline markers were added at relevant points. The only disadvantage for me with this approach was the limit on the length of the descriptive text attached to the markers. Worse than Twitter in the old days.

But when I accidentally added a marker I didn't want and removed it from the YouTube video description (which is where a TOC resides on YouTube), I saw that things were much simpler than I imagined. And a little research with tutorials made by others confirmed that any time code written at the beginning of a line in the video's description will become a clickable link to that time in the video.


So I've begun to go through some of my old videos with a text editor opened alongside. When the recording gets to a point that I want to include in the table of contents, I simply pass the cursor over the video, take note of the time, and then write that time code into the text file along with a description of any length.


Afterward, I simply paste the contents of that text file into the description field in YouTube's editor. When the Save button at the top right is clicked, the new description for the video will be active, and viewers can use the index to jump to the points they want to see. Because only a few lines of the description text are visible by default, I include a hint at the beginning of the text to let people know that the live table of contents is available if they click the SEE MORE link.
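For example, the top of a pasted description might look something like this (the times and topics here are just an illustration, not from an actual video):

Click SEE MORE below for a clickable table of contents.
0:00  Introduction
1:25  Creating the project and importing the files
4:10  Configuring the filter settings
7:45  Exporting the finished translation

YouTube turns each time code at the start of a line into a link that jumps straight to that point in the video.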

If Kilgray, SDL, Wordfast and others involved with the language services sector would adopt techniques like this for their copious recorded content on the Web, the value and accessibility of this content would increase enormously. It would also be very simple then to create hot links to important points in other environments (PowerPoint slides, PDF files, etc.) to help people get to the information they need to learn better.

Not to do this would truly be a great waste and a shame in many cases.



Nov 3, 2013

Kilgray training resources: is this what you need?

Some years ago I was relaxing at an informal occasion with one of Kilgray's directors, who expressed concern that the growing number of features might lead to confusion among users and obscure the basic simplicity of memoQ, which at the time was the company's only product. I think version 3.5 or 4.0 was the current release when we chatted. I disagreed with him, because compared with the other tools of the day, memoQ was easier to understand and more ergonomic than any of the leading alternatives. It still is.

But as most users of the software know, things have gotten a lot more complicated since then. As memoQ has taken a forward position in the market for translation environment tools, many features have been added (by necessity one could argue) to accommodate various interest groups. Some of these features I find very good and useful, others inspire a response that cannot be expressed in polite company, because they support server workflows which I personally find exploitative and offensive in the hands of some companies. But any tool can be used for good or bad purposes, and one of my favorite tools for planning my time accurately - the homogeneity analysis - is sometimes abused by Linguistic Sausage Producers to put further economic pressure on individual translators, yet I would not wish it to go away.

Many, many features to master for a wide range of work challenges. Even the so-called "experts" often don't have a clear overview. This problem is, of course, common to almost any popular software application: the situation with SDL Trados Studio is similar, and look at Microsoft Word, my God. An "expert" might understand 10% of Word's features.

Kilgray, it must be said, does try to go the extra mile and provide information to users in many ways so that they can work effectively and avoid frustration when navigating the sometimes tricky paths one must follow in a complex project. Each month the company offers free webinars, and recordings of these are available for later reference in most cases. There is also a knowledgebase (which is quite a challenge to keep up to date given the software's rapid pace of development). The company has also produced a number of shorter instruction videos. The Kilgray YouTube channel has a wide mix of material, including some recordings of past conferences, which are not always easy to understand but which contain a lot of interesting and useful material for some groups. There is also an extensive collection of user guides and white papers on the Kilgray site.

More recently Kilgray has taken its first steps with integrated e-learning, adopting the Moodle platform popular with many educational institutions. I think this is a very interesting new direction; since late spring I have been researching and testing such platforms myself, and I expect very good things to come of this in the future.

The first Kilgray e-learning course was a memoQ basics course, consisting of ten recorded PowerPoint lectures, each about 10 to 15 minutes long. There is a short review quiz at the end to give learners some feedback on what they have retained, and a certificate is offered to those who get a certain number of questions right. Although this course structure uses only a small part of the potential of the Moodle platform, it is easier to navigate and find particular information than it might be in a webinar, for example. The basics course is free to any memoQ user with a current support contract and is worth a look. Others can take the course for a fee of €90 (which is close enough to the cost of annual support that you might as well renew your support contract and enjoy the updates it includes). Feedback and suggestions should be sent to support@kilgray.com to help in the planning of further courses.

More recently, another Moodle course was published for project managers working with the memoQ server. I know from my own experiences as a consultant and someone who occasionally has to deal with the frustrations of misconfigured server projects set up by my clients that there is a real need for better training for those who work with the memoQ Server. Many of my clients who have adopted this solution have had very little prior experience with CAT tools at all, and given the many pressures of a production environment, they may find visual media a more effective form of support than "RTFM".


The structure of the PM training course is similar to that of the memoQ basics course: ten recorded PowerPoint lectures in English. There is also a quiz at the end. The course is available free to all licensed memoQ server users, who should contact their Kilgray support representative to arrange access.

As the screenshot above indicates, Kilgray has moved its e-learning resources to Language Terminal, and along with all the other plans for that platform, there are many under consideration to expand the scope and quality of learning resources available, not only for Kilgray's products, but perhaps for other knowledge which can contribute to their successful use in the complex world of real projects.

What is your experience so far with Kilgray's training resources? What has worked for you? What has not? What kind of resources do you think would help you and those with whom you work to master the challenges of your daily routine?


Oct 5, 2013

Two years with an e-book reader

Author = NotFromUtrecht (see link). This file is licensed under the Creative Commons Attribution-Share Alike 3.0 Unported license.
Nearly two years ago I acquired my first e-book reader, an Amazon Kindle like the one shown here. I had various thoughts of using it professionally but was in any case delighted with the fact that I could read text without eyestrain on it, even without my reading glasses. Some colleagues shared their experiences, and one was kind enough to mention Calibre, which I use periodically to convert file formats for better use in e-book readers or other media.

So what's the score after two years? On the professional side not so hot, because other distractions have prevented me from exploring all the possibilities of converting reference data for use on my reader. It's possible, but I'm still tweaking the technology to get exactly what I want with formatted, searchable RTF and HTML from terminology exports from my many CAT tool termbases. I could do that all along without much trouble using SDL Trados MultiTerm and various XSLT scripts, but I went down the rabbit hole of trying to make these solutions more accessible to colleagues who don't like a lot of technical fiddling, and though I think the problems are solved, I haven't had time to share most of the solutions or implement them on a large scale myself.

I do read literature related to the translation profession with some frequency. Found in Translation by Jost Zetzsche and Nataly Kelly gave me many pleasant, entertaining hours in its Kindle version; attempts to read texts in PDF format by others have been less successful because of display issues, and the current version of my own book of memoQ tips is not a happy experience on a small black-and-white e-book reader. The latter has me thinking about what information might work in formats for e-book readers and smartphones, and that has been one of the motivations for my recent experimentation with short video tutorials on YouTube. Not only should we consider the current trends in media such as e-book readers, tablets, smartphones and whatnot for our own professional learning and teaching needs, but also how our clients and prospects may use these media to create content which we might be asked to translate. This has already begun to happen with me in a small way, and those projects were possible only because of things I learned in my teaching experiments shortly before.

I also copy web pages into text or HTML files "to go" when I want to read up on a subject in the park while my dogs play or in a local café somewhere. My reader has a web browser, but many sites are difficult to view in a way that is friendly to a smaller screen. It's easier to grab what I want in separate files and organize these into a "collection" I can refer to easily later.

I never have done any proofreading or review with my Kindle, though I have used texts on it to translate manually (in a separate notebook) on occasion. However, that's not really compatible with most of the texts I work on.

What I have done most with my e-book reader is carry a growing library of world literature with me, familiar and unfamiliar old works and some new. I still hear some people talk about how they could not imagine reading without the heft of the book and the feel of the paper pages turned by their fingers. I'm just as caught up in the sensuality of a dusty old library as any other obsessive bibliophile, but the heft and feel don't mean much when accumulated nerve damage means that the book is more a source of pain than pleasure after ten minutes in your hands, and my once excellent eyesight has now decided that its term is served and I can find my own way with small type and lousy lighting conditions: there, the e-book reader is a gift of great value.

Most important to me, however, are the words. The finest binding, gold-edged pages and elegant type mean nothing if the words mean nothing. Words of beauty and power are worth straining to read in weathered stone inscriptions, on crumbled clay tablets written before the founding of Rome or on crumbling acid-paper pages in books forgotten in an attic. How much better then to have these same words in a legible format on your reader in minutes after a short search in an online database and a quick download or a purchase and transfer.

The Velveteen Rabbit had the same nursery magic on the Kindle in the cantinho (my little corner) last night as it would on the delicate old pages of the original edition, but I didn't have to worry about spilling my sangria on it. In the two years since I received my Kindle I have re-read many books that were lost as my library of thousands was slowly dispersed in my many relocations. Hundreds of new books from classic literature in two languages have come to me and go with me in my small, black volume with its cloud-based backup, and this library will likely not be lost again wherever I go and no matter how lightly I travel. 

Aug 15, 2013

Comments on memoQ comments and YouTube playlists

I recently produced a small video tutorial on what I feel are the useful aspects of the comment feature in memoQ 2013. Although quite a few new things have been introduced to commenting in the current version, the real significance of these changes for ordinary users of the software is limited. Now that what was broken in the memoQ 2013 release is largely fixed, those who care about comments for offline use can continue to use this great feature without much inconvenience.

Here is an idiosyncratic overview of how I use comments in my projects (HINT: these embedded videos are easier to watch if you do that in full screen mode by clicking the icon at the lower right of the play window):

Time  Description
0:28  Opening the comment dialog
1:01  Commenting highlighted text
1:48  Adding "codes" to comments for later filtering
2:44  Selecting all files, creating a view with all comments
3:35  Comments shown in speech bubble tooltips
3:57  Creating a filtered list of comments (code = '@PM')
4:50  Creating a filtered list of comments (code = '@CST')
5:20  Exporting commented segments in a bilingual RTF file
6:15  Check segments for extraneous comments before sharing the exported list
What I didn't show here is my usual way of accessing and exiting the comment dialog: keyboard control, opening with Ctrl+M and exiting with a quick tab to the OK button and a tap of the Enter key. Having multiple comments makes editing slightly less convenient if one has to click on an icon, but the ease of deleting an entire comment in a series and the separation of comments by new paragraphs in an exported RTF bilingual file are compensating conveniences.

That six-and-a-half minute video really has more information than someone generally familiar with the old way of using comments in memoQ would care about. The only part which might really interest someone who already knows how to create an exportable view with commented segments is how the procedure for creating views of selected comments differs from creating a view with all comments. So I decided to use the excerpting feature of YouTube playlists to create a special "view" of the tutorial video which shows only that little bit from which I believe many experienced users may benefit.

(Use the link above to look at the trimmed playlist on YouTube - I removed the embedded video code here, because its behavior in the Google Blogger environment seems to be quirkier than links on a Moodle web page or Facebook page. This technique is useful but may still require careful testing of the environment in which it will be used.)

No index needed here - the video is barely over a minute long in its two parts. This technique of playlist excerpting on YouTube could be used to "mine" longer teaching videos for specific bits of information needed to understand a specific issue. One can combine separate video clips, "in whole or in part" as contract lawyers like to say, or individual segments of a single video as I have done here. This is a useful technique which, along with time index lists such as that shown above, I hope to see applied more often for the education and support of translators.

How does Kilgray present the state of commenting in memoQ? The video clip below is from Kilgray's YouTube channel - interesting, but really another world. The video shows the commenting feature as it was at the end of May, with "innovations" which sparked the Commentgate controversy.


This presentation is really very focused on users of the memoQ server, because all that lovely highlighting is only visible in the memoQ environment itself, and these comments with highlighting do not currently export in the usual medium for sharing feedback (comments) with clients offline: RTF bilingual files. In fact, it's really a shame that in all the years that memoQ has offered exportable comments, this very helpful feature has hardly been part of the official teaching, because in the real world of client relationships, it is often a great asset.

Aug 9, 2013

memoQuickie: Exporting TMX from memoQ

Translators are often asked to export translation memory data (TMX files typically) to deliver with their work. Although this is fairly simple to do in memoQ, too often more than just the required data is sent by mistake.

Select the TM from which to export via Tools > Resource Console... > Translation memories or Project home > Translation memories.

Click Export to TMX to export all the data in the chosen TM.

If a selective export (just some of the data in the TM) is desired, click Edit, enter the filter criteria in the dialog that appears and click OK. It is also possible to filter in the editor view. Only the data shown will be included in the TMX file created.

The memoQ TM editor with filtered records shown. Click to enlarge.


Here is a "video tour" of the process:


Ten brownie points to anyone who can figure out where I "cheated" with the translation information shown in the video.
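Before delivering a TMX file, it can also be worth a quick look inside to confirm that only the intended data went out. TMX is plain XML, so a few lines of Python are enough for a sanity check; this is just a supplementary sketch with a hypothetical file name, not part of the memoQ procedure above:

import xml.etree.ElementTree as ET

# "delivery.tmx" is a hypothetical name; use the file you exported from memoQ.
root = ET.parse("delivery.tmx").getroot()
tus = root.findall("./body/tu")
print(f"{len(tus)} translation units in the export")

# Peek at the first few units to confirm nothing unexpected slipped in.
for tu in tus[:5]:
    print([seg.text for seg in tu.iter("seg")])

If the count or the sample segments look wrong, go back to the filtered view in the TM editor and export again.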

Aug 6, 2013

Translating presentations in memoQ: PowerPoint vs. OpenOffice Impress

Microsoft PowerPoint files can be a real nuisance to translate. One of the biggest challenges with these files is the haphazard formatting that many authors apply when working in that medium: line breaks and paragraph breaks in the most inconvenient places, which can cause some stress when working with many translation environment tools.

The PowerPoint filters in the current version of memoQ (6.5 build 10) are not as well developed as the filters for Microsoft Word and Excel files; in particular, the inability to configure the handling of "soft breaks" (line feeds) causes me no little grief. However, I can at least join segments to get complete sentences where I want them. That's something you can't do in SDL Trados Studio, though that tool at least represents the breaks as inline tags. Sometimes I prepare my PowerPoint files in Trados Studio and then translate the SDLXLIFF file in memoQ if there are a lot of breaks in the sentences. But then I miss the preview.

Recently I had occasion to look at a presentation created with OpenOffice Impress, a rather nice alternative to PowerPoint. Given the confusion over Microsoft's new licensing practices for MS Office 2013, I would not be surprised if more of my corporate clients begin to use the clever free alternative.

But when I tried to import the Impress (ODP) files to memoQ, I found that they were not recognized as a translatable format. However, that problem was quickly solved, and the technique for translating ODP files in the current and older versions of memoQ is shown in the video below. One could, of course, convert these to PowerPoint formats, but you might not want to. With ODP files, it is possible to have breaks treated as inline tags.


Time  Description
0:33  Importing the PowerPoint file to memoQ with options
1:10  Examining the segments of the imported PowerPoint file
1:35  Joining segments for "broken sentences" in the imported PowerPoint file
1:43  The presentation as an OpenOffice Impress (ODP) file
2:07  Importing the ODP file to memoQ
2:39  Setting the filter for the "unknown" file type
3:04  Configuring "soft" breaks as inline tags
3:34  Examining the segments of the imported ODP file
I hope to see a few more refinements of the PowerPoint and OpenOffice filters in future builds of memoQ!

Aug 3, 2013

memoQ&A: How do I leverage the pretranslated SDLXLIFF content?

Given the interesting and surprising answers I received in previous two-stage "quiz posts", in which a challenge was posed for others to answer before I presented my approach, I have decided to try a series of such posts. I've polled a few friends about a possible name for this series - memoQuiz, memoQ&A, CATquiz or perhaps something else. The first two choices suggest that memoQ would be the focus, but despite the impressions some may have of my publication habits, memoQ is far from my only concern with productivity involving the software we use for translation processes. So I'll leave that question open for now and use the current "vote leader". Arguments for and against in the comments are welcome.

Today's "quiz" is inspired by my continuing research into the current status of interoperability between SDL Trados Studio and memoQ 2013. As Kilgray has continued to upgrade the quality of its filters and other features for working with files from other platforms, SDL advocates have been increasingly at pains to find the rare, exceptional cases that do not work well or at all and present these as "common" and proof that we should all just bow down and kiss the One Ring ;-) The latest variant of that theme which I saw involved tracked changes displayed in source segments of the translation grid. It was fascinating, really, but a bit bizarre and utterly outside anything in my experience with 13 years of commercial translation. I'm not about to torture myself with an unergonomic application if a simpler one covers most of my professional needs. The Pareto principle rules.

Here's the scenario:
  • You receive a pre-translated SDLXLIFF file with segments of various statuses. Some are pre-translated fuzzy matches, some are not-yet-approved or even rejected pre-translated segments, and some the outsourcer confirmed (but did not "approve") before sending the file. And some segments have not been translated at all.
  • The outsourcer just went on holiday and forgot to send you the translation memory!
  • You want to be able to use the pretranslated and approved content in the SDLXLIFF file as a reference while you translate this file and others. How can this be done???
  • Here is the file to translate. It is an English source text being translated into German.
The file to translate as seen in SDL Trados Studio. Click to enlarge.

Thank you to those who contributed their suggestions in the comments! Here is how I approached the problem:
The bilingual file I was given has different "qualities" of translated segments. There are unconfirmed (and possibly dodgy) sentences, including a "rejected" 100% match, translated (confirmed) segments and approved (proofread) segments. A TM in memoQ gives me no opportunity to differentiate match quality based on row status. LiveDocs does!
So I send the SDLXLIFF file received to a LiveDocs corpus on a "temporary" basis, where I use special settings that apply a fairly heavy penalty to unconfirmed segments, a mild penalty to translated (confirmed) but unapproved (not proofread) segments and no penalty at all to the parts which have already been checked and approved.
Details of the settings configuration and an example of how these settings apply to the SDLXLIFF file used as an example are shown in the video below. A similar approach can be applied to any bilingual file (or translation stored in LiveDocs) where there may be significant differences in segment status.


Time index to the video tutorial:

0:30  Creating a new LiveDocs settings profile
1:05  Editing the new LiveDocs settings profile
1:29  Match threshold settings
2:16  Alignment penalties
3:01  Bilingual document penalties
3:45  Penalty for unfinished alignments
4:24  Sub-language difference penalty
4:57  A "tour" of the row status for segments in the SDLXLIFF
6:20  Adding the translation file to the LiveDocs corpus
7:14  Applying the new LiveDocs settings to the LiveDocs corpus used
8:00  How the new LiveDocs settings work for matches in the translation window
9:22  Advantages of using LiveDocs rather than a translation memory

Jul 27, 2013

The great guessing game: file extensions in Windows!


How many file types are there in the screen shot above?

Use your discerning eye and write your answer in the comments of this post. Anyone who figures out the correct answer and writes it as a comment before I update this post will be offered a special "reward" for clever deduction.

Update #1: A link to the solution has been posted in the comments. I will follow this up soon with a bit more explanation and a short video. In my comment with the solution, you'll also see why I hate Google's Blogger and don't recommend it to anyone, though I use it myself thanks to 5 years of habit. I'm the owner of the blog, and I can't even edit my own typos in comments, just in posts. Obviously, the Gods of Microsoft do not "thing". One would presume that they "think", however often the evidence of their actions may be against this.

Update #2: Hot summer nights are better suited to screencasting than sleep, so here is the video, a little sooner and longer than planned. But the part that actually matters (1:20 - 1:55) is in fact well under a minute.

 
 
This problem of hidden file extensions has come up so often in so many ways over the years. In the translation world of CAT tools I deal with, it comes up too often in discussions of how one can change the extension of an MQXLIFF file, for example, to XLIFF or XLF so that it can be imported by SDL Trados Studio or other tools.
I also have to deal with this problem when I send someone an RTF and a DOC file; these two file types are indistinguishable by icon, and without visible file extensions, it is fairly hopeless to discuss these files with anyone.
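For those comfortable with a little scripting, here is a minimal Python sketch of both points: listing the real names of files regardless of what Windows Explorer hides, and making a copy of an MQXLIFF with an .xliff extension so another tool will accept it. The folder and file names are hypothetical, and of course the same things can be done by hand once extensions are made visible.

import shutil
from pathlib import Path

folder = Path(r"C:\projects\delivery")   # hypothetical folder

# Print every file with its real extension, no matter what Explorer chooses to hide.
for p in sorted(folder.iterdir()):
    print(p.name)

# Copy an MQXLIFF under a new extension so SDL Trados Studio or another tool will import it.
src = folder / "job.mqxliff"             # hypothetical file
shutil.copy(src, src.with_suffix(".xliff"))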

Jul 26, 2013

The trouble with voice recognition in translation environment tools....


I had not planned to make a video on voice recognition tools any time soon, but a few remarks by my American colleague Kevin Hendzel well down in the many comments about thepigturd's letter to translators sort of goaded me into it. I thought, "What the heck, I'll just grab some text from Wikipedia, record a bit of the work with Camtasia, and post a quick demo of how easy it is to work with Dragon Naturally Speaking." So I got a text about chickens. And activated the screencast recorder. And then the trouble started.

It really sucked. Working with Dragon in memoQ is usually a fairly painless process, but tonight the dogs were anxious and kept poking me in the ribs, and I never did get the microphone adjusted quite right. Some days, microphone position is everything to my scaly transcriptionist. So I suffered with a lot more editing than usual, as anyone watching the video above will see. I worked in my usual "mixed mode" manner, with both keyboard and voice control. Some colleagues who swear by DNS like to do everything by voice and would probably wipe their backsides in the WC that way as well if they could, but that's way too geeky for me. After watching my copywriting partner fly through some 10,000 words of legal translation - and edit it - in a short working day while I slogged through my 3,000 and finished long after she called it a day, I realized that I could work in the relaxed way she did with thoughtful stares at the screen, muttered bursts and the occasional keyboard touch.


But today was a bad day with the Dragon. I might have gone a bit faster with the text. After all, chickens aren't rocket science or even chemistry, with its tag-ridden notation. I could have just dictated in a word processor and everything would have gone faster. And if I really want a TM or want to check the terminology, alignment is fast and also a good environment for editing my first draft. I know a number of translators who work that way now. Even with a dictaphone.

In his comments on the other post, Kevin Hendzel expressed a similar feeling to mine when translating with voice recognition: greater engagement and concentration on the text and its structure and meaning. But these tools are not without risk: any errors will in fact pass muster with a spelling checker, so proofreading workflows may have to be very different to be effective. I have noticed this myself - reading my text soon after I have translated it, I am very likely to overlook a missing or switched article or a homophone. Perhaps dictating into a word processor or - since I often look to the glossary hits and other hints on the right of my working window - exporting my text and re-aligning it in the CAT tool after an external rewrite may force my eyes to see things a little differently. In the two years that I have been making serious use of voice recognition I have not yet found the "perfect" workflow.

There are a lot of ways I can tease better results out of this work. But even on a bad day like today, things aren't all that awful. In fact, those familiar with some of the more honest estimates of output in optimized machine translation and post-editing scenarios will realize that today's lousy results (see the end of the video), maintained over the course of a working day, meet or beat the expectations for post-editing in a highly optimized scenario. Without the brain rot typically caused by PEMT! Now that's an advantage. Why don't we stop wasting time with machine translation and instead increase output by more research into the best ways of using voice recognition technology? Ah, but voice recognition is not yet optimized for every language! Ha ha ha... like MT is or ever will be. The millions that get flushed down the toilet with machine translation could and should buy a lot of improvement with voice recognition.

The real trouble with voice recognition is that you may not want your competition to use it. With or without CAT tools. Unlike machine translation.

New tutorials for translation productivity

As many may have noticed from reading this blog or following me on Twitter in recent weeks, I have begun to create and post on YouTube a larger number of videos on various topics related to translation support tools or processes to make the business of translation a little easier. Many, but not all of these videos cover the use of memoQ; there are also discussions of special file preparation, VBA macros, piece rate equivalency calculations and more.


Subscribe to the channel and keep up to date as I add more material on a range of software tips and other productivity ideas for translators.

I have used a number of different tools to record and edit these videos - Camtasia in a few cases, and so far mostly the free Open Source tool CamStudio, both for Windows. The production values of the clips vary considerably as I get back into video production after a 17-year break. I'm not aiming for perfection here, but rather for quick and practical - more or less in the spirit of my memoQuickie tutorials which formed the basis of much of the e-guidebook of memoQ tips I released last year. I start gnashing my teeth before the 4-minute mark with many translation tool tutorials I see, so I try hard to keep most of the clips well under that length. I find that even the good longer videos are difficult to use as references, because I often have to search for the few minutes that interest me in an hour-long webinar recording, and without a reasonable index, that's just too difficult.

Many of these videos and the ones that follow will be embedded content in courses I am developing for translator and project manager education using Moodle. These combine text, video and audio along with practice files in many cases for multimodal learning of basic and more advanced processes that those in the translation business often require.

I've watched the growth of webinars in recent years as well as various other forms of distance assistance and instruction. But aside from coaching sessions for clients and colleagues using TeamViewer, I have refrained from taking part in these popular new media, because I am unconvinced of their effectiveness in many cases. This is not to say that there are not many very good webinars available, both live and recorded, but even with the best I am often left with the sense that there should be something more. Maybe something like a self-paced online review course with resources where one can practice what was just covered in the long webinar and go over its most important points more easily. Such a resource might also be useful as a backup to live lectures or courses I or others might do.

I think that having a "toolbox" of online short courses might change the way I teach in person in some positive ways. It would likely give me more flexibility in how to cover topics for groups where a range of skills are present - the "outliers" requiring more remediation or greater challenge could perhaps be served better by offering them appropriate online follow-up resources, or even drawing material from these in a live workshop as the need arises.

Careful study of a wide range of current e-learning approaches in recent months has led me to think about what appears to work best and how I can learn from that and encourage others to do so as well. This has been an active and interesting discussion with quite a few friends and colleagues, and I don't expect we'll ever find the ultimate answers. But I imagine we will have fun and probably learn a lot, and maybe help others to learn some useful things too.

Jul 24, 2013

What good is memoQ fuzzy term matching?


When Kilgray introduced fuzzy term matching with the release of memoQ 2013, my first concern, after a few puzzling tests of the feature, was how it actually worked. Discussions with the development team soon cleared up that mystery, and I wrote an article describing the current fuzzy state of term matching technology in the translation environment tool that has done such a fine job of waking SDL and others from the long slumber of innovation that prevailed in the last decade.

But questions still remained in the minds of most users as they asked why they should care about this feature and what good it would really do for them.

The answer to that has become clearer for me as I have used the feature in recent weeks and noticed certain things. Like the fact that crappy spelling in my source texts is not as much of a burden for term matching any more:


This actually applies to more than just bad spelling. Those who translate from English will benefit from the fact that fuzzy term matching will help them if the UK source term is in the glossary but the author of the text used an American spelling. I cope with problems caused by old and new spelling conventions in German as well as the fact that a great many Germans cannot agree on how their compound words should be glued together. And my Portuguese friends tell me every week about the hassles of the spelling reform in progress in that linguistic corner.

Fuzzy term matching is currently not implemented for QA checking in memoQ, but I think it would make sense for Kilgray to allow it for QA on the source side. It could be a bit of a disaster to have it on the target side, however, for reasons I will leave readers to guess.

For those who want to set their termbases to use fuzzy matching by default in a particular language, here is a short video that shows how to change the termbase properties and how to change the match settings for legacy terms to "fuzzy":


I was initially a bit skeptical of the latest version of memoQ, but this feature and a few others have begun to "sink in". While I still don't feel comfortable with the company's hyperbole over new features like LQA, which is largely pointless for freelance translators, I do feel confident in saying that fuzzy term matching is a reason for most of us to seriously consider upgrading to memoQ 2013. This will be even more the case if it is added to the QA features.

Ah, but what about the change to the comments function, Kevin? You really hated that!

There's more to say on that topic now. Some of it is even good.

Jul 20, 2013

Teach the translators well

My colleague Jayne Fox recently offered a list of various continuing education opportunities of interest under the title "Free webinars for translators and interpreters". It's worth a look.

I find her blog post title unfortunate but astute. Many colleagues caught in the Poverty Cult mentality won't look at anything unless it is "free". Free software, free training, whatever. Very often a waste of time and money though. Translators and everyone else would do better to set their personal filters to seek the good first, and then apply cost criteria. If I'm flat broke, I'm not going to be paying my bills faster by wasting my time on crap. I need to focus on what will really build my skills and help my marketing. The fact that some of these things cost little or nothing is a matter of almost secondary importance no matter how empty the refrigerator and bank account might be.

In the same way, fat and happy translators billing €40 per word will not benefit from the fastest, most expensive computer hardware and software available. Anyone can benefit from good tools and some very good ones, such as OmegaT, can be had for no investment but your time. Whether OmegaT is better than SDL Trados Studio, Fluency or memoQ would depend on the task to be accomplished. That's one of the reasons why interoperability is such an important topic for me in translation technology: for thirteen years I have tried to use the best combination of tools to optimize the ergonomics of my work and get the best results.

Most of the web presentations Jayne listed are very good. She gave us an excellent overview which can help a great many people. One of my favorites in the list is the translators training site which Jost Zetzsche has been involved with for so many years: it has paid recordings that show specific, important and profitable tasks which should interest many translators, but it also offers free short video tutorials for every CAT tool I can think of (about 20 of them), comparing how each performs the same simple translation job, records terminology, etc. Often these little tutorials would be all someone needs to make a good start with their chosen CAT tool, and the videos are a good way to get a simple overview of the different "feel" of the various environments.

But we can do better.
How? I don't have all the answers. I have a few notions, and for some time now I have been researching past and current practice, pestering people with questions, wasting time and mining ideas. Along the way I've stumbled into a few interesting business opportunities as a provider of language services, learned a lot and had fun. Some have seen my experiments and been motivated to start their own. I hope that they and others will continue to question the models of online and offline instruction which currently dominate our practice and ask whether there is something more to be had.

Let's take the webinar as an example. These are very popular, and rightly so. I have learned a lot from them and probably could have learned a lot more. But to date I have resisted all attempts to draw me into teaching one myself, despite the fact that I have been committed to teaching in various forms for over 30 years. This is because many webinars are a waste of time. Even the best webinars waste time I think. Maybe not. But I think it's fair to say that someone watching the best 30 to 60 minute webinar I could offer would have a lot of their time wasted, and they would have a harder time making use of the lessons later than if these were presented differently.

Many times I have wanted to go back and review some useful technical point in a Gábor Ugray webinar on memoQ, and I just can't find it in the hour presentation, the dog ate my notes, and by the time I do find it, said dog needs to go for a walk and I forget the whole matter.

There is no indexing for most webinars. If you must leave that long talk in one big chunk, why not put an index under it which notes important points and the times at which they are discussed? There is probably some clever way to make this a clickable hyperindex which immediately skips to the relevant point, but I don't mind being low-tech and dragging a slider bar to get to the part of the video that interests me.

One hour is too f-ing long most of the time. I swear if I ever do an hour-long webinar, I will edit down to the twenty minutes that really matter and then slice that up into the three to five individual topics of interest. And I'll add a little text and perhaps some graphics to a web page in which the individual clips are embedded if this can reinforce or supplement the message in some useful way.

If the visuals don't matter I might even just extract the audio from the video and offer an MP3 "podcast" you might listen to in the car on a long drive or on a jog through the neighborhood (though I refuse all liability if you get killed at an intersection while not paying attention to your route).

If I'm trying to teach you about software and how to use it for a task like handling particular file formats or types of information, I might think about providing a demonstration file with which you can practice. How many people do this now? Doh. It's all very well to talk about how much a particular translation environment tool can do, but if people can't apply that and gain confidence before they are asked to quote on a big job, they might well be too afraid of failure and leave their refrigerators and bank accounts understocked.

The same applies to skills that have less or nothing to do with translation and multilingual matters. Current teaching paradigms are underdeveloped, and improving them is not usually a matter of better editing, flashier effects and easy listening soundtracks. Real value can often be simpler, faster, cheaper and cruder than that. Integrated instruction is more a matter of imagination than technology and budget.


My current "research" is being performed in most cases with production tools which are free and usually Open Source and which, most of the time, are definitely inferior to the premium Adobe software I used for similar tasks in the 1990s. This is my concession to the Poverty Cult and my own sometimes involuntary priorities, and I fear that if I download the latest copy of Camtasia, Adobe Premiere, Adobe Captivate and other fine tools someone with something useful to teach might confuse the medium with the message and not make a valuable tutorial they could create quite adequately using free and easy tools. There is always time to upgrade after the point has been driven into the ground and the stake is there for all to see.

Jul 10, 2013

Coping with objects and graphics to translate in Microsoft Office documents

About a year ago, I published a series of posts describing a simple way to get at the objects and graphics embedded in Microsoft Office documents, such as Microsoft Word DOCX documents or PowerPoint PPTX presentations. These investigations were inspired by a series of jobs where I had to cope with up to 60 embedded Excel tables in a Microsoft Word document. The four related posts are:
The post titles may differ a little from the link text here, which has been updated for a bit more clarity.

I've also added two short videos to my YouTube channel which illustrate how to remove embedded objects from a DOCX file so they can be translated separately from the Microsoft Word document, and how to put them back afterward.

Here's how to extract the embeddings folder from the DOCX file:



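For anyone who prefers a script to clicking around in an archive manager, here is a minimal sketch of the same idea in Python, with invented file and folder names: a DOCX is simply a ZIP archive, and the embedded objects live in its word/embeddings folder.

    # Copies every embedded object out of a DOCX (a ZIP archive) for separate translation.
    # The file and folder names are placeholders.
    import pathlib
    import zipfile

    docx = "report_with_excel_objects.docx"   # hypothetical source document
    outdir = pathlib.Path("embeddings_out")
    outdir.mkdir(exist_ok=True)

    with zipfile.ZipFile(docx) as zf:
        for name in zf.namelist():
            if name.startswith("word/embeddings/") and not name.endswith("/"):
                target = outdir / pathlib.Path(name).name
                target.write_bytes(zf.read(name))
                print("extracted", target)
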
And here is how to put the translated embedded objects into the DOCX file and refresh the view of the embedded objects in your translation:



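And here is a matching sketch for the return trip, again with invented names and assuming the translated objects keep their original file names: since a ZIP archive is awkward to edit in place, the DOCX is rebuilt with the files under word/embeddings replaced. The object previews in Word still have to be refreshed afterward, as the video shows.

    # Rebuilds the DOCX with the translated embedded objects swapped in.
    # File and folder names are placeholders.
    import pathlib
    import zipfile

    source_docx = "report_with_excel_objects.docx"
    updated_docx = "report_with_excel_objects_translated.docx"
    translated = pathlib.Path("embeddings_out")   # folder holding the translated objects

    with zipfile.ZipFile(source_docx) as src, \
         zipfile.ZipFile(updated_docx, "w", zipfile.ZIP_DEFLATED) as dst:
        for item in src.infolist():
            replacement = translated / pathlib.Path(item.filename).name
            if item.filename.startswith("word/embeddings/") and replacement.is_file():
                dst.writestr(item, replacement.read_bytes())   # swap in the translated object
            else:
                dst.writestr(item, src.read(item.filename))    # copy everything else unchanged
    print("wrote", updated_docx)
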
These and other videos are part of an effort I began recently to develop integrated courses for self-instruction and review of the software tools used by many of us. These courses use the Moodle platform and offer text, screenshots, audio, video and data files such as examples of file formats to translate, backups of memoQ practice projects to restore on your local computer for training, configuration resources for memoQ, useful macros to support work with many translation environment (CAT) tools and a host of other resources and learning links.

Jun 29, 2013

Caption editing for YouTube videos

I've spent a great deal of time in recent weeks examining different means for remote instruction via the Internet. In the past I've had good success with TeamViewer to work on copywriting projects with a partner or deliver training to colleagues and clients at a distance. So far I have avoided doing webinars because of the drawbacks I see in that medium, both as an instructor and as a participant, but I haven't completely excluded the possibility of doing them eventually. I've also looked at course tools such as Citrix GoToTraining and a variety of other e-learning platforms, such as Moodle, which is used by universities and schools around the world and which also seems to be the choice of Kilgray, ProZ and others for certain types of instruction.

Recorded video can be useful with many of these platforms, and since I've grown tired of doing the same demonstrations of software functions time and again, I've decided to record some of them for easy sharing and re-use. When I noticed recently that my Open Source screen recording software, CamStudio, had been released in a new version, I decided quite spontaneously to make a quick video of pseudotranslation in memoQ to test whether a cursor display bug in the previous version of CamStudio had been fixed.

After I uploaded the pseudotranslation demo to YouTube, I noticed that rather appalling captions (subtitles) had been created by automatic voice recognition. Although voice recognition software such as Dragon NaturallySpeaking is usually very kind to me, Google's voice recognition on YouTube gave miserable results.

I soon discovered, however, that the captions were easy to edit and could also be exported as text files with time cues. These text files can be edited very easily to correct recognition errors or combine segments to improve the timing and subtitle display.

Once the captions for the original language are cleaned up and the timing is improved, the text files can be translated and uploaded to the video in YouTube to create caption tracks in other languages. As a test, I did this (with a little help from my friends), adding tracks for German and European Portuguese to the pseudotranslation demo. And if anyone else cares to create another track for their native language from this file, I'll add it with credits at the start of the track.
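
To make the idea concrete, here is a minimal sketch in Python with one invented caption entry, assuming the track was exported in SubRip (.srt) form (YouTube offers several export formats): the time cues stay exactly as they are, and only the text lines are swapped out for the translated track.

    # Produces a translated caption file by replacing only the text lines;
    # the numbering and time cues are kept untouched. The caption entry and
    # German rendering are invented for illustration.
    english_captions = [
        "1",
        "00:00:01,000 --> 00:00:04,500",
        "This is a pseudotranslation demo in memoQ.",
        "",
    ]
    translations = {
        "This is a pseudotranslation demo in memoQ.":
            "Dies ist eine Pseudoübersetzungsdemo in memoQ.",
    }
    german_captions = [translations.get(line, line) for line in english_captions]
    with open("pseudotranslation_demo.de.srt", "w", encoding="utf-8") as f:
        f.write("\n".join(german_captions))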

It's easy enough to understand why I might want to add captions in other languages to a video I record in English or German. But why would I want to do so in the original language? My thick American accent is one reason. I like to imagine that my English is clear enough for everyone to understand, but that is a foolish conceit. Of course I speak clearly - I couldn't use Dragon successfully if that were not true. But someone with a knowledge of English mostly based on reading or interacting with people who have very different accents might have trouble. It happens.

Although most of the demonstration videos SDL has online for SDL Trados Studio are easy to follow, some of the thick UK accents are really frightening and difficult for some people in places like Flyover America to follow. Some Kilgray videos with excellent content are challenging for those unaccustomed to the accents, and the many wonderful demos of memoQ, Wordfast, OmegaT and other tools by CAT Guru on YouTube would have been difficult for me before I was exposed to the linguistic challenges of the wide world's many varieties of English. All of these excellent resources in English would benefit from clear English subtitles.

How difficult is it to create captions? The three-minute pseudotranslation demo cost me about ten minutes of work to clean up the subtitles. The English captions for another slightly shorter video explaining the use of the FeeWizard Online to estimate equivalent rates for charging by source or target words, lines, pages, etc. also took me about 10 or 15 minutes with all the text and timing corrections. And I've spent a good bit of time in the past week transcribing a difficult spoken English lecture by a German professor: it took me about 7 hours of transcription work to cope with a spoken hour. I don't know if this is typical, because I almost never do this sort of thing, and there were a lot of WTF moments. But I suppose three to seven times the recording length might be a reasonable range for estimating the effort of a draft edit and some timing changes. Not bad, really.

So if you are involved in creating instructional videos to put on YouTube or use elsewhere, please consider this easy way of making good work even better by investing a little time in caption creation and editing. Once you have done this for the original language, it will also be a simple matter to translate those captions to make your content even more accessible.

Sep 3, 2012

memoQuickie: "compatible" XLIFF in memoQ 6, 6.2 and 2013


The recent builds of memoQ 6 have caused some confusion, with Kilgray following the SDL practice of creating its own file extension for XLIFF files. These files may not be recognized by other applications unless the extension is changed to XLF or something else generally recognized as XLIFF.



This can actually be done directly in the export dialog for the bilingual XLIFF file. Just change the file extension in the File name field. Despite the renamed extension, the file will still be recognized as a memoQ bilingual when it is re-imported, and the translation of the original file will be updated.

If you encounter a compressed memoQ XLIFF export (MQXLZ), be aware that this is just a ZIP file. Rename the extension to ZIP and unpack it to get the real XLIFF (MQXLIFF) file. That file has a generic name (document.mqxliff) and should probably be renamed before it is shared.
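
If you would rather not rename anything by hand, here is a minimal sketch in Python, with an invented file name, that relies only on the fact that the MQXLZ container is an ordinary ZIP archive:

    # Pulls the MQXLIFF out of a compressed memoQ export without renaming the extension.
    # The .mqxlz file name is a placeholder.
    import zipfile

    with zipfile.ZipFile("project_export.mqxlz") as zf:
        print(zf.namelist())            # typically includes document.mqxliff
        zf.extract("document.mqxliff")  # lands in the current folder
        # Rename the extracted file to something recognizable before sharing it.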

Update 2013-07-23: Here is a short video I posted on YouTube to show how to export an XLF file and open it in SDL Trados Studio: