Dec 11, 2018

Your language in Hey memoQ: recognition information for speech

There are quite a number of issues facing memoQ users who wish to make use of the new speech recognition feature – Hey memoQ – released recently with memoQ version 8.7. Some of these are of a temporary nature (workarounds and efforts to deal with bugs or shortcomings in the current release which can reasonably be expected to change soon), others – like basic information on commands for iOS dictation and what options have been implemented for your language – might not be so easy to work out. My own research in this area for English, German and Portuguese has revealed a lot of errors in some of the information sources, so often I have to take what I find and try it out in chat dictation, e-mail messages or the Notes app (my favorite record-keeping tool for such things) on the iOS device. This is the "baseline" for evaluating how Hey memoQ should transcribe text in a given language.

But where do you find this information? One of the best ways might be a Google Advanced Search on Apple's support site. Like this one, for example:

The same search (or another) can be made by adding the site specification after your search terms in an ordinary Google search, for example: ios dictation commands site:support.apple.com

The results lists from these searches reveal quite a number of relevant articles about iOS dictation in English. And by hacking the URLs on certain pages and substituting the language code desired, one can get to the information page on commands available for that language. Examples include:
All the same page, with slightly modified URLs.

The Mac OS information pages are also a source of information on possible iOS commands that one might not find so easily otherwise. An English page with a lot of information on punctuation and symbols is here:

The same information (if available) for other languages is found just by tweaking the URL:

and so on. Some guidance on Apple's choice of codes for language variants is here, but I often end up getting to where I want to go by guesswork. The Microsoft Azure page for speech API support might be more helpful to figure out how to tweak the Apple Support URLs.
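This kind of URL tweaking can be scripted if you want to generate candidate pages for several languages at once. A minimal sketch in Python: the article slug "HT000000" below is a placeholder, not a real Apple article ID, so start from an English URL you have verified in your own browser and swap only the locale segment (the same slug usually, but not always, works across locales).

```python
# Generate candidate localized Apple Support URLs by swapping the
# locale code in a known-good English URL. "HT000000" is a placeholder
# slug; replace it with the slug from the page you actually found.
BASE = "https://support.apple.com/{locale}/HT000000"

def localized_urls(locales):
    """Return one candidate URL per locale code, ready to try in a browser."""
    return [BASE.format(locale=loc) for loc in locales]

for url in localized_urls(["en-us", "de-de", "pt-br"]):
    print(url)
```

Not every generated URL will resolve; some localized pages simply don't exist, which is why trying them by hand (or guesswork, as noted above) remains part of the process.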

When you edit the commands list, you should be aware of a few things to avoid errors.
  • The command lists in the first release may contain errors, such as the word "phrase" mistakenly typed out in angle brackets, as shown in the first example above; when you edit them, commands that are followed by a phrase do not show the placeholder for that phrase, as you can see in the example marked "2".
  • Commands must be entered without quotation marks! Compare the marked examples 1 and 2 above. If quotes are typed when a command is edited, the command's appearance will not reveal the problem: it will look OK but won't work at all until the quote marks are removed by editing.
  • Command creation is an iterative process that may entail a lot of frustrating failures. When I created my German command set, I started by copying some commands used for editing by Dragon NaturallySpeaking, but often the results were better if I chose other words. Sometimes iOS stubbornly insists on transcribing some other common expression, sometimes it just insists on interpreting your command as a word to transcribe. Just be patient and try something else.
The difficulties involved in command development at this stage are surely why only one finished command set (for the English variants) for memoQ-specific commands was released at first. But that makes it all the more important to make command sets "light resources" in memoQ, which can be easily exported and exchanged with others.

At the present stage, I see the need for developing and/or fixing the Hey memoQ app in the following ways:
  • Fix obvious bugs, which include: 
      • the apparently non-functional concordance insertions (in general, more voice control would be helpful in the memoQ Concordance);
      • capitalization errors, which may affect a variety of commands: Roman numerals, ALL CAPS, title capitalization (if the first word of the title is not at the start of the segment), etc.;
      • dodgy responses to the commands to insert spaces: a single command often fails to insert a space at all, so I have to say it twice and end up stuck with two spaces. Why is a space needed? Well, otherwise you have to type one on the keyboard before using a Translation Results insertion command to insert specialized terminology, auto-translation rule results, etc. into your text.
  • Address some potentially complicated issues, like considering what to do about source language text handling if there is no iOS support for the source language or the translator cannot dictate commands effectively in that language. I can manage in German or Portuguese, but I would be really screwed these days if I had to give commands in Russian or Japanese.
  • Expand dictation functionality in environments like the QA resolution lists, term entry dialog, alignment editor and other editors.
  • Look for simple ideas that could maximize returns for programming effort invested, like the "Press" command in Dragon NaturallySpeaking, which enables me to insert tags, for example, by saying "Press F9". This would eliminate the need for some commands (like confirmation and all the Translation Results insertion commands) and open up a host of possibilities by making keyboard shortcuts in any context controllable by voice. I've been thinking a lot about that since talking to a colleague with some pretty tough physical disabilities recently.
Overall, I think that Hey memoQ represents a great start in making speech recognition available in a useful way in a desktop translation environment tool and making the case for more extensive investments in speech recognition technology to improve accessibility and ergonomics for working translators.

Of course, speech recognition brings with it a number of different challenges for reviewing work: mistakes (or "dictos" as they are sometimes called, a riff on keyboard "typos") are often harder to catch, especially when reviewing directly after translating, while the memory of the intended text is perhaps fresh enough to override in perception what the eye actually sees. So maybe before long we'll see an integrated read-back feature in memoQ, which could also benefit people who don't work with speech recognition. 

Since I began using speech recognition a lot for my work (to cope with occasionally unbearable pain from gout), I have had to adopt the habit of reading everything out loud after I translate, because I have found this to be the best way to catch my errors or to recognize where the text could use a rhetorical makeover. (The read-back function of Dragon NaturallySpeaking in English is a nightmare, randomly confusing definite and indefinite articles, but other tools might be usable now for external review. These should probably be applied to the target columns of an exported RTF bilingual file to facilitate re-import of corrections into the memoQ environment, though the monolingual review feature for importing edited target text files and keeping project resources up to date is also a good option.)

As I have worked with the first release of Hey memoQ, I have noticed quite a few little details where small refinements or extensions to the app could help my workflow. The same will be true, I am sure, for most others who use this tool. It is particularly important at this stage that those of us who are using and/or testing this early version communicate with the development team (in the form of e-mail to memoQ Support - - with suggestions or observations). This, I think, will be the fastest way to see improvements.

In the future, I would be surprised if applications like this did not develop to cover other input methods (besides an iOS device like an iPhone or iPad). But I think it's important to focus on taking this initial platform as far as it can go, so that we can all see what working functionality is missing as the APIs of the relevant operating systems develop further to support speech recognition (especially the Holy Grail for many of us: trainable vocabulary like we have in Dragon NaturallySpeaking and very few other applications). Some of what we are looking for may be in the Nuance software development kits (SDKs) for speech recognition, which I suggested using some years ago because they offer customizable vocabularies at higher levels of licensing, but this would represent a much greater and more speculative investment in an area of technology that is still subject to a lot of misunderstanding and misrepresentation.

Dec 10, 2018

"Hey memoQ" command tests

In my last post on the release of memoQ 8.7 with its new, integrated speech recognition feature I included a link to a long, boring video record of my first tests of the speech recognition facility, most of which consisted of testing various spoken iOS commands to generate text symbols, change capitalization, etc. I tested some of the integrated commands that are specific to memoQ, but not in an organized way really.

In a new testing video, I attempt to show all the memoQ-specific spoken command types and how the commands are affected by the environment (by which I mean whether the cursor is on the target text side, on the source text side, or in some other place such as the concordance).

Most of the spoken commands work rather well, except for insertion from the concordance, which I could not get to work at all. When the cursor is in a source text cell, commands currently have to be given in the source text language, which is sure to prove interesting for people who don't speak their source language with a clean accent. Right now it's even more interesting, because English is the only language with a ready-made command list; for other languages you have to "roll your own" for now, which is a bit of a trial-and-error affair. I don't even want to think about how this is going to work if the source language isn't supported at all; some thought will have to be given to how commands are used with source text. If the source text is copied to the target side, I assume it will be difficult to select by voice unless, with butchered pronunciation, the text also happens to make sense in the target language.

It's best to watch this video on YouTube (start it, then click "YouTube" at the bottom of the running video). There you'll find a time code index in the description (after you click SEE MORE) which will enable you to navigate to specific commands or other things shown in the test video.

My ongoing work with Hey memoQ makes it clear that what I call "mixed mode" (dictation with concurrent use of the keyboard) is the best, and really a necessary, way to use this feature. The style needed for successful dictation is also quite different from the style I use with Dragon NaturallySpeaking for best results. I have to discipline myself to speak mostly in short phrases; longer phrases, and especially long sentences, may cause some text to be dropped.

There is also an issue with Translation Results insertions and the lack of spaces before them; the command to insert a space ("spacebar" in English) is dodgy, so I usually have to speak it twice and end up with a superfluous space. The video shows my workaround for this in one part: I speak a filler word (in one case I tried "dummy" which was rendered as "dumb he") and then select it later and insert an entry from the Translation Results pane over the selected text. This is in fact how we can deal with specialist terminology not recognized by the current speech dictionary until it becomes possible to train new words some day.

The sound in the video (spoken commands) is also of variable quality; for some commands I had to turn my head toward the iPhone on its little tripod next to my laptop, so the built-in microphone in the laptop's screen picked up that speech poorly. So this isn't a Hollywood-class recording; it's simply a slightly edited record of some of my tests to give other memoQ users some idea of what they can expect from the feature right now.

Those who will be dictating in supported languages other than English need some patience right now. It's not always easy coming up with commands that will be recognized easily but which are unlikely to occur as words to be transcribed in typical dictation work. During the beta test of Hey memoQ I used some bizarre and unusual German words which just happened to be recognized. I'm developing a set of more normal-sounding commands right now, but it's a work in progress.

The difficulties I am encountering making up new command phrases (or changing the English ones in some cases) simply reinforce my belief that these command lists should be made into portable light resources as soon as possible.

I am organizing summary tables of the memoQ-specific commands and useful iOS commands for symbols, capitals, spacing, etc. comparing their performance in other iOS apps with what we see right now in Hey memoQ.

Update: the summary file for English is available here. I will post links here for any other languages I can prepare later.

Migrating memoQ with Mac Parallels

A recurring complaint among memoQ users is the perceived (and actual) difficulty of moving all one's resources to a new computer when it's time to retire the old one. My favorite strategy is to create a single backup file for all my projects and restore that on a new machine, though this still leaves some details to clean up like moving light resources that don't happen to be included in any of those projects. Some people like to make a dummy project and attach all the heavy resources to that, but this is really only a viable option if you work in a single language pair. There are other strategies, of course, some better than others, given the particular situation.

Recently, a colleague who prefers to work on Apple Macintosh computers with Windows running in a Parallels virtual machine (VM) shared her new approach to migration. Apparently it's working well. And I suspect that the same approach could be used with a Windows VM under Windows as I used to do a lot in the days I tested more unstable software and needed to quarantine potential disasters.

Rather than re-install everything on the new machine, she simply
  1. installed Parallels on the new Mac,
  2. copied the VM file from the old Mac to the new one and
  3. copied resource folders to an identical path on the new Mac.
That's all. The fact that this is a virtual environment makes it all easier. So I suppose any memoQ user who runs the software in a virtual machine (VMware, Parallels, etc.) could do the same, as could any user, on any host operating system, who runs Windows in a virtual machine.
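For anyone who prefers to script step 3, the folder copy can be sketched in Python. The paths below are illustrative assumptions only; check where your own memoQ installation actually keeps its resources before copying anything, and remember that the destination path must match the old one exactly.

```python
# Sketch of step 3 above: copy the memoQ resource folders from the old
# machine to the identical path on the new one. Paths are illustrative
# assumptions; verify your own resource locations first.
import shutil
from pathlib import Path

def copy_resources(src: Path, dst: Path) -> None:
    """Copy the whole resource tree, preserving its directory layout."""
    shutil.copytree(src, dst, dirs_exist_ok=True)

# Example (hypothetical paths, old Mac's disk mounted on the new machine):
# copy_resources(Path("/Volumes/OldMac/Users/me/Documents/My memoQ projects"),
#                Path("/Users/me/Documents/My memoQ projects"))
```

Note that dirs_exist_ok=True requires Python 3.8 or later and will overwrite files that already exist at the destination, which is what you want for a one-time migration but worth knowing before you run it twice.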

That should have occurred to me earlier. Years ago I used to run several virtual machines with ancient versions of Windows to access old CD-based dictionaries that could not run under newer OS versions, but I've fallen out of that habit in recent years as the emphasis of my translation work shifted to other fields.