Read my review of ‘DTP for Translators’, hosted by the North West Translators’ Network in October 2022.
Hi everyone! Last month, I attended a webinar on desktop publishing (DTP) for translators, hosted by the North West Translators’ Network (NWTN). After a call for volunteers, I was asked to write a short review of the session for the NWTN website. Click the link below to read it!
Read my latest WMTS blog post on the perils of the 101 match, CAT tool errors, and the importance of human proofreading.
Today, computer-aided translation (CAT) tools are an integral part of a translator’s arsenal: increasing output, improving terminology accuracy, and streamlining the project management process.
That said, a CAT tool is only ever as good as the person using it. All the fancy bells and whistles that many tools offer are ultimately superfluous if thorough, careful proofreading falls by the wayside.
I was recently asked to proofread a translation of a clinical trial application form for a study taking place in France. A well-known CAT tool had been used to complete the task, but the translator’s apparent faith in the technology resulted in a fundamental mistranslation of the source text.
When authorising clinical trials to be conducted in France, the French health authorities use one of two methods of authorisation when responding to applicants: expressed or implicit.
Expressed authorisation means written confirmation is sent to the applicant confirming the status of a clinical trial application, whereas no written confirmation is sent under the implicit regime. In effect, implicit authorisation amounts to: “If you don’t hear from us, assume everything’s fine.”
In the translation I was asked to review, a ‘101’ or ‘perfect’ match appeared in a segment referring to the aforementioned authorisation methods. A perfect match is a segment of previously translated text stored in a translation memory (TM). When the CAT tool recognises that an identical source segment appears in a later task, it automatically inserts the stored ‘perfect’ translation into the new target text. Unfortunately, the stored translation of the segment explaining implicit authorisation was fundamentally wrong: it stated that a lack of response indicated rejection of the application, rather than authorisation thereof.
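At its simplest, a TM exact-match lookup works like a dictionary keyed on the source segment. The sketch below is purely illustrative: it is not how any particular CAT tool is implemented, and the segments (including the deliberate mistranslation) are invented for the example.

```python
# Illustrative sketch of a TM 'perfect match' lookup. The TM maps previously
# translated source segments to their stored target translations.
tm = {
    "En l'absence de réponse, la demande est réputée autorisée.":
        # Stored mistranslation: the French actually means 'deemed AUTHORISED'.
        "If no response is received, the application is deemed rejected.",
}

def lookup(source_segment):
    """Return the stored translation for an identical source segment, or None."""
    return tm.get(source_segment)

# An identical segment in a new job triggers the match, and the tool
# auto-fills the stored translation -- error included.
match = lookup("En l'absence de réponse, la demande est réputée autorisée.")
```

The lookup has no notion of correctness: whatever was stored, right or wrong, comes back as a ‘perfect’ match, which is precisely why such segments still need human review.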
This highlights a key CAT tool issue: the peril of the perfect match, and the complacency it can induce. As a translator, it’s nice to see a green 101 symbol next to a given segment; however, as any responsible translator knows, you can’t simply regard such segments as ‘translated’ and move on. All segments, including 101 matches, must be carefully reviewed for contextual accuracy, regardless of their match scores. Additionally, as a matter of best practice, each client should have their own dedicated TM rather than a generic ‘catch-all’ TM; this reduces the risk of contextual mistranslations.
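The per-client TM point can be sketched the same way: keep one store per client and search only that store, so a translation made in one client’s register can never surface as a ‘perfect’ match in another’s. The client names and segment pairs below are invented for illustration.

```python
# Illustrative sketch of dedicated per-client TMs (not a catch-all TM).
client_tms = {
    "pharma_client": {"essai": "trial"},          # clinical register
    "marketing_client": {"essai": "test drive"},  # automotive register
}

def lookup(client, source_segment):
    """Search only the requesting client's own TM, avoiding cross-client reuse."""
    return client_tms.get(client, {}).get(source_segment)
```

With a single catch-all TM, ‘essai’ stored once for the automotive client could be auto-filled into a clinical job; segregating the memories makes that impossible by construction.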
This is where careful, human proofreading is essential. As discussed in my previous post, human proofreading should be a deliberate and careful process and is essential to catching such CAT tool errors and providing sufficient quality assurance for our clients.