In our three-part blog series "Machine Translation", we'd like to present various aspects of Machine Translation (MT). In this first part, we'll explain how Machine Translation is used at kothes. We'll also provide you with an overview of the various factors that can significantly influence the quality of Machine Translation.
Online translation services such as Google Translate or DeepL offer a quick and easy way to translate texts into another language. The advantages are obvious: short turnaround times and low costs – compared with human translation – sound tempting. But these systems don't deliver perfect translations. On the contrary: most people have come across strange and funny mistranslations, such as "nuclear soap" for curd soap ("Kernseife" in German). This raises the question of how Machine Translation can be used when the end product has to meet high quality demands, as is the case with technical translations. In the following, we'll explain which steps in the translation process change with Machine Translation, and what needs to be considered well in advance.
Machine Translation is used in the translation process via a connection to the Translation Management System. This interface enables text modules that are not yet available in the customer-specific Translation Memory (TM) of the system to be translated automatically. Text modules from previous translations can therefore continue to be reused. Which content is taken from the Translation Memory and which is taken from the machine can be set individually. However, in order to prevent inappropriate translations from being reused, we recommend checking the quality of the Translation Memory beforehand.
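To make the interplay between Translation Memory and machine concrete, here is a minimal sketch of the "TM first, machine as fallback" logic described above. All names and the stand-in MT call are purely illustrative assumptions, not the actual interface of any Translation Management System:

```python
# Illustrative sketch: look up each text module in the Translation Memory (TM)
# first, and only fall back to Machine Translation (MT) for modules not yet in it.
# The machine_translate() function is a hypothetical stand-in for a real MT engine.

def machine_translate(segment: str) -> str:
    """Stand-in for a call to an MT engine; a real system would call an API here."""
    return f"<machine translation of: {segment}>"

def translate_segment(segment: str, translation_memory: dict) -> tuple:
    """Return (translation, origin): a TM hit if available, otherwise MT output."""
    if segment in translation_memory:
        return translation_memory[segment], "TM"
    return machine_translate(segment), "MT"

# Example: one module is already in the customer-specific TM, one is not.
tm = {"Press the start button.": "Drücken Sie die Starttaste."}
print(translate_segment("Press the start button.", tm))  # reused from the TM
print(translate_segment("Open the valve.", tm))          # sent to the machine
```

A real setup would of course also handle fuzzy (near-exact) TM matches and configurable thresholds; the point here is only the order of precedence: existing, checked translations are reused, and the machine fills the gaps.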
As indicated at the beginning, the machine can't provide perfect translations. Therefore, the automatic translation is followed by so-called post-editing. The aim of post-editing is to correct the machine's errors and thus produce a translation that meets the customer's requirements. The post-editor checks the translations suggested by the system and adapts them in terms of grammar, spelling and style, as well as customer-specific requirements such as specialist terminology. Numerous studies have shown that post-editing throughput can be noticeably higher than that of a translation from scratch, depending on the quality of the Machine Translation results. As in the regular translation process, post-editing is followed by an automatic check and – if desired – a review in accordance with the "four-eyes principle", because post-editing is not a substitute for a check by another person. The post-edited and, if necessary, proofread text modules are then stored in the Translation Memory and can be reused in the next translation job. In this way, as a Language Service Provider (LSP), we can continue to guarantee the high quality of your translations.
However, before Machine Translation can be used successfully, a number of points must be clarified in advance, as the quality of the translation results is determined by various factors. Customers should therefore be aware of their expectations and options, so that they don't experience any unpleasant surprises at the end of the project. For example, it should be clarified which types of text are to be translated into which languages. Machine Translation achieves better results for languages that are similar to each other, such as German and English, than for language pairs such as Russian and Chinese. With regard to text types, technical texts such as Operating Manuals or Service Instructions are particularly suitable, as long as they're written in an easily understandable and consistent manner – in accordance with established guidelines. Marketing texts, on the other hand, are less suitable because they're characterised by a creative linguistic style that the machine can't always handle. Furthermore, when looking at the source texts, it should be noted that their quality largely determines the result of the Machine Translation. The saying "Garbage in, garbage out", well known in Computer Science, proves true here as well. The machine is (unfortunately) not capable of transforming a poor source text into a high-quality translation. A poor MT result also means more work in post-editing. The result can be frustration among post-editors over overtime with comparatively low pay, and savings targets that the customer fails to meet.
In addition, there is the question of which translation system (the so-called "engine") to use. You can choose between "generic" and "trainable" engines. Generic systems such as DeepL or Google Translate are trained by the respective provider with data selected from various areas. Their translations sound very natural, but they can't always reproduce technical content correctly. Trainable engines, in comparison, are trained with the company's own data. This allows subject-specific terminology and customer-specific style specifications to be taken into account. However, training such an engine requires both very large amounts of data and a correspondingly large IT infrastructure. We already have the appropriate translation systems in place, so there are no additional costs if you wish to switch to Machine Translation in the future. Which type of engine is worthwhile in each individual case depends on several factors, such as your quality requirements, text types, language combinations and translation volumes. The second instalment of our blog series on Machine Translation provides detailed insights into the special features of the various Machine Translation systems.
We'd be happy to support you right away in weighing the various factors for the use of Machine Translation – and we'll remain at your side with our know-how in this area.