Cristina, February 2025

Harnessing the power of Large Language Models (LLMs) in translation

At tolingo, we are always on the lookout for emerging technologies that can enhance translation quality and efficiency. One of the most exciting developments in recent years has been the rise of Large Language Models (LLMs). These AI-driven systems have already shown potential in various applications, and we are actively exploring how they can be used in translation.

What are Large Language Models (LLMs)?

Large Language Models are advanced AI systems trained on massive amounts of text data. They use deep learning techniques, particularly neural networks, to analyze, generate, and refine text. LLMs such as OpenAI's GPT series or Anthropic's Claude can analyze context, recognize patterns, and generate coherent responses to text-based inputs. This makes them incredibly powerful for a range of tasks, including translation, text summarization, and linguistic analysis.

Some use cases currently being explored in the translation industry

  1. Machine Translation (MT) adjustment

    Raw MT output often requires post-editing to ensure accuracy and natural flow. LLMs can assist in refining MT results by correcting specific aspects of the language. With the right prompting expertise and an understanding of both the source and target languages, one can, for example, adjust the level of formality towards readers, implement gender-neutral language, or adapt date and number formats to the conventions of the target country.

  2. Quality Assurance checks

    Consistency and accuracy are key in translation, especially for technical or regulated industries. LLMs can be used to perform automated checks and to identify issues such as omissions, additions or wrong terminology in the target text. They can flag potential errors before final review, saving time and improving the overall quality of the translation.

  3. Terminology extraction

    LLMs are also being explored, along with other natural language processing tools, to extract key terms from documents and create domain-specific glossaries. One can, for example, prompt an LLM to extract all terms related to diabetes from a collection of medical texts and then use them to build a terminology database.
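As a minimal illustration of this use case, the sketch below assembles a terminology-extraction prompt. The helper function and its wording are our own assumptions, not a production pipeline, and the actual LLM call is left as a stub:

```python
# Sketch of a terminology-extraction prompt. The wording and the
# build_term_extraction_prompt helper are illustrative assumptions;
# the LLM call itself (OpenAI, Anthropic, ...) is left as a stub.

def build_term_extraction_prompt(domain: str, text: str) -> str:
    """Assemble a prompt asking an LLM to extract domain-specific terms."""
    return (
        f"Extract all terms related to {domain} from the text below.\n"
        "Return one term per line, with no explanations and no duplicates.\n\n"
        f"Text:\n{text}"
    )

prompt = build_term_extraction_prompt(
    "diabetes",
    "Patients with type 2 diabetes monitor their blood glucose levels "
    "and may require insulin therapy.",
)
# response = call_llm(prompt)  # extracted terms then feed the glossary
```

The same pattern applies to any domain: only the `domain` argument and the source texts change, while the output constraints stay fixed.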

The role of prompts in LLM performance

A key factor in leveraging LLMs effectively is the use of well-structured prompts. A prompt is the instruction given to the LLM to guide its response. The clarity and specificity of a prompt can significantly impact the quality of the output. For instance, simply instructing an LLM to "improve a translation", without concrete indications or examples, might give the LLM too much room for interpretation and lead to hallucinations or incorrect adjustments.
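To make the contrast concrete, here is a sketch of a vague prompt versus a more constrained one for a formality adjustment. The exact wording is an assumption for illustration, not tolingo's production prompt:

```python
# Two ways to ask for the same revision. The wording is illustrative only.

# Vague: leaves the model free to rewrite anything, inviting hallucinations.
vague_prompt = "Improve this translation:\n{translation}"

# Specific: one task, one constraint, one worked example.
specific_prompt = (
    "You are revising a Spanish machine translation of a German source.\n"
    "Task: replace informal address (tú) with the formal courtesy form "
    "(usted), adjusting pronouns and verb conjugations accordingly.\n"
    "Change nothing else.\n"
    "Example: 'Tan individual como tú' -> 'Tan individual como usted'\n\n"
    "Source (German): {source}\n"
    "Machine translation (Spanish): {translation}\n"
    "Revised translation:"
)
```

The specific prompt narrows the task, forbids unrelated edits, and anchors the expected behaviour with a worked example, which is exactly the room for interpretation that the vague prompt fails to close.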

 

Practical examples

To illustrate how LLMs can be used in translation and post-editing, let's examine a few practical cases:

1. Adjustment of formality level

In these examples, we prompted the LLM to adjust the MT output by using the Spanish courtesy form (usted, which takes the third person singular) when addressing readers. As you can see, both personal pronouns and verbs were adjusted:

Example 1:

Source Text: So individuell wie Sie
MT Output: Tan individual como tú
LLM Adjustment: Tan individual como usted

 

Example 2:

Source Text: Wenn Sie einen Wunsch oder ein Anliegen haben, sind wir natürlich immer mit einem offenen Ohr für Sie da.
MT Output: Si tienes alguna petición o preocupación, por supuesto siempre estamos ahí para escucharte.
LLM Adjustment: Si tiene alguna petición o preocupación, por supuesto siempre estamos ahí para escucharlo.

 

 

2. Adjustment of date format

Different languages and cultures follow distinct date formats. In this example of a machine translation from English into German, we prompted the LLM to adjust the German MT output to use the date format DD.MM.YYYY and provided one example of the desired change. As you can see, the date format was adjusted accordingly:

Source Text: 26/05/2020 – The German Institute for Medical Documentation and Information was officially merged into the BfArM.
MT Output: 26/05/2020 - Das Deutsche Institut für Medizinische Dokumentation und Information wurde offiziell in das BfArM eingegliedert.
LLM Adjustment: 26.05.2020 - Das Deutsche Institut für Medizinische Dokumentation und Information wurde offiziell in das BfArM eingegliedert.
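A purely mechanical rule like this can also be verified, or even applied, deterministically after the LLM step. The following sketch is our own illustration, not part of tolingo's workflow; it rewrites any remaining DD/MM/YYYY date to the German DD.MM.YYYY convention:

```python
import re

# Rewrite DD/MM/YYYY dates to the German DD.MM.YYYY convention.
# The pattern deliberately matches only fully numeric day/month/year dates.
SLASH_DATE = re.compile(r"\b(\d{2})/(\d{2})/(\d{4})\b")

def enforce_german_date_format(segment: str) -> str:
    """Return the segment with slash-separated dates rewritten with dots."""
    return SLASH_DATE.sub(r"\1.\2.\3", segment)

print(enforce_german_date_format(
    "26/05/2020 - Das Institut wurde offiziell eingegliedert."
))
# 26.05.2020 - Das Institut wurde offiziell eingegliedert.
```

In practice such deterministic checks complement the LLM: the model handles context-dependent adjustments, while rigid locale conventions can be enforced or flagged by simple rules.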

 

 

3. Adjustment of active and passive voice

Despite the clear potential of LLMs, their effectiveness depends heavily on well-crafted prompts. This becomes evident in the following case, where a prompt was used to improve the machine translation of an instruction manual by replacing passive formulations with active ones to enhance clarity.

Source Text: Holzplatten sind nicht begehbar!
MT Output: Wooden sheets cannot be walked on!
LLM Adjustment: Wooden sheets are not walkable!

However, while the LLM-generated adjustment improved readability, a more precise prompt might have yielded even better results, for example by explicitly requesting an imperative form ("Do not walk on wooden sheets!").

The future of LLMs in translation: potential and challenges

The potential of LLMs in translation is vast. However, their successful implementation requires careful testing, expertise in prompt engineering, and a clear understanding of the intended use case. At tolingo, we are committed to exploring these technologies while maintaining rigorous quality control to ensure the best possible translations.

By continuously testing and refining how we use LLMs, we aim to strike the right balance between innovation and reliability—leveraging the power of AI while ensuring human oversight for precision and accuracy. The future of translation is evolving, and we are excited to be at the forefront of these advancements!
