Meta is working on an AI-based technology capable of translating unwritten languages

PARIS, Oct. 20 (Benin News / EP) –

Meta said it is developing artificial intelligence (AI) based technology capable of translating languages that have no standard grammar or spelling, i.e., unwritten languages.

The company previewed its Universal Speech Translator (UST) project, which aims to create a real-time translation model covering all existing languages in order to “break down barriers and bring people together”, it said in a press release.

This project, which seeks to foster spoken communication in different scenarios, both in the real world and in the metaverse, is intended to support all languages, whether written or exclusively spoken.

Meta noted that, until now, computer-assisted translation “has mainly focused on written languages”, and pointed out that of the more than 7,000 living languages in the world, about half “do not have a standard or widely used writing system”.

The company acknowledged that this is a problem for simultaneous translation, because machine learning models require written resources, such as standardized grammar and spelling, to be trained.

To meet this challenge, Meta created its first translation system for a predominantly spoken language: Hokkien. This language, spoken in some regions of China, has no standard written form and therefore could not initially be translated by the company’s AI models.

The company notes that data collection “was a major obstacle” in this project, mainly because there was not enough material to train machine learning models.

To overcome this lack of data, Meta used Mandarin Chinese as an intermediate language to build reasonably faithful translations. It first translated Hokkien speech into Mandarin text and then translated on to English; this intermediate step improved the output by drawing on data from a closely related language.
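The pivot idea described above can be sketched as follows. This is a minimal illustration only: the dictionary “models” are hypothetical stand-ins for trained speech-recognition and translation systems, and the sample phrases are invented.

```python
# Toy sketch of pivot (cascaded) translation through an intermediate
# language. Real systems would use trained neural models at each stage;
# the dictionaries below are hypothetical placeholders.

HOKKIEN_TO_MANDARIN = {   # stand-in for stage 1: Hokkien speech -> Mandarin text
    "li-ho": "你好",
    "to-sia": "谢谢",
}

MANDARIN_TO_ENGLISH = {   # stand-in for stage 2: Mandarin text -> English text
    "你好": "hello",
    "谢谢": "thank you",
}

def translate_via_pivot(hokkien_utterance: str) -> str:
    """Translate Hokkien to English by pivoting through Mandarin text."""
    mandarin_text = HOKKIEN_TO_MANDARIN[hokkien_utterance]  # Hokkien -> Mandarin
    return MANDARIN_TO_ENGLISH[mandarin_text]               # Mandarin -> English
```

The design point is that the intermediate language supplies training data the low-resource language lacks; the cascade trades some fidelity for coverage.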

Using a training model, the company then analyzed the semantics and spoken form of Hokkien and compared it with languages that do have a written format, such as English. It also synthesized English speech from written texts, creating parallel Hokkien-English data.

A SPEECH-TO-SPEECH SYSTEM

The company recalled that most translation systems rely on speech-to-text transcription, and reworked this approach to translate directly from speech to speech.

To do this, it used speech-to-unit translation (S2UT) to convert speech into a sequence of discrete acoustic units and generate waveforms from them. It later adopted UnitY for a two-pass decoding mechanism.

In the first pass, the decoder generates text in a related written language (in this case, Mandarin); the second pass then creates the acoustic units. Meta also noted that it has developed a system that transcribes Hokkien speech into a standardized phonetic notation called Tâi-lô.
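The two-pass decoding flow can be sketched roughly as below. The functions are hypothetical stand-ins for the trained neural decoders in a UnitY-style system, and the placeholder outputs are invented for illustration.

```python
# Minimal sketch of a UnitY-style two-pass decoder: a first pass produces
# text in a related written language, and a second pass turns that text
# into discrete acoustic units that a vocoder would render as audio.
# Both decoders here are hypothetical placeholders, not real models.

def first_pass_text_decoder(speech_features: list[float]) -> str:
    # In a real system: an autoregressive decoder over Mandarin text tokens.
    return "你好"  # placeholder decoded text

def second_pass_unit_decoder(text: str) -> list[int]:
    # In a real system: maps decoded text (plus encoder states) to unit IDs.
    return [ord(ch) % 100 for ch in text]  # placeholder unit IDs

def translate_speech_to_units(features: list[float]) -> tuple[str, list[int]]:
    text = first_pass_text_decoder(features)
    units = second_pass_unit_decoder(text)  # a vocoder would synthesize audio from these
    return text, units
```

Splitting decoding into a text pass and a unit pass lets the model lean on stronger text-translation supervision before committing to acoustics.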

The company also recalled that speech translation systems are usually evaluated with a metric called ASR-BLEU. This method first transcribes the translated speech into text using automatic speech recognition (ASR), then measures quality by comparing the machine transcription against a translation produced by a person.
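A metric along these lines can be sketched as follows. The ASR step is a hypothetical stand-in, and the scoring function is a simplified, unsmoothed BLEU (geometric mean of n-gram precisions with a brevity penalty), not the exact implementation Meta uses.

```python
import math
from collections import Counter

# Sketch of the ASR-BLEU idea: run ASR on the translated speech, then
# score the transcript against a human reference with a BLEU-style metric.

def fake_asr(audio: bytes) -> str:
    # Hypothetical stand-in for a real speech recognizer.
    return "thank you very much"

def simple_bleu(hypothesis: str, reference: str, max_n: int = 4) -> float:
    """Unsmoothed sentence-level BLEU over whitespace tokens."""
    hyp, ref = hypothesis.split(), reference.split()
    precisions = []
    for n in range(1, max_n + 1):
        hyp_ngrams = Counter(tuple(hyp[i:i + n]) for i in range(len(hyp) - n + 1))
        ref_ngrams = Counter(tuple(ref[i:i + n]) for i in range(len(ref) - n + 1))
        overlap = sum((hyp_ngrams & ref_ngrams).values())  # clipped n-gram matches
        precisions.append(overlap / max(sum(hyp_ngrams.values()), 1))
    if min(precisions) == 0:
        return 0.0
    geo_mean = math.exp(sum(math.log(p) for p in precisions) / max_n)
    brevity_penalty = min(1.0, math.exp(1 - len(ref) / len(hyp)))
    return brevity_penalty * geo_mean

def asr_bleu(translated_audio: bytes, human_reference: str) -> float:
    return simple_bleu(fake_asr(translated_audio), human_reference)
```

An identical transcript and reference score 1.0, and any missing n-gram order drives the unsmoothed score to 0, which is why production evaluations typically use smoothed BLEU.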

Because Hokkien has no written grammar, this was not directly possible, so Meta created the first Hokkien-English bidirectional speech translation benchmark data set. It has been released as open source so that researchers can contribute to and build on Meta’s translation work.

Finally, the company said that this translation solution is still in its infancy and that it aims to offer simultaneous translation between many more languages in the near future.

Meta also said that the project can be extended to languages other than Hokkien, and announced that it will publish a speech-to-speech translation corpus called SpeechMatrix, mined with its LASER toolkit (Language-Agnostic Sentence Representations).
