Announcement
HLTCOE and JHU Researchers’ Novel Approach to Multilingual Language Models Featured in the Johns Hopkins Hub

December 12, 2023

An article in the Johns Hopkins Hub highlights HLTCOE and CLSP collaborators’ research on multilingual language models (MLMs). 

The research team, which includes Haoran Xu, Philipp Koehn, Kenton Murray, and Benjamin Van Durme, has developed a novel approach to optimizing MLMs for multiple languages. They recently presented their work at the 2023 Conference on Empirical Methods in Natural Language Processing (EMNLP). 

Their method, called Language-Specific Matrix Synthesis, employs low-rank matrices to reduce the number of parameters a model needs to handle each new language. In tests with a model covering up to 95 languages, the team showed that their method significantly reduces a language model’s size without compromising its performance.
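The Hub article does not describe the implementation, but the underlying idea of replacing a full language-specific weight matrix with a product of two low-rank factors can be sketched in a few lines. The snippet below is a minimal, illustrative example only: the class name LowRankLanguageAdapter, the chosen rank, and the dimensions are hypothetical, and the structure assumes a generic low-rank factorization rather than the authors’ actual code.

```python
import torch
import torch.nn as nn

class LowRankLanguageAdapter(nn.Module):
    """Illustrative sketch: a shared dense layer plus a per-language
    low-rank correction W_lang = A_lang @ B_lang with rank r << d_model.
    Names and structure are assumptions, not the paper's implementation."""

    def __init__(self, d_model: int, num_languages: int, rank: int = 8):
        super().__init__()
        self.shared = nn.Linear(d_model, d_model)  # shared across all languages
        # Each language stores only 2 * d_model * rank parameters
        # instead of a full d_model * d_model matrix.
        self.A = nn.Parameter(torch.randn(num_languages, d_model, rank) * 0.02)
        self.B = nn.Parameter(torch.zeros(num_languages, rank, d_model))

    def forward(self, x: torch.Tensor, lang_id: int) -> torch.Tensor:
        # Synthesize the language-specific matrix from its low-rank factors.
        w_lang = self.A[lang_id] @ self.B[lang_id]  # shape: (d_model, d_model)
        return self.shared(x) + x @ w_lang

# Rough parameter count for d_model=1024, 95 languages, rank=8 (hypothetical sizes):
# full per-language matrices: 95 * 1024 * 1024 ≈ 99.6M parameters
# low-rank factors:           95 * 2 * 1024 * 8 ≈ 1.6M parameters
```

Under these assumed dimensions, the low-rank factors cut per-language storage by roughly two orders of magnitude, which is the kind of savings that makes scaling to many languages tractable.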

The researchers’ objective is to apply their method to unwieldy MLMs and develop robust AI systems that comprehend multiple languages while performing as effectively as they do in English. By reducing the size and hardware requirements of MLMs, Language-Specific Matrix Synthesis may also make it possible to deploy multilingual AI models that handle hundreds of languages on devices of all sizes.

Human Language Technology Center of Excellence