MMS
Overview
The MMS model was proposed in Scaling Speech Technology to 1,000+ Languages by Vineel Pratap, Andros Tjandra, Bowen Shi, Paden Tomasello, Arun Babu, Sayani Kundu, Ali Elkahky, Zhaoheng Ni, Apoorv Vyas, Maryam Fazel-Zarandi, Alexei Baevski, Yossi Adi, Xiaohui Zhang, Wei-Ning Hsu, Alexis Conneau, and Michael Auli.
The abstract from the paper is the following:
Expanding the language coverage of speech technology has the potential to improve access to information for many more people. However, current speech technology is restricted to about one hundred languages which is a small fraction of the over 7,000 languages spoken around the world. The Massively Multilingual Speech (MMS) project increases the number of supported languages by 10-40x, depending on the task. The main ingredients are a new dataset based on readings of publicly available religious texts and effectively leveraging self-supervised learning. We built pre-trained wav2vec 2.0 models covering 1,406 languages, a single multilingual automatic speech recognition model for 1,107 languages, speech synthesis models for the same number of languages, as well as a language identification model for 4,017 languages. Experiments show that our multilingual speech recognition model more than halves the word error rate of Whisper on 54 languages of the FLEURS benchmark while being trained on a small fraction of the labeled data.
Here are the different models open sourced in the MMS project. The models and code were originally released here. We have added them to the 🤗 Transformers framework, making them easier to use.
Automatic Speech Recognition (ASR)
The ASR model checkpoints can be found here: mms-1b-fl102, mms-1b-l1107, mms-1b-all. For best accuracy, use the mms-1b-all model.
Tips:
All ASR models accept a float array corresponding to the raw waveform of the speech signal. The raw waveform should be pre-processed with Wav2Vec2FeatureExtractor.
The models were trained using connectionist temporal classification (CTC), so the model output has to be decoded using Wav2Vec2CTCTokenizer.
You can load different language adapter weights for different languages via load_adapter(). Language adapters only consist of roughly 2 million parameters and can therefore be efficiently loaded on the fly when needed.
Loading
By default MMS loads adapter weights for English. If you want to load adapter weights of another language, make sure to specify target_lang=<your-chosen-target-lang> as well as ignore_mismatched_sizes=True. The ignore_mismatched_sizes=True keyword has to be passed to allow the language model head to be resized according to the vocabulary of the specified language. Similarly, the processor should be loaded with the same target language:
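For example, a minimal sketch assuming the mms-1b-all checkpoint, French ("fra") as the target language, and loading via AutoProcessor and Wav2Vec2ForCTC:

```python
from transformers import Wav2Vec2ForCTC, AutoProcessor

model_id = "facebook/mms-1b-all"
target_lang = "fra"

# processor and model are both told which language adapter/vocabulary to use
processor = AutoProcessor.from_pretrained(model_id, target_lang=target_lang)
model = Wav2Vec2ForCTC.from_pretrained(model_id, target_lang=target_lang, ignore_mismatched_sizes=True)
```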
You can safely ignore the warning that some weights of the model (for example, the language model head) were not initialized from the checkpoint or do not match in size; this is expected, since the head is resized to the vocabulary of the target language.
If you want to use the ASR pipeline, you can load your chosen target language as such:
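For example, a sketch assuming the mms-1b-all checkpoint and the automatic-speech-recognition pipeline:

```python
from transformers import pipeline

model_id = "facebook/mms-1b-all"
target_lang = "fra"

# the target language and the resize flag are forwarded to the model via model_kwargs
pipe = pipeline(
    task="automatic-speech-recognition",
    model=model_id,
    model_kwargs={"target_lang": target_lang, "ignore_mismatched_sizes": True},
)
```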
Inference
Next, let’s look at how we can run MMS in inference and change adapter layers after having called from_pretrained().
First, we load audio data in different languages using the 🤗 Datasets library.
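A sketch of this step, assuming streaming English and French samples from the mozilla-foundation/common_voice_13_0 dataset, resampled to 16 kHz:

```python
from datasets import load_dataset, Audio

# English sample
stream_data = load_dataset("mozilla-foundation/common_voice_13_0", "en", split="test", streaming=True)
stream_data = stream_data.cast_column("audio", Audio(sampling_rate=16000))
en_sample = next(iter(stream_data))["audio"]["array"]

# French sample
stream_data = load_dataset("mozilla-foundation/common_voice_13_0", "fr", split="test", streaming=True)
stream_data = stream_data.cast_column("audio", Audio(sampling_rate=16000))
fr_sample = next(iter(stream_data))["audio"]["array"]
```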
Next, we load the model and processor:
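For example, assuming the mms-1b-all checkpoint loaded with AutoProcessor and Wav2Vec2ForCTC:

```python
import torch
from transformers import Wav2Vec2ForCTC, AutoProcessor

model_id = "facebook/mms-1b-all"

processor = AutoProcessor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)
```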
Now we process the audio data, pass the processed audio data to the model and transcribe the model output, just like we usually do for Wav2Vec2ForCTC.
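A sketch of this step, assuming the en_sample array and the processor and model from the previous snippets:

```python
# pre-process the raw waveform
inputs = processor(en_sample, sampling_rate=16_000, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs).logits

# greedy CTC decoding of the logits
ids = torch.argmax(outputs, dim=-1)[0]
transcription = processor.decode(ids)
```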
We can now keep the same model in memory and simply switch out the language adapters by calling the convenient load_adapter() function for the model and set_target_lang() for the tokenizer. We pass the target language as an input - "fra" for French.
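A sketch of the adapter switch, assuming the fr_sample array from the earlier loading step:

```python
# switch tokenizer vocabulary and adapter weights to French
processor.tokenizer.set_target_lang("fra")
model.load_adapter("fra")

inputs = processor(fr_sample, sampling_rate=16_000, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs).logits

ids = torch.argmax(outputs, dim=-1)[0]
transcription = processor.decode(ids)
```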
In the same way, the language can be switched out for all other supported languages. Please have a look at the tokenizer's vocabulary to see all supported languages.
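One way to list them, assuming the processor from the previous snippets, is to print the tokenizer's vocabulary keys:

```python
# the keys are the language codes / adapter targets supported by the checkpoint
print(processor.tokenizer.vocab.keys())
```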
To further improve performance of ASR models, language model decoding can be used. See the documentation here for further details.
Speech Synthesis (TTS)
MMS-TTS uses the same model architecture as VITS, which was added to 🤗 Transformers in v4.33. MMS trains a separate model checkpoint for each of the 1100+ languages in the project. All available checkpoints can be found on the Hugging Face Hub: facebook/mms-tts, and the inference documentation under VITS.
Inference
To use the MMS model, first update to the latest version of the Transformers library:
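For example, with pip:

```bash
pip install --upgrade transformers
```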
Since the flow-based model in VITS is non-deterministic, it is good practice to set a seed to ensure reproducibility of the outputs.
For languages with a Roman alphabet, such as English or French, the tokenizer can be used directly to pre-process the text inputs. The following code example runs a forward pass using the MMS-TTS English checkpoint:
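A sketch of such a forward pass, assuming the facebook/mms-tts-eng checkpoint and the VitsTokenizer/VitsModel classes, with a fixed seed for reproducibility:

```python
import torch
from transformers import VitsTokenizer, VitsModel, set_seed

tokenizer = VitsTokenizer.from_pretrained("facebook/mms-tts-eng")
model = VitsModel.from_pretrained("facebook/mms-tts-eng")

inputs = tokenizer(text="Hello - my dog is cute", return_tensors="pt")

set_seed(555)  # make the flow-based sampling deterministic

with torch.no_grad():
    outputs = model(**inputs)

waveform = outputs.waveform[0]
```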
The resulting waveform can be saved as a .wav file:
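For example, using scipy (assuming the waveform tensor and model from the previous snippet; the file name is arbitrary):

```python
from scipy.io import wavfile

wavfile.write("synthesized_speech.wav", rate=model.config.sampling_rate, data=waveform.numpy())
```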
Or displayed in a Jupyter Notebook / Google Colab:
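For example, with IPython's Audio widget (one possible choice; any audio playback utility works):

```python
from IPython.display import Audio

Audio(waveform.numpy(), rate=model.config.sampling_rate)
```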
For certain languages with non-Roman alphabets, such as Arabic, Mandarin or Hindi, the uroman perl package is required to pre-process the text inputs to the Roman alphabet. You can check whether you require the uroman package for your language by inspecting the is_uroman attribute of the pre-trained tokenizer:
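For example, for the English checkpoint (facebook/mms-tts-eng, used here for illustration):

```python
from transformers import VitsTokenizer

tokenizer = VitsTokenizer.from_pretrained("facebook/mms-tts-eng")
print(tokenizer.is_uroman)  # True if uroman pre-processing is required for this language
```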
If required, you should apply the uroman package to your text inputs prior to passing them to the VitsTokenizer, since currently the tokenizer does not support performing the pre-processing itself. To do this, first clone the uroman repository to your local machine and set the bash variable UROMAN to the local path:
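A sketch of this setup, assuming the uroman repository hosted at github.com/isi-nlp/uroman:

```bash
git clone https://github.com/isi-nlp/uroman.git
cd uroman
export UROMAN=$(pwd)
```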
You can then pre-process the text input using the following code snippet. You can either rely on using the bash variable UROMAN to point to the uroman repository, or you can pass the uroman directory as an argument to the uromanize function:
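A sketch of the whole flow, assuming a Korean checkpoint (facebook/mms-tts-kor) and that the uroman entry point lives at bin/uroman.pl inside the cloned repository; uromanize is a small helper written for this example:

```python
import os
import subprocess

import torch
from transformers import VitsTokenizer, VitsModel, set_seed

tokenizer = VitsTokenizer.from_pretrained("facebook/mms-tts-kor")
model = VitsModel.from_pretrained("facebook/mms-tts-kor")


def uromanize(input_string, uroman_path):
    """Convert non-Roman text to the Roman alphabet with the uroman perl script."""
    script_path = os.path.join(uroman_path, "bin", "uroman.pl")
    command = ["perl", script_path]

    process = subprocess.Popen(command, stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    stdout, stderr = process.communicate(input=input_string.encode())
    if process.returncode != 0:
        raise ValueError(f"Error {process.returncode}: {stderr.decode()}")

    # strip the trailing newline added by uroman
    return stdout.decode()[:-1]


text = "이봐 무슨 일이야"  # example Korean input
uromanized_text = uromanize(text, uroman_path=os.environ["UROMAN"])

inputs = tokenizer(text=uromanized_text, return_tensors="pt")

set_seed(555)  # make deterministic
with torch.no_grad():
    outputs = model(**inputs)

waveform = outputs.waveform[0]
```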
Tips:
The MMS-TTS checkpoints are trained on lower-cased, un-punctuated text. By default, the VitsTokenizer normalizes the inputs by removing any casing and punctuation, to avoid passing out-of-vocabulary characters to the model. Hence, the model is agnostic to casing and punctuation, so these should be avoided in the text prompt. You can disable normalisation by setting normalize=False in the call to the tokenizer, but this will lead to unexpected behaviour and is discouraged.
The speaking rate can be varied by setting the attribute model.speaking_rate to a chosen value. Likewise, the randomness of the noise is controlled by model.noise_scale:
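A sketch showing both attributes, assuming the facebook/mms-tts-eng checkpoint and illustrative values for the speaking rate and noise scale:

```python
import torch
from transformers import VitsTokenizer, VitsModel, set_seed

tokenizer = VitsTokenizer.from_pretrained("facebook/mms-tts-eng")
model = VitsModel.from_pretrained("facebook/mms-tts-eng")

inputs = tokenizer(text="Hello - my dog is cute", return_tensors="pt")

set_seed(555)  # make deterministic

# make the speech faster and more noisy (example values)
model.speaking_rate = 1.5
model.noise_scale = 0.8

with torch.no_grad():
    outputs = model(**inputs)

waveform = outputs.waveform[0]
```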
Language Identification (LID)
Different LID models are available based on the number of languages they can recognize - 126, 256, 512, 1024, 2048, 4017.
Inference
First, we install transformers and some other libraries:
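For example, with pip (the exact set of extra libraries may vary):

```bash
pip install torch datasets[audio]
pip install --upgrade transformers
```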
Next, we load a couple of audio samples via the 🤗 Datasets library. Make sure that the audio data is sampled to 16,000 Hz (16 kHz).
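A sketch of this step, assuming streaming English and Arabic samples from the mozilla-foundation/common_voice_13_0 dataset, resampled to 16 kHz:

```python
from datasets import load_dataset, Audio

# English sample
stream_data = load_dataset("mozilla-foundation/common_voice_13_0", "en", split="test", streaming=True)
stream_data = stream_data.cast_column("audio", Audio(sampling_rate=16000))
en_sample = next(iter(stream_data))["audio"]["array"]

# Arabic sample
stream_data = load_dataset("mozilla-foundation/common_voice_13_0", "ar", split="test", streaming=True)
stream_data = stream_data.cast_column("audio", Audio(sampling_rate=16000))
ar_sample = next(iter(stream_data))["audio"]["array"]
```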
Next, we load the model and processor:
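For example, assuming the 126-language checkpoint facebook/mms-lid-126, loaded with AutoFeatureExtractor and Wav2Vec2ForSequenceClassification:

```python
import torch
from transformers import Wav2Vec2ForSequenceClassification, AutoFeatureExtractor

model_id = "facebook/mms-lid-126"

processor = AutoFeatureExtractor.from_pretrained(model_id)
model = Wav2Vec2ForSequenceClassification.from_pretrained(model_id)
```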
Now we process the audio data and pass the processed audio data to the model to classify it into a language, just like we usually do for Wav2Vec2 audio classification models such as ehcalabres/wav2vec2-lg-xlsr-en-speech-emotion-recognition:
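A sketch of the classification step, assuming the samples, processor and model from the previous snippets; the predicted label is looked up in the model config's id2label mapping:

```python
# English sample
inputs = processor(en_sample, sampling_rate=16_000, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs).logits

lang_id = torch.argmax(outputs, dim=-1)[0].item()
detected_lang = model.config.id2label[lang_id]  # should be 'eng' for the English sample

# Arabic sample
inputs = processor(ar_sample, sampling_rate=16_000, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs).logits

lang_id = torch.argmax(outputs, dim=-1)[0].item()
detected_lang = model.config.id2label[lang_id]  # should be 'ara' for the Arabic sample
```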
To see all the supported languages of a checkpoint, you can print out the language ids as follows:
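One way to do this, assuming the model from the previous snippets, is via the id2label mapping in the model config:

```python
# all language labels the checkpoint can predict
print(model.config.id2label.values())
```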
Audio Pretrained Models
Pretrained models are available for two different sizes - 300M and 1B. The architecture is based on the Wav2Vec2 model, so one can refer to Wav2Vec2’s documentation page for further details on how to fine-tune the models for various downstream tasks.