MADLAD-400-7B-MT-BT (int8 quantized using CTranslate2)
ct2-transformers-converter --model ./madlad400-7b-mt-bt --quantization int8 --output_dir madlad400-7b-mt-bt-ct2-8bit --copy_files added_tokens.json generation_config.json model.safetensors.index.json special_tokens_map.json spiece.model tokenizer.json tokenizer_config.json
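The converted model can then be loaded with CTranslate2's Python API. The sketch below is a minimal example, not part of the original card: the directory name matches the `--output_dir` above, model loading is guarded because it requires the multi-gigabyte checkpoint, and the `make_input` helper is illustrative rather than a library function.

```python
def make_input(target_lang: str, text: str) -> str:
    # MADLAD-400 expects a <2xx> token prepended to select the target language.
    return f"<2{target_lang}> {text}"

if __name__ == "__main__":
    import ctranslate2            # pip install ctranslate2
    import sentencepiece as spm   # pip install sentencepiece

    model_dir = "madlad400-7b-mt-bt-ct2-8bit"  # the --output_dir used above
    translator = ctranslate2.Translator(model_dir, device="cpu")
    sp = spm.SentencePieceProcessor(f"{model_dir}/spiece.model")

    # Tokenize, translate, and detokenize a single sentence.
    tokens = sp.encode(make_input("pt", "I love pizza!"), out_type=str)
    result = translator.translate_batch([tokens])
    print(sp.decode(result[0].hypotheses[0]))
```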
Original model card below
Model Card for MADLAD-400-7B-MT
Table of Contents
- TL;DR
- Model Details
- Usage
- Uses
- Bias, Risks, and Limitations
- Training Details
- Evaluation
- Environmental Impact
- Citation
TL;DR
MADLAD-400-7B-MT-BT is a multilingual machine translation model based on the T5 architecture that was trained on 250 billion tokens covering over 450 languages using publicly available data. It is competitive with models that are significantly larger.
It is a version of the 7.2B-parameter model finetuned on backtranslated data. The authors note in the paper:
While this setup is very likely sub-optimal, we see that back-translation greatly improves en2xx translation (by 3.0 chrf, in the case of Flores-200) in most cases.
Disclaimer: Juarez Bochi, who was not involved in this research, converted the original weights and wrote the contents of this model card based on the original paper and Flan-T5.
Model Details
Model Description
- Model type: Language model
- Language(s) (NLP): Multilingual (400+ languages)
- License: Apache 2.0
- Related Models: All MADLAD-400 Checkpoints
- Original Checkpoints: All Original MADLAD-400 Checkpoints
- Resources for more information: research paper (arXiv:2309.04662)
Usage
Below are example scripts showing how to use the model:
Using the PyTorch model with transformers
Running the model on a CPU or GPU
Click to expand
First, install the Python packages that are required:
pip install transformers accelerate sentencepiece
from transformers import T5ForConditionalGeneration, T5Tokenizer

model_name = 'jbochi/madlad400-7b-mt-bt'
model = T5ForConditionalGeneration.from_pretrained(model_name, device_map="auto")
tokenizer = T5Tokenizer.from_pretrained(model_name)

# The <2pt> token prepended to the input selects Portuguese as the target language.
text = "<2pt> I love pizza!"
input_ids = tokenizer(text, return_tensors="pt").input_ids.to(model.device)
outputs = model.generate(input_ids=input_ids)
tokenizer.decode(outputs[0], skip_special_tokens=True)
# Eu adoro pizza!
Running the model with Candle
Click to expand
Usage with candle:
$ cargo run --example t5 --release -- \
--model-id "jbochi/madlad400-7b-mt-bt" \
--prompt "<2de> How are you, my friend?" \
--decode --temperature 0
Uses
Direct Use and Downstream Use
- Primary intended uses: machine translation and multilingual NLP tasks covering over 400 languages.
- Primary intended users: research community.
Out-of-Scope Use
These models are trained on general-domain data and are therefore not meant to work on domain-specific tasks out-of-the-box. Moreover, these research models have not been assessed for production use cases.
Bias, Risks, and Limitations
We note that we evaluate only 204 of the languages supported by these models, and only on machine translation and few-shot machine translation tasks. Users must consider use of this model carefully for their own use case.
Ethical considerations and risks
We trained these models with MADLAD-400 and publicly available data to create baseline models that support NLP for over 400 languages, with a focus on languages underrepresented in large-scale corpora. Given that these models were trained on web-crawled datasets that may contain sensitive, offensive, or otherwise low-quality content despite extensive preprocessing, it is still possible that these issues in the underlying training data cause differences in model performance and toxic (or otherwise problematic) output for certain domains. Moreover, large models are dual-use technologies that have specific risks associated with their use and development. We point the reader to surveys such as those written by Weidinger et al. or Bommasani et al. for a more detailed discussion of these risks, and to Liebling et al. for a thorough discussion of the risks of machine translation systems.
Known Limitations
More information needed
Sensitive Use:
More information needed
Training Details
We train models of various sizes: a 3B-parameter, 32-layer model; a 7.2B-parameter, 48-layer model; and a 10.7B-parameter, 32-layer model. We share all parameters of the model across language pairs, and use a SentencePiece model with a 256k-token vocabulary shared between the encoder and decoder. Each input sentence has a <2xx> token prepended to the source sentence to indicate the target language.
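Because all parameters are shared across language pairs, a single checkpoint serves every target language, selected purely by the prepended token. A minimal sketch of that convention follows; the `prepend_target` helper is illustrative (not part of any library), and model loading is guarded because the checkpoint is large.

```python
def prepend_target(target_lang: str, sentences):
    # A <2xx> token on the source side selects the target language; all model
    # parameters are shared across language pairs.
    return [f"<2{target_lang}> {s}" for s in sentences]

if __name__ == "__main__":
    from transformers import T5ForConditionalGeneration, T5Tokenizer

    model_name = "jbochi/madlad400-7b-mt-bt"
    tokenizer = T5Tokenizer.from_pretrained(model_name)
    model = T5ForConditionalGeneration.from_pretrained(model_name, device_map="auto")

    # Same model, two different target languages, differing only in the tag.
    for lang in ("pt", "de"):
        batch = prepend_target(lang, ["I love pizza!", "Good morning."])
        inputs = tokenizer(batch, return_tensors="pt", padding=True).input_ids
        outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=64)
        print(lang, tokenizer.batch_decode(outputs, skip_special_tokens=True))
```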
See the research paper for further details.
Training Data
For both the machine translation and language model, MADLAD-400 is used. For the machine translation model, a combination of parallel datasources covering 157 languages is also used. Further details are described in the paper.
Training Procedure
See the research paper for further details.
Evaluation
Testing Data, Factors & Metrics
For evaluation, we used WMT, NTREX, Flores-200 and Gatones datasets as described in Section 4.3 in the paper.
The translation quality of this model varies based on language, as seen in the paper, and likely varies on domain, though we have not assessed this.
Results
See the research paper for further details.
Environmental Impact
More information needed
Citation
BibTeX:
@misc{kudugunta2023madlad400,
title={MADLAD-400: A Multilingual And Document-Level Large Audited Dataset},
author={Sneha Kudugunta and Isaac Caswell and Biao Zhang and Xavier Garcia and Christopher A. Choquette-Choo and Katherine Lee and Derrick Xin and Aditya Kusupati and Romi Stella and Ankur Bapna and Orhan Firat},
year={2023},
eprint={2309.04662},
archivePrefix={arXiv},
primaryClass={cs.CL}
}