CTranslate2 NLLB-200 Translation Example
This guide demonstrates how to use an NLLB-finetuned model for bilingual translation between Portuguese (por_Latn) and a target language (vmw_Latn).
Prerequisites
- Install required packages:
pip install transformers torch sentencepiece
Inference
```python
from transformers import AutoModelForSeq2SeqLM, NllbTokenizer
import torch

src_lang = "por_Latn"
tgt_lang = "vmw_Latn"
text = "Olá mundo das línguas!"

device = "cuda:0" if torch.cuda.is_available() else "cpu"

model_name = "felerminoali/nllb200_pt_vmw_bilingual_ver1"
model = AutoModelForSeq2SeqLM.from_pretrained(model_name).to(device)
model.eval()  # inference only: disable dropout

tokenizer = NllbTokenizer.from_pretrained(model_name)
tokenizer.src_lang = src_lang  # prepends the source language token to the input

inputs = tokenizer(
    text, return_tensors="pt", padding=True, truncation=True,
    max_length=1024,
)

# Force the decoder to start with the target language token
result = model.generate(
    **inputs.to(model.device),
    forced_bos_token_id=tokenizer.convert_tokens_to_ids(tgt_lang),
)
print(tokenizer.batch_decode(result, skip_special_tokens=True)[0])
```
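The title mentions CTranslate2, while the snippet above runs the model through Transformers directly. As a sketch of how the same checkpoint could be served with CTranslate2 for faster inference (the output directory name `nllb_vmw_ct2` is an assumption, not part of this repository):

```python
# One-time conversion step, run in a shell:
#   pip install ctranslate2
#   ct2-transformers-converter --model felerminoali/nllb200_pt_vmw_bilingual_ver1 \
#       --output_dir nllb_vmw_ct2
import ctranslate2
from transformers import AutoTokenizer

# The directory name "nllb_vmw_ct2" is an assumption for illustration
translator = ctranslate2.Translator("nllb_vmw_ct2", device="cpu")
tokenizer = AutoTokenizer.from_pretrained(
    "felerminoali/nllb200_pt_vmw_bilingual_ver1", src_lang="por_Latn"
)

# CTranslate2 operates on token strings rather than token ids
source = tokenizer.convert_ids_to_tokens(tokenizer.encode("Olá mundo das línguas!"))
results = translator.translate_batch([source], target_prefix=[["vmw_Latn"]])
target = results[0].hypotheses[0][1:]  # drop the target-language prefix token
print(tokenizer.decode(tokenizer.convert_tokens_to_ids(target)))
```

The `target_prefix` argument plays the same role as `forced_bos_token_id` above: it pins the first decoded token to the target language code.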
Model tree for felerminoali/nllb200_pt_vmw_bilingual_ver1
Base model: facebook/nllb-200-distilled-600M