HiTZ Center’s English-Basque machine translation model converted to CTranslate2

Model description

What is CTranslate2?

CTranslate2 is a C++ and Python library for efficient inference with Transformer models.

CTranslate2 implements a custom runtime that applies many performance optimization techniques such as weights quantization, layers fusion, batch reordering, etc., to accelerate and reduce the memory usage of Transformer models on CPU and GPU.
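The weight quantization mentioned above maps floating-point weights to small integers with a per-tensor scale, trading a little precision for much lower memory use. A toy sketch of symmetric int8 quantization follows; this is a simplified illustration, not CTranslate2's actual implementation:

```python
# Toy symmetric int8 quantization: each float weight is mapped to an
# integer in [-127, 127] using a single per-tensor scale factor.

def quantize_int8(weights):
    """Return (int8_values, scale) for a list of float weights."""
    scale = max(abs(w) for w in weights) / 127.0
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize_int8(q, scale):
    """Recover approximate float weights from int8 values."""
    return [v * scale for v in q]

weights = [0.5, -1.27, 0.03, 1.0]
q, scale = quantize_int8(weights)
approx = dequantize_int8(q, scale)
# approx is close to weights while using a quarter of the storage.
```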

CTranslate2 is one of the most performant ways of hosting translation models at scale. Currently supported models include:

  • Encoder-decoder models: Transformer base/big, M2M-100, NLLB, BART, mBART, Pegasus, T5, Whisper
  • Decoder-only models: GPT-2, GPT-J, GPT-NeoX, OPT, BLOOM, MPT, Llama, Mistral, Gemma, CodeGen, GPTBigCode, Falcon
  • Encoder-only models: BERT, DistilBERT, XLM-RoBERTa

The project is production-oriented and comes with backward compatibility guarantees, but it also includes experimental features related to model compression and inference acceleration.

CTranslate2 Installation

pip install hf-hub-ctranslate2>=1.0.0 ctranslate2>=3.13.0

ct2-transformers-converter Command Used:

ct2-transformers-converter --model HiTZ/mt-hitz-en-eu --output_dir ./ctranslate2/mt-hitz-en-eu-ct2 --force --copy_files README.md generation_config.json tokenizer_config.json vocab.json source.spm .gitattributes target.spm --quantization float16

CTranslate2 Converted Checkpoint Information:

Compute Type:

  • compute_type=int8_float16 for device="cuda"
  • compute_type=int8 for device="cpu"
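The pairing above can be expressed as a small helper that selects the recommended compute type for a target device. The helper name and structure are illustrative assumptions, not part of the CTranslate2 API:

```python
# Map a CTranslate2 device string to the compute type recommended above.
# This helper is a sketch; only the device/compute_type pairings come
# from the model card.

def recommended_compute_type(device: str) -> str:
    if device == "cuda":
        return "int8_float16"  # int8 weights, float16 activations on GPU
    if device == "cpu":
        return "int8"          # pure int8 inference on CPU
    raise ValueError(f"unsupported device: {device}")
```

It could then be used as, e.g., `Translator(model_dir, device=dev, compute_type=recommended_compute_type(dev))`.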

Sample Code - ctranslate2

Clone the repository to the working directory or wherever you wish to store the model artifacts.

git clone https://huggingface.co/xezpeleta/mt-hitz-en-eu-ct2

Take the Python code below and update the model_dir variable to point to the cloned repository.

from ctranslate2 import Translator
import transformers

model_dir = "./mt-hitz-en-eu-ct2" # Path to model directory.
translator = Translator(
    model_path=model_dir,
    device="cuda",                # cpu, cuda, or auto.
    inter_threads=1,              # Maximum number of parallel translations.
    intra_threads=4,              # Number of OpenMP threads per translator.
    compute_type="int8_float16",  # int8 for cpu or int8_float16 for cuda.
)

tokenizer = transformers.AutoTokenizer.from_pretrained(model_dir)

source = tokenizer.convert_ids_to_tokens(tokenizer.encode("Hello, world!"))
results = translator.translate_batch([source])
target = results[0].hypotheses[0]

print(tokenizer.decode(tokenizer.convert_tokens_to_ids(target)))
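For larger workloads, sentences are typically sent to translate_batch in fixed-size chunks rather than one at a time. A minimal batching sketch follows; the translation callable is injected so the helper stays independent of the model, and the function names and batch size are assumptions, not part of any library API:

```python
def translate_in_batches(sentences, translate_fn, batch_size=32):
    """Translate a list of sentences in fixed-size batches.

    translate_fn takes a list of sentences and returns a list of
    translations (e.g. a wrapper that tokenizes, calls
    translator.translate_batch, and decodes the hypotheses).
    """
    results = []
    for i in range(0, len(sentences), batch_size):
        results.extend(translate_fn(sentences[i:i + batch_size]))
    return results
```

With the Translator above, translate_fn would perform the same tokenize / translate_batch / decode round trip shown in the sample code for each chunk.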

Licensing information

This work is licensed under the Apache License, Version 2.0.

Disclaimer

The models published in this repository are intended for a generalist purpose and are available to third parties. These models may have bias and/or other undesirable distortions. When third parties deploy or provide systems and/or services to other parties using any of these models (or using systems based on these models), or become users of the models themselves, they should note that it is their responsibility to mitigate the risks arising from their use and, in any event, to comply with applicable regulations, including regulations regarding the use of Artificial Intelligence. In no event shall the owner and creator of the models (HiTZ Research Center) be liable for any results arising from the use made by third parties of these models.

Model tree for xezpeleta/mt-hitz-en-eu-ct2

Base model: HiTZ/mt-hitz-en-eu