|
--- |
|
language: |
|
- nl |
|
license: apache-2.0 |
|
tags: |
|
- sentence-transformers |
|
- sentence-similarity |
|
- feature-extraction |
|
- generated_from_trainer |
|
- dataset_size:8066634 |
|
- loss:MultipleNegativesRankingLoss |
|
widget: |
|
- source_sentence: Er kwamen drie mysterieuze mannen ter hulp. |
|
sentences: |
|
- Drie vreemde lui hielpte ons dan. |
|
- Er kwamen drie zwarte vogels in onze tuin. |
|
- Er zijn mensen die hulpzaam zijn. |
|
- Een, twee, drie... Wie kan de volgende cijfers aanraden? |
|
pipeline_tag: sentence-similarity |
|
library_name: sentence-transformers |
|
--- |
|
|
|
# FMMB-BE-NL: The Fairly Multilingual ModernBERT Embedding Model (Belgian Edition), Monolingual Dutch Version
|
|
|
🇳🇱 This monolingual Dutch version of the [Fairly Multilingual ModernBERT Embedding Model (Belgian Edition)](https://huggingface.co/Parallia/Fairly-Multilingual-ModernBERT-Embed-BE) is the perfect model for embedding Dutch texts of up to 8192 tokens at the speed of light. It uses the exact same weights as the original FMMB-BE model and therefore produces identical embeddings, but it ships with only a Dutch-optimized tokenizer and its associated embedding table, thereby improving performance.
|
|
|
🆘 This [sentence-transformers](https://www.SBERT.net) model was trained on a small parallel corpus containing English-French, English-Dutch, and English-German sentence pairs. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more. Input texts can be used as-is; there is no need to add prefixes.
|
|
|
🪄 Thanks to the magic of [Trans-Tokenization](https://huggingface.co/papers/2408.04303), monolingual English models such as [ModernBERT-Embed from Nomic AI](https://huggingface.co/nomic-ai/modernbert-embed-base) can be turned into embedding models for another language, with almost no GPU compute involved! 🤯
|
|
|
⚖️ Each of the 5 FMMB-BE models is actually a copy of the exact same model, paired with a different tokenizer and embedding table. Indeed, as all trans-tokenized models operate on embeddings in the same latent space, aligning them cross-lingually is a breeze: after creating a "super" model which can speak in all 4 tokenizers, that model can be finetuned to produce similar embeddings for sentences that are translations of each other.
|
|
|
⚡ ModernBERT, developed last month by Answer.AI and LightOn, is about 3x to 6x faster at inference time than regular BERT/RoBERTa models, while delivering superior results. This makes it a wonderful choice for many use cases.
|
|
|
|
|
## Model Details |
|
|
|
### Model Description |
|
- **Model Type:** Sentence Transformer |
|
- **Base model:** [ModernBERT-Embed-Base](https://huggingface.co/nomic-ai/modernbert-embed-base) |
|
- **Maximum Sequence Length:** 8192 tokens |
|
- **Output Dimensionality:** 768 dimensions |
|
- **Similarity Function:** Cosine Similarity |
|
- **Training Dataset:** |
|
- parallel-sentences |
|
- **Languages:** nl |
|
- **License:** apache-2.0 |
|
|
|
### Full Model Architecture |
|
|
|
``` |
|
SentenceTransformer( |
|
(0): Transformer({'max_seq_length': 8192, 'do_lower_case': False}) with Transformer model: ModernBertModel |
|
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True}) |
|
) |
|
``` |
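To make the stack above concrete, here is a minimal sketch of the same computation using plain `transformers`: run the backbone, then mean-pool the token embeddings over the attention mask, as module (1) does. It assumes the checkpoint loads through `AutoModel`/`AutoTokenizer`; the `sentence-transformers` path shown in the Usage section below remains the recommended one.

```python
import torch
from transformers import AutoTokenizer, AutoModel

# Sketch of the Transformer + mean-pooling stack above;
# assumes the checkpoint loads through AutoModel/AutoTokenizer.
model_id = "Parallia/Fairly-Multilingual-ModernBERT-Embed-BE-NL"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModel.from_pretrained(model_id)

sentences = ["Er kwamen drie mysterieuze mannen ter hulp."]
inputs = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")

with torch.no_grad():
    token_embeddings = model(**inputs).last_hidden_state  # [batch, seq_len, 768]

# Mean pooling: average the token embeddings, ignoring padding positions
mask = inputs["attention_mask"].unsqueeze(-1).float()
embeddings = (token_embeddings * mask).sum(dim=1) / mask.sum(dim=1)
print(embeddings.shape)  # torch.Size([1, 768])
```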
|
|
|
## Usage |
|
|
|
**IMPORTANT:** While waiting for the next stable release of the `transformers` library, please install the latest development version from git to use `modernbert` models:
|
|
|
```bash |
|
pip install --upgrade git+https://github.com/huggingface/transformers.git |
|
``` |
|
|
|
The easiest way to use this model is to install the Sentence Transformers library: |
|
|
|
```bash |
|
pip install -U sentence-transformers |
|
``` |
|
|
|
Then you can load this model and run inference. |
|
```python |
|
from sentence_transformers import SentenceTransformer |
|
|
|
# Download from the 🤗 Hub |
|
model = SentenceTransformer("Parallia/Fairly-Multilingual-ModernBERT-Embed-BE-NL") |
|
# Run inference |
|
sentences = [ |
|
'Er kwamen drie mysterieuze mannen ter hulp.', |
|
'Drie vreemde lui hielpte ons dan.', |
|
'Er kwamen drie zwarte vogels in onze tuin.', |
|
'Er zijn mensen die hulpzaam zijn.', |
|
'Een, twee, drie... Wie kan de volgende cijfers aanraden?', |
|
] |
|
embeddings = model.encode(sentences) |
|
print(embeddings.shape) |
|
# (5, 768)
|
|
|
# Get the similarity scores for the embeddings |
|
similarities = model.similarity(embeddings, embeddings) |
|
print(similarities.shape) |
|
# torch.Size([5, 5])
|
``` |
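The same embeddings also power semantic search. Below is a minimal sketch using `sentence_transformers.util.semantic_search`; the corpus and query are made-up examples for illustration only:

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("Parallia/Fairly-Multilingual-ModernBERT-Embed-BE-NL")

# Hypothetical corpus and query, for illustration only
corpus = [
    "Drie vreemde lui hielpen ons toen.",
    "Er kwamen drie zwarte vogels in onze tuin.",
    "Er zijn mensen die behulpzaam zijn.",
]
query = "Er kwamen drie mysterieuze mannen ter hulp."

corpus_embeddings = model.encode(corpus)
query_embedding = model.encode(query)

# Retrieve the two corpus sentences closest to the query (cosine similarity)
hits = util.semantic_search(query_embedding, corpus_embeddings, top_k=2)
for hit in hits[0]:
    print(f"{hit['score']:.3f}", corpus[hit["corpus_id"]])
```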
|
|
|
|
|
|
## Training Details |
|
|
|
### Training Dataset |
|
|
|
#### parallel-sentences |
|
|
|
* Dataset: parallel-sentences
|
* Size: 8,066,634 training samples |
|
* Columns: <code>sent1</code> and <code>sent2</code> |
|
* Approximate statistics based on the first 1000 samples: |
|
| | sent1 | sent2 | |
|
|:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------| |
|
| type | string | string | |
|
| details | <ul><li>min: 6 tokens</li><li>mean: 17.86 tokens</li><li>max: 46 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 18.87 tokens</li><li>max: 52 tokens</li></ul> | |
|
* Samples: |
|
| sent1 | sent2 | |
|
|:----------------------------------------------------------------------------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| |
|
| <code>The faces may change, but the essential views that have characterised Israel’s government for decades will remain the same after 9 April</code> | <code>Les visages peuvent changer, mais les opinions fondamentales qui caractérisent le gouvernement israélien depuis des décennies resteront les mêmes après le 9 avril</code> | |
|
| <code>- Yeah. My husband never talked about business.</code> | <code>M'n man had het nooit over z'n zaken.</code> | |
|
| <code>Or do they think that We hear not their secrets and their private counsels?</code> | <code>Oder meinen sie, daß Wir ihre Geheimnisse und heimlichen Beratungen nicht hören?</code> | |
|
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: |
|
```json |
|
{ |
|
"scale": 20.0, |
|
"similarity_fct": "cos_sim" |
|
} |
|
``` |
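For readers who want to reproduce a comparable setup, here is a minimal sketch of contrastive training with in-batch negatives via the `sentence-transformers` trainer. It is not the exact training script: the two-row dataset stands in for the ~8M-pair parallel corpus, and [ModernBERT-Embed-Base](https://huggingface.co/nomic-ai/modernbert-embed-base) is assumed as the starting checkpoint.

```python
from datasets import Dataset
from sentence_transformers import SentenceTransformer, SentenceTransformerTrainer
from sentence_transformers.losses import MultipleNegativesRankingLoss

model = SentenceTransformer("nomic-ai/modernbert-embed-base")

# Toy stand-in for the ~8M-pair parallel corpus (columns: sent1, sent2)
train_dataset = Dataset.from_dict({
    "sent1": ["The cat sleeps on the sofa.", "Good morning, everyone!"],
    "sent2": ["De kat slaapt op de bank.", "Goedemorgen, allemaal!"],
})

# Each (sent1, sent2) pair is a positive; the other sent2 entries
# in the batch serve as in-batch negatives
loss = MultipleNegativesRankingLoss(model, scale=20.0)

trainer = SentenceTransformerTrainer(
    model=model,
    train_dataset=train_dataset,
    loss=loss,
)
trainer.train()
```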
|
|
|
### Training Hyperparameters |
|
#### Non-Default Hyperparameters |
|
|
|
- `eval_strategy`: steps |
|
- `per_device_train_batch_size`: 256 |
|
- `per_device_eval_batch_size`: 256 |
|
- `learning_rate`: 2e-05 |
|
- `num_train_epochs`: 1 |
|
- `warmup_ratio`: 0.1 |
|
- `bf16`: True |
|
|
|
#### All Hyperparameters |
|
<details><summary>Click to expand</summary> |
|
|
|
- `overwrite_output_dir`: False |
|
- `do_predict`: False |
|
- `eval_strategy`: steps |
|
- `prediction_loss_only`: True |
|
- `per_device_train_batch_size`: 256 |
|
- `per_device_eval_batch_size`: 256 |
|
- `per_gpu_train_batch_size`: None |
|
- `per_gpu_eval_batch_size`: None |
|
- `gradient_accumulation_steps`: 1 |
|
- `eval_accumulation_steps`: None |
|
- `torch_empty_cache_steps`: None |
|
- `learning_rate`: 2e-05 |
|
- `weight_decay`: 0.0 |
|
- `adam_beta1`: 0.9 |
|
- `adam_beta2`: 0.999 |
|
- `adam_epsilon`: 1e-08 |
|
- `max_grad_norm`: 1.0 |
|
- `num_train_epochs`: 1 |
|
- `max_steps`: -1 |
|
- `lr_scheduler_type`: linear |
|
- `lr_scheduler_kwargs`: {} |
|
- `warmup_ratio`: 0.1 |
|
- `warmup_steps`: 0 |
|
- `log_level`: passive |
|
- `log_level_replica`: warning |
|
- `log_on_each_node`: True |
|
- `logging_nan_inf_filter`: True |
|
- `save_safetensors`: True |
|
- `save_on_each_node`: False |
|
- `save_only_model`: False |
|
- `restore_callback_states_from_checkpoint`: False |
|
- `no_cuda`: False |
|
- `use_cpu`: False |
|
- `use_mps_device`: False |
|
- `seed`: 42 |
|
- `data_seed`: None |
|
- `jit_mode_eval`: False |
|
- `use_ipex`: False |
|
- `bf16`: True |
|
- `fp16`: False |
|
- `fp16_opt_level`: O1 |
|
- `half_precision_backend`: auto |
|
- `bf16_full_eval`: False |
|
- `fp16_full_eval`: False |
|
- `tf32`: None |
|
- `local_rank`: 0 |
|
- `ddp_backend`: None |
|
- `tpu_num_cores`: None |
|
- `tpu_metrics_debug`: False |
|
- `debug`: [] |
|
- `dataloader_drop_last`: False |
|
- `dataloader_num_workers`: 0 |
|
- `dataloader_prefetch_factor`: None |
|
- `past_index`: -1 |
|
- `disable_tqdm`: False |
|
- `remove_unused_columns`: True |
|
- `label_names`: None |
|
- `load_best_model_at_end`: False |
|
- `ignore_data_skip`: False |
|
- `fsdp`: [] |
|
- `fsdp_min_num_params`: 0 |
|
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False} |
|
- `fsdp_transformer_layer_cls_to_wrap`: None |
|
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None} |
|
- `deepspeed`: None |
|
- `label_smoothing_factor`: 0.0 |
|
- `optim`: adamw_torch |
|
- `optim_args`: None |
|
- `adafactor`: False |
|
- `group_by_length`: False |
|
- `length_column_name`: length |
|
- `ddp_find_unused_parameters`: None |
|
- `ddp_bucket_cap_mb`: None |
|
- `ddp_broadcast_buffers`: False |
|
- `dataloader_pin_memory`: True |
|
- `dataloader_persistent_workers`: False |
|
- `skip_memory_metrics`: True |
|
- `use_legacy_prediction_loop`: False |
|
- `push_to_hub`: False |
|
- `resume_from_checkpoint`: None |
|
- `hub_model_id`: None |
|
- `hub_strategy`: every_save |
|
- `hub_private_repo`: None |
|
- `hub_always_push`: False |
|
- `gradient_checkpointing`: False |
|
- `gradient_checkpointing_kwargs`: None |
|
- `include_inputs_for_metrics`: False |
|
- `include_for_metrics`: [] |
|
- `eval_do_concat_batches`: True |
|
- `fp16_backend`: auto |
|
- `push_to_hub_model_id`: None |
|
- `push_to_hub_organization`: None |
|
- `mp_parameters`: |
|
- `auto_find_batch_size`: False |
|
- `full_determinism`: False |
|
- `torchdynamo`: None |
|
- `ray_scope`: last |
|
- `ddp_timeout`: 1800 |
|
- `torch_compile`: False |
|
- `torch_compile_backend`: None |
|
- `torch_compile_mode`: None |
|
- `dispatch_batches`: None |
|
- `split_batches`: None |
|
- `include_tokens_per_second`: False |
|
- `include_num_input_tokens_seen`: False |
|
- `neftune_noise_alpha`: None |
|
- `optim_target_modules`: None |
|
- `batch_eval_metrics`: False |
|
- `eval_on_start`: False |
|
- `use_liger_kernel`: False |
|
- `eval_use_gather_object`: False |
|
- `average_tokens_across_devices`: False |
|
- `prompts`: None |
|
- `batch_sampler`: batch_sampler |
|
- `multi_dataset_batch_sampler`: proportional |
|
|
|
</details> |
|
|
|
### Framework Versions |
|
- Python: 3.11.7 |
|
- Sentence Transformers: 3.3.1 |
|
- Transformers: 4.48.0.dev0 |
|
- PyTorch: 2.2.0+cu121 |
|
- Accelerate: 1.0.1 |
|
- Datasets: 3.2.0 |
|
- Tokenizers: 0.21.0 |
|
|
|
## Citation |
|
|
|
If you use or finetune this model, please consider citing the following paper and the sentence-transformers library:
|
|
|
### BibTeX |
|
|
|
#### This model
|
```bibtex |
|
@misc{remy2025fmmbbe,
|
title={The Fairly Multilingual ModernBERT Embedding Model -- Belgian Edition},
|
author={François Remy},
|
year={2025}, |
|
eprint={2501.99999}, |
|
archivePrefix={arXiv}, |
|
primaryClass={cs.CL} |
|
} |
|
``` |
|
|
|
#### Sentence Transformers |
|
```bibtex |
|
@inproceedings{reimers-2019-sentence-bert, |
|
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks", |
|
author = "Reimers, Nils and Gurevych, Iryna", |
|
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing", |
|
month = "11", |
|
year = "2019", |
|
publisher = "Association for Computational Linguistics", |
|
url = "https://arxiv.org/abs/1908.10084", |
|
} |
|
``` |