FremyCompany committed
Commit f8af4fc · verified · 1 Parent(s): f068e59

Initial upload
README.md CHANGED
@@ -1,3 +1,303 @@
- ---
- license: apache-2.0
- ---
+ ---
+ language:
+ - fr
+ - nl
+ - de
+ - en
+ license: apache-2.0
+ tags:
+ - sentence-transformers
+ - sentence-similarity
+ - feature-extraction
+ - generated_from_trainer
+ - dataset_size:8066634
+ - loss:MultipleNegativesRankingLoss
+ widget:
+ - source_sentence: These three mysterious men came to our help.
+   sentences:
+   - Three strange guys helped us then.
+   - These three black birds came in our garden.
+   - Some people are helpful.
+   - One, two, three... Who can guess the next digits?
+ pipeline_tag: sentence-similarity
+ library_name: sentence-transformers
+ ---
+ 
+ # FMMB-BE: The Fairly Multilingual ModernBERT Embedding Model (Belgian Edition)
+ 
+ 🇧🇪 The Fairly Multilingual ModernBERT Embedding Model (Belgian Edition) is the perfect model for embedding texts up to 8192 tokens written in French, Dutch, German or English at the speed of light. This model uses the most efficient tokenizer for each input text, thereby maximizing your GPU usage. Despite using 4 different tokenizers and 4 different embedding tables, this model can mix and match different languages in the same batch, and produces embeddings very similar across languages. That said: if you know the tokenizer you want to use in advance, you can use the monolingual variants for [French](https://huggingface.co/Parallia/Fairly-Multilingual-ModernBERT-Embed-BE-FR), [Dutch](https://huggingface.co/Parallia/Fairly-Multilingual-ModernBERT-Embed-BE-NL), [German](https://huggingface.co/Parallia/Fairly-Multilingual-ModernBERT-Embed-BE-DE) or [English](https://huggingface.co/Parallia/Fairly-Multilingual-ModernBERT-Embed-BE-EN) for faster tokenization and a lower memory footprint.
+ 
+ 🆘 This [sentence-transformers](https://www.SBERT.net) model was trained on a small parallel corpus containing English-French, English-Dutch, and English-German sentence pairs. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more. The input texts can be used as-is; there is no need to add prefixes.
+ 
+ 🪄 Thanks to the magic of [Trans-Tokenization](https://huggingface.co/papers/2408.04303), monolingual English models such as [ModernBERT-Embed from Nomic AI](https://huggingface.co/nomic-ai/modernbert-embed-base) can be turned into embedding models for another language, with almost no GPU compute involved! 🤯
+ 
+ ⚖️ Each of the 5 FMMB-BE models is actually a copy of the exact same model, paired with a different tokenizer and embedding table. Indeed, as all trans-tokenized models operate on embeddings in the same latent space, aligning them cross-lingually is a breeze: after creating a "super" model which can read all 4 tokenizers, this model can be fine-tuned to produce similar embeddings for sentences which are translations of each other.
+ 
+ ⚡ ModernBERT, developed last month by Answer.AI and LightOn, is about 3x to 6x faster at inference time than regular BERT/RoBERTa models, while providing superior results. This makes it a wonderful choice for many use cases.
+ 
+ 
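+ As a quick illustration of the cross-lingual claim above (the example translations below are ours, not taken from the training data; installation instructions are given in the Usage section):
+ 
+ ```python
+ from sentence_transformers import SentenceTransformer
+ 
+ model = SentenceTransformer("Parallia/Fairly-Multilingual-ModernBERT-Embed-BE")
+ sentences = [
+     "These three mysterious men came to our help.",          # en
+     "Ces trois hommes mystérieux sont venus à notre aide.",  # fr
+     "Deze drie mysterieuze mannen kwamen ons te hulp.",      # nl
+     "Diese drei geheimnisvollen Männer kamen uns zu Hilfe.", # de
+ ]
+ embeddings = model.encode(sentences)
+ # Translations of the same sentence should score high against each other.
+ print(model.similarity(embeddings, embeddings))
+ ```
+ 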
+ ## Model Details
+ 
+ ### Model Description
+ - **Model Type:** Sentence Transformer
+ - **Base model:** [ModernBERT-Embed-Base](https://huggingface.co/nomic-ai/modernbert-embed-base)
+ - **Maximum Sequence Length:** 8192 tokens
+ - **Output Dimensionality:** 768 dimensions
+ - **Similarity Function:** Cosine Similarity
+ - **Training Dataset:**
+     - parallel-sentences
+ - **Languages:** fr, nl, de, en
+ - **License:** apache-2.0
+ 
+ ### Full Model Architecture
+ 
+ ```
+ SentenceTransformer(
+   (0): Transformer({'max_seq_length': 8192, 'do_lower_case': False}) with Transformer model: ModernBertModel
+   (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
+ )
+ ```
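+ 
+ As a minimal sketch (not the library internals), the Pooling module above turns ModernBERT token embeddings into a single 768-dimensional sentence embedding by mean pooling over the non-padding tokens:
+ 
+ ```python
+ import torch
+ 
+ # Dummy token embeddings: (batch, seq_len, hidden), with the second sentence shorter.
+ token_embeddings = torch.randn(2, 10, 768)
+ attention_mask = torch.ones(2, 10, dtype=torch.long)
+ attention_mask[1, 6:] = 0
+ 
+ # Masked mean over the sequence dimension, ignoring padding positions.
+ mask = attention_mask.unsqueeze(-1).float()
+ sentence_embeddings = (token_embeddings * mask).sum(dim=1) / mask.sum(dim=1)
+ print(sentence_embeddings.shape)  # torch.Size([2, 768])
+ ```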
+ 
+ ## Usage
+ 
+ **IMPORTANT:** While waiting for the next stable release of the `transformers` library, please install the latest git release to use `modernbert` models:
+ 
+ ```bash
+ pip install --upgrade git+https://github.com/huggingface/transformers.git
+ ```
+ 
+ The easiest way to use this model is to install the Sentence Transformers library:
+ 
+ ```bash
+ pip install -U sentence-transformers
+ ```
+ 
+ Then you can load this model and run inference.
+ ```python
+ from sentence_transformers import SentenceTransformer
+ 
+ # Download from the 🤗 Hub
+ model = SentenceTransformer("Parallia/Fairly-Multilingual-ModernBERT-Embed-BE")
+ # Run inference
+ sentences = [
+     'These three mysterious men came to our help.',
+     'Three strange guys helped us then.',
+     'These three black birds came in our garden.',
+     'Some people are helpful.',
+     'One, two, three... Who can guess the next digits?',
+ ]
+ embeddings = model.encode(sentences)
+ print(embeddings.shape)
+ # [5, 768]
+ 
+ # Get the similarity scores for the embeddings
+ similarities = model.similarity(embeddings, embeddings)
+ print(similarities.shape)
+ # [5, 5]
+ ```
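+ 
+ Continuing the example above, you can rank the remaining sentences against the first one; the paraphrase is expected to come out on top:
+ 
+ ```python
+ # Scores of sentences[1:] against sentences[0].
+ scores = model.similarity(embeddings[0:1], embeddings[1:])
+ print(scores)
+ ```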
+ 
+ <!--
+ ### Recommendations
+ 
+ *What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
+ -->
+ 
+ ## Training Details
+ 
+ ### Training Dataset
+ 
+ #### parallel-sentences
+ 
+ * Dataset: parallel dataset
+ * Size: 8,066,634 training samples
+ * Columns: <code>sent1</code> and <code>sent2</code>
+ * Approximate statistics based on the first 1000 samples:
+   |         | sent1 | sent2 |
+   |:--------|:------|:------|
+   | type    | string | string |
+   | details | <ul><li>min: 6 tokens</li><li>mean: 17.86 tokens</li><li>max: 46 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 18.87 tokens</li><li>max: 52 tokens</li></ul> |
+ * Samples:
+   | sent1 | sent2 |
+   |:------|:------|
+   | <code>The faces may change, but the essential views that have characterised Israel’s government for decades will remain the same after 9 April</code> | <code>Les visages peuvent changer, mais les opinions fondamentales qui caractérisent le gouvernement israélien depuis des décennies resteront les mêmes après le 9 avril</code> |
+   | <code>- Yeah. My husband never talked about business.</code> | <code>M'n man had het nooit over z'n zaken.</code> |
+   | <code>Or do they think that We hear not their secrets and their private counsels?</code> | <code>Oder meinen sie, daß Wir ihre Geheimnisse und heimlichen Beratungen nicht hören?</code> |
+ * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
+   ```json
+   {
+       "scale": 20.0,
+       "similarity_fct": "cos_sim"
+   }
+   ```
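+ 
+ For reference, a sketch of how these parameters map onto the sentence-transformers API (the base model name is the one listed under Model Details; this is not the exact training script):
+ 
+ ```python
+ from sentence_transformers import SentenceTransformer, losses, util
+ 
+ model = SentenceTransformer("nomic-ai/modernbert-embed-base")
+ loss = losses.MultipleNegativesRankingLoss(model, scale=20.0, similarity_fct=util.cos_sim)
+ ```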
+ 
+ ### Training Hyperparameters
+ #### Non-Default Hyperparameters
+ 
+ - `eval_strategy`: steps
+ - `per_device_train_batch_size`: 256
+ - `per_device_eval_batch_size`: 256
+ - `learning_rate`: 2e-05
+ - `num_train_epochs`: 1
+ - `warmup_ratio`: 0.1
+ - `bf16`: True
+ 
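+ As a sketch, these non-default values correspond to the following trainer arguments in sentence-transformers 3.x (`output_dir` is a placeholder; everything else keeps the defaults listed below):
+ 
+ ```python
+ from sentence_transformers import SentenceTransformerTrainingArguments
+ 
+ args = SentenceTransformerTrainingArguments(
+     output_dir="fmmb-be",  # placeholder
+     eval_strategy="steps",
+     per_device_train_batch_size=256,
+     per_device_eval_batch_size=256,
+     learning_rate=2e-5,
+     num_train_epochs=1,
+     warmup_ratio=0.1,
+     bf16=True,
+ )
+ ```
+ 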
+ #### All Hyperparameters
+ <details><summary>Click to expand</summary>
+ 
+ - `overwrite_output_dir`: False
+ - `do_predict`: False
+ - `eval_strategy`: steps
+ - `prediction_loss_only`: True
+ - `per_device_train_batch_size`: 256
+ - `per_device_eval_batch_size`: 256
+ - `per_gpu_train_batch_size`: None
+ - `per_gpu_eval_batch_size`: None
+ - `gradient_accumulation_steps`: 1
+ - `eval_accumulation_steps`: None
+ - `torch_empty_cache_steps`: None
+ - `learning_rate`: 2e-05
+ - `weight_decay`: 0.0
+ - `adam_beta1`: 0.9
+ - `adam_beta2`: 0.999
+ - `adam_epsilon`: 1e-08
+ - `max_grad_norm`: 1.0
+ - `num_train_epochs`: 1
+ - `max_steps`: -1
+ - `lr_scheduler_type`: linear
+ - `lr_scheduler_kwargs`: {}
+ - `warmup_ratio`: 0.1
+ - `warmup_steps`: 0
+ - `log_level`: passive
+ - `log_level_replica`: warning
+ - `log_on_each_node`: True
+ - `logging_nan_inf_filter`: True
+ - `save_safetensors`: True
+ - `save_on_each_node`: False
+ - `save_only_model`: False
+ - `restore_callback_states_from_checkpoint`: False
+ - `no_cuda`: False
+ - `use_cpu`: False
+ - `use_mps_device`: False
+ - `seed`: 42
+ - `data_seed`: None
+ - `jit_mode_eval`: False
+ - `use_ipex`: False
+ - `bf16`: True
+ - `fp16`: False
+ - `fp16_opt_level`: O1
+ - `half_precision_backend`: auto
+ - `bf16_full_eval`: False
+ - `fp16_full_eval`: False
+ - `tf32`: None
+ - `local_rank`: 0
+ - `ddp_backend`: None
+ - `tpu_num_cores`: None
+ - `tpu_metrics_debug`: False
+ - `debug`: []
+ - `dataloader_drop_last`: False
+ - `dataloader_num_workers`: 0
+ - `dataloader_prefetch_factor`: None
+ - `past_index`: -1
+ - `disable_tqdm`: False
+ - `remove_unused_columns`: True
+ - `label_names`: None
+ - `load_best_model_at_end`: False
+ - `ignore_data_skip`: False
+ - `fsdp`: []
+ - `fsdp_min_num_params`: 0
+ - `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
+ - `fsdp_transformer_layer_cls_to_wrap`: None
+ - `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
+ - `deepspeed`: None
+ - `label_smoothing_factor`: 0.0
+ - `optim`: adamw_torch
+ - `optim_args`: None
+ - `adafactor`: False
+ - `group_by_length`: False
+ - `length_column_name`: length
+ - `ddp_find_unused_parameters`: None
+ - `ddp_bucket_cap_mb`: None
+ - `ddp_broadcast_buffers`: False
+ - `dataloader_pin_memory`: True
+ - `dataloader_persistent_workers`: False
+ - `skip_memory_metrics`: True
+ - `use_legacy_prediction_loop`: False
+ - `push_to_hub`: False
+ - `resume_from_checkpoint`: None
+ - `hub_model_id`: None
+ - `hub_strategy`: every_save
+ - `hub_private_repo`: None
+ - `hub_always_push`: False
+ - `gradient_checkpointing`: False
+ - `gradient_checkpointing_kwargs`: None
+ - `include_inputs_for_metrics`: False
+ - `include_for_metrics`: []
+ - `eval_do_concat_batches`: True
+ - `fp16_backend`: auto
+ - `push_to_hub_model_id`: None
+ - `push_to_hub_organization`: None
+ - `mp_parameters`: 
+ - `auto_find_batch_size`: False
+ - `full_determinism`: False
+ - `torchdynamo`: None
+ - `ray_scope`: last
+ - `ddp_timeout`: 1800
+ - `torch_compile`: False
+ - `torch_compile_backend`: None
+ - `torch_compile_mode`: None
+ - `dispatch_batches`: None
+ - `split_batches`: None
+ - `include_tokens_per_second`: False
+ - `include_num_input_tokens_seen`: False
+ - `neftune_noise_alpha`: None
+ - `optim_target_modules`: None
+ - `batch_eval_metrics`: False
+ - `eval_on_start`: False
+ - `use_liger_kernel`: False
+ - `eval_use_gather_object`: False
+ - `average_tokens_across_devices`: False
+ - `prompts`: None
+ - `batch_sampler`: batch_sampler
+ - `multi_dataset_batch_sampler`: proportional
+ 
+ </details>
+ 
+ ### Framework Versions
+ - Python: 3.11.7
+ - Sentence Transformers: 3.3.1
+ - Transformers: 4.48.0.dev0
+ - PyTorch: 2.2.0+cu121
+ - Accelerate: 1.0.1
+ - Datasets: 3.2.0
+ - Tokenizers: 0.21.0
+ 
+ ## Citation
+ 
+ If you use or fine-tune this model, please consider citing the following paper and the sentence-transformers library:
+ 
+ ### BibTeX
+ 
+ #### This model
+ ```bibtex
+ @misc{remy2025fmmbbe,
+     title={The Fairly Multilingual ModernBERT Embedding Model -- Belgian Edition},
+     author={Francois Remy},
+     year={2025},
+     eprint={2501.99999},
+     archivePrefix={arXiv},
+     primaryClass={cs.CL}
+ }
+ ```
+ 
+ #### Sentence Transformers
+ ```bibtex
+ @inproceedings{reimers-2019-sentence-bert,
+     title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
+     author = "Reimers, Nils and Gurevych, Iryna",
+     booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
+     month = "11",
+     year = "2019",
+     publisher = "Association for Computational Linguistics",
+     url = "https://arxiv.org/abs/1908.10084",
+ }
+ ```
config.json ADDED
@@ -0,0 +1,50 @@
+ {
+   "_name_or_path": "Parallia/Fairly-Multilingual-ModernBERT-Embed-BE-NL",
+   "additional_special_tokens_ids": [],
+   "architectures": [
+     "ModernBertModel"
+   ],
+   "attention_bias": false,
+   "attention_dropout": 0.0,
+   "bos_token_id": 1,
+   "classifier_activation": "gelu",
+   "classifier_bias": false,
+   "classifier_dropout": 0.0,
+   "classifier_pooling": "mean",
+   "cls_token_id": null,
+   "decoder_bias": true,
+   "deterministic_flash_attn": false,
+   "embedding_dropout": 0.0,
+   "eos_token_id": 2,
+   "global_attn_every_n_layers": 3,
+   "global_rope_theta": 160000.0,
+   "gradient_checkpointing": false,
+   "hidden_activation": "gelu",
+   "hidden_size": 768,
+   "initializer_cutoff_factor": 2.0,
+   "initializer_range": 0.02,
+   "intermediate_size": 1152,
+   "layer_norm_eps": 1e-05,
+   "local_attention": 128,
+   "local_rope_theta": 10000.0,
+   "mask_token_id": null,
+   "max_position_embeddings": 8192,
+   "mlp_bias": false,
+   "mlp_dropout": 0.0,
+   "model_type": "modernbert",
+   "norm_bias": false,
+   "norm_eps": 1e-05,
+   "num_attention_heads": 12,
+   "num_hidden_layers": 22,
+   "pad_token_id": null,
+   "position_embedding_type": "absolute",
+   "reference_compile": false,
+   "sep_token_id": null,
+   "sparse_pred_ignore_index": -100,
+   "sparse_prediction": false,
+   "tokenizer_class": "LlamaTokenizerFast",
+   "torch_dtype": "float32",
+   "transformers_version": "4.48.0.dev0",
+   "unk_token_id": 0,
+   "vocab_size": 200368
+ }
config_sentence_transformers.json ADDED
@@ -0,0 +1,10 @@
+ {
+   "__version__": {
+     "sentence_transformers": "3.3.1",
+     "transformers": "4.48.0.dev0",
+     "pytorch": "2.2.0+cu121"
+   },
+   "prompts": {},
+   "default_prompt_name": null,
+   "similarity_fn_name": "cosine"
+ }
model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:acf9abd7143dd25b19fd0149f78f3d0f4c7f302358c04ee43f26685585cfa8f7
+ size 1056870168
modules.json ADDED
@@ -0,0 +1,14 @@
+ [
+   {
+     "idx": 0,
+     "name": "0",
+     "path": "",
+     "type": "sentence_transformers.models.Transformer"
+   },
+   {
+     "idx": 1,
+     "name": "1",
+     "path": "1_Pooling",
+     "type": "sentence_transformers.models.Pooling"
+   }
+ ]
pick_best_tokenizer.py ADDED
@@ -0,0 +1,105 @@
+ import os
+ import json
+ import random
+ from typing import List, Dict
+ from transformers import PreTrainedTokenizer, AutoTokenizer
+ 
+ #pbt_log = []
+ class PickBestTokenizer(PreTrainedTokenizer):
+     def __init__(self, tokenizers: List[PreTrainedTokenizer], **kwargs):
+         self.model_input_names = ["input_ids", "attention_mask"]
+         self.tokenizers = [AutoTokenizer.from_pretrained(tokenizer) if isinstance(tokenizer, str) else tokenizer for tokenizer in tokenizers]
+         self.tokenizers_offsets = []
+         self.vocab = {}
+         self._vocab_size = sum(len(tokenizer) for tokenizer in self.tokenizers)
+ 
+         offset = 0
+         for i, tokenizer in enumerate(self.tokenizers):
+             tokenizer_id = f"[{i}]"
+             self.tokenizers_offsets.append(offset)
+             for token, token_id in tokenizer.get_vocab().items():
+                 self.vocab[tokenizer_id + token] = token_id + offset
+             offset += len(tokenizer)
+ 
+         super().__init__(**kwargs)
+ 
+     @property
+     def vocab_size(self) -> int:
+         return self._vocab_size
+ 
+     def get_vocab(self) -> Dict[str, int]:
+         return self.vocab
+ 
+     def tokenize(self, text: str, **kwargs) -> List[str]:
+         # Tokenize the text with all possible tokenizers
+         tokenized_texts = [
+             [f"[{i}]" + tok for tok in tokenizer.tokenize(text, **kwargs)]
+             for i, tokenizer in enumerate(self.tokenizers)
+         ]
+ 
+         # Ensure that in case of equal lengths, no tokenizer is favored
+         random.shuffle(tokenized_texts)
+ 
+         # Return the list of tokens which is shortest
+         best_tokenization = min(tokenized_texts, key=len)
+ 
+         # Log the output
+         #pbt_log.append((text, best_tokenization))
+ 
+         # Return the output
+         return best_tokenization
+ 
+     def convert_tokens_to_ids(self, tokens: List[str], **kwargs) -> List[int]:
+         if isinstance(tokens, str): return self.convert_tokens_to_ids([tokens])[0]
+         ids = []
+         for token in tokens:
+             tokenizer_id = int(token[1])
+             token_stripped = token[3:]
+             offset = self.tokenizers_offsets[tokenizer_id]
+             ids.append(self.tokenizers[tokenizer_id].convert_tokens_to_ids(token_stripped, **kwargs) + offset)
+         return ids
+ 
+     def convert_ids_to_tokens(self, ids: List[int], **kwargs) -> List[str]:
+         if isinstance(ids, int): return self.convert_ids_to_tokens([ids])[0]
+         tokens = []
+         for id in ids:
+             for i, offset in enumerate(self.tokenizers_offsets):
+                 if id < offset + len(self.tokenizers[i]):
+                     token_id = id - offset
+                     tokens.append(f"[{i}]{self.tokenizers[i].convert_ids_to_tokens(token_id, **kwargs)}")
+                     break
+             else:
+                 raise ValueError(f"ID {id} is out of range for any tokenizer.")
+         return tokens
+ 
+     def _convert_token_to_id(self, token: str) -> int:
+         raise NotImplementedError("This method should not be used in this class.")
+ 
+     def _convert_id_to_token(self, index: int) -> str:
+         raise NotImplementedError("This method should not be used in this class.")
+ 
+     def save_pretrained(self, path, *args, **kwargs):
+         # ensure the save path exists
+         os.makedirs(path, exist_ok=True)
+         # save this file in the repository as `pick_best_tokenizer.py`
+         from pathlib import Path
+         source = Path(__file__)
+         destination = Path(path + '/pick_best_tokenizer.py')
+         destination.write_bytes(source.read_bytes())
+         # save the config
+         config = {
+             "tokenizer_class": "PickBestTokenizer",
+             "auto_map": ["pick_best_tokenizer.PickBestTokenizer", None],
+             "tokenizers": [tokenizer.name_or_path for tokenizer in self.tokenizers]
+         }
+         with open(path + '/tokenizer_config.json', 'w') as f:
+             json.dump(config, f)
+ 
+ # Example usage
+ #tokenizer_fr = AutoTokenizer.from_pretrained("tokenizers/fineweb2_fr")
+ #tokenizer_nl = AutoTokenizer.from_pretrained("tokenizers/fineweb2_nl")
+ #tokenizer_de = AutoTokenizer.from_pretrained("tokenizers/fineweb2_de")
+ #tokenizer_en = AutoTokenizer.from_pretrained("answerdotai/ModernBERT-base")
+ #pick_best_tokenizer = PickBestTokenizer([tokenizer_fr, tokenizer_nl, tokenizer_de, tokenizer_en])
+ 
+ PickBestTokenizer.register_for_auto_class("AutoTokenizer")
sentence_bert_config.json ADDED
@@ -0,0 +1,4 @@
+ {
+   "max_seq_length": 8192,
+   "do_lower_case": false
+ }
tokenizer_config.json ADDED
@@ -0,0 +1,5 @@
+ {
+   "tokenizer_class": "PickBestTokenizer",
+   "auto_map": ["pick_best_tokenizer.PickBestTokenizer", null],
+   "tokenizers": ["Parallia/Fairly-Multilingual-ModernBERT-Embed-BE-FR", "Parallia/Fairly-Multilingual-ModernBERT-Embed-BE-NL", "Parallia/Fairly-Multilingual-ModernBERT-Embed-BE-DE", "Parallia/Fairly-Multilingual-ModernBERT-Embed-BE-EN"]
+ }