modernbert-fineweb-is
This model is a fine-tuned version of answerdotai/ModernBERT-base on the Icelandic portion of Fineweb-2.
Model description
More information needed
Intended uses & limitations
This model is intended as a baseline for research purposes. It uses the original ModernBERT tokenizer, which was trained on predominantly English data and is therefore not ideal for Icelandic.
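A minimal usage sketch with the `transformers` fill-mask pipeline, assuming the checkpoint keeps ModernBERT's masked-language-modeling head (the example sentence and top-k setting are illustrative, not from the original evaluation):

```python
from transformers import pipeline

# Load the checkpoint as a fill-mask pipeline (assumes an MLM head is present).
fill_mask = pipeline("fill-mask", model="jekunz/modernbert-fineweb-is")

# Build an Icelandic example using the tokenizer's own mask token.
text = f"Reykjavík er höfuðborg {fill_mask.tokenizer.mask_token}."

# Print the top predictions for the masked position.
for pred in fill_mask(text, top_k=5):
    print(pred["token_str"], pred["score"])
```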
Training and evaluation data
Trained on the Icelandic portion of Fineweb-2.
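A sketch of how the training data could be loaded with the `datasets` library; the dataset id and the Icelandic config name (`isl_Latn`) are assumptions about the Fineweb-2 release, not taken from the original training script:

```python
from datasets import load_dataset

# Stream the (assumed) Icelandic subset of Fineweb-2 to avoid a full download.
ds = load_dataset(
    "HuggingFaceFW/fineweb-2",
    name="isl_Latn",       # assumed config name for Icelandic (Latin script)
    split="train",
    streaming=True,
)

# Peek at the first document's text field.
print(next(iter(ds))["text"][:200])
```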
Training procedure
Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 64
- total_train_batch_size: 256
- optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.05
- num_epochs: 1
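A sketch of a `TrainingArguments` configuration mirroring the hyperparameters listed above; `output_dir` is a placeholder and the exact training script is not part of this card:

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="modernbert-fineweb-is",  # placeholder path
    learning_rate=5e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    gradient_accumulation_steps=64,      # effective batch size 4 * 64 = 256
    num_train_epochs=1,
    lr_scheduler_type="linear",
    warmup_ratio=0.05,
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    seed=42,
)
```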
Training results
Framework versions
- Transformers 4.49.0.dev0
- Pytorch 2.5.1
- Datasets 3.2.0
- Tokenizers 0.21.0