---
license: apache-2.0
datasets:
- HuggingFaceFW/fineweb-2
language:
- sv
base_model:
- HuggingFaceTB/SmolLM2-360M-Instruct
pipeline_tag: text-generation
library_name: transformers
---
Work in progress! This model has been trained on roughly 15% of the Swedish subset of FineWeb-2 so far. It is intended for my own research and has not yet been evaluated more broadly.
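A minimal usage sketch with transformers. The repository ID below is a placeholder, since the final model name is not stated on this card; substitute the actual repo ID.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder repo ID -- replace with the actual model repository name.
model_id = "your-username/smollm2-360m-swedish"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Swedish prompt: "Stockholm is the capital of Sweden and"
inputs = tokenizer("Stockholm är Sveriges huvudstad och", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```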
Training parameters (see the configuration sketch after this list):
- Learning rate: 5e-4
- LR scheduler: cosine
- Warmup ratio: 0.05
- Per-device batch size: 1
- Gradient accumulation steps: 32
- GPUs: 8× A100 (40 GB)
- Effective batch size: 256 (1 × 32 × 8)
- Max. context length: 8192 tokens
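The actual training script is not included on this card; the sketch below is a hypothetical reconstruction of the listed hyperparameters using Hugging Face `TrainingArguments`. The output path and `bf16` setting are assumptions.

```python
from transformers import TrainingArguments

# Hypothetical reconstruction of the hyperparameters listed above.
# With 8 GPUs, the effective batch size is
# 1 (per device) x 32 (accumulation) x 8 (GPUs) = 256.
training_args = TrainingArguments(
    output_dir="smollm2-360m-swedish",  # placeholder output path
    learning_rate=5e-4,
    lr_scheduler_type="cosine",
    warmup_ratio=0.05,
    per_device_train_batch_size=1,
    gradient_accumulation_steps=32,
    bf16=True,  # assumption: bf16 mixed precision on A100s
)

# The 8192-token context length is not a TrainingArguments field; it is
# applied during tokenization, e.g. tokenizer(..., max_length=8192).
```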