Reynaerde-7B-v3

This model is a fine-tuned version of mistralai/Mistral-7B-Instruct-v0.3 on the ReBatch/ultrachat_400k_nl, BramVanroy/stackoverflow-chat-dutch, and BramVanroy/no_robots_dutch datasets.

Model description

This model is a Dutch chat model, developed from Mistral 7B Instruct v0.3 and further fine-tuned with SFT on the datasets listed above.

Intended uses & limitations

The model can generate wrong, misleading, and potentially even offensive content. Use at your own risk. Use it with Mistral's chat template.
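In practice you would apply the chat template through the tokenizer's `apply_chat_template` method; the sketch below re-implements the Mistral instruct format by hand purely to show its shape (exact whitespace handling can differ slightly between tokenizer versions, so treat this as illustrative, not authoritative):

```python
def format_mistral_chat(messages):
    """Minimal re-implementation of the Mistral instruct chat template.

    In real use, prefer tokenizer.apply_chat_template(messages, tokenize=False).
    """
    out = "<s>"
    for msg in messages:
        if msg["role"] == "user":
            # User turns are wrapped in [INST] ... [/INST]
            out += f"[INST] {msg['content']} [/INST]"
        elif msg["role"] == "assistant":
            # Assistant turns follow the closing tag and end with </s>
            out += f" {msg['content']}</s>"
    return out


prompt = format_mistral_chat([{"role": "user", "content": "Hallo, wie ben jij?"}])
```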

Training and evaluation data

The model achieves the following result on the evaluation set:

  • Loss: 0.8596

Training procedure

This model was trained with QLoRA in bfloat16 with Flash Attention 2 on a single A100 PCIe GPU, using the SFT script from the Alignment Handbook on RunPod.
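The card does not spell out the QLoRA configuration itself; a minimal sketch of the usual recipe, with bitsandbytes 4-bit NF4 quantization and a PEFT LoRA adapter, might look like this (the rank, alpha, dropout, and target modules here are assumptions for illustration, not the values actually used):

```python
import torch
from transformers import BitsAndBytesConfig
from peft import LoraConfig

# 4-bit NF4 quantization with bfloat16 compute: the standard QLoRA setup
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
    bnb_4bit_use_double_quant=True,
)

# LoRA adapter config; r, lora_alpha, and target_modules are illustrative guesses
peft_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
```

Both objects would then be passed to the model loader and the SFT trainer, respectively.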

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 0.0002
  • train_batch_size: 3
  • eval_batch_size: 6
  • seed: 42
  • distributed_type: multi-GPU
  • gradient_accumulation_steps: 2
  • total_train_batch_size: 6
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: cosine
  • lr_scheduler_warmup_ratio: 0.1
  • num_epochs: 1
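The reported total_train_batch_size follows directly from the other values: the per-device batch size times the gradient accumulation steps (times the number of devices, here one). A one-line check:

```python
train_batch_size = 3             # per-device micro-batch size
gradient_accumulation_steps = 2
num_devices = 1                  # one A100 PCIe

# Effective batch size per optimizer step
total_train_batch_size = train_batch_size * gradient_accumulation_steps * num_devices
print(total_train_batch_size)    # 6, matching the reported value
```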

Framework versions

  • PEFT 0.11.1
  • Transformers 4.41.2
  • Pytorch 2.2.0+cu121
  • Datasets 2.19.1
  • Tokenizers 0.19.1

Model Developer

The Mistral-7B-Instruct-v0.3 model, on which this model is based, was created by Mistral AI. The fine-tuning was done by Julien Van den Avenne.

Model size: 7.25B parameters (safetensors, BF16)