Trained on all 6 of the base model's languages, so it should hopefully be useful for each of them, though the quality of the datasets probably varies a lot between languages.

Uses ChatML as usual.
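
Since the model expects ChatML formatting, a minimal sketch of assembling a single-turn prompt might look like this (the helper name and example messages are illustrative, not from the model card):

```python
# Sketch of a ChatML prompt builder; <|im_start|>/<|im_end|> are the
# standard ChatML turn delimiters this model was trained with.
def chatml_prompt(system: str, user: str) -> str:
    """Assemble a single-turn ChatML prompt ending at the assistant turn."""
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        "<|im_start|>assistant\n"
    )

prompt = chatml_prompt("You are a helpful assistant.", "Hei! Mitä kuuluu?")
print(prompt)
```

In practice you would pass `prompt` to your inference stack, or rely on the tokenizer's built-in chat template if one is provided.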

LoRA: mpasila/Viking-SlimInstruct-LoRA-V1-7B

Uses the following datasets:

  • saillab/alpaca-icelandic-cleaned
  • kobprof/skolegpt-instruct
  • tollefj/nor-instruct-cleaned
  • skvarre/sv-instruct-v1
  • Gryphe/Sonnet3.5-SlimOrcaDedupCleaned-20k
  • LumiOpen/instruction-collection-fin
  • neph1/Alpaca-Lora-GPT4-Swedish-Refined

Uploaded Viking-SlimInstruct-V1-7B model

  • Developed by: mpasila
  • License: apache-2.0
  • Finetuned from model: LumiOpen/Viking-7B

This Llama-based model was trained 2x faster with Unsloth and Hugging Face's TRL library.

