Text Generation
Transformers
Safetensors
llama
text-generation-inference
unsloth
trl
sft
conversational
Inference Endpoints

This is a test model, since the previous attempt failed. It also turns out this run was trained incorrectly because the dataset used wrongly formatted ShareGPT; I only noticed that after retraining it "correctly" for another full epoch, so I will have to train it yet again. At least the dataset should be fixed now, but this model still uses the wrongly formatted one. The next model should be better.

Prompt format is: ChatML
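
Since the card only names the format, here is a minimal sketch of building a ChatML prompt by hand (the `format_chatml` helper is illustrative, not part of the model; if the tokenizer ships a chat template, `tokenizer.apply_chat_template` would produce the same layout):

```python
# Minimal ChatML prompt builder (helper name is illustrative, not from the card).
def format_chatml(messages):
    """Wrap each turn in ChatML <|im_start|>/<|im_end|> markers and
    end with an open assistant turn for the model to complete."""
    prompt = ""
    for m in messages:
        prompt += f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n"
    return prompt + "<|im_start|>assistant\n"

prompt = format_chatml([
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hello!"},
])
print(prompt)
```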

LoRA: mpasila/Viking-SlimSonnet-v0.2-LoRA-7B

Trained with regular LoRA (not quantized/QLoRA), with LoRA rank 128 and alpha set to 32, for 5000 steps (0.11 epoch).
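
As a sketch, the stated hyperparameters would correspond to a PEFT configuration roughly like the one below; only the rank and alpha come from this card, and the target modules, dropout, and bias settings are assumptions:

```python
from peft import LoraConfig

# Sketch only: r and lora_alpha are stated above; everything else is assumed.
lora_config = LoraConfig(
    r=128,            # LoRA rank stated in the card
    lora_alpha=32,    # alpha stated in the card
    lora_dropout=0.0, # assumption
    bias="none",      # assumption
    task_type="CAUSAL_LM",
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # assumption
)
```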

Uploaded model

  • Developed by: mpasila
  • License: apache-2.0
  • Finetuned from model: LumiOpen/Viking-7B

This llama model was trained 2x faster with Unsloth and Hugging Face's TRL library.

Model size: 7.55B params · Tensor type: BF16 · Format: Safetensors

Model tree for mpasila/Viking-SlimSonnet-v0.2-7B

  • Base model: LumiOpen/Viking-7B → Finetuned (19) → this model
  • Quantizations: 2 models

Datasets used to train mpasila/Viking-SlimSonnet-v0.2-7B