Trained for 2 epochs on mpasila/Finnish-ShareGPT-Tiny-V1-1 at a 2048-token context length, with LoRA rank 256 and alpha 512.
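
In PEFT terms, the adapter hyperparameters above would look roughly like the sketch below; the target modules are an assumption, since the card does not list them.

```python
from peft import LoraConfig

# Rough sketch of the adapter hyperparameters described above.
# target_modules is an assumption -- the card does not specify them.
lora_config = LoraConfig(
    r=256,             # LoRA rank
    lora_alpha=512,    # LoRA alpha
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # assumed
    lora_dropout=0.0,
    bias="none",
    task_type="CAUSAL_LM",
)
```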

Prompt format: ChatML

The model works better when given a system prompt.
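
For reference, a ChatML prompt looks like the following; the system and user messages here are only an illustration, not the prompt used in training.

```
<|im_start|>system
You are a helpful assistant that answers in Finnish.<|im_end|>
<|im_start|>user
Hei, mitä kuuluu?<|im_end|>
<|im_start|>assistant
```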

Uploaded model

  • Developed by: mpasila
  • License: apache-2.0
  • Finetuned from model: LumiOpen/Viking-7B

This Llama-based model was trained 2x faster with Unsloth and Hugging Face's TRL library.
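
A minimal usage sketch, assuming the adapter is loaded on top of the base model with Transformers and PEFT (device and dtype settings omitted):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load the base model and tokenizer, then attach this LoRA adapter.
base = AutoModelForCausalLM.from_pretrained("LumiOpen/Viking-7B")
tokenizer = AutoTokenizer.from_pretrained("LumiOpen/Viking-7B")
model = PeftModel.from_pretrained(base, "mpasila/Finnish-Chatty-Tiny-V1-1-LoRA-7B")
```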

