Uploaded model

  • Compute sponsored by: Nvidia, Arrow ECS Denmark through Danish Data Science Community
  • Developed by: ThatsGroes
  • License: apache-2.0
  • Finetuned from model: meta-llama/Llama-3.1-8B-Instruct

This Llama model was trained 2x faster with Unsloth and Hugging Face's TRL library.

Fine-tuned LoRA adapter in fp16 for 1 epoch on kobprof/skolegpt-instruct and Mabeck/Danish-SlimOrca, with rank = alpha = 64.
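With rank = alpha = 64, the standard LoRA scaling factor (alpha / rank) is 1, so the low-rank update is applied unscaled. A minimal sketch of the LoRA delta computation under that convention (toy dimensions, not the actual Llama-3.1-8B layer shapes):

```python
# Sketch of the LoRA update, delta_W = (alpha / rank) * B @ A,
# using toy dimensions rather than the real 8B-parameter layer shapes.
rank = 64
alpha = 64
scale = alpha / rank  # rank == alpha, so the adapter delta is unscaled (1.0)

# Toy low-rank factors: B is (d_out x rank), A is (rank x d_in)
d_out, d_in = 4, 3
B = [[1.0] * rank for _ in range(d_out)]
A = [[0.01] * d_in for _ in range(rank)]

# delta_W[i][j] = scale * sum_k B[i][k] * A[k][j]
delta_W = [
    [scale * sum(B[i][k] * A[k][j] for k in range(rank)) for j in range(d_in)]
    for i in range(d_out)
]
print(scale, delta_W[0][0])
```

The same rank and alpha would be passed as `r=64, lora_alpha=64` when building the adapter config; the exact training setup here (Unsloth's API surface) is not shown in the card, so this is only the underlying arithmetic.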

[codecarbon INFO @ 10:41:13] Energy consumed for RAM : 2.822621 kWh. RAM Power : 188.78840446472168 W
[codecarbon INFO @ 10:41:13] Energy consumed for all GPUs : 4.379013 kWh. Total GPU Power : 260.7733742516678 W
[codecarbon INFO @ 10:41:13] Energy consumed for all CPUs : 0.635721 kWh. Total CPU Power : 42.5 W
[codecarbon INFO @ 10:41:13] 7.837356 kWh of electricity used since the beginning.
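As a sanity check, the per-component figures in the codecarbon log above (RAM + GPU + CPU) add up to the reported total, modulo rounding:

```python
# Sum the per-component energy figures from the codecarbon log above
# and compare against the reported running total of 7.837356 kWh.
ram_kwh = 2.822621
gpu_kwh = 4.379013
cpu_kwh = 0.635721
total_kwh = ram_kwh + gpu_kwh + cpu_kwh
print(f"{total_kwh:.6f} kWh")  # reported: 7.837356 kWh (last digit is rounding)
```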

  • Model size: 8.03B params
  • Tensor type: BF16
  • Format: Safetensors

Model: ThatsGroes/Llama-3.1-8B-Instruct-SkoleGPT-DaSlimOrca