---
language:
- en
datasets:
- teknium/OpenHermes-2.5
license: other
license_name: llama3
base_model: yaystevek/llama-3-8b-Instruct-OpenHermes-2.5-QLoRA
tags:
- unsloth
- facebook
- meta
- pytorch
- llama
- llama-3
- GGUF
- trl
pipeline_tag: text-generation
---

# QLoRA Finetune Llama 3 Instruct 8B + OpenHermes 2.5

This model is based on Llama-3-8b and is governed by the [META LLAMA 3 COMMUNITY LICENSE AGREEMENT](LICENSE).

This is the 4-bit Llama 3 Instruct 8B from unsloth, fine-tuned on the OpenHermes 2.5 dataset on my home PC with a single 24GB RTX 4090.

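For context on how a run like this is put together, here is a minimal QLoRA fine-tuning sketch using unsloth and TRL. The 4-bit base checkpoint name, LoRA rank, and training hyperparameters below are illustrative assumptions, not the exact settings used to produce this model.

```python
# Minimal QLoRA fine-tuning sketch with unsloth + TRL.
# Checkpoint name, LoRA rank, and hyperparameters are illustrative assumptions.
from datasets import load_dataset
from transformers import TrainingArguments
from trl import SFTTrainer
from unsloth import FastLanguageModel

# Load Llama 3 Instruct 8B in 4-bit (QLoRA keeps the base weights quantized).
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/llama-3-8b-Instruct-bnb-4bit",
    max_seq_length=4096,
    load_in_4bit=True,
)

# Attach LoRA adapters; only these low-rank matrices are trained.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    lora_dropout=0,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

# OpenHermes 2.5 ships ShareGPT-style "conversations"; render each one to plain
# text with the Llama 3 chat template so every turn ends in <|eot_id|>.
role_map = {"system": "system", "human": "user", "gpt": "assistant"}

def to_text(example):
    messages = [{"role": role_map[m["from"]], "content": m["value"]}
                for m in example["conversations"]]
    return {"text": tokenizer.apply_chat_template(messages, tokenize=False)}

dataset = load_dataset("teknium/OpenHermes-2.5", split="train").map(to_text)

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",
    max_seq_length=4096,
    args=TrainingArguments(
        per_device_train_batch_size=2,
        gradient_accumulation_steps=8,
        learning_rate=2e-4,
        num_train_epochs=1,
        bf16=True,
        output_dir="outputs",
    ),
)
trainer.train()
```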
Special care was taken to preserve and reinforce the proper EOS token structure (Llama 3's `<|eot_id|>` end-of-turn token).

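Concretely, the Llama 3 Instruct format closes every turn with `<|eot_id|>`. A quick way to inspect that structure (using the source model repo's tokenizer here as an assumption; any Llama 3 Instruct tokenizer works) is:

```python
# Render a short conversation with the Llama 3 Instruct chat template to see
# where the <|eot_id|> end-of-turn tokens land (tokenizer repo id is an assumption).
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("yaystevek/llama-3-8b-Instruct-OpenHermes-2.5-QLoRA")

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hello!"},
]
print(tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True))
# Expected shape of the output:
# <|begin_of_text|><|start_header_id|>system<|end_header_id|>
#
# You are a helpful assistant.<|eot_id|><|start_header_id|>user<|end_header_id|>
#
# Hello!<|eot_id|><|start_header_id|>assistant<|end_header_id|>
#
```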
[Source Model](https://huggingface.co/yaystevek/llama-3-8b-Instruct-OpenHermes-2.5-QLoRA)

* [F16_GGUF](https://huggingface.co/yaystevek/llama-3-8b-Instruct-OpenHermes-2.5-QLoRA-GGUF/blob/main/llama-3-8b-Instruct-OpenHermes-2.5-QLoRA.f16.gguf)
* [Q4_K_M_GGUF](https://huggingface.co/yaystevek/llama-3-8b-Instruct-OpenHermes-2.5-QLoRA-GGUF/blob/main/llama-3-8b-Instruct-OpenHermes-2.5-QLoRA.Q4_K_M.gguf)

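To grab one of the quantized files programmatically rather than through the web UI, something like this should work (repo id and filename come from the links above; the destination directory is just an example):

```python
# Download the Q4_K_M GGUF from the Hub (repo and filename taken from the links above).
from huggingface_hub import hf_hub_download

gguf_path = hf_hub_download(
    repo_id="yaystevek/llama-3-8b-Instruct-OpenHermes-2.5-QLoRA-GGUF",
    filename="llama-3-8b-Instruct-OpenHermes-2.5-QLoRA.Q4_K_M.gguf",
    local_dir=".",  # example destination
)
print(gguf_path)
```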
**Chat with llama.cpp**

`llama.cpp/main -ngl 33 -c 0 --interactive-first --color -e --in-prefix '<|start_header_id|>user<|end_header_id|>\n\n' --in-suffix '<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n' -r '<|eot_id|>' -m ./llama-3-8b-Instruct-OpenHermes-2.5-QLoRA.Q4_K_M.gguf`

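As an alternative to the interactive CLI, a sketch of driving the same Q4_K_M file from Python with llama-cpp-python (an assumed, optional dependency): recent versions can pick up the chat template embedded in the GGUF metadata, if present, which supplies the same `<|eot_id|>` framing that the `--in-prefix`/`--in-suffix` flags above spell out by hand.

```python
# Chat with the Q4_K_M GGUF via llama-cpp-python (assumed installed:
# pip install llama-cpp-python). Settings mirror the CLI flags above.
from llama_cpp import Llama

llm = Llama(
    model_path="./llama-3-8b-Instruct-OpenHermes-2.5-QLoRA.Q4_K_M.gguf",
    n_gpu_layers=33,  # like -ngl 33
    n_ctx=8192,       # fixed context size; -c 0 above takes it from the model instead
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Write a haiku about home-lab finetuning."}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```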