---
library_name: transformers
base_model:
- Sao10K/MN-12B-Lyra-v1
datasets:
- jondurbin/gutenberg-dpo-v0.1
license: apache-2.0
---

Mostly quanted this to try it out; I didn't see any other EXL2 quants of this model, so here we are.
This is the 6bpw EXL2 version of this model. [Find the original here.](https://huggingface.co/nbeerbower/Lyra-Gutenberg-mistral-nemo-12B)
<br>
[For the 8bpw version, go here.](https://huggingface.co/Statuo/Lyra-Gutenberg-12b-EXL2-8bpw)
<br>
[For the 4bpw version, go here.](https://huggingface.co/Statuo/Lyra-Gutenberg-12b-EXL2-4bpw)
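Loading any of these quants needs an EXL2-capable backend. Here is a minimal loading sketch using the exllamav2 Python library; the local model path and the sampler values are placeholders, not tuned recommendations.

```python
# Minimal ExLlamaV2 loading sketch -- path and sampler values are placeholders.
from exllamav2 import ExLlamaV2, ExLlamaV2Config, ExLlamaV2Cache, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2BaseGenerator, ExLlamaV2Sampler

config = ExLlamaV2Config()
config.model_dir = "/models/Lyra-Gutenberg-12b-EXL2-6bpw"  # local download of this repo
config.prepare()

model = ExLlamaV2(config)
cache = ExLlamaV2Cache(model, lazy=True)
model.load_autosplit(cache)  # splits layers across the available GPUs

tokenizer = ExLlamaV2Tokenizer(config)
generator = ExLlamaV2BaseGenerator(model, cache, tokenizer)

settings = ExLlamaV2Sampler.Settings()
settings.temperature = 0.8  # illustrative, not a recommendation
settings.top_p = 0.9

print(generator.generate_simple("The old library smelled of", settings, num_tokens=200))
```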
# Lyra-Gutenberg-mistral-nemo-12B

[Sao10K/MN-12B-Lyra-v1](https://huggingface.co/Sao10K/MN-12B-Lyra-v1) fine-tuned on [jondurbin/gutenberg-dpo-v0.1](https://huggingface.co/datasets/jondurbin/gutenberg-dpo-v0.1).
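If you'd rather run the unquantized weights with plain transformers instead of an EXL2 backend, a standard generation snippet looks like this; the prompt and sampling settings are only examples.

```python
# Standard transformers generation -- for the full-precision original, not the EXL2 files in this repo.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "nbeerbower/Lyra-Gutenberg-mistral-nemo-12B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # a 12B model needs roughly 24 GB of VRAM in bf16
    device_map="auto",
)

prompt = "Write the opening paragraph of a gothic novel."  # example prompt
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.8)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```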
### Method

Fine-tuned for 3 epochs on an A100 in Google Colab.
See [Fine-tune Llama 3 with ORPO](https://mlabonne.github.io/blog/posts/2024-04-19_Fine_tune_Llama_3_with_ORPO.html) for the general recipe.
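The card doesn't publish exact hyperparameters, so as a rough illustration only, an ORPO run with TRL's `ORPOTrainer` on this dataset could be set up like the sketch below. Every value marked as assumed is a guess rather than the author's configuration, and a full fine-tune of a 12B model would in practice likely need PEFT/QLoRA or more than a single A100.

```python
# ORPO fine-tuning sketch with TRL -- hyperparameters are assumed, not the author's settings.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import ORPOConfig, ORPOTrainer

base = "Sao10K/MN-12B-Lyra-v1"
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)

# gutenberg-dpo-v0.1 already ships prompt/chosen/rejected columns, the format ORPOTrainer expects.
dataset = load_dataset("jondurbin/gutenberg-dpo-v0.1", split="train")

args = ORPOConfig(
    output_dir="lyra-gutenberg-orpo",
    num_train_epochs=3,              # the 3 epochs stated above
    per_device_train_batch_size=1,   # assumed
    gradient_accumulation_steps=8,   # assumed
    learning_rate=8e-6,              # assumed
    beta=0.1,                        # ORPO lambda; assumed
    max_length=2048,                 # assumed
)

trainer = ORPOTrainer(
    model=model,
    args=args,
    train_dataset=dataset,
    tokenizer=tokenizer,
)
trainer.train()
```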