This model was converted to GGUF format from [`nbeerbower/Dumpling-Mistral-Nemo-8B`](https://huggingface.co/nbeerbower/Dumpling-Mistral-Nemo-8B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.

Refer to the [original model card](https://huggingface.co/nbeerbower/Dumpling-Mistral-Nemo-8B) for more details on the model.
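The GGUF-my-repo space automates the conversion in the browser. For reference, a roughly equivalent local conversion with llama.cpp's own tooling looks like the sketch below; the local paths, output file names, and the Q4_K_M quant type are illustrative assumptions, not the exact artifacts of this repo.

```bash
# Rough local equivalent of what GGUF-my-repo automates (illustrative only).
# Get llama.cpp for its conversion script and tools.
git clone https://github.com/ggml-org/llama.cpp
cd llama.cpp
pip install -r requirements.txt

# 1. Convert the original Hugging Face checkpoint to a full-precision GGUF.
#    /path/to/Dumpling-Mistral-Nemo-8B is a placeholder for a local download.
python convert_hf_to_gguf.py /path/to/Dumpling-Mistral-Nemo-8B \
  --outfile dumpling-mistral-nemo-8b-f16.gguf --outtype f16

# 2. Quantize the f16 GGUF (Q4_K_M is just an example quant type).
#    llama-quantize ships with a built llama.cpp or the brew install.
llama-quantize dumpling-mistral-nemo-8b-f16.gguf \
  dumpling-mistral-nemo-8b-q4_k_m.gguf Q4_K_M
```
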
---

🧪 Experimental

An attempt to recover intelligence with a quick train; results are meh.

Dumpling-Mistral-Nemo-8B

nbeerbower/mistral-nemo-kartoffel-PRUNE3 finetuned on:

- nbeerbower/GreatFirewall-DPO
- nbeerbower/Schule-DPO
- nbeerbower/Purpura-DPO
- nbeerbower/Arkhaios-DPO
- jondurbin/truthy-dpo-v0.1
- antiven0m/physical-reasoning-dpo
- flammenai/Date-DPO-NoAsterisks
- flammenai/Prude-Phi3-DPO
- Atsunori/HelpSteer2-DPO (1,000 samples)
- jondurbin/gutenberg-dpo-v0.1
- nbeerbower/gutenberg2-dpo
- nbeerbower/gutenberg-moderne-dpo

Method
---
QLoRA ORPO tune with 2x RTX 3090 for 2 epochs.

---

## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
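A minimal sketch of the usual invocation follows; the repo and file names are placeholders, since this card does not spell out the exact GGUF quant file.

```bash
# Install llama.cpp (macOS and Linux)
brew install llama.cpp

# Placeholders: point these at this repo's actual name and GGUF quant file.
REPO="your-username/Dumpling-Mistral-Nemo-8B-GGUF"
FILE="dumpling-mistral-nemo-8b-q4_k_m.gguf"

# One-shot generation with llama-cli, pulling the GGUF straight from the Hub
llama-cli --hf-repo "$REPO" --hf-file "$FILE" -p "The meaning to life and the universe is"

# Or run an OpenAI-compatible local server
llama-server --hf-repo "$REPO" --hf-file "$FILE" -c 2048
```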