Transformers
GGUF
llama-cpp
gguf-my-repo
Inference Endpoints
conversational
Triangle104 committed
Commit f33076f (verified)
1 Parent(s): 92c3fd8

Update README.md

Files changed (1)
  1. README.md +10 -0
README.md CHANGED
@@ -15,6 +15,16 @@ tags:
 This model was converted to GGUF format from [`nbeerbower/EVA-Gutenberg3-Qwen2.5-32B`](https://huggingface.co/nbeerbower/EVA-Gutenberg3-Qwen2.5-32B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
 Refer to the [original model card](https://huggingface.co/nbeerbower/EVA-Gutenberg3-Qwen2.5-32B) for more details on the model.
 
+---
+Model details:
+-
+EVA-UNIT-01/EVA-Qwen2.5-32B-v0.2 finetuned on jondurbin/gutenberg-dpo-v0.1, nbeerbower/gutenberg2-dpo, and nbeerbower/gutenberg-moderne-dpo.
+
+Method
+-
+ORPO tuned with 8x A100 for 2 epochs.
+
+---
 ## Use with llama.cpp
 Install llama.cpp through brew (works on Mac and Linux)
 
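The diff hunk is truncated after the brew install line; for context, the usage commands that GGUF-my-repo README sections typically continue with look roughly like the sketch below. The repo ID and quantized filename here are placeholders, not values taken from this commit.

```bash
# Install llama.cpp (the Homebrew formula covers macOS and Linux)
brew install llama.cpp

# Run inference straight from a GGUF repo on the Hub.
# NOTE: the repo ID and .gguf filename below are placeholders --
# substitute the quant file actually published in this repository.
llama-cli --hf-repo Triangle104/EVA-Gutenberg3-Qwen2.5-32B-GGUF \
  --hf-file eva-gutenberg3-qwen2.5-32b-q4_k_m.gguf \
  -p "The meaning to life and the universe is"

# Or serve an OpenAI-compatible endpoint instead:
llama-server --hf-repo Triangle104/EVA-Gutenberg3-Qwen2.5-32B-GGUF \
  --hf-file eva-gutenberg3-qwen2.5-32b-q4_k_m.gguf \
  -c 2048
```

With `--hf-repo`/`--hf-file`, both commands download and cache the GGUF file from the Hub on first use.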