fuzzy-mittenz committed "Update README.md"

README.md
```diff
@@ -2,15 +2,14 @@
 license: apache-2.0
 tags:
 - llama-cpp
-- gguf-my-repo
 base_model: newsbang/Homer-v0.5-Qwen2.5-7B
 language:
 - en
 ---
 
-#
-This model was converted to GGUF format from [`newsbang/Homer-v0.5-Qwen2.5-7B`](https://huggingface.co/newsbang/Homer-v0.5-Qwen2.5-7B) using llama.cpp
-Refer to the [original model card](https://huggingface.co/newsbang/Homer-v0.5-Qwen2.5-7B) for more details on the model.
+# IntelligentEstate/Sakura_Warding-Qw2.5-7B-Q4_K_M-GGUF
+This model was converted to GGUF format from [`newsbang/Homer-v0.5-Qwen2.5-7B`](https://huggingface.co/newsbang/Homer-v0.5-Qwen2.5-7B) using llama.cpp
+Refer to the [original model card](https://huggingface.co/newsbang/Homer-v0.5-Qwen2.5-7B) for more details on the model. Took a few Quantizations to get everything perfect.
 ---
 Model Named for personal system use, after multiple Quants this turned out to be the most functional for me,
 ---
```
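For reference, a GGUF repo like this one can be run directly with llama.cpp's `llama-cli`, which supports pulling a file straight from a Hugging Face repo. This is a minimal sketch; the exact `.gguf` filename below is an assumption (check the repo's file list for the real name):

```shell
# Install llama.cpp (or build it from source at github.com/ggerganov/llama.cpp)
brew install llama.cpp

# Download and run the Q4_K_M quant from the Hugging Face repo.
# NOTE: the --hf-file value is a guessed filename; adjust it to match the repo.
llama-cli \
  --hf-repo IntelligentEstate/Sakura_Warding-Qw2.5-7B-Q4_K_M-GGUF \
  --hf-file sakura_warding-qw2.5-7b-q4_k_m.gguf \
  -p "Hello, how are you?"
```

`llama-server` accepts the same `--hf-repo`/`--hf-file` flags if an OpenAI-compatible HTTP endpoint is preferred over the interactive CLI.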