fuzzy-mittenz committed · Commit 00dd64d · verified · 1 Parent(s): 96c04ab

Update README.md

Files changed (1): README.md (+3 -2)
README.md CHANGED
@@ -7,9 +7,10 @@ language:
  - en
  ---

- # IntelligentEstate/Sakura_Warding-Qw2.5-7B-Q4_K_M-GGUF
+ # IntelligentEstate/Sakura_Warding_H0.5-Qw2.5-7B-Q4_K_M-GGUF
+ *Great all-around functionality*
  This model was converted to GGUF format from [`newsbang/Homer-v0.5-Qwen2.5-7B`](https://huggingface.co/newsbang/Homer-v0.5-Qwen2.5-7B) using llama.cpp.
- Refer to the [original model card](https://huggingface.co/newsbang/Homer-v0.5-Qwen2.5-7B) for more details on the model. Took a few Quantizations to get everything perfect.
+ This model works well for code and most other tasks. Refer to the [original model card](https://huggingface.co/newsbang/Homer-v0.5-Qwen2.5-7B) for more details on the model. It took a few quantizations to get everything right.
  ---
  Model named for personal system use; after multiple quants this turned out to be the most functional for me.
  ---
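Since the card describes a Q4_K_M GGUF conversion made with llama.cpp, a minimal usage sketch may be helpful. This is an illustration only, assuming the quantized file has already been downloaded from the repo and using the third-party `llama-cpp-python` bindings; the local file name and generation parameters below are placeholders, not taken from the card.

```python
# Minimal sketch: load the Q4_K_M GGUF with llama-cpp-python and run one chat turn.
# Assumes: pip install llama-cpp-python, and the GGUF file downloaded locally
# (the path below is a hypothetical file name).
from llama_cpp import Llama

llm = Llama(
    model_path="./sakura_warding_h0.5-qw2.5-7b-q4_k_m.gguf",  # placeholder path
    n_ctx=4096,       # context window; lower this if memory is tight
    n_gpu_layers=-1,  # offload all layers to GPU if available, otherwise use 0
)

# Qwen2.5-based models ship a chat template in the GGUF metadata;
# create_chat_completion applies it automatically.
result = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Write a Python function that reverses a string."}],
    max_tokens=256,
)
print(result["choices"][0]["message"]["content"])
```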