Commit 85b3ef2 (verified) by fuzzy-mittenz · Parent: bf6dc35

Update README.md



Files changed (1)
  1. README.md +8 -1
README.md CHANGED
@@ -10,7 +10,14 @@ datasets:
  - IntelligentEstate/The_Key
  ---
 
- # fuzzy-mittenz/3Blarenegv3-ECE-PRYMMAL-Martial-Q6_K-GGUF
+ # IntelligentEstate/Prymmal-From_The_Ashes-Q6_k-GGUF
+
+ ## The best local model out for CPU, hands down
+
+ Brought back from the verge of a crazy voodoo frankenmerge with QAT/TT* Imatrix vector smoothing. After thousands of failed models and hundreds of training runs, this is the new frontier.
+
+ ![pheonix rising.png](https://cdn-uploads.huggingface.co/production/uploads/6593502ca2607099284523db/W-9YhmI7O7H8b-BezDD3N.png)
+
  This model was converted to GGUF format from [`brgx53/3Blarenegv3-ECE-PRYMMAL-Martial`](https://huggingface.co/brgx53/3Blarenegv3-ECE-PRYMMAL-Martial) using llama.cpp.
 
  ## Use with llama.cpp
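
The "Use with llama.cpp" section the diff adds could be filled in along these lines — a minimal sketch using llama.cpp's `llama-cli` with its `--hf-repo`/`--hf-file` options. The repo path and GGUF filename below are assumptions inferred from the new README title, not confirmed by the diff; check the actual repo's file listing for the exact quant filename.

```shell
# Install llama.cpp (Homebrew shown; building from source works too)
brew install llama.cpp

# Run the quantized model straight from the Hugging Face repo.
# NOTE: --hf-repo and --hf-file values are guesses based on the README
# title; substitute the real GGUF filename from the repo's Files tab.
llama-cli --hf-repo IntelligentEstate/Prymmal-From_The_Ashes-Q6_k-GGUF \
  --hf-file prymmal-from_the_ashes-q6_k.gguf \
  -p "Hello, who are you?"
```

`llama-cli` downloads and caches the GGUF on first run, so subsequent invocations start immediately; `llama-server` accepts the same `--hf-repo`/`--hf-file` flags if an OpenAI-compatible HTTP endpoint is preferred over interactive chat.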