fuzzy-mittenz committed
Commit 13ea666
1 Parent(s): 28cd77b
Update README.md
README.md CHANGED
@@ -34,5 +34,5 @@ Jinja templates should be fixed in GPT4ALL for Ollama use standard Qwen template
## My Ideal settings
Context length 4096, Max Length 8192, Batch 192, temp .6-.9, Top-K 60, Top-P .5 -.6

-# IntelligentEstate/
+# IntelligentEstate/OLM_Warding-JMeloy-Mittens-Qwn-Q4_NL.GGUF
This model was converted to GGUF format from [`jeffmeloy/Qwen2.5-7B-olm-v1.0`](https://huggingface.co/jeffmeloy/Qwen2.5-7B-olm-v1.0)
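
For anyone loading the converted GGUF locally, the sketch below shows one way to apply the suggested settings with llama-cpp-python. The model filename and prompt are placeholders (use your own local download path), and the temperature and Top-P values are single picks from within the suggested ranges.

```python
# Minimal sketch: run the GGUF with the suggested sampling settings.
from llama_cpp import Llama

llm = Llama(
    model_path="OLM_Warding-JMeloy-Mittens-Qwn-Q4_NL.gguf",  # placeholder: path to your local GGUF file
    n_ctx=4096,   # "Context length 4096"
    n_batch=192,  # "Batch 192"
)

out = llm(
    "Write a short haiku about autumn.",  # placeholder prompt
    max_tokens=8192,   # "Max Length 8192"; generation stops earlier once the context window fills
    temperature=0.7,   # a value inside the suggested .6-.9 range
    top_k=60,          # "Top-K 60"
    top_p=0.5,         # a value inside the suggested .5-.6 range
)
print(out["choices"][0]["text"])
```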