Update README.md
# We are very proud to announce PHI-4 CoT, but you can just call it o1 mini 😉

Please check the examples we provided: https://huggingface.co/Pinkstack/PARM-V2-phi-4-16k-CoT-o1-gguf#%F0%9F%A7%80-examples
Unlike previous models we've uploaded, this one is the best we've published! It answers in two steps: Reasoning -> Final answer, like o1 mini and other similar reasoning AI models. This model is our new flagship. Please note that this is an experimental CoT model; if you run into any issues, report them! A system prompt is very important, but the model will answer in two steps (Reasoning -> Final answer) regardless.
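As a minimal sketch of what running a quant with a system prompt might look like locally (using llama-cpp-python; the `.gguf` filename and the system prompt text below are placeholders, not the official ones from this repo, so swap in the file you downloaded and the prompt from the examples linked above):

```python
# Minimal sketch, assuming llama-cpp-python is installed (pip install llama-cpp-python).
# The .gguf filename and the system prompt are placeholders, not the official ones
# from this repo -- use the file you actually downloaded and the prompt from the
# examples linked above.
from llama_cpp import Llama

llm = Llama(
    model_path="parm-v2-phi-4-16k-cot.Q4_K_M.gguf",  # placeholder filename
    n_ctx=16384,      # the model card advertises a 16k context window
    n_gpu_layers=-1,  # offload all layers to the GPU if one is available
)

response = llm.create_chat_completion(
    messages=[
        # Placeholder system prompt; the model is meant to reason first, then answer.
        {"role": "system", "content": "Think through the problem step by step, then give a final answer."},
        {"role": "user", "content": "How many prime numbers are there below 30?"},
    ],
    max_tokens=1024,
)
print(response["choices"][0]["message"]["content"])  # expected shape: Reasoning -> Final answer
```

Whatever prompt you use, the output should follow the two-step Reasoning -> Final answer pattern described above.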
# 🧀 Which quant is right for you? (all tested!)
- ***Q3:*** This quant should be used on most high-end devices like RTX 2080 Ti's. Responses are very high quality, but it is slightly slower than Q4. (Runs at ~1 token per second or less on a Samsung Z Fold 5 smartphone.)
- ***Q4:*** This quant should be used on high-end modern devices like RTX 3080's, or any GPU, TPU, or CPU that is powerful enough and has at least 15 GB of available memory. (We personally use it on servers and high-end computers.) Recommended; see the sketch after this list for fetching and loading a quant.
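Once you have picked a quant, here is a hedged sketch of fetching it from the Hub and loading it; the exact `.gguf` filename below is an assumption, so check the repo's file list for the real Q3/Q4 names first:

```python
# Minimal sketch, assuming huggingface_hub and llama-cpp-python are installed.
# The filename below is a guess at the Q4 file -- check the repo's file list for
# the actual Q3/Q4 .gguf names before running.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

model_path = hf_hub_download(
    repo_id="Pinkstack/PARM-V2-phi-4-16k-CoT-o1-gguf",
    filename="parm-v2-phi-4-16k-cot.Q4_K_M.gguf",  # placeholder: pick Q3 or Q4 per the list above
)

# Q4 wants roughly 15 GB of free memory per the note above; fall back to Q3 on smaller devices.
llm = Llama(model_path=model_path, n_ctx=16384)
```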