MarsupialAI committed
Commit • baf6b5e
1 Parent(s): 732a5d3
Update README.md
README.md CHANGED
@@ -4,7 +4,7 @@ license_name: yi-other
 ---
 # Yeet 51b 200k
 
-This model is a rotating-stack merge of three Yi 34b 200k models in a 51b (90 layer) configuration. See My reasoning behind this merge was twofold: I'd never seen a stacked merge made from 34b models, and I thought that maybe this could give near-70b performance but with a much larger context window, but still fitting within 48GB of VRAM. I think the results are quite good. The model
+This model is a rotating-stack merge of three Yi 34b 200k models in a 51b (90 layer) configuration. My reasoning behind this merge was twofold: I'd never seen a stacked merge made from 34b models, and I thought it might deliver near-70b performance with a much larger context window while still fitting within 48GB of VRAM. I think the results are quite good. The model performs on par with many 70b models at RP, chat, and storywriting. At Q4_K_S it will fit into a pair of 24GB GPUs with 32k context. Coherency at 32k is excellent, and will probably remain very good well beyond that thanks to the 200k base training.
 
 The gotcha here is speed. While it inferences as you'd expect for the model size, it's much slower than a similarly-sized 8x7b MoE. And while I personally find the output of this model to outperform any Mixtral finetune I've seen so far, those finetunes are getting better all the time, and this really is achingly slow with a lot of context. I'm getting less than half a token per second on a pair of P40s with a full 32k prompt.
 
@@ -15,7 +15,7 @@ Component models for the rotating stack are
 - brucethemoose/Yi-34B-200K-DARE-megamerge-v8
 - taozi555/RpBird-Yi-34B-200k
 
-This model is uncensored and
+This model is uncensored and capable of generating objectionable material. However, it is not an explicitly-NSFW model, and in my experience it has never "gone rogue" and tried to insert NSFW content into SFW prompts. As with any LLM, no factual claims made by the model should be taken at face value. You know that boilerplate safety disclaimer that most professional models have? Assume this has it too. This model is for entertainment purposes only.
 
 FP16 and Q4_K_S GGUFs are located here: https://huggingface.co/MarsupialAI/Yeet_51b_200k_GGUF_Q4KS_FP16
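As a rough sanity check on the card's claim that the Q4_K_S quant fits a pair of 24GB GPUs with 32k context, here is a back-of-the-envelope VRAM estimate. The architecture figures (8 KV heads of dimension 128 per layer, an FP16 KV cache, and roughly 4.5 bits per weight for Q4_K_S) are assumptions based on the Yi-34B-200K family, not numbers stated in the card:

```python
# Back-of-the-envelope VRAM estimate for the Q4_K_S quant at 32k context.
# Assumed values (from Yi-34B-200K configs, not from this card) are marked below.
n_layers = 90            # stacked model depth stated in the card
hidden_kv = 8 * 128      # assumed: 8 KV heads x 128 head dim (GQA), per K and per V
params = 51e9            # ~51b parameters
bits_per_weight = 4.5    # assumed rough average for Q4_K_S
ctx = 32 * 1024          # 32k context
kv_bytes_per_elem = 2    # assumed FP16 KV cache

weights_gb = params * bits_per_weight / 8 / 1024**3
# K and V caches: 2 tensors per layer, each [ctx, hidden_kv] elements
kv_cache_gb = 2 * n_layers * ctx * hidden_kv * kv_bytes_per_elem / 1024**3

print(f"weights  ~{weights_gb:.1f} GB")   # ~26.7 GB
print(f"KV cache ~{kv_cache_gb:.1f} GB")  # ~11.3 GB
print(f"total    ~{weights_gb + kv_cache_gb:.1f} GB of 48 GB")
```

That leaves some headroom on 2x24GB for compute buffers, which is consistent with the card's report that the Q4_K_S quant runs at the full 32k context.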
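And a minimal sketch of loading the Q4_K_S GGUF across two GPUs with llama-cpp-python. The filename and the even tensor split are placeholders, not details from the card; check the linked repo for the actual file name(s), since large quants are sometimes split into multiple parts:

```python
# Minimal sketch: run the Q4_K_S GGUF across two 24GB GPUs with llama-cpp-python.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

model_path = hf_hub_download(
    repo_id="MarsupialAI/Yeet_51b_200k_GGUF_Q4KS_FP16",
    filename="Yeet_51b_200k_Q4_K_S.gguf",  # placeholder name, check the repo
)

llm = Llama(
    model_path=model_path,
    n_ctx=32768,              # the card reports good coherency at 32k
    n_gpu_layers=-1,          # offload all layers to GPU
    tensor_split=[0.5, 0.5],  # split roughly evenly across the two GPUs
)

print(llm("Once upon a time,", max_tokens=128)["choices"][0]["text"])
```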
|