![image/png](https://cdn-uploads.huggingface.co/production/uploads/65b19c1b098c85365af5a83e/mnKtH1BMVHFAHZVEp3rQv.png)

[GGUF Quants](https://huggingface.co/mradermacher/l3-badger-mushroom-4x8b-i1-GGUF)

# Badger Mushroom 4x8b

I've been really impressed with how well these frankenmoe models quantize compared to the base Llama 8B, while offering far better speed than the 70B. 8x8b seemed a bit unnecessary for how much additional value it brought, so I dialed it back to a 4x8b version. This model feels pretty good out of the gate, which, considering that I used a non-standard merge, is a bit surprising.