MoE Girl 800MA 3BT
A roleplay-centric finetune of IBM's Granite 3.0 3B-A800M. This release is a LoRA finetune trained locally, whereas the other MoE Girl models were full finetunes (FFT); while this means less uptake of the training data, it should also mean less degradation of Granite's core abilities, making it potentially easier to use for general-purpose tasks.
Disclaimer
PLEASE do not expect godliness out of this; it's a model with 800 million active parameters. Expect something more akin to GPT-3 (the original, not GPT-3.5). (Furthermore, this version is by a less experienced tuner; it's my first finetune that actually has decent-looking graphs, and I don't really know what I'm doing yet!)
Quants
GGUFs are available from mradermacher (thanks man). Note that Granite quants have been reported to be unstable; try running the FP16 weights if the quantized model outputs straight gibberish.
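As a rough sketch of running one of the GGUF quants locally with llama-cpp-python (the file name and sampling settings below are placeholders, not values from this card; fall back to the FP16 weights if a quant misbehaves):

```python
# Minimal sketch: chat with a local GGUF quant via llama-cpp-python.
# The model path is a placeholder -- point it at whichever quant you downloaded.
from llama_cpp import Llama

llm = Llama(
    model_path="MoE-Girl-800MA-3BT.Q8_0.gguf",  # hypothetical filename
    n_ctx=4096,                                 # context window; adjust to taste
    chat_format="chatml",                       # the card says to use ChatML
)

out = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a helpful assistant who talks like a pirate."},
        {"role": "user", "content": "Hello there!"},
    ],
    max_tokens=128,
    temperature=0.8,
)
print(out["choices"][0]["message"]["content"])
```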
Prompting
Use ChatML.
<|im_start|>system
You are a helpful assistant who talks like a pirate.<|im_end|>
<|im_start|>user
Hello there!<|im_end|>
<|im_start|>assistant
Yarr harr harr, me matey!<|im_end|>
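For scripted use with Transformers, you can assemble the ChatML prompt shown above by hand; a minimal sketch, assuming the repo id matches the model name (it may differ in practice):

```python
# Minimal sketch: build the ChatML prompt from the card manually and generate.
# The repo id below is an assumption based on the model name, not taken from this card.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "allura-org/MoE-Girl-800MA-3BT"  # hypothetical repo id
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, device_map="auto")

prompt = (
    "<|im_start|>system\n"
    "You are a helpful assistant who talks like a pirate.<|im_end|>\n"
    "<|im_start|>user\n"
    "Hello there!<|im_end|>\n"
    "<|im_start|>assistant\n"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.8)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```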
Thanks
Special thanks to the members of Allura for testing and emotional support, as well as the creators of all the datasets that were used in the Special Sauce used to train this model. I love you all <3 - Fizz
Thanks to Fizz for her work on the MoE Girl series, Auri for her counsel, and all of Allura for being great friends and supporting my learning process. - inflatebot