MaidenlessNoMore-7B-GGUF was my first attempt at merging an LLM. I decided to combine one of the first models I really enjoyed, which not many people know of, https://huggingface.co/cookinai/Valkyrie-V1, with my other favorite model, which has been my fallback for a long time: https://huggingface.co/SanjiWatsuki/Kunoichi-7B
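The card doesn't say which merge method or settings were used, but as a rough illustration, a two-model merge like this is commonly done with mergekit. The config below is a hypothetical sketch (SLERP method, 32-layer 7B Mistral-style layer range, t=0.5, bfloat16 are all assumptions, not the actual recipe):

```yaml
# Hypothetical mergekit config -- NOT the actual recipe used for this model
slices:
  - sources:
      - model: cookinai/Valkyrie-V1
        layer_range: [0, 32]
      - model: SanjiWatsuki/Kunoichi-7B
        layer_range: [0, 32]
merge_method: slerp
base_model: SanjiWatsuki/Kunoichi-7B
parameters:
  t: 0.5          # 0.0 = all base model, 1.0 = all Valkyrie-V1
dtype: bfloat16
```

With mergekit installed, a config like this is typically run as `mergekit-yaml config.yml ./output-model`.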

This was more of an experiment than anything else. Hopefully it will lead to more interesting merges, and who knows what else, in the future. We have to start somewhere, right?

The Alpaca or Alpaca-roleplay prompt format is recommended.
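For anyone unfamiliar with it, the standard Alpaca format wraps the request in `### Instruction:` / `### Response:` markers (with an optional `### Input:` section). A minimal helper to build such a prompt might look like this (the function name and preamble wording follow the common Alpaca template, not anything specific to this model):

```python
def alpaca_prompt(instruction: str, user_input: str = "") -> str:
    """Build a prompt in the common Alpaca instruction format."""
    prompt = (
        "Below is an instruction that describes a task. "
        "Write a response that appropriately completes the request.\n\n"
        f"### Instruction:\n{instruction}\n\n"
    )
    if user_input:
        # Optional context section used by the Alpaca format
        prompt += f"### Input:\n{user_input}\n\n"
    # The model continues generating after this marker
    prompt += "### Response:\n"
    return prompt

print(alpaca_prompt("Introduce your character in two sentences."))
```

Roleplay variants of this template usually just swap the preamble for a persona/scenario description while keeping the same section markers.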

GlobalMeltdown/MaidenlessNoMore-7B-GGUF

This model was converted to GGUF format from GlobalMeltdown/MaidenlessNoMore-7B using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details on the model.
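A GGUF file like this can be run locally with llama.cpp. The commands below are one common route (the `--hf-file` name is hypothetical, substitute the actual quant file listed in this repo; `brew` is just one install option):

```shell
# Install llama.cpp (one option; see the llama.cpp repo for alternatives)
brew install llama.cpp

# Run the model straight from the Hugging Face repo.
# The --hf-file value below is a placeholder -- use the real
# filename of the 4-bit quant in this repo.
llama-cli --hf-repo GlobalMeltdown/MaidenlessNoMore-7B-GGUF \
  --hf-file <quant-file>.gguf \
  -p "### Instruction:\nIntroduce yourself.\n\n### Response:\n"
```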

GGUF
- Model size: 7.24B params
- Architecture: llama
- Quantization: 4-bit