Umbra-v3-MoE-4x11b

Creator: SteelSkull

About Umbra-v3-MoE-4x11b: A Mixture of Experts model designed for general assistance with a special knack for storytelling and RP/ERP

Integrates models from notable sources for enhanced performance in diverse tasks.
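For anyone who wants to kick the tires, here is a minimal sketch of loading the model with the `transformers` library, using the `SteelStorage/Umbra-v3-MoE-4x11b` repo id listed on this card. The Alpaca-style prompt template in `build_prompt` is an assumption (the card does not document the expected format), so double-check it before relying on the output.

```python
# Minimal local-inference sketch for Umbra-v3 (assumes `pip install transformers torch`).
# NOTE: the Alpaca-style template below is an ASSUMPTION, not a documented format.

def build_prompt(instruction: str) -> str:
    """Wrap a user instruction in an assumed Alpaca-style template."""
    return (
        "### Instruction:\n"
        f"{instruction}\n\n"
        "### Response:\n"
    )

def generate(instruction: str, max_new_tokens: int = 256) -> str:
    """Download the weights (~36B params, needs a large GPU) and generate a reply."""
    from transformers import AutoModelForCausalLM, AutoTokenizer  # lazy import

    repo = "SteelStorage/Umbra-v3-MoE-4x11b"  # repo id from this card
    tok = AutoTokenizer.from_pretrained(repo)
    model = AutoModelForCausalLM.from_pretrained(
        repo, device_map="auto", torch_dtype="auto"
    )
    inputs = tok(build_prompt(instruction), return_tensors="pt").to(model.device)
    out = model.generate(**inputs, max_new_tokens=max_new_tokens)
    # Decode only the newly generated tokens, skipping the prompt.
    return tok.decode(out[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True)
```

For GGUF or EXL2 builds, use the quantized repos credited below instead of loading the full BF16 weights.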

Source Models:

Update-Log:

The [Umbra Series] keeps rolling out from the [Lumosia Series] garage, aiming to be your digital Alfred with a side of Shakespeare for those RP/ERP nights.

What's Fresh in v3?

Didn't reinvent the wheel, just slapped on some fancier rims. Upgraded the models and tweaked the prompts a bit. Now Umbra's not just a general-use LLM; it's also focused on spinning stories and "Stories".

Negative Prompt Minimalism

Put the prompts on a bit of a diet-and-gym routine: more beef on the positives, trimming down the negatives as usual, with a dash of my midnight musings.

Still Guessing, Aren't We?

Just so we're clear, "v3" is not the messiah of updates. It's another experiment in the saga.

Dive into Umbra v3 and toss your two cents my way. Your feedback is the caffeine in my code marathon.

Quantized versions:

EXL2-Rpcal by AzureBlack

GGUF by mradermacher

Open LLM Leaderboard Evaluation Results

Detailed results can be found here

| Metric | Value |
|---|---|
| Avg. | 73.09 |
| AI2 Reasoning Challenge (25-shot) | 68.43 |
| HellaSwag (10-shot) | 87.83 |
| MMLU (5-shot) | 65.99 |
| TruthfulQA (0-shot) | 69.30 |
| Winogrande (5-shot) | 83.90 |
| GSM8k (5-shot) | 63.08 |
Model size: 36.1B params (Safetensors, BF16)

Model repository: SteelStorage/Umbra-v3-MoE-4x11b
