---
base_model:
- EVA-UNIT-01/EVA-LLaMA-3.33-70B-v0.0
- Sao10K/L3.1-70B-Hanami-x1
- Sao10K/L3.3-70B-Euryale-v2.3
- LatitudeGames/Wayfarer-Large-70B-Llama-3.3
- TheDrummer/Anubis-70B-v1
- TheSkullery/L3.1x3.3-Hydroblated-R1-70B-v4.4
- SicariusSicariiStuff/Negative_LLAMA_70B
- Sao10K/70B-L3.3-Cirrus-x1
library_name: transformers
---
L3.3-Electra-R1-70b is the newest release in the Unnamed series and the 6th iteration, shaped by user feedback.
Built on a custom DeepSeek R1 Distill base (TheSkullery/L3.1x3.3-Hydroblated-R1-70B-v4.4), Electra-R1 integrates specialized components through the SCE merge method. The merge processes weights in float32 and writes the final model in bfloat16 for optimized performance.
Electra-R1 serves as the newest gold standard and baseline. User feedback consistently highlights its superior intelligence, coherence, and unique ability to provide deep character insights. With proper prompting, the model demonstrates advanced reasoning capabilities and unprompted exploration of character inner thoughts and motivations.
The model utilizes the custom Hydroblated-R1 base, created for stability and enhanced reasoning. The SCE merge method's settings are precisely tuned based on extensive community feedback (spanning more than 10 different models, from Nevoria to Cu-Mai), ensuring optimal component integration while maintaining coherence and reliability. This foundation establishes Electra-R1 as the benchmark upon which its variant models build and expand.
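As a rough illustration of the recipe described above, the sketch below writes a mergekit-style SCE config and runs the standard `mergekit-yaml` CLI. The model list, base model, and dtypes follow this card; the `select_topk` value, file name, and output path are illustrative assumptions, not the released settings.

```python
# Sketch of the SCE merge described above (requires `pip install mergekit`).
# select_topk, file name, and output path are placeholders, not the real tuning.
import subprocess
import textwrap

config = textwrap.dedent("""\
    merge_method: sce
    base_model: TheSkullery/L3.1x3.3-Hydroblated-R1-70B-v4.4
    models:
      - model: EVA-UNIT-01/EVA-LLaMA-3.33-70B-v0.0
      - model: Sao10K/L3.1-70B-Hanami-x1
      - model: Sao10K/L3.3-70B-Euryale-v2.3
      - model: LatitudeGames/Wayfarer-Large-70B-Llama-3.3
      - model: TheDrummer/Anubis-70B-v1
      - model: SicariusSicariiStuff/Negative_LLAMA_70B
      - model: Sao10K/70B-L3.3-Cirrus-x1
    parameters:
      select_topk: 0.15   # placeholder value, not the released setting
    dtype: float32        # weights are processed in float32 during the merge
    out_dtype: bfloat16   # merged weights are written out in bfloat16
    """)

with open("electra-r1-sce.yaml", "w") as f:
    f.write(config)

# mergekit's standard CLI entry point: config in, merged model directory out.
subprocess.run(["mergekit-yaml", "electra-r1-sce.yaml", "./L3.3-Electra-R1-70b"], check=True)
```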
Example `<think>` prefills (used at the start of the model's reply to kick off its reasoning):
- `<think> OK, as an objective, detached narrative analyst, let's think this through carefully:`
- `<think> OK, the user is asking`
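A minimal transformers sketch of how such a prefill can be appended after the chat template, so the model continues its reply inside the reasoning block. The repo id, system prompt, and sampling settings are illustrative assumptions.

```python
# Sketch: prefill the assistant turn with a <think> opener so generation
# continues inside the reasoning block. Repo id and settings are assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "Steelskull/L3.3-Electra-R1-70b"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, torch_dtype=torch.bfloat16, device_map="auto")

messages = [
    {"role": "system", "content": "You are a detached, objective narrative analyst."},
    {"role": "user", "content": "What is the captain feeling as the storm closes in?"},
]

# Build the prompt, then append the reasoning prefill before generating.
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
prompt += "<think> OK, as an objective, detached narrative analyst, let's think this through carefully:"

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=512, do_sample=True, temperature=0.8)
print(tokenizer.decode(output[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```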
I wish I could credit everyone individually, but I'm pretty sure the list would be as long as the card!