---
base_model:
- EVA-UNIT-01/EVA-LLaMA-3.33-70B-v0.0
- Sao10K/L3.1-70B-Hanami-x1
- Sao10K/L3.3-70B-Euryale-v2.3
- LatitudeGames/Wayfarer-Large-70B-Llama-3.3
- TheDrummer/Anubis-70B-v1
- TheSkullery/L3.1x3.3-Hydroblated-R1-70B-v4.4
- SicariusSicariiStuff/Negative_LLAMA_70B
- Sao10K/70B-L3.3-Cirrus-x1
library_name: transformers
---
L3.3-Electra-R1-70b

⚡ Top Sponsors
Model Information
L3.3-Electra-R1-70b | v0.6.A
Model Composition
- EVA-UNIT-01/EVA-LLaMA-3.33-70B-v0.0: Core capabilities
- LatitudeGames/Wayfarer-Large-70B-Llama-3.3: Enhanced storytelling and RP
- Sao10K/L3.3-70B-Euryale-v2.3: Improved all-rounder capabilities
- Sao10K/70B-L3.3-Cirrus-x1: Improved coherence
- Sao10K/L3.1-70B-Hanami-x1: Balanced responses
- TheDrummer/Anubis-70B-v1: Enhanced detail
- SicariusSicariiStuff/Negative_LLAMA_70B: Reduced bias (base)
- TheDrummer/Fallen-Llama-3.3-R1-70B-v1: Reduced bias (base)
Model Series Overview
L3.3-Electra-R1-70b is the newest release in the Unnamed series and its 6th iteration, refined based on user feedback.
Technical Architecture
Built on a custom DeepSeek R1 Distill base (TheSkullery/L3.1x3.3-Hydroblated-R1-70B-v4.4), Electra-R1 integrates its specialized components through the SCE merge method. The merge is computed in float32, with a bfloat16 output dtype for optimized performance.
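For readers who want to reproduce or adapt a merge like this, the architecture above maps naturally onto a mergekit SCE recipe. The sketch below is an illustrative assumption: the model list, base, and dtypes come from this card, but the `select_topk` value is a placeholder, not the actual tuned setting.

```yaml
# Hypothetical mergekit SCE recipe for an Electra-R1-style merge.
# select_topk is a placeholder; the real tuned value is not published here.
merge_method: sce
base_model: TheSkullery/L3.1x3.3-Hydroblated-R1-70B-v4.4
models:
  - model: EVA-UNIT-01/EVA-LLaMA-3.33-70B-v0.0
  - model: LatitudeGames/Wayfarer-Large-70B-Llama-3.3
  - model: Sao10K/L3.3-70B-Euryale-v2.3
  - model: Sao10K/70B-L3.3-Cirrus-x1
  - model: Sao10K/L3.1-70B-Hanami-x1
  - model: TheDrummer/Anubis-70B-v1
  - model: SicariusSicariiStuff/Negative_LLAMA_70B
  - model: TheDrummer/Fallen-Llama-3.3-R1-70B-v1
parameters:
  select_topk: 0.15  # placeholder value
dtype: float32       # compute in float32, as described above
out_dtype: bfloat16  # write bfloat16 weights
```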
Core Capabilities
Electra-R1 serves as the series' newest gold standard and baseline. User feedback consistently highlights its superior intelligence, coherence, and unique ability to provide deep character insights. With proper prompting, the model demonstrates advanced reasoning and unprompted exploration of characters' inner thoughts and motivations.
Base Architecture
The model utilizes the custom Hydroblated-R1 base, created for stability and enhanced reasoning. The SCE merge method's settings were precisely tuned based on extensive community feedback gathered across more than 10 different models, from Nevoria to Cu-Mai, ensuring optimal component integration while maintaining coherence and reliability. This foundation establishes Electra-R1 as the benchmark upon which its variant models build and expand.
Recommended Sampler Settings
⚡ By: @Geechan
Good Starting Templates & Prompts
⚡ ST REASONING CONFIGURATION:
Start Reply With (either):
- '<think> OK, as an objective, detached narrative analyst, let's think this through carefully:'
- '<think> OK, the user is asking'
Reasoning Formatting (no spaces):
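If you are consuming this reasoning format outside SillyTavern, the usual approach is to strip everything between `<think>` and `</think>` before displaying the reply. The helper below is a minimal sketch of that idea (the function name is my own, not part of any frontend's API); it also tolerates truncated generations where the closing tag never arrives.

```python
import re

# Matches a <think> block; the alternation with \Z handles truncated
# generations that were cut off before emitting </think>.
THINK_RE = re.compile(r"<think>.*?(?:</think>|\Z)", re.DOTALL)

def strip_reasoning(text: str) -> str:
    """Remove the model's <think> block, returning only the visible reply."""
    return THINK_RE.sub("", text, count=1).strip()

reply = "<think> OK, the user is asking about the weather. </think> It looks sunny today."
print(strip_reasoning(reply))  # -> It looks sunny today.
```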
Support & Community:
Special Thanks
- @Geechan for feedback and sampler settings
- @Konnect for their feedback and templates
- @Kistara for their feedback and help with the model mascot design on past models
- @Thana Alt for their feedback
- @Lightning_missile for their feedback
- The Arli community for feedback and testing
- The BeaverAI community for feedback and testing
I wish I could add everyone, but I'm pretty sure the list would be as long as the card!