---
base_model:
- EVA-UNIT-01/EVA-LLaMA-3.33-70B-v0.0
- Sao10K/L3.1-70B-Hanami-x1
- Sao10K/L3.3-70B-Euryale-v2.3
- LatitudeGames/Wayfarer-Large-70B-Llama-3.3
- TheDrummer/Anubis-70B-v1
- TheSkullery/L3.1x3.3-Hydroblated-R1-70B-v4.4
- SicariusSicariiStuff/Negative_LLAMA_70B
- Sao10K/70B-L3.3-Cirrus-x1
library_name: transformers
---

# L3.3-Electra-R1-70b

*Electra model mascot*

## Model Information

**L3.3-Electra-R1-70b | v0.6.A**

- L3.3 = Llama 3.3
- SCE merge
- R1 = DeepSeek R1
- 70b parameters
- v0.6.A

## Model Composition

Merged via the SCE method from the following components:

- EVA-UNIT-01/EVA-LLaMA-3.33-70B-v0.0
- Sao10K/L3.1-70B-Hanami-x1
- Sao10K/L3.3-70B-Euryale-v2.3
- Sao10K/70B-L3.3-Cirrus-x1
- LatitudeGames/Wayfarer-Large-70B-Llama-3.3
- TheDrummer/Anubis-70B-v1
- SicariusSicariiStuff/Negative_LLAMA_70B
- TheSkullery/L3.1x3.3-Hydroblated-R1-70B-v4.4 (base)

## Model Series Overview

L3.3-Electra-R1-70b is the newest release in the Unnamed series and the sixth iteration, developed in response to user feedback.

## Technical Architecture

Built on a custom DeepSeek R1 Distill base (TheSkullery/L3.1x3.3-Hydroblated-R1-70B-v4.4), Electra-R1 integrates specialized components through the SCE merge method. The merge is computed in float32, with bfloat16 as the output dtype for optimized performance.
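As a minimal loading sketch with the `transformers` library (the repo id below is an assumption; substitute your local path or the actual Hub name):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical repo id; replace with the actual Hub name or a local path.
model_id = "Steelskull/L3.3-Electra-R1-70b"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the merge's bfloat16 output dtype
    device_map="auto",           # shard the 70B weights across available GPUs
)
```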

## Core Capabilities

Electra-R1 serves as the series' newest gold standard and baseline. User feedback consistently highlights its superior intelligence, coherence, and unique ability to provide deep character insights. With proper prompting, the model demonstrates advanced reasoning capabilities and will explore characters' inner thoughts and motivations unprompted.

## Base Architecture

The model utilizes the custom Hydroblated-R1 base, created for stability and enhanced reasoning. The SCE merge method's settings are precisely tuned based on extensive community feedback gathered across more than 10 different models, from Nevoria to Cu-Mai, ensuring optimal component integration while maintaining model coherence and reliability. This foundation establishes Electra-R1 as the benchmark upon which its variant models build and expand.

## Recommended Sampler Settings

⚡ By: @Geechan

- Static Temperature: 1.0
- Dynamic Temperature (alternative): 0.8 - 1.05
- Min P: 0.025 - 0.03
- DRY:
  - Multiplier: 0.8
  - Base: 1.74
  - Length: 4 - 6
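For illustration only, here is a sketch of applying these settings through a llama.cpp-style server's `/completion` endpoint; the parameter names (`min_p`, `dry_multiplier`, `dry_base`, `dry_allowed_length`) and the endpoint URL are assumptions that vary by backend:

```python
import requests

# Hypothetical local endpoint; parameter names follow the llama.cpp
# server's /completion API and may differ on other backends.
resp = requests.post(
    "http://127.0.0.1:8080/completion",
    json={
        "prompt": "Once upon a time",
        "temperature": 1.0,       # static temperature
        "min_p": 0.03,            # Min P from the 0.025-0.03 range
        "dry_multiplier": 0.8,    # DRY repetition penalty strength
        "dry_base": 1.74,
        "dry_allowed_length": 4,  # DRY length from the 4-6 range
        "n_predict": 256,
    },
    timeout=600,
)
print(resp.json()["content"])
```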

## Good Starting Templates & Prompts

**LeCeption v2** by @Steel: a revamped XML version of Llam@ception 1.5.2 with stepped thinking and reasoning added

### ⚡ ST Reasoning Configuration

Start Reply With (either):

- `<think> OK, as an objective, detached narrative analyst, let's think this through carefully:`
- `<think> OK, the user is asking`

Reasoning Formatting (no spaces):

- Prefix: `<think>`
- Suffix: `</think>`
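For reference, a minimal sketch (plain Python, no external dependencies) of splitting a reply into its reasoning block and final answer using the prefix/suffix above:

```python
import re

# Matches the reasoning block delimited by the configured prefix/suffix.
THINK_RE = re.compile(r"<think>(.*?)</think>", re.DOTALL)

def split_reasoning(reply: str) -> tuple[str, str]:
    """Return (reasoning, answer) from a model reply.

    If no closing </think> suffix is found, the whole reply
    is treated as the answer.
    """
    match = THINK_RE.search(reply)
    if not match:
        return "", reply.strip()
    reasoning = match.group(1).strip()
    answer = reply[match.end():].strip()
    return reasoning, answer

reasoning, answer = split_reasoning(
    "<think> OK, the user is asking about the weather. </think> It looks sunny."
)
print(reasoning)  # OK, the user is asking about the weather.
print(answer)     # It looks sunny.
```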

## Support & Community

### Special Thanks

- @Geechan for feedback and sampler settings
- @Konnect for their feedback and templates
- @Kistara for their feedback and help with the model mascot design on past models
- @Thana Alt for their feedback
- @Lightning_missile for their feedback
- The Arli community for feedback and testing
- The BeaverAI community for feedback and testing

I wish I could list everyone, but I'm pretty sure the list would be as long as the card!