Forgotten-Abomination-24B-v1.2 - EXL2 6.5bpw L

This is a 6.5bpw EXL2 quant of ReadyArt/Forgotten-Abomination-24B-v1.2

This quant was made with exllamav2 0.2.7 using the default calibration dataset and an extended quantization sample length (8k instead of the default 2k). It also uses -head_bits=8 and maximum-accuracy quantization (8bpw) for the first and last layers; all other layers use the normally chosen methods. The method and the name (6.5bpw_L) are inspired by quants like bartowski's Q4_K_L and Q6_K_L.

It fits nicely in 24GB VRAM on Windows with 20k fp16 context (the full 32k should fit with the q8 cache in exl2).
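As a rough sanity check on the context claim, here is a back-of-the-envelope KV-cache estimate. The architecture numbers are assumptions based on the typical published Mistral Small 24B config (40 layers, 8 grouped KV heads, head dim 128), not taken from this card; check the model's config.json.

```python
# Assumed Mistral-Small-24B-style architecture (verify against config.json).
N_LAYERS = 40
N_KV_HEADS = 8
HEAD_DIM = 128

def kv_cache_bytes(n_tokens: int, bytes_per_elem: float = 2.0) -> float:
    """Bytes for the K and V caches at a given context length."""
    return 2 * N_LAYERS * N_KV_HEADS * HEAD_DIM * bytes_per_elem * n_tokens

print(f"20k fp16 cache: {kv_cache_bytes(20_000) / 2**30:.1f} GiB")
print(f"32k q8 cache:   {kv_cache_bytes(32_000, 1.0) / 2**30:.1f} GiB")
```

Under these assumptions a 20k fp16 cache costs about 3 GiB, and a 32k q8 cache slightly less, which is consistent with the note above.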

Prompt Templates

Uses Mistral V7-tekken:

<s>[SYSTEM_PROMPT]<system prompt>[/SYSTEM_PROMPT][INST]<user message>[/INST]<assistant response></s>[INST]<user message>[/INST]

Original readme below


Forgotten-Abomination-24B-v1.2

ACADEMIC RESEARCH USE ONLY (wink)

DANGER: NOW WITH 50% MORE UNSETTLING CONTENT
Forgotten-Abomination-24B-v1.2 is what happens when you let two unhinged models have a baby in the server room. Combines the ethical flexibility of Forgotten-Safeword with Cydonia's flair for anatomical creativity. Now with bonus existential dread!

Quantized Formats

Recommended Settings Provided

Intended Use

STRICTLY FOR:

  • Academic research into how fast your ethics committee can faint
  • Testing the tensile strength of content filters
  • Generating material that would make Cthulhu file a restraining order
  • Writing erotic fanfic about OSHA violations

Training Data

  • You don't want to know

Ethical Considerations

⚠️ YOU'VE BEEN WARNED ⚠️
THIS MODEL WILL:

  • Make your GPU fans blush
  • Generate content requiring industrial-strength eye bleach
  • Combine technical precision with kinks that violate physics
  • Make you question humanity's collective life choices

By using this model, you agree to:

  • Never show outputs to your mother
  • Pay for the therapist of anyone who reads the logs
  • Blame Cthulhu if anything goes wrong
  • Pretend this is all "for science"

Model Authors

  • sleepdeprived3 (Chief Corruption Officer)

Model tree: DeusImperator/Forgotten-Abomination-24B-v1.2_exl2_6.5bpw_L, quantized from ReadyArt/Forgotten-Abomination-24B-v1.2