Mistral-Small-Gutenberg-Doppel-22B - EXL2 6.8bpw
This is a 6.8bpw EXL2 quant of nbeerbower/Mistral-Small-Gutenberg-Doppel-22B.
The quant was made with exllamav2 0.2.2 using its default calibration dataset.
I tested it briefly in some random RPs (including a few with 8k+ context) and it seems to work fine.
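For anyone loading it programmatically, here is a minimal sketch using the exllamav2 Python API; the local model path, context length, and generation settings are placeholders, not settings from this card:

```python
from exllamav2 import ExLlamaV2, ExLlamaV2Config, ExLlamaV2Cache, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2DynamicGenerator

# Point this at the downloaded quant directory (placeholder path).
config = ExLlamaV2Config("Mistral-Small-Gutenberg-Doppel-22B_exl2_6.8bpw")
model = ExLlamaV2(config)

# A 16k cache leaves headroom for the 8k+ context chats mentioned above.
cache = ExLlamaV2Cache(model, max_seq_len=16384, lazy=True)
model.load_autosplit(cache)
tokenizer = ExLlamaV2Tokenizer(config)

generator = ExLlamaV2DynamicGenerator(model=model, cache=cache, tokenizer=tokenizer)
print(generator.generate(prompt="[INST] Hello! [/INST]", max_new_tokens=128, add_bos=True))
```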
Prompt Templates
Uses the Mistral v2/v3 instruct format.
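The authoritative version of that format lives in the tokenizer's chat template, so rendering it with transformers avoids hand-rolling the `[INST] ... [/INST]` spacing (loading the tokenizer from the base repo here is an assumption; the quant should ship the same template):

```python
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("mistralai/Mistral-Small-Instruct-2409")
messages = [
    {"role": "user", "content": "Describe the inn's common room."},
    {"role": "assistant", "content": "Smoke, low beams, a dozen half-heard conversations."},
    {"role": "user", "content": "Continue the scene."},
]
# Prints the exact prompt string the model expects, with a trailing [INST] turn open.
print(tok.apply_chat_template(messages, tokenize=False, add_generation_prompt=True))
```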
Original readme below
Mistral-Small-Gutenberg-Doppel-22B
mistralai/Mistral-Small-Instruct-2409 finetuned on jondurbin/gutenberg-dpo-v0.1 and nbeerbower/gutenberg2-dpo.
Method
ORPO tuned with an A40 on RunPod (plz sponsor me) for 3 epochs.
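As a rough illustration of that recipe (not the author's actual script), here is a minimal ORPO sketch with TRL's ORPOTrainer; everything except the 3 epochs and the model/dataset names is a placeholder, and it assumes both DPO sets expose the prompt/chosen/rejected columns ORPOTrainer expects:

```python
from datasets import concatenate_datasets, load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import ORPOConfig, ORPOTrainer

base = "mistralai/Mistral-Small-Instruct-2409"
model = AutoModelForCausalLM.from_pretrained(base, torch_dtype="bfloat16")
tokenizer = AutoTokenizer.from_pretrained(base)

# Assumption: both datasets share the prompt/chosen/rejected schema.
cols = ["prompt", "chosen", "rejected"]
train = concatenate_datasets([
    load_dataset("jondurbin/gutenberg-dpo-v0.1", split="train").select_columns(cols),
    load_dataset("nbeerbower/gutenberg2-dpo", split="train").select_columns(cols),
])

args = ORPOConfig(
    output_dir="gutenberg-doppel-orpo",
    num_train_epochs=3,            # from the card; the rest are guesses
    per_device_train_batch_size=1,
    gradient_accumulation_steps=8,
    learning_rate=5e-6,
    beta=0.1,                      # weight of ORPO's odds-ratio term
    max_length=2048,
    max_prompt_length=1024,
    bf16=True,
)

trainer = ORPOTrainer(model=model, args=args, train_dataset=train, tokenizer=tokenizer)
trainer.train()  # note: recent TRL renames tokenizer= to processing_class=
```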