EVA-Rombos1-Qwen2.5-32B - EXL2 4.5bpw_L
This is a 4.5bpw EXL2 quant of nbeerbower/EVA-Rombos1-Qwen2.5-32B.
This quant was made using exllamav2-0.2.7 with the default calibration dataset and an extended quantization sample length (4k tokens instead of the default 2k). It also uses 8-bit head weights (-head_bits 8) and maximum-accuracy quantization (8bpw) for the first and last layers; all other layers of the model use the normally chosen methods. Both the method and the name (4.5bpw_L) are inspired by quants like Q4_K_L and Q6_K_L made by bartowski.
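For reference, the conversion command would look roughly like the sketch below, assuming exllamav2-0.2.7's convert.py flags. Paths are placeholders, and the 8bpw treatment of the first and last layers is a manual tweak not expressed by these standard flags:

```sh
python convert.py \
    -i /path/to/EVA-Rombos1-Qwen2.5-32B \
    -o /path/to/work_dir \
    -cf /path/to/EVA-Rombos1-Qwen2.5-32B_exl2_4.5bpw_L \
    -b 4.5 \
    -hb 8 \
    -l 4096
```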
I tested it in some RPs (including ones with over 12k context) and it seems to work well. It fits nicely in 24GB VRAM on Windows with 16k fp16 context (and should fit roughly twice that with Q8 cache in exl2).
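As a rough sketch of loading the quant with a Q8 cache through the exllamav2 Python API (the model path and context length below are placeholders, and the exact API surface may differ between exllamav2 versions):

```python
from exllamav2 import (
    ExLlamaV2,
    ExLlamaV2Config,
    ExLlamaV2Cache_Q8,
    ExLlamaV2Tokenizer,
)
from exllamav2.generator import ExLlamaV2DynamicGenerator

# Placeholder path to the downloaded 4.5bpw_L quant
config = ExLlamaV2Config("/path/to/EVA-Rombos1-Qwen2.5-32B_exl2_4.5bpw_L")
config.max_seq_len = 32768  # ~2x the 16k fp16 budget, enabled by the Q8 cache

model = ExLlamaV2(config)
cache = ExLlamaV2Cache_Q8(model, lazy=True)  # quantized KV cache halves cache VRAM vs fp16
model.load_autosplit(cache)

tokenizer = ExLlamaV2Tokenizer(config)
generator = ExLlamaV2DynamicGenerator(model=model, cache=cache, tokenizer=tokenizer)
print(generator.generate(prompt="Hello!", max_new_tokens=64))
```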
Prompt Templates
Uses the ChatML prompt format.
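For reference, ChatML wraps each turn in <|im_start|> / <|im_end|> tags:

```
<|im_start|>system
{system prompt}<|im_end|>
<|im_start|>user
{user message}<|im_end|>
<|im_start|>assistant
```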
Original readme below
EVA-Rombos1-Qwen2.5-32B
This is a merge of pre-trained language models created using mergekit.
Merge Details
Merge Method
This model was merged using the Model Stock merge method using DeepSeek-R1-Qwen-lorablated-32B as a base.
Models Merged
The following models were included in the merge:
- Rombos-Qwen2.5-32B-lorablated
- EVA-UNIT-01/EVA-Qwen2.5-32B-v0.2
- EVA-Gutenberg3-Qwen2.5-32B
Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
  - model: nbeerbower/Rombos-Qwen2.5-32B-lorablated
  - model: EVA-UNIT-01/EVA-Qwen2.5-32B-v0.2
  - model: nbeerbower/EVA-Gutenberg3-Qwen2.5-32B
merge_method: model_stock
base_model: nbeerbower/DeepSeek-R1-Qwen-lorablated-32B
dtype: bfloat16
```
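To reproduce the merge, a config like the one above can be fed to mergekit's CLI (the output directory is a placeholder):

```sh
pip install mergekit
mergekit-yaml config.yml ./EVA-Rombos1-Qwen2.5-32B --cuda
```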