Saw this model recommended as a good example of an RP model in the ST Discord. Figured I'd quant it to see what's up. The same advice as for the Qwen model I posted last time holds up, but consider raising the Min P value if you feel the model isn't coherent enough.

This is the 6bpw version of this model. For the original model, go here.
For the 8bpw version, go here.
For the 4bpw version, go here.


Image by CalamitousFelicitousness

Qwen2.5-14B Aletheia v1

RP/Story hybrid model, a merge of Sugarquill and Neon. As with the Gemma version, I wanted to preserve Sugarquill's creative spark while making the model more steerable for RP. It proved to be more difficult this time, but I quite like the result regardless, even if the model is still somewhat temperamental.

It should work for both RP and storywriting, either on raw completion or with back-and-forth cowriting in chat mode. It seems to be quite sensitive to low-depth instructions and to samplers.

Thanks to Toasty and Fizz for testing and giving feedback.

The model was created by Auri.


Notes about merging

It took me twenty-something attempts to make this model. TIES didn't work at all, producing broken or nearly broken results every time. SLERP worked much better, and after just 3 attempts I got something I liked. Sugarquill was really prone to overtaking the merge, so I had to reduce its share a lot, and the model still carries a lot of its influence.

Format

The model responds to ChatML instruct formatting, exactly like its base model.

<|im_start|>system
{system message}<|im_end|>
<|im_start|>user
{user message}<|im_end|>
<|im_start|>assistant
{response}<|im_end|>
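
If you build prompts programmatically, the stock Qwen2.5 tokenizer already ships this ChatML template, so something like the following sketch using Hugging Face transformers should reproduce the format above (the repo path is a placeholder; point it at the original model, not this quant):

from transformers import AutoTokenizer

# Placeholder repo ID; substitute the original Aletheia model repo.
tokenizer = AutoTokenizer.from_pretrained("path/to/Aletheia-14b")

messages = [
    {"role": "system", "content": "You are a creative cowriting partner."},
    {"role": "user", "content": "Continue the scene."},
]

# add_generation_prompt=True appends the trailing <|im_start|>assistant turn.
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
print(prompt)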

Recommended Samplers

This one is a bit of a special snowflake with particular tastes. These settings seem to work pretty well:

Temperature - 0.8
Top-A - 0.3
TFS - 0.75
DRY - Multiplier 0.8 - Base 1.75 - Allowed length 3 - Range 1024

As a starting point, you can try this ST Master Import, or set the values by hand as sketched below.
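
The same values as one settings payload look roughly like this (a sketch only: the key names are assumptions, and backends such as KoboldCpp, TabbyAPI, and text-generation-webui each spell them slightly differently, so map them onto whatever your backend actually expects):

# Recommended sampler values as a plain dict; key names are assumptions.
sampler_settings = {
    "temperature": 0.8,
    "top_a": 0.3,
    "tfs": 0.75,              # tail-free sampling
    "dry_multiplier": 0.8,    # DRY anti-repetition
    "dry_base": 1.75,
    "dry_allowed_length": 3,
    "dry_range": 1024,
}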

Merge Details

Merge Method

This model was merged using the SLERP merge method.
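
For intuition, SLERP interpolates along the arc between two weight tensors rather than the straight line that plain averaging takes, which tends to preserve tensor magnitudes better. A minimal sketch of the textbook formula (not mergekit's exact implementation):

import torch

def slerp(t: float, a: torch.Tensor, b: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    # Treat both tensors as flat vectors and measure the angle between them.
    a_f, b_f = a.flatten().float(), b.flatten().float()
    cos_omega = torch.clamp(
        torch.dot(a_f, b_f) / (a_f.norm() * b_f.norm() + eps), -1.0, 1.0
    )
    omega = torch.acos(cos_omega)
    so = torch.sin(omega)
    if so.abs() < eps:
        # Nearly parallel tensors: fall back to plain linear interpolation.
        return (1 - t) * a + t * b
    # Blend along the great arc; t = 0 returns a, t = 1 returns b.
    out = (torch.sin((1 - t) * omega) / so) * a_f + (torch.sin(t * omega) / so) * b_f
    return out.reshape(a.shape).to(a.dtype)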

Models Merged

The following models were included in the merge:

allura-org/TQ2.5-14B-Sugarquill-v1 (base)
allura-org/TQ2.5-14B-Neon-v1

Configuration

The following YAML configuration was used to produce this model:

base_model: allura-org/TQ2.5-14B-Sugarquill-v1
dtype: bfloat16
merge_method: slerp
parameters:
  t:
  - value: 0.7
slices:
- sources:
  - layer_range: [0, 48]
    model: allura-org/TQ2.5-14B-Neon-v1
  - layer_range: [0, 48]
    model: allura-org/TQ2.5-14B-Sugarquill-v1
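
Assuming the YAML above is saved as config.yml, a merge like this is normally reproduced with mergekit's CLI (the output directory name is arbitrary; check mergekit's README for the current flags):

pip install mergekit
mergekit-yaml config.yml ./Aletheia-14b-merge --cuda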