# Qwenvergence-14B-v10
This is a merge of pre-trained language models created using [mergekit](https://github.com/arcee-ai/mergekit).

This model was merged using the [Model Stock](https://arxiv.org/abs/2403.19522) merge method, with sometimesanotion/Qwenvergence-14B-v9 as the base.
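Model Stock derives the interpolation ratio between the base weights and the centroid of the fine-tuned weights from the geometry of the task vectors, rather than from a hand-tuned hyperparameter. The per-tensor sketch below illustrates that idea in plain PyTorch; it is a simplified reading of the paper (Jang et al., 2024), not mergekit's implementation, and the function name is illustrative.

```python
import torch
import torch.nn.functional as F

def model_stock_tensor(base: torch.Tensor, finetuned: list[torch.Tensor]) -> torch.Tensor:
    """Merge one weight tensor in the Model Stock style (illustrative sketch)."""
    deltas = [(w - base).flatten() for w in finetuned]
    n = len(deltas)

    # Average pairwise cosine similarity between task vectors; the paper
    # assumes the pairwise angles are roughly equal, so the mean stands in.
    cos_sum, pairs = 0.0, 0
    for i in range(n):
        for j in range(i + 1, n):
            cos_sum += F.cosine_similarity(deltas[i], deltas[j], dim=0).item()
            pairs += 1
    cos = cos_sum / max(pairs, 1)

    # Interpolation ratio t = N*cos / (1 + (N-1)*cos), then move from the
    # base toward the centroid of the fine-tuned weights by t.
    t = n * cos / (1 + (n - 1) * cos)
    centroid = torch.stack(finetuned).mean(dim=0)
    return t * centroid + (1 - t) * base
```

Note that in an averaging scheme like this, listing a checkpoint more than once (as the configuration below does) contributes its task vector to the mean more than once.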
The following models were included in the merge:

* sometimesanotion/Lamarck-14B-v0.7
* sometimesanotion/Lamarck-14B-v0.3+sometimesanotion/LoRA-la128
* sometimesanotion/Qwenvergence-14B-v3-Prose+sometimesanotion/LoRA-la128
* sometimesanotion/Qwenvergence-14B-v9+sometimesanotion/LoRA-la128
* huihui-ai/DeepSeek-R1-Distill-Qwen-14B-abliterated
* Krystalan/DRT-o1-14B

Entries of the form `model+LoRA` use mergekit's inline adapter syntax: the named LoRA is applied to the model before it enters the merge.
The following YAML configuration was used to produce this model:
```yaml
name: Qwenvergence-14B-v10
merge_method: model_stock
base_model: sometimesanotion/Qwenvergence-14B-v9
tokenizer_source: base
dtype: float32
out_dtype: bfloat16
parameters:
  int8_mask: true
  normalize: true
  rescale: false
models:
  - model: sometimesanotion/Lamarck-14B-v0.7
  - model: sometimesanotion/Qwenvergence-14B-v3-Prose+sometimesanotion/LoRA-la128
  - model: huihui-ai/DeepSeek-R1-Distill-Qwen-14B-abliterated
  - model: sometimesanotion/Lamarck-14B-v0.3+sometimesanotion/LoRA-la128
  - model: huihui-ai/DeepSeek-R1-Distill-Qwen-14B-abliterated
  - model: Krystalan/DRT-o1-14B
  - model: sometimesanotion/Qwenvergence-14B-v9+sometimesanotion/LoRA-la128
  - model: sometimesanotion/Qwenvergence-14B-v3-Prose+sometimesanotion/LoRA-la128
```
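To reproduce a merge like this, the configuration can be passed to mergekit's Python API (or the `mergekit-yaml` command-line wrapper). A minimal sketch, assuming mergekit is installed (`pip install mergekit`); the file and output paths are illustrative:

```python
import yaml
import torch

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

# Parse the YAML configuration shown above (path is illustrative).
with open("qwenvergence-v10.yaml", encoding="utf-8") as f:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(f))

# Run the merge; checkpoints and LoRAs are pulled from the Hugging Face Hub.
run_merge(
    merge_config,
    "./Qwenvergence-14B-v10",
    options=MergeOptions(
        cuda=torch.cuda.is_available(),  # merge on GPU when one is present
        copy_tokenizer=True,             # write the base tokenizer to the output
        lazy_unpickle=True,              # lower peak memory while reading shards
    ),
)
```

The resulting directory can then be loaded directly with `transformers.AutoModelForCausalLM.from_pretrained("./Qwenvergence-14B-v10", torch_dtype=torch.bfloat16)`.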