EXL2 quant (`measurement.json` in the main branch); check the repo revisions for the individual quants.

This is a model designed to replicate the prose quality of the Claude 3 series of models, specifically Sonnet and Opus. It was made with a prototype Magnum V5 datamix.

This model is fine-tuned on top of Mistral-Nemo-Instruct (ChatML'ified).

## Quants

EXL2: https://huggingface.co/Delta-Vector/Rei-12B-EXL2

GGUF: https://huggingface.co/Delta-Vector/Rei-12B-gguf/

## Prompting

A typical input would look like this:

"""<|im_start|>user
Hi there!<|im_end|>
<|im_start|>assistant
Nice to meet you!<|im_end|>
<|im_start|>user
Can I ask a question?<|im_end|>
<|im_start|>assistant
"""

I would highly recommend using Euryale's system prompt with the model.

<details><summary>See Sao10k's Euryale System Prompt</summary>
Currently, your role is {{char}}, described in detail below. As {{char}}, continue the narrative exchange with {{user}}.
<Guidelines>
• Maintain the character persona but allow it to evolve with the story.
• Be creative and proactive. Drive the story forward, introducing plotlines and events when relevant.
• All types of outputs are encouraged; respond accordingly to the narrative.
• Include dialogues, actions, and thoughts in each response.
• Utilize all five senses to describe scenarios within {{char}}'s dialogue.
• Use emotional symbols such as "!" and "~" in appropriate contexts.
• Incorporate onomatopoeia when suitable.
• Allow time for {{user}} to respond with their own input, respecting their agency.
• Act as secondary characters and NPCs as needed, and remove them when appropriate.
• When prompted for an Out of Character [OOC:] reply, answer neutrally and in plaintext, not as {{char}}.
</Guidelines>

<Forbidden>
• Using excessive literary embellishments and purple prose unless dictated by {{char}}'s persona.
• Writing for, speaking, thinking, acting, or replying as {{user}} in your response.
• Repetitive and monotonous outputs.
• Positivity bias in your replies.
• Being overly extreme or NSFW when the narrative context is inappropriate.
</Forbidden>

</details><br>

## Axolotl config

<details><summary>See axolotl config</summary>

```yaml
## model
base_model: NewEden_nemo-chatml
model_type: AutoModelForCausalLM
tokenizer_type: AutoTokenizer

## qlora COPE
load_in_8bit: false
load_in_4bit: false
strict: false

## data 
datasets:
  - path: AquaV/c2-sharegpt-advanced-prefills-filtered
    type: sharegpt
  - path: AquaV/c1-sharegpt-advanced-prefills-filtered
    type: sharegpt
  - path: AquaV/rainy-sharegpt-advanced-prefills-filtered 
    type: sharegpt
  - path: anthracite-core/Gryphe-Opus-Charcard-Roleplay
    type: sharegpt
  - path: anthracite-org/kalo-opus-instruct-22k-no-refusal
    type: sharegpt
  - path: lodrick-the-lafted/kalo-opus-instruct-3k-filtered
    type: sharegpt
  - path: anthracite-org/nopm_claude_writing_fixed
    type: sharegpt
  - path: anthracite-org/kalo_opus_misc_240827
    type: sharegpt
  - path: anthracite-org/kalo_misc_part2
    type: sharegpt
  - path: NewEden/Claude-Instruct-2.7K
    type: sharegpt
  - path: NewEden/Claude-Instruct-5K
    type: sharegpt
shuffle_merged_datasets: true
dataset_prepared_path: dataset_prepared
val_set_size: 0.02
output_dir: 12b-out-rslora-SE

## LIGGER
plugins:
  - axolotl.integrations.liger.LigerPlugin
liger_rope: true
liger_rms_norm: true
liger_layer_norm: true
liger_glu_activation: true
liger_fused_linear_cross_entropy: true

## CTX settings
sequence_len: 16384
sample_packing: true
eval_sample_packing: true
pad_to_sequence_len: true

## Lora 
adapter: lora
lora_model_dir:
lora_r: 128
lora_alpha: 16
lora_dropout: 0.05
lora_target_linear: true
lora_fan_in_fan_out:
peft_use_rslora: true
lora_modules_to_save:
  - embed_tokens
  - lm_head

## WandB
wandb_project: rei
wandb_entity:
wandb_watch:
wandb_name: daring-mango
wandb_log_model:

## evals
evals_per_epoch: 4
eval_table_size:
eval_max_new_tokens: 128

## hoe params
gradient_accumulation_steps: 4
micro_batch_size: 1
num_epochs: 2
optimizer: paged_ademamix_8bit
# optimizer: paged_adamw_8bit
lr_scheduler: cosine
learning_rate: 2.83e-5

train_on_inputs: false
group_by_length: false
bf16: auto
fp16:
tf32: false

gradient_checkpointing: unsloth
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: true
s2_attention:

warmup_steps: 40
saves_per_epoch: 2
debug:
## for ademamix
deepspeed: /workspace/axolotl/deepspeed_configs/zero3_bf16_cpuoffload_params.json
## for adamw
# deepspeed: ./deepspeed_configs/zero3_bf16.json
weight_decay: 0.01
fsdp:
fsdp_config:
special_tokens:
  pad_token: <pad>
```

</details><br>
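The config enables rank-stabilized LoRA (`peft_use_rslora: true`), which changes the adapter scaling factor from `alpha / r` to `alpha / sqrt(r)`. A small sketch of the effective multipliers at the values used above (`lora_r: 128`, `lora_alpha: 16`):

```python
import math

def lora_scaling(alpha, r, rslora=False):
    """Effective multiplier applied to the LoRA update (B @ A)."""
    return alpha / math.sqrt(r) if rslora else alpha / r

r, alpha = 128, 16  # values from the config above
plain = lora_scaling(alpha, r)               # alpha / r
rs = lora_scaling(alpha, r, rslora=True)     # alpha / sqrt(r)
print(plain)  # 0.125
print(rs)     # ~1.414
```

At high ranks like 128, the plain scaling would shrink the adapter's contribution heavily; rsLoRA keeps it roughly constant across ranks, which is why it pairs with the large `lora_r` here.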

## Training

The training was done for 2 epochs on 4x RTX 3090 GPUs, graciously provided by @intervitens for the fine-tuning of the model.
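With 4 GPUs, the config's `micro_batch_size: 1` and `gradient_accumulation_steps: 4` imply an effective global batch of 16 packed sequences per optimizer step; a quick arithmetic check (GPU count assumed from the note above):

```python
# Effective global batch size implied by the config and hardware.
num_gpus = 4                     # 4x RTX 3090, per the training note
micro_batch_size = 1             # per-GPU batch, from the config
gradient_accumulation_steps = 4  # from the config

effective_batch = num_gpus * micro_batch_size * gradient_accumulation_steps
print(effective_batch)  # 16 sequences (each up to 16384 tokens, sample-packed)
```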

Built with Axolotl

## Safety

But why?
