
Built with Axolotl

Axolotl config (axolotl version 0.8.1):

base_model: TheDrummer/Cydonia-24B-v2.1
model_type: AutoModelForCausalLM
tokenizer_type: AutoTokenizer

model_config:
  trust_remote_code: true
  tokenizer:
    pad_token: "</s>"
    padding_side: "right"
    add_bos_token: true
    add_eos_token: false

datasets:
  - path: data/data.jsonl
    type: chat_template
    chat_template_strategy: tokenizer
    field_messages: conversations
    message_property_mappings:
      role: role
      content: content
    roles:
      user: ["user"]
      assistant: ["assistant"]
      system: ["system"]

load_in_4bit: true
adapter: qlora
lora_r: 64
lora_alpha: 32
lora_dropout: 0.1
lora_target_modules:
  - q_proj
  - k_proj
  - v_proj
  - o_proj
  - gate_proj
  - up_proj
  - down_proj

bf16: true
flash_attention: true
gradient_checkpointing: true
deepspeed: deepspeed_configs/zero2.json

gradient_accumulation_steps: 4
micro_batch_size: 8
num_epochs: 3
optimizer: paged_adamw_8bit
lr_scheduler: cosine
learning_rate: 3e-6
warmup_ratio: 0.02

max_seq_length: 8192
pad_to_sequence_len: true
sample_packing: true
max_grad_norm: 1.0

output_dir: ./output
save_steps: 100
logging_steps: 10
save_safetensors: true

special_tokens:
  pad_token: "</s>"
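
For reference, the datasets block above expects each line of data/data.jsonl to hold a conversations list of role/content messages, per the field_messages and message_property_mappings settings. The line below is a made-up illustration of that shape, not actual training data:

{"conversations": [{"role": "system", "content": "You are a ship's AI on a long-haul freighter."}, {"role": "user", "content": "Status report, please."}, {"role": "assistant", "content": "All systems nominal, Captain."}]}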

Scifi-Roleplay-24B-v1.0-LoRA

This model is a LoRA adapter fine-tuned from TheDrummer/Cydonia-24B-v2.1 on the data/data.jsonl dataset.
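
Since the card does not yet document usage, here is a minimal loading sketch with transformers and peft. The adapter repo id (sleepdeprived3/Scifi-Roleplay-24B-v1.0-LoRA), the prompt, and the generation settings are assumptions for illustration, not part of the original card:

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "TheDrummer/Cydonia-24B-v2.1"                      # base model from the config above
adapter_id = "sleepdeprived3/Scifi-Roleplay-24B-v1.0-LoRA"   # this adapter (assumed repo id)

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.bfloat16, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)          # attach the LoRA weights

messages = [
    {"role": "system", "content": "You are a ship's AI on a long-haul freighter."},  # hypothetical prompt
    {"role": "user", "content": "Status report, please."},
]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
output_ids = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(output_ids[0][inputs.shape[-1]:], skip_special_tokens=True))

If you prefer a standalone checkpoint, the adapter can also be folded into the base weights with model.merge_and_unload() before saving.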

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 3e-06
  • train_batch_size: 8
  • eval_batch_size: 8
  • seed: 42
  • distributed_type: multi-GPU
  • gradient_accumulation_steps: 4
  • total_train_batch_size: 32
  • optimizer: paged_adamw_8bit (betas=(0.9, 0.999), epsilon=1e-08, no additional optimizer arguments)
  • lr_scheduler_type: cosine
  • lr_scheduler_warmup_steps: 33
  • num_epochs: 3.0
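
The derived values above follow from the Axolotl config; a quick sanity check (the implied total step count is an estimate, not something reported by the trainer):

  total_train_batch_size = train_batch_size × gradient_accumulation_steps = 8 × 4 = 32
  lr_scheduler_warmup_steps 33 ≈ warmup_ratio 0.02 × total optimizer steps, i.e. roughly 1,650 steps over 3 epochs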

Training results

Framework versions

  • PEFT 0.15.1
  • Transformers 4.51.0
  • Pytorch 2.6.0+cu124
  • Datasets 3.5.0
  • Tokenizers 0.21.1