---
base_model: diwank/cryptgpt-large
tags:
  - axolotl
  - generated_from_trainer
model-index:
  - name: cryptgpt-large
    results: []
---

Built with Axolotl

See axolotl config

axolotl version: `0.4.1`

```yaml
# See:
# - https://github.com/karpathy/nanoGPT/blob/master/config/train_gpt2.py#L1
# - https://github.com/OpenAccess-AI-Collective/axolotl/blob/main/examples/tiny-llama/pretrain.yml#L14
# - https://github.com/karpathy/nanoGPT/blob/master/train.py#L35

base_model: diwank/cryptgpt-large
hub_model_id: diwank/cryptgpt-large

model_type: GPT2LMHeadModel
tokenizer_type: AutoTokenizer
trust_remote_code: true  # required for CryptGPTTokenizer
resize_token_embeddings_to_32x: true
output_dir: ./outputs/model-out

datasets:
  - path: diwank/encrypted-openwebtext
    type: completion

dataset_prepared_path: ./cryptgpt-prepared-dataset
val_set_size: 0.04
shuffle_merged_datasets: false

sequence_len: 1024
pad_to_sequence_len: true
sample_packing: false
pretrain_multipack_attn: false
train_on_inputs: true

gradient_accumulation_steps: 1
micro_batch_size: 128
optimizer: adamw_bnb_8bit
adam_beta1: 0.9
adam_beta2: 0.95
seed: 42

lr_scheduler: cosine
learning_rate: 6e-4
cosine_min_lr_ratio: 0.1  # min: 6e-5
weight_decay: 0.15

bf16: auto
tf32: true
flash_attention: true
torch_compile: true
gradient_checkpointing: true
gradient_checkpointing_kwargs:
  use_reentrant: true

deepspeed: deepspeed_configs/zero2.json

epochs: 20  # overridden by max_steps
max_steps: 600000
eval_steps: 12000
save_steps: 12000
save_total_limit: 3
early_stopping_patience: 3
auto_resume_from_checkpoints: true
logging_steps: 1
eval_max_new_tokens: 128
eval_causal_lm_metrics: 
  - sacrebleu

wandb_project: cryptgpt-large-0.1
wandb_name: cryptgpt-large-run-04
```

# cryptgpt-large

This model is a fine-tuned version of diwank/cryptgpt-large, trained on the diwank/encrypted-openwebtext dataset (see the axolotl config above). It achieves the following results on the evaluation set:

  • Loss: 1.8034

## Model description

More information needed

## Intended uses & limitations

More information needed
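
Until usage is documented, the sketch below shows one plausible way to load the checkpoint with `transformers`. This is a minimal sketch, not an official example: the hub id is taken from the config above, `trust_remote_code=True` mirrors the config's note that the custom `CryptGPTTokenizer` ships as remote code, and the plaintext prompt is an assumption since the training corpus is encrypted OpenWebText.

```python
# Hypothetical loading sketch -- not an official usage example.
from transformers import AutoTokenizer, GPT2LMHeadModel

model_id = "diwank/cryptgpt-large"  # hub_model_id from the axolotl config

# trust_remote_code=True mirrors the config setting required for CryptGPTTokenizer.
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = GPT2LMHeadModel.from_pretrained(model_id)

# The model was trained on encrypted OpenWebText, so a plaintext prompt is an
# assumption here; the tokenizer's remote code defines the real input format.
inputs = tokenizer("example prompt", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```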

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 0.0006
  • train_batch_size: 128
  • eval_batch_size: 128
  • seed: 42
  • distributed_type: multi-GPU
  • num_devices: 8
  • total_train_batch_size: 1024
  • total_eval_batch_size: 1024
  • optimizer: Adam with betas=(0.9,0.95) and epsilon=1e-08
  • lr_scheduler_type: cosine
  • lr_scheduler_warmup_steps: 100
  • training_steps: 20456
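
For clarity, the total batch sizes listed above follow directly from the per-device batch size, the device count, and the gradient accumulation setting in the config:

```python
# Effective batch size implied by the hyperparameters listed above.
micro_batch_size = 128            # per-device train/eval batch size
num_devices = 8                   # distributed_type: multi-GPU
gradient_accumulation_steps = 1   # from the axolotl config

total_train_batch_size = micro_batch_size * num_devices * gradient_accumulation_steps
assert total_train_batch_size == 1024
```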

### Training results

| Training Loss | Epoch  | Step  | Validation Loss |
|:-------------:|:------:|:-----:|:---------------:|
| 15.7656       | 0.0000 | 1     | 15.4910         |
| 1.8545        | 0.5866 | 12000 | 1.8034          |
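
Assuming the validation loss is the mean per-token cross-entropy in nats (the usual convention for causal LM training), the final value corresponds to a perplexity of roughly exp(1.8034) ≈ 6.1:

```python
import math

val_loss = 1.8034                # final validation loss from the table above
perplexity = math.exp(val_loss)  # ~6.07, assuming mean token cross-entropy in nats
print(f"validation perplexity ≈ {perplexity:.2f}")
```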

### Framework versions

  • Transformers 4.41.1
  • PyTorch 2.1.2+cu118
  • Datasets 2.19.1
  • Tokenizers 0.19.1
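
As an optional sanity check, the snippet below compares a local environment against the versions listed on this card (package names are assumed to be the standard PyPI ones):

```python
# Compare installed package versions against the ones listed above.
import datasets
import tokenizers
import torch
import transformers

expected = {
    "transformers": "4.41.1",
    "torch": "2.1.2+cu118",
    "datasets": "2.19.1",
    "tokenizers": "0.19.1",
}
installed = {
    "transformers": transformers.__version__,
    "torch": torch.__version__,
    "datasets": datasets.__version__,
    "tokenizers": tokenizers.__version__,
}
for name, want in expected.items():
    status = "OK" if installed[name] == want else "differs"
    print(f"{name}: installed {installed[name]}, card lists {want} ({status})")
```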