We finetuned gpt2 on the tatsu-lab/alpaca dataset for 5 epochs using the MonsterAPI no-code LLM finetuner.
The dataset is an unfiltered version of tatsu-lab/alpaca, with 36 instances of blatant alignment removed.
The finetuning run completed in 20 minutes and cost only $3!
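Each Alpaca record (instruction, optional input, output) is rendered into a single training prompt. Below is a minimal sketch of the standard Alpaca prompt template, assuming the finetune used the dataset's usual format; MonsterAPI's exact preprocessing is not documented here.

```python
# Sketch of the standard Alpaca prompt template (assumption: the
# finetune used the dataset's usual instruction format).
ALPACA_PROMPT = (
    "Below is an instruction that describes a task{input_clause}. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n"
    "{input_block}### Response:\n"
)

def format_example(instruction: str, input_text: str = "") -> str:
    """Render one Alpaca record into a training/inference prompt."""
    if input_text:
        return ALPACA_PROMPT.format(
            input_clause=", paired with an input that provides further context",
            instruction=instruction,
            input_block=f"### Input:\n{input_text}\n\n",
        )
    return ALPACA_PROMPT.format(
        input_clause="", instruction=instruction, input_block=""
    )

print(format_example("Give three tips for staying healthy."))
```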
Hyperparameters & Run details (a reproduction sketch follows this list):
- Model: gpt2
- Dataset: tatsu-lab/alpaca
- Learning rate: 0.0003
- Number of epochs: 5
- Data split: 90% training / 10% validation
- Gradient accumulation steps: 1
- License: apache-2.0
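For readers who want to approximate this run outside MonsterAPI, here is a minimal sketch using transformers, peft, and datasets. Only the values in the list above come from this card; the LoRA rank/alpha, sequence length, and batch size are illustrative assumptions, and MonsterAPI's actual pipeline may differ.

```python
# Hedged reproduction sketch: an approximate open-source equivalent of
# this run. Values marked "from the card" are documented above; the
# rest are assumptions.
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling,
                          Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token  # gpt2 has no pad token

model = AutoModelForCausalLM.from_pretrained("gpt2")
model = get_peft_model(model, LoraConfig(  # r/alpha assumed, not on the card
    r=8, lora_alpha=16, task_type="CAUSAL_LM"))

data = load_dataset("tatsu-lab/alpaca", split="train")
data = data.train_test_split(test_size=0.1)  # 90/10 split from the card

def tokenize(batch):
    # The dataset ships a ready-made "text" field with the full prompt.
    return tokenizer(batch["text"], truncation=True, max_length=512)

data = data.map(tokenize, batched=True,
                remove_columns=data["train"].column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="gpt2_alpaca-lora",
        learning_rate=3e-4,              # 0.0003 from the card
        num_train_epochs=5,              # from the card
        gradient_accumulation_steps=1,   # from the card
        per_device_train_batch_size=8,   # assumed, not listed on the card
    ),
    train_dataset=data["train"],
    eval_dataset=data["test"],
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```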
Model repo: monsterapi/gpt2_alpaca-lora (base model: openai-community/gpt2)
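A minimal inference sketch, assuming this repo hosts a standard PEFT LoRA adapter on top of gpt2 (as the repo name suggests); the prompt and generation settings are illustrative.

```python
# Hedged usage sketch: load the LoRA adapter onto the gpt2 base model.
# Assumes the repo contains a standard PEFT adapter_config.json.
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

model = AutoPeftModelForCausalLM.from_pretrained("monsterapi/gpt2_alpaca-lora")
tokenizer = AutoTokenizer.from_pretrained("openai-community/gpt2")

prompt = (
    "Below is an instruction that describes a task. Write a response "
    "that appropriately completes the request.\n\n"
    "### Instruction:\nGive three tips for staying healthy.\n\n"
    "### Response:\n"
)
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```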