# Model Card for PIPPA ShareGPT Subset Lora 7b
This is an experimental LoRA focused on roleplay, trained on a subset of the PIPPA ShareGPT dataset.
## Usage
### Custom

```
SYSTEM: Do thing
USER: {prompt}
CHARACTER:
```
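As a minimal sketch, the prompt format above can be assembled programmatically; the `build_prompt` helper below is illustrative and not part of this repository:

```python
def build_prompt(system: str, user: str, character: str = "CHARACTER") -> str:
    """Assemble a prompt in the custom SYSTEM/USER/CHARACTER format above."""
    return f"SYSTEM: {system}\nUSER: {user}\n{character}:"

# The model is expected to continue generating after the trailing "CHARACTER:" line.
print(build_prompt("Do thing", "Hello, who are you?"))
```

Replace `CHARACTER` with the actual character name when using a named persona.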
## Bias, Risks, and Limitations
This LoRA is not intended to supply factual information or advice in any form.
## Training Details
### Training Data
About 1,000 conversations from the PIPPA ShareGPT subset.
### Training Procedure
The version of this LoRA uploaded to this repository was trained on an 8x RTX A6000 cluster in 8-bit, with regular LoRA adapters and the 32-bit AdamW optimizer.
### Training Hyperparameters
Training used a fork of Axolotl with two patches applied: Patch 1 and Patch 2.
- load_in_8bit: true
- lora_r: 16
- lora_alpha: 16
- lora_dropout: 0.01
- gradient_accumulation_steps: 6
- micro_batch_size: 4
- num_epochs: 3
- learning_rate: 0.000065
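For reference, these keys follow Axolotl's YAML config schema, so they can be collected into a config file. The sketch below is an assumption-laden reconstruction, not the exact config used: the base model path, dataset path, and LoRA target modules are placeholders.

```yaml
# Sketch of an Axolotl LoRA config using the hyperparameters above.
# base_model, datasets.path, and lora_target_modules are placeholders.
base_model: ./llama-7b                   # placeholder base model path
load_in_8bit: true
adapter: lora
lora_r: 16
lora_alpha: 16
lora_dropout: 0.01
lora_target_modules:                     # assumed typical targets, not confirmed
  - q_proj
  - v_proj
datasets:
  - path: ./pippa-sharegpt-subset.json   # placeholder dataset path
    type: sharegpt
gradient_accumulation_steps: 6
micro_batch_size: 4
num_epochs: 3
learning_rate: 0.000065
optimizer: adamw_torch                   # 32-bit AdamW, per the procedure above
```

With these settings across the 8-GPU cluster, the effective batch size works out to micro_batch_size x gradient_accumulation_steps x num_gpus = 4 x 6 x 8 = 192 samples per optimizer step.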
## Environmental Impact
Fine-tuning this model (7B) on 8x NVIDIA A6000 48GB GPUs in parallel takes about 30 minutes.