# Model Card for UnGPT-v1

## Model Details
- Name: UnGPT-v1
- Foundation Model: Mistral v0.3 (7B parameters)
- Recommended Context Length: 16k tokens
- Fine-tuning Methodology: LoRA-based training with the Odds Ratio Preference Optimization (ORPO) method, using a combination of ebooks and synthetic data.
## Usage Instructions

Use the Alpaca format for prompts:

```
### Instruction:
{instruction}
### Input:
{input}
### Response:
```
### Example prompts

For instructions, it is not recommended to deviate from the examples provided below. For the input, 10 sentences is the minimum, but longer inputs also work, since the model can handle longer contexts (thanks to the Mistral 7B v0.3 base model).
Completion Prompt:

```
### Instruction:
Continue writing the story while retaining writing style. Write about 10 sentences.
### Input:
It was a dark and stormy night...
### Response:
```
Fill-in-the-middle Prompt:

```
### Instruction:
Fill in the missing part of the story ({{FILL_ME}}) with about 10 sentences while retaining the writing style.
### Input:
The bus was speeding down the road, cops chasing after it. {{FILL_ME}} She woke up to find herself in an unfamiliar room...
### Response:
```
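As an illustration, a prompt like the one above could be run with the Hugging Face transformers library roughly as follows. This is a minimal sketch: the repo id `molbal/UnGPT-v1` is taken from the citation URL below, and the generation parameters are illustrative, not part of this card.

```python
# Minimal inference sketch with Hugging Face transformers.
# Assumptions: the repo id (from the citation URL) and the generation
# parameters are illustrative, not specified by this card.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "molbal/UnGPT-v1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Assemble an Alpaca-style completion prompt as shown above.
prompt = (
    "### Instruction:\n"
    "Continue writing the story while retaining writing style. Write about 10 sentences.\n"
    "### Input:\n"
    "It was a dark and stormy night...\n"
    "### Response:\n"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=512, do_sample=True, temperature=0.8)
# Print only the newly generated continuation, not the echoed prompt.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```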
## Dataset Preparation

For dataset acquisition and cleanup, please refer to steps 1 and 2 of my text-completion example, molbal/llm-text-completion-finetune.
Chunking: Texts were split into chunks along sentence boundaries, aiming for 100 sentences per example (a sketch of this step follows the list):
- For completion examples, 90 sentences were used as input and 10 sentences as the response.
- For fill-in-the-middle examples, 80 + 10 sentences were used as input (before and after the {{FILL_ME}} placeholder, respectively), and 10 sentences as the response.
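A minimal sketch of this chunking step is below, assuming NLTK's `sent_tokenize` as the sentence splitter; the function name and record fields are hypothetical stand-ins for the actual pipeline.

```python
# Chunking sketch: split a text into 100-sentence chunks, then derive a
# completion example (90 in / 10 out) and a fill-in-the-middle example
# (80 + 10 in / 10 out) from each chunk. NLTK is an assumed stand-in for
# whatever sentence splitter was actually used.
import nltk
from nltk.tokenize import sent_tokenize

nltk.download("punkt", quiet=True)

def make_examples(text: str):
    sentences = sent_tokenize(text)
    for i in range(0, len(sentences) - 99, 100):
        chunk = sentences[i:i + 100]
        # Completion: first 90 sentences as input, last 10 as response.
        yield {
            "task": "completion",
            "input": " ".join(chunk[:90]),
            "response": " ".join(chunk[90:]),
        }
        # Fill-in-the-middle: sentences 0-79 and 90-99 flank {{FILL_ME}};
        # sentences 80-89 become the response.
        yield {
            "task": "fim",
            "input": " ".join(chunk[:80]) + " {{FILL_ME}} " + " ".join(chunk[90:]),
            "response": " ".join(chunk[80:90]),
        }
```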
The beauty of the ORPO method is that for a single prompt we can provide both a positive and a negative example. I wanted the model to avoid 'GPTisms', so I had gpt-4o-mini generate answers for both the completion and fill-in-the-middle tasks and added them as negative examples.

The dataset contains ~15k examples, each approximately 9,000 characters long, including the input, the accepted response, and the rejected response. (Note: these are characters, not tokens.)
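For illustration, generating the negative answer and assembling one preference record might look like the sketch below. The field names follow the prompt/chosen/rejected layout that TRL's ORPO trainer expects; the helper itself and the exact request are hypothetical.

```python
# Sketch: generate a "GPT-flavoured" rejected answer with gpt-4o-mini and
# pair it with the human-written continuation as the chosen answer.
# The helper and field names are hypothetical; OPENAI_API_KEY must be set.
from openai import OpenAI

client = OpenAI()

def to_orpo_record(example: dict) -> dict:
    prompt = (
        "### Instruction:\n"
        "Continue writing the story while retaining writing style. Write about 10 sentences.\n"
        f"### Input:\n{example['input']}\n"
        "### Response:\n"
    )
    rejected = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    ).choices[0].message.content
    return {
        "prompt": prompt,
        "chosen": example["response"],  # human-written, preferred
        "rejected": rejected,           # synthetic, to be penalized by ORPO
    }
```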
## Training setup

I fine-tuned the Mistral 7B v0.3 foundation model using Unsloth and the ORPO trainer.

Training configuration (a sketch of the full setup follows this list):
- Batch size: 1
- Gradient accumulation steps: 4
- Learning rate scheduler type: Linear
- Optimizer: AdamW (8-bit)
- Number of training epochs: 1
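Put together, the setup might look roughly like the sketch below. Only the five hyperparameters listed above come from this card; the LoRA rank, target modules, beta, sequence length, and dataset file name are assumptions.

```python
# Training sketch with Unsloth and TRL's ORPOTrainer. Only the batch size,
# gradient accumulation, scheduler, optimizer, and epoch count come from
# this card; everything else here is an assumption.
from datasets import load_dataset
from unsloth import FastLanguageModel
from trl import ORPOConfig, ORPOTrainer

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="mistralai/Mistral-7B-v0.3",
    max_seq_length=16384,  # assumed to match the recommended 16k context
    load_in_4bit=True,
)
model = FastLanguageModel.get_peft_model(
    model,
    r=16,  # assumed LoRA rank
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

# Hypothetical file holding the ~15k prompt/chosen/rejected records.
dataset = load_dataset("json", data_files="orpo_examples.jsonl", split="train")

args = ORPOConfig(
    output_dir="outputs",
    per_device_train_batch_size=1,
    gradient_accumulation_steps=4,
    lr_scheduler_type="linear",
    optim="adamw_8bit",
    num_train_epochs=1,
    beta=0.1,  # ORPO's preference-loss weight; assumed default
)

trainer = ORPOTrainer(
    model=model,
    args=args,
    train_dataset=dataset,
    tokenizer=tokenizer,
)
trainer.train()
```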
## Hardware

- I used GPU-accelerated containers from the provider vast.ai (my referral link: https://cloud.vast.ai/?ref_id=123492) and ran training for ~8 hours on a single RTX 4090.
## Training costs

- ~5€ for renting a GPU pod (plus ~15€ in unsuccessful attempts)
- ~5€ in OpenAI API costs for generating the rejected responses
## Licensing and Citation
- License: This model is licensed under the Apache License 2.0.
- Citation:

```bibtex
@misc{ungpt-v1,
  author       = {Bálint Molnár-Kaló},
  title        = {UnGPT-v1: A Fine-tuned Mistral Model for Story Continuation},
  howpublished = {\url{https://huggingface.co/models/molbal/UnGPT-v1}},
  year         = {2024}
}
```