---
license: cc-by-nc-4.0
datasets:
- AdamCodd/Civitai-8m-prompts
metrics:
- rouge
base_model: t5-small
model-index:
- name: t5-small-negative-prompt-generator
  results:
  - task:
      type: text-generation
      name: Text Generation
    metrics:
    - type: loss
      value: 0.14079
    - type: rouge-1
      value: 68.7527
      name: Validation ROUGE-1
    - type: rouge-2
      value: 53.8612
      name: Validation ROUGE-2
    - type: rouge-l
      value: 67.3497
      name: Validation ROUGE-L
widget:
- text: masterpiece, 1girl, looking at viewer, sitting, tea, table, garden
  example_title: Prompt
pipeline_tag: text2text-generation
inference: false
tags:
- art
extra_gated_prompt: "To get access to this model, send an email to [email protected] and provide a brief description of your project or application. Requests without this information will not be considered, and access will not be granted under any circumstances."
extra_gated_fields:
  Company/University: text
  Country: country
---

## t5-small-negative-prompt-generator

This model is a fine-tuned version of [t5-small](https://huggingface.co/google-t5/t5-small), trained on a subset of the [AdamCodd/Civitai-8m-prompts](https://huggingface.co/datasets/AdamCodd/Civitai-8m-prompts) dataset (~800K prompts) restricted to the top 10% of prompts by positive engagement on Civitai (the "stats" field in the dataset).
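
For illustration, here is a minimal sketch of how such a subset could be selected, assuming the `stats` field holds a dict of reaction counts (the key names and the scoring below are assumptions, not the actual preprocessing used for this model):

```python
from datasets import load_dataset

# Load the full prompt dataset (split name assumed to be "train").
ds = load_dataset("AdamCodd/Civitai-8m-prompts", split="train")

# Hypothetical engagement score: sum the positive reaction counts.
# Adjust the keys to the actual schema of the "stats" field.
def engagement(example):
    stats = example["stats"]
    return {"score": sum(stats.get(k, 0) for k in ("likeCount", "heartCount", "laughCount"))}

ds = ds.map(engagement)
ds = ds.sort("score", reverse=True)           # highest engagement first
top_subset = ds.select(range(len(ds) // 10))  # keep the top 10%
```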

It achieves the following results on the evaluation set:

* Loss: 0.14079
* ROUGE-1: 68.7527
* ROUGE-2: 53.8612
* ROUGE-L: 67.3497
* ROUGE-Lsum: 67.3552
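
The ROUGE figures were measured on the validation set; as a rough illustration (not the original evaluation script), scores of this kind can be computed with the `evaluate` library listed under framework versions:

```python
import evaluate

rouge = evaluate.load("rouge")

# Toy example: generated negative prompts vs. reference negative prompts.
predictions = ["(worst quality, low quality:1.4), EasyNegative"]
references = ["(worst quality, low quality:1.4), EasyNegative, bad anatomy"]

scores = rouge.compute(predictions=predictions, references=references)
print(scores)  # {'rouge1': ..., 'rouge2': ..., 'rougeL': ..., 'rougeLsum': ...}
```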

The idea behind this model is to automatically generate a negative prompt that complements the positive prompt and improves the end result. I believe it could be useful for displaying suggestions to new users of stable-diffusion and similar models.

The license is **cc-by-nc-4.0**. For commercial use rights, please contact me ([email protected]).

## Usage

The length of the negative prompt is adjustable with the `max_new_tokens` parameter. `repetition_penalty` and `no_repeat_ngram_size` are both needed, as the model will start to repeat itself very quickly without them. You can also pass `do_sample=True` along with `temperature` and `top_k` to improve the creativity of the outputs (see the sampled variant after the example below).

```python
from transformers import pipeline

# Load the generator (downloads the model on first use).
text2text_generator = pipeline("text2text-generation", model="AdamCodd/t5-small-negative-prompt-generator")

generated_text = text2text_generator(
    "masterpiece, 1girl, looking at viewer, sitting, tea, table, garden",
    max_new_tokens=50,        # caps the length of the negative prompt
    repetition_penalty=1.2,   # penalizes already-generated tokens
    no_repeat_ngram_size=2    # forbids repeated bigrams
)
print(generated_text)
# [{'generated_text': '(worst quality, low quality:1.4), EasyNegative'}]
```
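
For more varied suggestions, you can enable sampling; the values below are illustrative, not tuned settings from the author:

```python
# Sampled variant: do_sample=True is what activates temperature/top_k.
generated_text = text2text_generator(
    "masterpiece, 1girl, looking at viewer, sitting, tea, table, garden",
    max_new_tokens=50,
    repetition_penalty=1.2,
    no_repeat_ngram_size=2,
    do_sample=True,   # enable sampling
    temperature=0.9,  # illustrative value
    top_k=50          # illustrative value
)
print(generated_text)
```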

This model has been trained exclusively on stable-diffusion prompts (SD1.4, SD1.5, SD2.1, SDXL...), so it might not work as well for prompts aimed at non-stable-diffusion models.

NB: the dataset includes negative embeddings (such as EasyNegative), so they can appear in the output, as in the example above.

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a hedged reconstruction sketch follows the list):
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 32
- seed: 42
- optimizer: AdamW with betas=(0.9,0.999) and epsilon=1e-08
- Mixed precision
- num_epochs: 2
- weight_decay: 0.01
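
As an illustration only (a sketch under the stated hyperparameters, not the author's actual training script; data preprocessing and trainer wiring are omitted), these settings map onto `Seq2SeqTrainingArguments` roughly as follows:

```python
from transformers import Seq2SeqTrainingArguments

# Hypothetical reconstruction of the reported hyperparameters.
training_args = Seq2SeqTrainingArguments(
    output_dir="t5-small-negative-prompt-generator",  # assumed output path
    learning_rate=3e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=32,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    fp16=True,             # "Mixed precision"; bf16 is equally plausible
    num_train_epochs=2,
    weight_decay=0.01,
)
```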

### Framework versions

- Transformers 4.36.2
- Datasets 2.16.1
- Tokenizers 0.15.0
- Evaluate 0.4.1

If you want to support me, you can [here](https://ko-fi.com/adamcodd).