This finetune is based on Llama 3 8B and is released under the same license.

The model automatically converts non-natural-language prompts into natural language while preserving the general idea of the prompt, since SD3 performs poorly on non-natural-language prompts.

This model was trained for 1,500 steps on an RTX 3090 over 2.5 hours; training any longer gave diminishing returns because of the low batch size. The dataset contains 90k+ original prompts (non-natural-language inputs) and 220k modified prompts (natural-language rewrites). The model has not been trained on the whole dataset due to limited compute, but the results are already strong.
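A minimal usage sketch with the `transformers` library. The system prompt and chat format below are assumptions, since the card does not document the exact instruction template used during finetuning:

```python
def build_messages(raw_prompt: str) -> list[dict]:
    """Wrap a tag-style SD3 prompt in a chat request.

    The system prompt here is a guess -- the card does not document
    the instruction format the model was trained on.
    """
    return [
        {"role": "system",
         "content": "Rewrite this image prompt as fluent natural language, "
                    "keeping its general idea."},
        {"role": "user", "content": raw_prompt},
    ]


def rewrite_prompt(raw_prompt: str,
                   model_id: str = "matrixglitch/SD3_prompt-llama_8b") -> str:
    # Imported lazily so the prompt-building helper above stays usable
    # without the heavy transformers/torch dependency.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id, torch_dtype="bfloat16", device_map="auto")
    input_ids = tokenizer.apply_chat_template(
        build_messages(raw_prompt),
        add_generation_prompt=True,
        return_tensors="pt").to(model.device)
    output = model.generate(input_ids, max_new_tokens=128)
    # Decode only the newly generated tokens, not the echoed prompt.
    return tokenizer.decode(output[0][input_ids.shape[-1]:],
                            skip_special_tokens=True)
```

A call might look like `rewrite_prompt("1girl, cherry blossoms, masterpiece, 8k")`, returning a natural-language description suitable for SD3.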

Model size: 8.03B params · Tensor type: BF16 · Format: Safetensors

Model tree for matrixglitch/SD3_prompt-llama_8b
