<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>Model README</title>
<style>
body {
background: linear-gradient(-45deg, #0a0a0a, #121212, #1a1a1a);
color: #E0E0E0;
font-family: 'Segoe UI', system-ui;
margin: 0;
padding: 20px;
min-height: 100vh;
animation: gradient 15s ease infinite;
background-size: 400% 400%;
text-align: center;
}
@keyframes gradient {
0% { background-position: 0% 50%; }
50% { background-position: 100% 50%; }
100% { background-position: 0% 50%; }
}
.container {
max-width: 800px;
margin: auto;
}
.model-image {
width: 100%;
border-radius: 12px;
filter: drop-shadow(0 0 10px rgba(255, 255, 255, 0.1));
animation: float 6s ease-in-out infinite;
}
@keyframes float {
0%, 100% { transform: translateY(0); }
50% { transform: translateY(-20px); }
}
.box {
background: rgba(30, 30, 30, 0.9);
border-radius: 12px;
padding: 20px;
margin: 25px 0;
backdrop-filter: blur(10px);
border: 1px solid rgba(255, 255, 255, 0.1);
text-align: left;
}
h2 {
border-left: 4px solid #0ff;
padding-left: 15px;
margin: 0 0 15px 0;
background: linear-gradient(90deg, transparent, rgba(0, 255, 255, 0.1));
text-transform: uppercase;
letter-spacing: 2px;
color: #fff;
}
.yaml-content {
background: #191919;
border-radius: 8px;
padding: 10px;
margin-top: 10px;
font-family: monospace;
white-space: pre-wrap;
color: #E0E0E0;
border-left: 4px solid #0ff;
}
/* Custom Scrollbar */
::-webkit-scrollbar { width: 8px; }
::-webkit-scrollbar-track { background: #121212; }
::-webkit-scrollbar-thumb {
background: #333;
border-radius: 4px;
}
</style>
</head>
<body>
<div class="container">
<img src="https://cdn-uploads.huggingface.co/production/uploads/66c26b6fb01b19d8c3c2467b/tqI2XfovbkA_0ss6IKlPq.png" class="model-image" alt="Model Visualization">
<div class="box">
<h2>πŸ” Overview</h2>
<p>This is the second in a line of models dedicated to creating Stable Diffusion prompts from a given character appearance. Made for the CharGen Project, it has been finetuned on top of Delta-Vector/Holland-4B-V1.</p>
</div>
<div class="box">
<h2>βš–οΈ Quants</h2>
<p>Available quantization formats:</p>
<ul>
<li>GGUF: <a href="https://huggingface.co/mradermacher/SDPrompter4b-GGUF">https://huggingface.co/mradermacher/SDPrompter4b-GGUF</a></li>
<li>EXL2: https://huggingface.co/</li>
</ul>
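<p>If you run the GGUF quants locally, the following is a minimal sketch using llama-cpp-python. The filename is an assumption (substitute whichever quant you downloaded from the repo above), and it relies on the ChatML chat template stored in the GGUF metadata being picked up automatically.</p>
<pre>
# Sketch: running a GGUF quant locally with llama-cpp-python.
# The filename below is an example only; use the quant you actually downloaded.
from llama_cpp import Llama

llm = Llama(model_path="SDPrompter4b.Q4_K_M.gguf", n_ctx=8192)

response = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "Create a prompt for Stable Diffusion based on the information below."},
        {"role": "user", "content": "A knight in weathered plate armour standing in a rainy courtyard."},
    ],
    temperature=1.0,   # recommended sampling settings from the Prompting section
    min_p=0.1,
    max_tokens=256,    # keep the output budget modest to avoid looping
)
print(response["choices"][0]["message"]["content"])
</pre>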
</div>
<div class="box">
<h2>πŸ’¬ Prompting</h2>
<p><strong>Recommended format: ChatML.</strong> Use the following system prompt with the model. Avoid setting a very high output-token limit, as the model tends to loop; use min-p 0.1 and temperature 1 to keep it coherent.</p>
<code>Create a prompt for Stable Diffusion based on the information below.</code>
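<p>A minimal inference sketch with Transformers, assuming the repo id Delta-Vector/SDPrompter4b (inferred from this card), a placeholder character description, and a transformers version recent enough to accept min_p in generate():</p>
<pre>
# Minimal sketch: ChatML prompting via the tokenizer's chat template.
# Repo id and the character description are illustrative assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "Delta-Vector/SDPrompter4b"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(repo, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    repo, torch_dtype=torch.bfloat16, device_map="auto", trust_remote_code=True
)

messages = [
    {"role": "system", "content": "Create a prompt for Stable Diffusion based on the information below."},
    {"role": "user", "content": "A tall elven ranger with silver hair, a green cloak, and a longbow."},
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Keep max_new_tokens modest and sample with temperature 1 / min-p 0.1, per the notes above.
output = model.generate(input_ids, max_new_tokens=256, do_sample=True, temperature=1.0, min_p=0.1)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
</pre>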
</div>
<div class="box">
<h2>🌟 Credits</h2>
<p>Finetuned on 1x RTX 6000 provided by Kubernetes_bad. All credit goes to Kubernetes_bad, LucyKnada, and the rest of Anthracite.</p>
</div>
<div class="box">
<h2>πŸ› οΈ Axolotl Config)</h2>
<pre>
base_model: Delta-Vector/Holland-4B-V1
model_type: AutoModelForCausalLM
tokenizer_type: AutoTokenizer
trust_remote_code: true
load_in_8bit: false
load_in_4bit: false
strict: false
datasets:
  - path: NewEden/CivitAI-Prompts-Sharegpt
type: chat_template
chat_template: chatml
roles_to_train: ["gpt"]
field_messages: conversations
message_field_role: from
message_field_content: value
train_on_eos: turn
dataset_prepared_path:
val_set_size: 0.02
output_dir: ./outputs/out2
sequence_len: 8192
sample_packing: true
eval_sample_packing: false
pad_to_sequence_len: true
plugins:
- axolotl.integrations.liger.LigerPlugin
liger_rope: true
liger_rms_norm: true
liger_swiglu: true
liger_fused_linear_cross_entropy: true
wandb_project: SDprompter-final
wandb_entity:
wandb_watch:
wandb_name: SDprompter-final
wandb_log_model:
gradient_accumulation_steps: 16
micro_batch_size: 1
num_epochs: 4
optimizer: paged_adamw_8bit
lr_scheduler: cosine
learning_rate: 0.00001
train_on_inputs: false
group_by_length: false
bf16: auto
fp16:
tf32: true
gradient_checkpointing: true
gradient_checkpointing_kwargs:
use_reentrant: false
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: true
warmup_ratio: 0.05
evals_per_epoch: 4
saves_per_epoch: 1
debug:
weight_decay: 0.01
special_tokens:
pad_token: <|finetune_right_pad_id|>
eos_token: <|eot_id|>
auto_resume_from_checkpoints: true
</pre>
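<p>For reference, the dataset mapping above (field_messages, message_field_role, message_field_content) expects ShareGPT-style rows. The record below is a hypothetical illustration of that layout, not an actual sample from the dataset:</p>
<pre>
# Hypothetical example of the ShareGPT-style layout the config maps onto ChatML.
# Only turns whose "from" is "gpt" are trained on (roles_to_train: ["gpt"]).
sample = {
    "conversations": [  # field_messages
        {"from": "system",  # message_field_role
         "value": "Create a prompt for Stable Diffusion based on the information below."},
        {"from": "human",
         "value": "Short red hair, freckles, leather jacket, neon city background."},
        {"from": "gpt",  # the completion the model learns to produce (illustrative)
         "value": "1girl, short red hair, freckles, leather jacket, neon lights, cityscape, night, detailed"},
    ]
}
</pre>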
</div>
</div>
</body>
</html>