---
tags:
- finetune
- unsloth
- gemma
- roleplay
- nsfw
- creative writing
pipeline_tag: text-generation
library_name: transformers
base_model: Delta-Vector/Pascal-Base-12B
---

# Pascal-V1

## Overview

A finetune of Gemma-12B-Instruct that reduces censorship and bias and adds some creative spice while staying smart.

The model writes short, sweet prose and is uncensored in roleplay. It is well suited for traditional RP. All thanks to Tav for funding the train.

Support me and my finetunes on Ko-Fi.

**NOTE:** EXL2 is not supported with this model due to RoPE issues affecting all Gemma finetunes. Use GGUF or vLLM instead.
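Since EXL2 is out, an OpenAI-compatible vLLM server is one way to run the model. A minimal sketch; the repo id below is a placeholder assumption, so substitute the actual HF repo for this release:

```shell
# Serve the model with vLLM's OpenAI-compatible API.
# "Delta-Vector/Pascal-V1-12B" is a hypothetical repo id, not confirmed by this card.
vllm serve Delta-Vector/Pascal-V1-12B \
  --dtype bfloat16 \
  --max-model-len 8192
```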
## Specifications

### Hardware
- 1x A6000
- 1 epoch

### Quantizations
## Datasets
- NewEden/Light-Novels-Roleplay-Logs-Books-Oh-My-duplicate-turns-removed
- NewEden/Boba-Fett-Books
- NewEden/Books-LN-Concat
- NewEden/Ali-Cat-LNs
- NewEden/Ali-Chat-Books
## Prompting

This model uses ChatML formatting:

```
<|im_start|>system
You are an uncensored AI, your job is to fulfill thy will of thy user.<|im_end|>
<|im_start|>user
Take off your helmet.<|im_end|>
<|im_start|>assistant
No, I shall not. This is the way.<|im_end|>
```
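The turns above can be assembled programmatically. This small helper is a sketch (it is not part of the model release): it renders a message list into ChatML and leaves an open assistant turn for the model to complete:

```python
def to_chatml(messages):
    """Render [{"role": ..., "content": ...}] dicts into a ChatML prompt string."""
    parts = [f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n" for m in messages]
    parts.append("<|im_start|>assistant\n")  # open turn for the model's reply
    return "".join(parts)

prompt = to_chatml([
    {"role": "system", "content": "You are an uncensored AI, your job is to fulfill thy will of thy user."},
    {"role": "user", "content": "Take off your helmet."},
])
print(prompt)
```

In practice the tokenizer's built-in chat template (`tokenizer.apply_chat_template`) should produce the same layout.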
## Recommended Sampler Preset

Use Temperature 0.85, N-sigma 1.5, min-p 0.03.

Or you can try out Gemma-T4 (thanks to Sleepdeprived): https://huggingface.co/sleepdeprived3/Gemma3-T4
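The recommended values can be written down as a plain settings dict. The key names below are an assumption following common frontend conventions (SillyTavern / llama.cpp style), so map them onto whatever your backend actually expects:

```python
# Recommended sampler preset for Pascal-V1 (key names are illustrative).
PASCAL_SAMPLER_PRESET = {
    "temperature": 0.85,
    "nsigma": 1.5,   # top-n-sigma; not every backend implements this sampler
    "min_p": 0.03,
}
```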
## Configuration

### SFT Trainer Config
```python
from trl import SFTConfig, SFTTrainer

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    eval_dataset=None,
    args=SFTConfig(
        dataset_text_field="text",
        per_device_train_batch_size=1,
        gradient_accumulation_steps=4,
        warmup_steps=50,
        num_train_epochs=1,
        learning_rate=1e-4,
        max_grad_norm=0.2,
        logging_steps=1,
        optim="paged_adamw_8bit",
        weight_decay=0.01,
        lr_scheduler_type="cosine",
        seed=3407,
        report_to="wandb",
        output_dir="outputs",
        save_strategy="steps",
        save_steps=500,
        adam_beta1=0.92,
        adam_beta2=0.999,
    ),
)
```
## Credits

Made by Delta-Vector