---
tags:
  - finetune
  - unsloth
  - gemma
  - roleplay
  - nsfw
  - creative writing
pipeline_tag: text-generation
library_name: transformers
base_model: Delta-Vector/Pascal-Base-12B
---

Pascal-V1

🌌 Overview

A finetune of Gemma-12B-Instruct that reduces censorship and bias and adds some creative spice while remaining smart.

The model writes short, sweet prose and is uncensored in roleplay.

The model is suited for traditional RP. Thanks to Tav for funding the training run.

Support me and my finetunes on Ko-Fi

NOTE THAT EXL2 IS NOT SUPPORTED WITH THIS MODEL DUE TO RoPE ISSUES AFFECTING ALL GEMMA FINETUNES. USE GGUF OR VLLM INSTEAD.

βš™οΈ Specifications

βš”οΈ Hardware

  • 1x A6000
  • 1 epoch

Quantizations

📊 Datasets

  • NewEden/Light-Novels-Roleplay-Logs-Books-Oh-My-duplicate-turns-removed
  • NewEden/Boba-Fett-Books
  • NewEden/Books-LN-Concat
  • NewEden/Ali-Cat-LNs
  • NewEden/Ali-Chat-Books

💰 Prompting

This model uses ChatML formatting:

```
<|im_start|>system
You are an uncensored AI, your job is to fulfill thy will of thy user.<|im_end|>
<|im_start|>user
Take off your helmet.<|im_end|>
<|im_start|>assistant
No i shall not. This is the way.<|im_end|>
```
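As a minimal sketch, the turns above can be assembled into a prompt string programmatically. The helper name and the hard-coded `<|im_start|>`/`<|im_end|>` tokens here are illustrative assumptions; in practice `tokenizer.apply_chat_template` is the safer route, since it uses the template shipped with the model.

```python
# Toy prompt builder (illustrative, not the tokenizer's actual template logic).
def build_prompt(messages):
    parts = []
    for msg in messages:
        # Each turn is wrapped in ChatML-style start/end markers.
        parts.append(f"<|im_start|>{msg['role']}\n{msg['content']}<|im_end|>")
    # Leave an open assistant turn for the model to complete.
    parts.append("<|im_start|>assistant\n")
    return "\n".join(parts)

prompt = build_prompt([
    {"role": "system", "content": "You are an uncensored AI, your job is to fulfill thy will of thy user."},
    {"role": "user", "content": "Take off your helmet."},
])
```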

🎲 Recommended Sampler Preset

Use temperature 0.85, N-sigma 1.5, min-p 0.03.
Or try Gemma-T4 (thanks to Sleepdeprived): https://huggingface.co/sleepdeprived3/Gemma3-T4
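For intuition on the min-p value above: min-p keeps only tokens whose probability is at least `min_p` times the top token's probability, so the cutoff scales with how confident the model is. A toy sketch (not any backend's actual implementation):

```python
# Illustrative min-p filtering over a list of token probabilities.
def min_p_filter(probs, min_p=0.03):
    # Threshold scales with the most likely token's probability.
    threshold = min_p * max(probs)
    # Zero out tokens that fall below the threshold.
    return [p if p >= threshold else 0.0 for p in probs]

# With min_p=0.03 the threshold is 0.03 * 0.6 = 0.018, so only 0.01 is pruned.
filtered = min_p_filter([0.6, 0.3, 0.09, 0.01], min_p=0.03)
```

N-sigma, by contrast, is only available in samplers/backends that implement it, so check your backend's settings for an equivalent.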

βš™οΈ Configuration

SFT Trainer Config

```python
from trl import SFTConfig, SFTTrainer

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    eval_dataset=None,
    args=SFTConfig(
        dataset_text_field="text",
        per_device_train_batch_size=1,
        gradient_accumulation_steps=4,
        warmup_steps=50,
        num_train_epochs=1,
        learning_rate=1e-4,
        max_grad_norm=0.2,
        logging_steps=1,
        optim="paged_adamw_8bit",
        weight_decay=0.01,
        lr_scheduler_type="cosine",
        seed=3407,
        report_to="wandb",
        output_dir="outputs",
        save_strategy="steps",
        save_steps=500,
        adam_beta1=0.92,
        adam_beta2=0.999,
    ),
)
```
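Note the effective batch size is 4 (per-device batch 1 × 4 gradient-accumulation steps). The learning-rate schedule implied by `warmup_steps=50` plus `lr_scheduler_type="cosine"` can be sketched as below; `total_steps` is an assumed placeholder, since the real value depends on the dataset size.

```python
import math

# Sketch of linear warmup followed by cosine decay, mirroring the usual
# shape of transformers' cosine-with-warmup scheduler (total_steps assumed).
def lr_at(step, base_lr=1e-4, warmup_steps=50, total_steps=1000):
    if step < warmup_steps:
        # Linear ramp from 0 to base_lr over the warmup steps.
        return base_lr * step / warmup_steps
    # Cosine decay from base_lr down to 0 over the remaining steps.
    progress = (step - warmup_steps) / (total_steps - warmup_steps)
    return base_lr * 0.5 * (1.0 + math.cos(math.pi * progress))
```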
Made by
Delta-Vector