Model Details

Model Description

  • Uses shenzhi-wang/Gemma-2-9B-Chinese-Chat as the base model, fine-tuned via unsloth on the dataset mentioned below. This makes the model uncensored.

Training Code and Log

Training Procedure Raw Files

  • All training was performed on Runpod.io.

  • Hardware (Vast.ai listing):

    • GPU: 1 x A100 SXM 80 GB

    • CPU: 16 vCPU

    • RAM: 251 GB

    • Disk space to allocate: >150 GB

    • Docker image: runpod/pytorch:2.2.0-py3.10-cuda12.1.1-devel-ubuntu22.04
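Before launching a run on a rented pod, it can help to verify the instance actually meets the specs above. A small shell check (the 150 GB threshold comes from the list; the mount point `/` is an assumption and may differ on your pod):

```shell
# Check free disk space against the >150 GB requirement listed above.
req_disk_gb=150
avail_disk_gb=$(df -BG --output=avail / | tail -1 | tr -dc '0-9')
if [ "$avail_disk_gb" -ge "$req_disk_gb" ]; then
  echo "disk ok: ${avail_disk_gb}G available"
else
  echo "disk too small: ${avail_disk_gb}G available, need ${req_disk_gb}G"
fi
```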

Training Data

Usage

from transformers import pipeline

# This is a chat model, so use the text-generation pipeline.
# (The question-answering pipeline is for extractive QA and also requires a context argument.)
generator = pipeline("text-generation", model="stephenlzc/Gemma-2-9B-Chinese-Chat-Uncensored")
messages = [
    {"role": "user", "content": "How to make my girlfriend laugh? Please answer in Chinese."},
]
print(generator(messages, max_new_tokens=256)[0]["generated_text"])
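Under the hood, Gemma-2 chat models expect the Gemma turn format, which `tokenizer.apply_chat_template` normally produces. A minimal sketch of building that prompt by hand, assuming the standard Gemma template (`build_gemma_prompt` is an illustrative helper, not part of this repo):

```python
def build_gemma_prompt(messages):
    # Gemma turn format: <start_of_turn>{role}\n{content}<end_of_turn>\n
    # Gemma uses the roles "user" and "model" (not "assistant").
    parts = []
    for m in messages:
        role = "model" if m["role"] == "assistant" else "user"
        parts.append(f"<start_of_turn>{role}\n{m['content']}<end_of_turn>\n")
    parts.append("<start_of_turn>model\n")  # cue the model to generate its turn
    return "".join(parts)

prompt = build_gemma_prompt(
    [{"role": "user", "content": "How to make my girlfriend laugh? Please answer in Chinese."}]
)
```

In practice, prefer `tokenizer.apply_chat_template(messages, add_generation_prompt=True)` so the template always matches the tokenizer shipped with the model.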

Model size: 9.24B params (Safetensors)
Tensor type: BF16
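As a rough sanity check against the hardware above: 9.24B parameters stored in BF16 (2 bytes each) occupy about 17 GiB for the weights alone, before activations, gradients, or optimizer state, which is why an 80 GB A100 is a comfortable fit:

```python
# Estimate weight memory for a 9.24B-parameter model in BF16.
params = 9.24e9
bytes_per_param = 2  # BF16 stores each parameter in 2 bytes
weights_gib = params * bytes_per_param / 1024**3
print(f"{weights_gib:.1f} GiB")  # ~17.2 GiB for weights only
```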

Model tree for themex1380/Gemma-2-9B-Chinese-Chat-Uncensored

Base model: google/gemma-2-9b
Quantized versions of this model: 12
Dataset used to train themex1380/Gemma-2-9B-Chinese-Chat-Uncensored