Model Card for gemma-2-2b-elyza-tasks-sft

This model is a fine-tuned version of google/gemma-2-2b. It has been trained using TRL.

A model built for the final assignment of the Matsuo Lab LLM Course 2024 (松尾研LLM講座2024).

How to run inference

The script below loads the LoRA adapter with 4-bit quantization, generates a response for each task in elyza-tasks-100-TV_0.jsonl, and writes the results to submission.jsonl.

# /// script
# requires-python = ">=3.10"
# dependencies = [
#     "transformers[torch]",
#     "datasets",
#     "peft",
#     "bitsandbytes<0.44",
# ]
# ///
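# The block above is PEP 723 inline script metadata; with uv installed,
# the whole file can be run as `uv run <this-file>.py` (any filename works).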

# On Google Colab, uncomment these lines to set your Hugging Face token
# (google/gemma-2-2b is a gated model, so authentication is required):
# import os
# from google.colab import userdata
# os.environ["HF_TOKEN"] = userdata.get("HF_TOKEN")

import torch
from datasets import load_dataset
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer, BitsAndBytesConfig

# Quantize the model to 4-bit NF4, computing in bfloat16
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model_id = "ftnext/gemma-2-2b-elyza-tasks-sft"

tokenizer = AutoTokenizer.from_pretrained(model_id)
tokenizer.pad_token = tokenizer.eos_token  # use the EOS token for padding

# Load the 4-bit-quantized base model and attach the LoRA adapter on GPU 0
peft_model = AutoPeftModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map={"": 0},
)

# Expects the course's elyza-tasks-100-TV_0.jsonl evaluation file in the working directory
dataset = load_dataset("json", data_files="./elyza-tasks-100-TV_0.jsonl", split="train")

response_format = "### 応答:\n"


def format_prompt(task_input):
    # Alpaca-style Japanese instruction template
    return f"以下は、タスクを説明する指示です。要求を適切に満たす応答を書きなさい。\n\n### 指示:\n{task_input}\n\n{response_format}"


@torch.no_grad  # no gradients are needed for generation
def infer(example):
    prompt = format_prompt(example["input"])
    inputs = tokenizer(prompt, return_tensors="pt").to("cuda:0")
    model_output = peft_model.generate(**inputs, max_new_tokens=150)
    output = tokenizer.decode(model_output[0], skip_special_tokens=True)
    # The decoded text starts with the prompt; keep only the generated continuation
    return {**example, "output": output[len(prompt) :]}


# Generate a response for every task in the dataset
inferred_ds = dataset.map(infer)

# force_ascii=False keeps the Japanese text human-readable in the output file
inferred_ds.to_json("submission.jsonl", force_ascii=False)
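
Running the script writes submission.jsonl, one JSON object per line containing the original task fields plus the model's generated output.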

Training procedure

This model was trained with supervised fine-tuning (SFT). The full training code and hyperparameters are in the notebook:
https://github.com/ftnext/practice-dl-nlp/blob/552dda69387b53f825bd3b560f4d2e6252cc43b0/llmjp/fine_tuning/gemma_2_2b_elyza_tasks_sft.ipynb
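
For orientation, here is a minimal sketch of what such a run looks like with TRL's SFTTrainer. The QLoRA-style setup (4-bit base model plus a LoRA adapter) is assumed from the inference script above; the LoRA settings, sequence length, and the toy dataset are illustrative placeholders, and the actual values and data are in the linked notebook.

import torch
from datasets import Dataset
from peft import LoraConfig
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from trl import SFTConfig, SFTTrainer

base_model_id = "google/gemma-2-2b"

# Same 4-bit NF4 quantization as at inference time
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(base_model_id)
tokenizer.pad_token = tokenizer.eos_token

model = AutoModelForCausalLM.from_pretrained(
    base_model_id,
    quantization_config=bnb_config,
    device_map={"": 0},
)

# Illustrative LoRA settings; the real ones are in the notebook
peft_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
)

# Toy stand-in for the real instruction data (see the notebook);
# each row holds the full prompt-plus-response text in the template above
train_dataset = Dataset.from_list([
    {"text": "以下は、タスクを説明する指示です。要求を適切に満たす応答を書きなさい。\n\n### 指示:\n挨拶してください。\n\n### 応答:\nこんにちは!"},
])

trainer = SFTTrainer(
    model=model,
    args=SFTConfig(
        output_dir="gemma-2-2b-elyza-tasks-sft",
        dataset_text_field="text",
        max_seq_length=512,
    ),
    train_dataset=train_dataset,
    processing_class=tokenizer,
    peft_config=peft_config,
)
trainer.train()
trainer.save_model()  # saves the LoRA adapter to output_dir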

Framework versions

  • TRL: 0.13.0
  • Transformers: 4.46.3
  • PyTorch: 2.5.1+cu121
  • Datasets: 3.2.0
  • Tokenizers: 0.20.3

Citations

Cite TRL as:

@misc{vonwerra2022trl,
    title        = {{TRL: Transformer Reinforcement Learning}},
    author       = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
    year         = 2020,
    journal      = {GitHub repository},
    publisher    = {GitHub},
    howpublished = {\url{https://github.com/huggingface/trl}}
}