Uploaded model

  • Developed by: taka-too
  • License: CC-BY-NC-SA-4.0
  • Finetuned from model: llm-jp/llm-jp-3-13b
  • Training Dataset: Ichikara Instruction (LLM-jp)

This model, built on the Llama-architecture base model llm-jp/llm-jp-3-13b, has been fine-tuned for enhanced instruction-following using the Ichikara Instruction dataset provided by LLM-jp. Training ran about 2x faster using Unsloth together with Hugging Face's TRL library.

Satoshi Sekine, Maya Ando, Michiko Goto, Hisami Suzuki, Daisuke Kawahara, Naoya Inoue, and Kentaro Inui. ichikara-instruction: Constructing Japanese Instruction Data for LLMs. In Proceedings of the 30th Annual Meeting of the Association for Natural Language Processing (2024).
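
For reference, here is a minimal sketch of what an Unsloth + TRL fine-tuning setup of this kind looks like. All hyperparameters, the LoRA configuration, the prompt template, and the placeholder dataset record are illustrative assumptions, not the exact training recipe; argument names also vary across TRL versions.

from unsloth import FastLanguageModel
from trl import SFTTrainer
from transformers import TrainingArguments
from datasets import Dataset

# Load the base model in 4-bit (QLoRA-style) via Unsloth
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="llm-jp/llm-jp-3-13b",
    max_seq_length=2048,
    load_in_4bit=True,
)

# Attach LoRA adapters; rank and target modules are illustrative
model = FastLanguageModel.get_peft_model(
    model, r=16, target_modules=["q_proj", "k_proj", "v_proj", "o_proj"]
)

# Placeholder record; replace with the Ichikara Instruction data, which is
# distributed by LLM-jp on application. The "### 指示 / ### 回答"
# instruction/response template is an assumed prompt format.
dataset = Dataset.from_list(
    [{"text": "### 指示\n日本の首都はどこですか?\n### 回答\n東京です。"}]
)

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",
    max_seq_length=2048,
    args=TrainingArguments(
        per_device_train_batch_size=2,
        gradient_accumulation_steps=4,
        num_train_epochs=1,
        learning_rate=2e-4,
        output_dir="outputs",
    ),
)
trainer.train()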

How to Use the Model

You can load the model via the Hugging Face transformers library:

from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

tokenizer = AutoTokenizer.from_pretrained("taka-too/llm-jp-3-13b-it")
# bf16 weights plus device_map="auto" keep the 13B model within GPU memory
model = AutoModelForCausalLM.from_pretrained(
    "taka-too/llm-jp-3-13b-it", torch_dtype=torch.bfloat16, device_map="auto"
)
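
Once loaded, text can be generated as in the sketch below. The instruction/response template is an assumption carried over from the training sketch above; adjust it to match the format the model was actually fine-tuned with.

prompt = "### 指示\n日本の首都はどこですか?\n### 回答\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    output_ids = model.generate(**inputs, max_new_tokens=128, do_sample=False)
# Decode only the newly generated tokens, skipping the prompt
print(tokenizer.decode(output_ids[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))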
