
japanese-gpt2-xsmall

This repository provides an extra-small-sized Japanese GPT-2 model. The model was trained using code from the GitHub repository rinnakk/japanese-pretrained-models by rinna Co., Ltd.

How to use the model

from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("rinna/japanese-gpt2-xsmall", use_fast=False)
tokenizer.do_lower_case = True  # workaround for a bug in tokenizer config loading: do_lower_case is not applied automatically

model = AutoModelForCausalLM.from_pretrained("rinna/japanese-gpt2-xsmall")
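
A minimal generation sketch using the tokenizer and model loaded above (the prompt and decoding settings are illustrative, not recommended values):

import torch

text = "こんにちは、"  # illustrative prompt
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    output_ids = model.generate(
        **inputs,
        max_new_tokens=30,   # generate up to 30 new tokens
        do_sample=True,      # sample instead of greedy decoding
        top_p=0.95,
        pad_token_id=tokenizer.pad_token_id,
    )
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))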

Model architecture

A 6-layer, 512-hidden-size transformer-based language model.
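
The layer count and hidden size can be checked from the model config (a quick sketch):

from transformers import AutoConfig

config = AutoConfig.from_pretrained("rinna/japanese-gpt2-xsmall")
print(config.n_layer)  # 6 transformer layers
print(config.n_embd)   # 512 hidden size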

Training

The model was trained on Japanese CC-100 and Japanese Wikipedia to optimize a traditional language modelling objective on 8 V100 GPUs for around 4 days. It reaches a perplexity of around 28 on a validation set held out from CC-100.
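
Perplexity on held-out text can be computed from the causal language-modelling loss; a minimal sketch (the evaluation sentence below is illustrative, not the CC-100 validation set):

import torch

eval_text = "日本語の評価用テキストの例です。"  # illustrative held-out text
enc = tokenizer(eval_text, return_tensors="pt")
with torch.no_grad():
    out = model(**enc, labels=enc["input_ids"])  # labels are shifted internally
print(torch.exp(out.loss).item())  # perplexity = exp(average token loss)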

Tokenization

The model uses a SentencePiece-based tokenizer; the vocabulary was trained on Japanese Wikipedia using the official SentencePiece training script.
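
For example, the tokenizer splits raw Japanese text into SentencePiece subword pieces (the sentence is illustrative):

print(tokenizer.tokenize("これは日本語の文章です。"))  # subword pieces
print(tokenizer.encode("これは日本語の文章です。"))    # corresponding token ids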

How to cite

@misc{rinna-japanese-gpt2-xsmall,
    title = {rinna/japanese-gpt2-xsmall},
    author = {Zhao, Tianyu and Sawada, Kei},
    url = {https://huggingface.co/rinna/japanese-gpt2-xsmall}
}

@inproceedings{sawada2024release,
    title = {Release of Pre-Trained Models for the {J}apanese Language},
    author = {Sawada, Kei and Zhao, Tianyu and Shing, Makoto and Mitsui, Kentaro and Kaga, Akio and Hono, Yukiya and Wakatsuki, Toshiaki and Mitsuda, Koh},
    booktitle = {Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)},
    month = {5},
    year = {2024},
    pages = {13898--13905},
    url = {https://aclanthology.org/2024.lrec-main.1213},
    note = {\url{https://arxiv.org/abs/2404.01657}}
}

License

The MIT license
