roberta-large-japanese-aozora-char

Model Description

This is a RoBERTa model pre-trained on 青空文庫 (Aozora Bunko) texts with a character tokenizer. You can fine-tune roberta-large-japanese-aozora-char for downstream tasks such as POS-tagging and dependency-parsing.
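
For POS-tagging, for example, the masked-LM head can be replaced with a token-classification head before fine-tuning. The sketch below is only illustrative; the two-tag label set is a hypothetical placeholder for a real tagset, and the training loop is omitted.

from transformers import AutoTokenizer, AutoModelForTokenClassification

labels = ["NOUN", "VERB"]   # hypothetical placeholder tagset; replace with your own POS tags
tokenizer = AutoTokenizer.from_pretrained("KoichiYasuoka/roberta-large-japanese-aozora-char")
model = AutoModelForTokenClassification.from_pretrained(
    "KoichiYasuoka/roberta-large-japanese-aozora-char",
    num_labels=len(labels),
    id2label=dict(enumerate(labels)),
    label2id={tag: i for i, tag in enumerate(labels)},
)
# fine-tune `model` on character-level labelled data, e.g. with the Trainer API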

How to Use

from transformers import AutoTokenizer, AutoModelForMaskedLM

# load the character-level tokenizer and the model with its masked-LM head
tokenizer = AutoTokenizer.from_pretrained("KoichiYasuoka/roberta-large-japanese-aozora-char")
model = AutoModelForMaskedLM.from_pretrained("KoichiYasuoka/roberta-large-japanese-aozora-char")
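
To sanity-check the masked-LM head with the tokenizer and model loaded above, one character can be replaced by the mask token. The example sentence below is an arbitrary illustration, not taken from the original card.

import torch

text = "日本語の" + tokenizer.mask_token + "章"   # arbitrary sentence with one masked character
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
# find the masked position and print the highest-scoring character for it
mask_index = (inputs.input_ids[0] == tokenizer.mask_token_id).nonzero(as_tuple=True)[0]
predicted_id = outputs.logits[0, mask_index].argmax(dim=-1)
print(tokenizer.decode(predicted_id))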

Reference

Koichi Yasuoka: Construction of Japanese dependency-parsing models with Transformers and NINJAL Long Unit Words, IPSJ SIG Technical Reports, Vol. 2022-CH-128, No. 7 (February 2022), pp. 1-8.
