deberta-base-belarusian

Model Description

This is a DeBERTa (V2) model pre-trained on Belarusian Wikipedia and CC-100 texts. You can fine-tune deberta-base-belarusian for downstream tasks such as POS tagging and dependency parsing; a minimal fine-tuning sketch appears at the end of this card.

How to Use

# Load the tokenizer and the masked-language-model head
from transformers import AutoTokenizer, AutoModelForMaskedLM
tokenizer = AutoTokenizer.from_pretrained("KoichiYasuoka/deberta-base-belarusian")
model = AutoModelForMaskedLM.from_pretrained("KoichiYasuoka/deberta-base-belarusian")
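
To sanity-check the model, you can predict a masked token directly. This is a minimal sketch: the Belarusian example sentence is only an illustration, and the mask token is assumed to be whatever the tokenizer reports as tokenizer.mask_token.

import torch

text = "Я жыву ў горадзе [MASK]."  # hypothetical example sentence: "I live in the city [MASK]."
inputs = tokenizer(text.replace("[MASK]", tokenizer.mask_token), return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
# Locate the mask position and take the five most likely fillers
mask_pos = (inputs.input_ids == tokenizer.mask_token_id).nonzero(as_tuple=True)[1]
top5 = logits[0, mask_pos].topk(5).indices[0]
print(tokenizer.convert_ids_to_tokens(top5.tolist()))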
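
For downstream tasks such as POS tagging, you can swap in a token-classification head. The sketch below is an outline under stated assumptions, not this model's official recipe: the label list is a placeholder, and you would still need word-aligned training data and a transformers.Trainer loop of your own.

from transformers import AutoModelForTokenClassification

labels = ["NOUN", "VERB", "ADJ", "ADP", "PUNCT"]  # placeholder UPOS subset, not shipped with the model
pos_model = AutoModelForTokenClassification.from_pretrained(
    "KoichiYasuoka/deberta-base-belarusian",
    num_labels=len(labels),
    id2label=dict(enumerate(labels)),
    label2id={label: i for i, label in enumerate(labels)},
)
# pos_model now carries a randomly initialized classification head;
# fine-tune it on labeled Belarusian text before use.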