ruRoberta-large

The model architecture design, pretraining, and evaluation are documented in our preprint: A Family of Pretrained Transformer Language Models for Russian.

The model was pretrained by the SberDevices team.

  • Task: mask filling
  • Type: encoder
  • Tokenizer: BBPE
  • Dict size: 50 257
  • Num Parameters: 355 M
  • Training Data Volume: 250 GB
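
Below is a minimal usage sketch for the mask-filling task via the Hugging Face transformers library. The checkpoint name matches this card; the mask token (<mask>) is assumed to follow the standard RoBERTa convention, and the example sentence is illustrative only.

from transformers import pipeline

# Load the checkpoint for mask filling, the task this encoder is pretrained for.
fill_mask = pipeline("fill-mask", model="ai-forever/ruRoberta-large")

# Illustrative prompt: "The capital of Russia is <mask>."
# The <mask> token is assumed here; check the tokenizer's mask_token if unsure.
for prediction in fill_mask("Столица России — <mask>."):
    print(prediction["token_str"], round(prediction["score"], 3))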

Authors

Cite us

@misc{zmitrovich2023family,
      title={A Family of Pretrained Transformer Language Models for Russian}, 
      author={Dmitry Zmitrovich and Alexander Abramov and Andrey Kalmykov and Maria Tikhonova and Ekaterina Taktasheva and Danil Astafurov and Mark Baushenko and Artem Snegirev and Tatiana Shavrina and Sergey Markov and Vladislav Mikhailov and Alena Fenogenova},
      year={2023},
      eprint={2309.10931},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}