---
license: cc-by-4.0
language: te
---
## TeluguBERT-Scratch
TeluguBERT is a Telugu BERT model trained from scratch on publicly available Telugu monolingual datasets.
Preliminary details on the dataset, models, and baseline results can be found in our [paper](https://arxiv.org/abs/2211.11418) ([pdf](http://dx.doi.org/10.13140/RG.2.2.14606.84809)).
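As a usage sketch (not part of the original card), the model can be loaded for masked-token prediction with Hugging Face `transformers`. The Hub identifier `l3cube-pune/telugu-bert-scratch` below is an assumption for illustration; substitute the identifier shown on this model's Hub page.

```python
# Minimal sketch: load TeluguBERT for masked language modeling.
# NOTE: the default repository id is a guess -- replace it with the
# actual Hub identifier of this model.

def load_telugu_bert(name: str = "l3cube-pune/telugu-bert-scratch"):
    # Imported lazily so the sketch can be parsed without transformers installed.
    from transformers import AutoModelForMaskedLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(name)
    model = AutoModelForMaskedLM.from_pretrained(name)
    return tokenizer, model
```

The returned pair can then be passed to a `fill-mask` pipeline to predict masked Telugu tokens.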
```
@article{joshi2022l3cubehind,
title={L3Cube-HindBERT and DevBERT: Pre-Trained BERT Transformer models for Devanagari based Hindi and Marathi Languages},
author={Joshi, Raviraj},
journal={arXiv preprint arXiv:2211.11418},
year={2022}
}
```