---
language: ta
---
|
|
|
# TaMillion
|
|
|
This is a first attempt at a Tamil language model trained with Google Research's [ELECTRA](https://github.com/google-research/electra).
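
For feature extraction or fine-tuning, the checkpoint should load like any other ELECTRA model through Hugging Face `transformers`. A minimal sketch, assuming the hub ID is `monsoon-nlp/tamillion` (substitute the actual ID if it differs):

```python
# Load the encoder and tokenizer; the hub ID below is an assumption.
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("monsoon-nlp/tamillion")
model = AutoModel.from_pretrained("monsoon-nlp/tamillion")

# Encode a Tamil sentence and pull contextual embeddings.
inputs = tokenizer("தமிழ் ஒரு செம்மொழி.", return_tensors="pt")
outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # (1, sequence_length, hidden_size)
```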
|
|
|
Tokenization and pre-training Colab notebook: https://colab.research.google.com/drive/1GngBFn_Ge5Hd2XI2febBhZyU7GDiqw5w
|
|
|
V2 (current): 190,000 training steps (V1: 100,000 steps)
|
|
|
## Classification
|
|
|
Sudalai Rajkumar's Tamil-NLP page contains classification and regression tasks: https://www.kaggle.com/sudalairajkumar/tamil-nlp
|
|
|
Notebook: https://colab.research.google.com/drive/1_rW9HZb6G87-5DraxHvhPOzGmSMUc67_?usp=sharing
|
|
|
The model outperformed mBERT on news classification (accuracy: Random 16.7%, mBERT 53.0%, TaMillion 69.6%).
|
|
|
The model slightly outperformed mBERT on movie review score regression (RMSE, lower is better: mBERT 0.657, TaMillion 0.627).
|
|
|
Accuracy on the Tirukkural topic task was equivalent to mBERT's.
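
The full fine-tuning setup lives in the notebook above; as a rough sketch, one could attach a classification head with `ElectraForSequenceClassification`. The hub ID and the six-class label count are assumptions (a 16.7% random baseline suggests six categories), and the data below is a placeholder:

```python
# Minimal fine-tuning sketch for the news classification task.
import torch
from transformers import AutoTokenizer, ElectraForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("monsoon-nlp/tamillion")
model = ElectraForSequenceClassification.from_pretrained(
    "monsoon-nlp/tamillion", num_labels=6  # assumed class count
)

texts = ["செய்தி உரை இங்கே"]  # placeholder news article text
labels = torch.tensor([0])      # placeholder integer category id

enc = tokenizer(texts, truncation=True, padding=True, return_tensors="pt")
out = model(**enc, labels=labels)
out.loss.backward()  # plug this step into an optimizer loop or the Trainer API
```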
|
|
|
## Question Answering
|
|
|
I didn't find a Tamil-language question answering dataset, but this model could be used to train a QA model. See Hindi and Bengali examples here: https://colab.research.google.com/drive/1i6fidh2tItf_-IDkljMuaIGmEU6HT2Ar
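
As in those notebooks, the recipe would be to fine-tune a QA head on top of this checkpoint, for example on translated SQuAD-style data. A hypothetical sketch (hub ID assumed; the span predictions are meaningless until the head is fine-tuned):

```python
# Extractive QA sketch; the QA head is untrained until fine-tuned
# on Tamil question-answer pairs.
from transformers import AutoTokenizer, ElectraForQuestionAnswering

tokenizer = AutoTokenizer.from_pretrained("monsoon-nlp/tamillion")
model = ElectraForQuestionAnswering.from_pretrained("monsoon-nlp/tamillion")

question = "தமிழ்நாட்டின் தலைநகரம் எது?"          # "What is the capital of Tamil Nadu?"
context = "தமிழ்நாட்டின் தலைநகரம் சென்னை ஆகும்."  # "Chennai is the capital of Tamil Nadu."
inputs = tokenizer(question, context, return_tensors="pt")
outputs = model(**inputs)

# Pick the highest-scoring answer span.
start = outputs.start_logits.argmax()
end = outputs.end_logits.argmax()
print(tokenizer.decode(inputs["input_ids"][0][start : end + 1]))
```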
|
|
|
## Corpus
|
|
|
Trained on a web crawl from https://oscar-corpus.com/ (deduplicated version, 5.1GB) and the 1 July 2020 dump of ta.wikipedia.org (476MB).
|
|
|
## Vocabulary
|
|
|
The vocabulary is included as vocab.txt in the upload (vocab_size: 40161).
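
A quick way to sanity-check the vocabulary once the tokenizer files are downloaded (hub ID again assumed):

```python
# Confirm the tokenizer's vocabulary size and inspect WordPiece splits.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("monsoon-nlp/tamillion")
print(tokenizer.vocab_size)             # expected: 40161
print(tokenizer.tokenize("தமிழ்மொழி"))  # how a Tamil word is segmented
```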
|
|