---
language: ta
---

# TaMillion

This is a first attempt at a Tamil language model trained with
Google Research's [ELECTRA](https://github.com/google-research/electra).

Tokenization and pre-training Colab: https://colab.research.google.com/drive/1GngBFn_Ge5Hd2XI2febBhZyU7GDiqw5w

V2 (current): 190,000 training steps (V1 was 100,000 steps)
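
For quick experimentation, here is a minimal usage sketch with Hugging Face Transformers. The repo id below is a hypothetical placeholder for this upload; substitute the actual id:

```python
from transformers import AutoTokenizer, AutoModel

model_id = "monsoon-nlp/tamillion"  # hypothetical repo id; use this upload's actual id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModel.from_pretrained(model_id)

# Encode a Tamil sentence and take the [CLS] hidden state as a sentence vector
inputs = tokenizer("தமிழ் ஒரு செம்மொழி", return_tensors="pt")
outputs = model(**inputs)
sentence_vector = outputs.last_hidden_state[:, 0]  # shape: (1, hidden_size)
```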

## Classification

Sudalai Rajkumar's Tamil-NLP page contains classification and regression tasks:
https://www.kaggle.com/sudalairajkumar/tamil-nlp

Notebook: https://colab.research.google.com/drive/1_rW9HZb6G87-5DraxHvhPOzGmSMUc67_?usp=sharing

The model outperformed mBERT on news classification
(accuracy: random baseline 16.7%, mBERT 53.0%, TaMillion 69.6%).

The model slightly outperformed mBERT on movie review rating regression
(RMSE, lower is better: mBERT 0.657, TaMillion 0.627).

Accuracy was equivalent to mBERT's on the Tirukkural topic task.
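
As a reference point, a hedged sketch of how the news-classification fine-tuning could look with the Transformers `Trainer`; the repo id is a placeholder, and the two-row dataset stands in for the Kaggle CSV (six classes assumed, matching the 16.7% random baseline):

```python
from datasets import Dataset
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          Trainer, TrainingArguments)

model_id = "monsoon-nlp/tamillion"  # hypothetical repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id, num_labels=6)

# Stub data; in practice, load the Tamil-NLP news CSV from Kaggle here
train_ds = Dataset.from_dict({
    "text": ["தமிழ்நாடு செய்தி எடுத்துக்காட்டு", "சினிமா செய்தி எடுத்துக்காட்டு"],
    "label": [0, 1],
})

def tokenize(batch):
    # Fixed-length padding so the default data collator can batch the examples
    return tokenizer(batch["text"], truncation=True,
                     padding="max_length", max_length=256)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="tamillion-news", num_train_epochs=3,
                           per_device_train_batch_size=16),
    train_dataset=train_ds.map(tokenize, batched=True),
)
trainer.train()
```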

## Question Answering

I didn't find a Tamil-language question-answering dataset, but this model could be used
to train a QA model. See the Hindi and Bengali examples here: https://colab.research.google.com/drive/1i6fidh2tItf_-IDkljMuaIGmEU6HT2Ar
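
A sketch of what attaching an extractive-QA head could look like, following those notebooks; the repo id is a placeholder, and the QA head is randomly initialized until fine-tuned on a SQuAD-format Tamil dataset:

```python
from transformers import AutoTokenizer, AutoModelForQuestionAnswering, pipeline

model_id = "monsoon-nlp/tamillion"  # hypothetical repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
# The span-prediction head starts untrained; answers are random until fine-tuning
model = AutoModelForQuestionAnswering.from_pretrained(model_id)

# After fine-tuning, extractive QA runs as span prediction over a context
qa = pipeline("question-answering", model=model, tokenizer=tokenizer)
answer = qa(question="தலைநகரம் எது?",
            context="இந்தியாவின் தலைநகரம் புது தில்லி.")
print(answer["answer"], answer["score"])
```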

## Corpus

Trained on a web crawl from https://oscar-corpus.com/ (deduplicated version, 5.1 GB) and the 1 July 2020 dump of ta.wikipedia.org (476 MB).
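
If you want to rebuild the crawl portion, the deduplicated Tamil OSCAR shard can be pulled via Hugging Face `datasets` (the config name below assumes the 2019 OSCAR release); the Wikipedia dump is processed separately:

```python
from datasets import load_dataset

# Config name assumed from the OSCAR 2019 release on Hugging Face datasets
oscar_ta = load_dataset("oscar", "unshuffled_deduplicated_ta", split="train")
print(oscar_ta[0]["text"][:200])
```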

## Vocabulary

The vocabulary is included as vocab.txt in the upload; vocab_size is 40161.
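
A quick sanity check of the released vocabulary, assuming vocab.txt has been downloaded to the working directory:

```python
# WordPiece vocab: one token per line, index = line number
with open("vocab.txt", encoding="utf-8") as f:
    vocab = [line.rstrip("\n") for line in f]

print(len(vocab))   # expected: 40161
print(vocab[:10])   # special tokens such as [PAD], [UNK], [CLS], [SEP], [MASK]
```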