keshan committed
Commit
c544656
Parent: f42b10e

Update README.md

Files changed (1): README.md (+26 −1)
README.md CHANGED
@@ -7,4 +7,29 @@ tags:
  datasets:
  - mc4
  ---
- # Sinhala GPT2 trained on MC4 (manually cleaned)
+ # Sinhala GPT2 trained on MC4 (manually cleaned)
+ 
+ ### Overview
+ 
+ This is a smaller GPT2 model trained on the [MC4](https://github.com/allenai/allennlp/discussions/5056) Sinhala dataset. Since Sinhala is a low-resource language, only a handful of models have been trained for it, so this model is a good starting point for further downstream tasks.
+ 
+ This model uses a manually cleaned version of the MC4 dataset, which can be found [here](https://huggingface.co/datasets/keshan/clean-si-mc4). Although the dataset is relatively small (~3GB), the version of this model fine-tuned on [news articles](https://huggingface.co/keshan/sinhala-gpt2-newswire) generates good results, although not amazingly good :).
+ 
+ ## Model Specification
+ 
+ The model chosen for training is GPT2 with the following specifications:
+ 1. vocab_size=50257
+ 2. n_embd=768
+ 3. n_head=12
+ 4. n_layer=12
+ 5. n_positions=1024
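For illustration, a minimal sketch of how this specification maps onto the stock `GPT2Config` in `transformers`; this is an assumption about the setup, not an excerpt from the actual training script:

```py
from transformers import GPT2Config, GPT2LMHeadModel

# Build a configuration matching the listed specification.
config = GPT2Config(
    vocab_size=50257,   # byte-level BPE vocabulary size, as in the original GPT2
    n_embd=768,         # hidden (embedding) size
    n_head=12,          # attention heads per layer
    n_layer=12,         # transformer blocks
    n_positions=1024,   # maximum sequence length
)

# Instantiating from the config gives a randomly initialised model of this shape;
# the released checkpoint is the pre-trained version of the same architecture.
model = GPT2LMHeadModel(config)
```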
+ 
+ ## How to Use
+ 
+ You can use this model directly with a pipeline for causal language modeling:
+ 
+ ```py
+ from transformers import pipeline
+ 
+ # Load a text-generation pipeline backed by the Sinhala GPT2 checkpoint.
+ generator = pipeline('text-generation', model='flax-community/Sinhala-gpt2')
+ 
+ # Sample five continuations of up to 50 tokens from the prompt "මම" ("I").
+ generator("මම", max_length=50, num_return_sequences=5)
+ ```
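Since this is a flax-community checkpoint, it was trained with JAX/Flax. Loading the model and tokenizer directly is sketched below, assuming the repository ships Flax weights (typical for flax-community models):

```py
from transformers import AutoTokenizer, FlaxGPT2LMHeadModel

# Assumes the repo hosts Flax weights, as flax-community checkpoints typically do.
tokenizer = AutoTokenizer.from_pretrained('flax-community/Sinhala-gpt2')
model = FlaxGPT2LMHeadModel.from_pretrained('flax-community/Sinhala-gpt2')

inputs = tokenizer("මම", return_tensors='np')  # numpy tensors work with Flax models
outputs = model(**inputs)
print(outputs.logits.shape)  # (batch, sequence_length, vocab_size)
```

From the logits you can implement your own sampling loop; for simple generation the `pipeline` call above is the easier path.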