---
license: apache-2.0
datasets:
- duongttr/vi-dataset-for-pretrain
language:
- vi
metrics:
- perplexity
pipeline_tag: text-generation
model-index:
- name: chronopt-research/vietnamese-gpt2-medium
  results:
  - task:
      type: text-generation
    metrics:
    - type: perplexity
      value: 17.5948
      verified: true
---

# Vietnamese `gpt2-medium`

This is a `gpt2-medium` model pretrained for Vietnamese using the causal language modeling (CLM) objective. The GPT-2 architecture was introduced in [this paper](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf) and first released at [this page](https://openai.com/blog/better-language-models/).

## Model Description
GPT-2 is a transformer model pretrained on a very large text corpus in a self-supervised fashion. This means it was pretrained on raw texts only, with no human labelling (which is why it can use lots of publicly available data), using an automatic process to generate inputs and labels from those texts. More precisely, it was trained to guess the next word in sentences. While the original GPT-2 was trained on English data, this checkpoint is pretrained on Vietnamese text.

This is the **medium version** of GPT-2, with 380M parameters.

You can find the other pretrained versions here: [gpt2-base](), [gpt2-large]()
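
A minimal usage sketch with the standard `transformers` text-generation pipeline (the prompt below is only an illustrative example):

```python
from transformers import pipeline

# Load the Vietnamese gpt2-medium checkpoint from the Hub
generator = pipeline(
    "text-generation",
    model="chronopt-research/vietnamese-gpt2-medium",
)

# Illustrative prompt: "Hà Nội là thủ đô của" ("Hanoi is the capital of")
outputs = generator("Hà Nội là thủ đô của", max_new_tokens=30, do_sample=True)
print(outputs[0]["generated_text"])
```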

## Dataset used for pretraining
The pretraining corpus is a combination of multiple Vietnamese datasets suitable for pretraining CLMs such as GPT and GPT-2.

It consists of:
- [`vietgpt/covid_19_news_vi`](https://huggingface.co/datasets/vietgpt/covid_19_news_vi)
- [`hieunguyen1053/binhvq-news-corpus`](https://huggingface.co/datasets/hieunguyen1053/binhvq-news-corpus)
- [`oscar (unshuffled_deduplicated_vi)`](https://huggingface.co/datasets/oscar)
- [`vietgpt/wikipedia_vi`](https://huggingface.co/datasets/vietgpt/wikipedia_vi)

You can find the combined version here: [duongttr/vi-dataset-for-pretrain](https://huggingface.co/datasets/duongttr/vi-dataset-for-pretrain)
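
A minimal sketch of loading the combined corpus with the `datasets` library (the split name is an assumption; check the dataset card for the actual configuration):

```python
from datasets import load_dataset

# Stream the combined Vietnamese corpus from the Hub without
# downloading it all at once. The "train" split is an assumption.
dataset = load_dataset(
    "duongttr/vi-dataset-for-pretrain",
    split="train",
    streaming=True,
)

# Peek at the first record
print(next(iter(dataset)))
```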

## Hyperparameters & Results
We trained the model for ~100k steps with `lr=1e-4`, `bs=1920`, and `optimizer=adamw` on a TPU v3-8 VM from the [TRC Program](https://sites.research.google/trc/about/). Training took around **2.5 days**.

|Model|Eval Loss|Eval Perplexity|
|---|---|---|
|gpt2-base|-|-|
|gpt2-medium|2.8676|17.5948|
|gpt2-large|-|-|
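
The reported perplexity is the exponential of the evaluation loss, which is easy to verify:

```python
import math

eval_loss = 2.8676
print(math.exp(eval_loss))  # ≈ 17.59, matching the table above
```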

## Contacts
Feel free to contact us via: [email]()