license: apache-2.0
datasets:
- duongttr/vi-dataset-for-pretrain
language:
- vi
metrics:
- perplexity
pipeline_tag: text-generation
widget:
- text: Việt Nam là quốc gia có
- text: Hoàng Sa, Trường Sa là của
model-index:
- name: chronopt-research/vietnamese-gpt2-medium
  results:
  - task:
      type: text-generation
    metrics:
    - type: perplexity
      value: 17.5948
      verified: true
# Vietnamese gpt2-medium
This is a pretrained gpt2-medium for the Vietnamese language, trained with a causal language modeling (CLM) objective. It was introduced in this paper and first released at this page.
## Model Description
GPT-2 is a transformer model originally pretrained on a very large corpus of English data in a self-supervised fashion. This means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data), with an automatic process to generate inputs and labels from those texts. More precisely, it was trained to guess the next word in sentences.
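For illustration, here is a minimal sketch (not the original training code) of what the CLM objective looks like in the Transformers API: the labels are simply the input ids, and the model shifts them internally so each position is trained to predict the next token.

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

# Minimal sketch of the CLM objective, not the actual pretraining script.
tokenizer = AutoTokenizer.from_pretrained("chronopt-research/vietnamese-gpt2-medium")
model = AutoModelForCausalLM.from_pretrained("chronopt-research/vietnamese-gpt2-medium")

inputs = tokenizer("Việt Nam là quốc gia có", return_tensors="pt")
# For causal LM, labels are the input ids themselves; the model shifts them
# internally so each position predicts the following token.
outputs = model(**inputs, labels=inputs["input_ids"])
print(outputs.loss)  # cross-entropy over next-token predictions
```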
This is the medium version of GPT-2, with 380M parameters.
You can find the other pretrained versions here: gpt2-base, gpt2-large.
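A minimal generation sketch using the Hub id above (the sampling parameters are illustrative, not the settings used for the reported results):

```python
from transformers import pipeline

generator = pipeline("text-generation", model="chronopt-research/vietnamese-gpt2-medium")
# Prompt taken from the widget examples above.
print(generator("Việt Nam là quốc gia có", max_new_tokens=50, do_sample=True, top_k=50))
```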
## Dataset used for pretraining
This is a combination of multiple Vietnamese datasets used for pretraining causal language models such as GPT and GPT-2.
The dataset consists of:
- vietgpt/covid_19_news_vi
- hieunguyen1053/binhvq-news-corpus
- oscar (unshuffled_deduplicated_vi)
- vietgpt/wikipedia_vi
You can find the combined version here: duongttr/vi-dataset-for-pretrain.
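A sketch of loading the combined dataset from the Hub (the split name and the streaming flag are assumptions; check the dataset card for the actual configuration):

```python
from datasets import load_dataset

# "train" split is an assumption; see the dataset card for the real splits/configs.
ds = load_dataset("duongttr/vi-dataset-for-pretrain", split="train", streaming=True)
print(next(iter(ds)))
```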
## Hyperparameters & Results
We trained the model for ~100k steps with `lr=1e-4`, `bs=1920`, and `optimizer=adamw` on a TPU VM v3-8 from the TRC Program. Training took around 2.5 days.
| Model | Eval Loss | Eval Perplexity |
|---|---|---|
| gpt2-base | 3.939 | 51.35 |
| gpt2-medium | 2.8676 | 17.5948 |
| gpt2-large | - | - |
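Eval perplexity is simply the exponential of the mean token-level eval loss, for example:

```python
import math

eval_loss = 2.8676          # gpt2-medium eval loss from the table above
print(math.exp(eval_loss))  # ≈ 17.59, matching the reported eval perplexity
```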
## Contacts
Feel free to contact us via email.