Changed very long lines to short multi-lines to make the README easier to edit in an IDE.
README.md

# GPT2-medium-indonesian

This is a pretrained model on the Indonesian language using a causal language modeling (CLM) objective, which was first introduced in [this paper](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf) and first released at [this page](https://openai.com/blog/better-language-models/).

This model was trained using HuggingFace's Flax framework and is part of the [JAX/Flax Community Week](https://discuss.huggingface.co/t/open-to-the-community-community-week-using-jax-flax-for-nlp-cv/7104) organized by [HuggingFace](https://huggingface.co). All training was done on a TPUv3-8 VM sponsored by the Google Cloud team.

The demo can be found [here](https://huggingface.co/spaces/flax-community/gpt2-indonesian).
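
For reference, below is a minimal sketch of loading the checkpoint with the standard `transformers` API; it assumes the Flax weights published with this repository, and the Indonesian prompt is just an arbitrary illustration.

```python
from transformers import AutoTokenizer, FlaxGPT2LMHeadModel

model_name = "flax-community/gpt2-medium-indonesian"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = FlaxGPT2LMHeadModel.from_pretrained(model_name)

text = "Sewindu sudah kita tak berjumpa,"              # arbitrary Indonesian prompt
encoded_input = tokenizer(text, return_tensors="np")   # Flax models take NumPy/JAX arrays
output = model(**encoded_input)                        # forward pass; next-token logits

print(output.logits.shape)  # (batch_size, sequence_length, vocab_size)
```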
## Limitations and bias
The training data used for this model are Indonesian web pages from [OSCAR](https://oscar-corpus.com/) and [mc4](https://huggingface.co/datasets/mc4), together with Indonesian [Wikipedia](https://huggingface.co/datasets/wikipedia). These datasets contain a lot of unfiltered content from the internet, which is far from neutral. While we have done some filtering on the data (see the **Training data** section), the filtering is by no means a thorough mitigation of the biased content that ends up in the training data. These biases may also affect models that are fine-tuned on top of this model.

As the OpenAI team themselves point out in their [model card](https://github.com/openai/gpt-2/blob/master/model_card.md#out-of-scope-use-cases):

> Because large-scale language models like GPT-2 do not distinguish fact from fiction, we don’t support use-cases that require the generated text to be true.

> Additionally, language models like GPT-2 reflect the biases inherent to the systems they were trained on, so we do not recommend that they be deployed into systems that interact with humans unless the deployers first carry out a study of biases relevant to the intended use-case. We found no statistically significant difference in gender, race, and religious bias probes between 774M and 1.5B, implying all versions of GPT-2 should be approached with similar levels of caution around use cases that are sensitive to biases around human attributes.

## Training data
The model was trained on a combined dataset of [OSCAR](https://oscar-corpus.com/), [mc4](https://huggingface.co/datasets/mc4) and Wikipedia for the Indonesian language. We filtered and reduced the mc4 dataset so that we ended up with 29 GB of data in total. The mc4 dataset was cleaned using [this filtering script](https://github.com/Wikidepia/indonesian_datasets/blob/master/dump/mc4/cleanup.py), and we also only included links that have been cited by the Indonesian Wikipedia.
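
As a rough sketch only, the raw Indonesian subsets could be pulled with the `datasets` library as below; the exact dataset versions, the mc4 cleanup script above, and the Wikipedia-citation filter are not reproduced here.

```python
from datasets import load_dataset, concatenate_datasets

# Hypothetical corpus assembly; the actual pipeline applied the linked cleanup
# script and a Wikipedia-citation filter to mc4 before training.
oscar_id = load_dataset("oscar", "unshuffled_deduplicated_id", split="train")
mc4_id = load_dataset("mc4", "id", split="train")

# Keep only the raw text column so the two sources share the same schema.
oscar_id = oscar_id.remove_columns([c for c in oscar_id.column_names if c != "text"])
mc4_id = mc4_id.remove_columns([c for c in mc4_id.column_names if c != "text"])

corpus = concatenate_datasets([oscar_id, mc4_id])
print(corpus)
```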
## Training procedure
The model was trained on a TPUv3-8 VM provided by the Google Cloud team. The training duration was `6d 3h 7m 26s`.

The model achieves the following results without any fine-tuning (zero-shot):

| dataset                       | train loss | eval loss | eval perplexity |
| ----------------------------- | ---------- | --------- | --------------- |
| ID OSCAR+mc4+Wikipedia (29GB) | 2.79       | 2.696     | 14.826          |
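
As a quick sanity check on the table, the eval perplexity of a causal language model is the exponential of its cross-entropy eval loss:

```python
import math

eval_loss = 2.696
perplexity = math.exp(eval_loss)  # perplexity = exp(cross-entropy loss)
print(round(perplexity, 3))       # ~14.82, i.e. the reported 14.826 up to rounding of the loss
```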
### Tracking
The training process was tracked in [TensorBoard](https://huggingface.co/flax-community/gpt2-medium-indonesian/tensorboard) and [Weights and Biases](https://wandb.ai/wandb/hf-flax-gpt2-indonesian?workspace=user-cahya).

## Team members
- Galuh Sahid ([@Galuh](https://huggingface.co/Galuh))
- Muhammad Agung Hambali ([@AyameRushia](https://huggingface.co/AyameRushia))
- Muhammad Fhadli ([@muhammadfhadli](https://huggingface.co/muhammadfhadli))
- Samsul Rahmadani ([@munggok](https://huggingface.co/munggok))

## Future work

We would like to further pre-train the models with larger and cleaner datasets and to fine-tune them to specific domains if we can get the necessary hardware resources.
|