This is the ```cleaned``` variant of the HPLT Datasets v2.0, converted semi-automatically to the Parquet format when uploaded here.
The original JSONL files (which take ~4x less disk space than this HF version) and the larger non-cleaned version can be found at https://hplt-project.org/datasets/v2.0.
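
The Parquet subsets can be read directly with the 🤗 `datasets` library. A minimal sketch, assuming the language subsets are exposed as configs named by language code (e.g. `eng_Latn`) and that each document carries a `text` field:

```python
from datasets import load_dataset

# Stream one language subset of the cleaned variant without
# downloading all Parquet shards up front.
# NOTE: the config name and the "text" field are assumptions here.
ds = load_dataset("HPLT/HPLT2.0_cleaned", "eng_Latn",
                  split="train", streaming=True)

for doc in ds:
    print(doc["text"][:200])  # first 200 characters of the first document
    break
```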

**Dataset Performance**

***External Evaluation***

The HuggingFace team has [compared the utility of various multilingual corpora for training large language models in their FineWeb2 initiative](https://huggingface.co/datasets/HuggingFaceFW/fineweb-2).
They found that the HPLT v2 datasets rank just behind their own FineWeb 2 and are on par with the CulturaX dataset, as shown in this figure produced by HuggingFace:

<img src="https://huggingface.co/datasets/HuggingFaceFW/admin/resolve/main/multilingual_datasets_comparison.png" width="800" height="800" />

This is a massive improvement compared to the HPLT v1 datasets, as can be seen on the plot above.
In fact, it’s even better: if one looks at the language-specific results, it becomes clear that on Arabic, Hindi, Russian, Thai and Turkish (5 out of the 9 languages HuggingFace evaluated on), [HPLT v2 is on par with or better than FineWeb 2](https://huggingface.co/datasets/HuggingFaceFW/fineweb-2#comparison-with-other-datasets).
The average score is lower mostly because of Chinese, so we have some work ahead for this language!
Note that the source of the FineWeb 2 (and CulturaX) data is exclusively CommonCrawl, while the HPLT datasets are to a large extent composed of Internet Archive crawls.
Thus, **FineWeb 2 and HPLT v2 are complementary to each other and should be used together**.

***Internal Evaluation***

We also conducted FineWeb-style evaluations within the HPLT project, for now limited to English.
These evaluations confirmed HuggingFace's findings: the HPLT v2 datasets are of much better quality than the HPLT v1.2 data, which was released almost a year ago.

We replicated the FineWeb evaluation setting, training large language models with the same architecture and pretraining configuration (1.82B parameters, Llama architecture with a sequence length of 2048 tokens, the GPT-2 tokenizer, and a global batch size of ~2 million tokens), with the only difference between the models being the training data.
We randomly sampled approximately 100B tokens from different versions of HPLT as well as from FineWeb, and trained a separate model on each of these datasets.
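
In short, the shared training recipe looks like the sketch below; the dictionary keys are illustrative stand-ins, not actual GPT-NeoX configuration options.

```python
# Shared pretraining setup for every comparison model; only the training
# data differs. Keys are illustrative, not real GPT-NeoX config options.
PRETRAIN_CONFIG = {
    "architecture": "llama",
    "n_parameters": 1.82e9,                 # 1.82B parameters
    "sequence_length": 2048,
    "tokenizer": "gpt2",
    "global_batch_size_tokens": 2_000_000,  # ~2M tokens per global batch
    "train_tokens": 100_000_000_000,        # ~100B tokens sampled per dataset
}
```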

Each model was trained with the GPT-NeoX framework on 8 nodes of the LUMI cluster, each node having 4 AMD MI250X GPUs.
For evaluation, we used HuggingFace's LightEval in a zero-shot setting with the tasks ARC (Easy and Challenge), HellaSwag, PIQA, and OpenBookQA.
The figure below shows the macro average of the acc_norm values for these evaluations.
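
Here "macro average" means the unweighted mean of the per-task acc_norm scores. A minimal sketch with made-up numbers:

```python
# Hypothetical per-task acc_norm scores (illustrative numbers only,
# not results from the actual evaluation).
acc_norm = {
    "arc_easy": 0.61,
    "arc_challenge": 0.31,
    "hellaswag": 0.45,
    "piqa": 0.70,
    "openbookqa": 0.33,
}

# Macro average: unweighted mean over tasks, as plotted below.
macro_avg = sum(acc_norm.values()) / len(acc_norm)
print(f"macro avg acc_norm = {macro_avg:.3f}")
```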

<img src="https://huggingface.co/datasets/HPLT/HPLT2.0_cleaned/resolve/3c6ded1865c1918b899ea8634897f4f6fc5a20b6/english-comparison-datasets-by-HPLT.png" width="800" height="800" />

***Languages***

The ```cleaned``` version of HPLT Datasets v2.0 consists of subsets corresponding to 191 language codes.
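
These subsets can be enumerated programmatically; a minimal sketch, assuming each language code is exposed as a dataset config:

```python
from datasets import get_dataset_config_names

# List all language-code subsets of the cleaned variant (191 expected).
configs = get_dataset_config_names("HPLT/HPLT2.0_cleaned")
print(len(configs))
print(configs[:5])
```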