Update README.md
README.md CHANGED

This dataset has Wikipedia articles pertaining to the Indian context.

### Dataset Description

<!-- Provide a longer summary of what this dataset is. -->

The dataset is built from Wikipedia articles taken from [wikimedia/wikipedia](https://huggingface.co/datasets/wikimedia/wikipedia).
We filtered, cleaned, and translated English articles related to India and the Indian context out of the entire dataset.

Each example has the contents of a full cleaned Wikipedia article and its translations in 6 Indian languages.

- **Curated by:** [Soket AI Labs](https://soket.ai/)

The dataset is focussed on Indian factual content for pre-training LLMs where In…

## Dataset Structure

<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->

Total number of rows: 200,820

It has approximately **1.54** billion tokens across all languages, with a roughly similar number of tokens for each language when tokenized with the Indic tokenizer we created, which can be found in our model repository [Pragna-1b](https://huggingface.co/soketlabs/pragna-1b).
Here are the token counts for each language:
- English: 196.2 million
- Hindi: 225 million
- Bengali: 286.2 million
- Gujarati: 204 million
- Tamil: 231.3 million
- Kannada: 201.3 million
- Urdu: 204.9 million

These numbers were extrapolated from calculations on a randomly sampled 10% of the dataset.

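For reference, this kind of extrapolation can be sketched as below. This is not the exact script used for the numbers above; the dataset repo id is a placeholder, and loading the tokenizer directly from the Pragna-1b repository is an assumption.

```python
from datasets import load_dataset
from transformers import AutoTokenizer

# Assumption: the Indic tokenizer can be loaded from the Pragna-1b model repository.
tokenizer = AutoTokenizer.from_pretrained("soketlabs/pragna-1b")

# Placeholder repo id; replace with this dataset's actual Hugging Face id.
ds = load_dataset("soketlabs/<this-dataset>", split="train")

# Tokenize a 10% random sample and scale the counts to the full dataset.
sample = ds.shuffle(seed=42).select(range(len(ds) // 10))
for col in ["eng_Latn", "hin_Deva", "ben_Beng", "guj_Gujr", "tam_Taml", "kan_Knda", "urd_Arab"]:
    sample_tokens = sum(len(tokenizer(" ".join(row[col]))["input_ids"]) for row in sample)
    print(f"{col}: ~{sample_tokens * 10 / 1e6:.1f}M tokens (extrapolated)")
```
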
Each row corresponds to a Wikipedia article, with the description of the article in the source language (English) and translations in 6 Indian languages.
The title is in English, and the descriptions in the different languages are in columns named in the format "language_code"_"script".
Each description column is a list of sentences or multi-sentence chunks that can be concatenated to get the cleaned article description.

Each row is of the format:

{'id': '1',
 'url': 'https://simple.wikipedia.org/sample_article',
 'title': 'Sample article',
 'eng_Latn': ['This is a sample...', 'and more information'],
 'hin_Deva': ['यह एक नमूना है...', 'और अधिक जानकारी'],
 'kan_Knda': ['ಇದು ಒಂದು ಮಾದರಿ...', 'ಮತ್ತು ಹೆಚ್ಚಿನ ಮಾಹಿತಿ'],
 'ben_Beng': ['এটি একটি নমুনা...', 'এবং আরও তথ্য'],
 'guj_Gujr': ['આ એક નમૂનો છે...', 'અને વધુ માહિતી'],
 'tam_Taml': ['இது ஒரு மாதிரி...', 'மேலும் தகவல்'],
 'urd_Arab': ['...یہ ایک نمونہ ہے۔', 'اور مزید معلومات']}

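A quick usage sketch for rebuilding article text from the sentence lists (the repo id and split name below are placeholders, not confirmed by this card):

```python
from datasets import load_dataset

# Placeholder repo id and split name; adjust to this dataset's actual Hugging Face id.
ds = load_dataset("soketlabs/<this-dataset>", split="train")

row = ds[0]
print(row["title"])                         # English title
english_text = " ".join(row["eng_Latn"])    # concatenate the sentence list to rebuild the cleaned English article
hindi_text = " ".join(row["hin_Deva"])      # same pattern for any of the 6 translated languages
```
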
## Dataset Creation

### Curation Rationale

<!-- Motivation for the creation of this dataset. -->

We needed to induce knowledge regarding India and the Indian context while training our LLM, for which we gathered the available Indic content data and also filtered factual data from Wikipedia.

### Source Data

Wikipedia English articles from [wikimedia/wikipedia](https://huggingface.co/datasets/wikimedia/wikipedia).

#### Data Collection and Processing

<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->

We filtered out Indian-context data from the [wikimedia/wikipedia](https://huggingface.co/datasets/wikimedia/wikipedia) dataset's English articles by selected keywords.
Further, we trained a few-shot classification model to classify Indian content vs. non-Indian content, to narrow down the filtered English articles.
We cleaned the articles and removed unwanted paragraphs such as References.
We then translated these articles to 6 Indian languages (Hindi, Bengali, Gujarati, Tamil, Kannada, Urdu) using AI4Bharat's [IndicTrans2](https://huggingface.co/ai4bharat/indictrans2-en-indic-1B). The dataset has been cleaned and can be used for pre-training multilingual LLMs.
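
For illustration, a minimal sketch of the keyword-filtering step; the keywords and the "20231101.en" config name below are examples/assumptions, not the exact values used to build this dataset.

```python
from datasets import load_dataset

# Illustrative keywords only; not the actual filter list used for this dataset.
KEYWORDS = ["India", "Indian", "Delhi", "Mumbai", "Bollywood"]

# English Wikipedia articles from wikimedia/wikipedia (the config name is an assumption).
wiki = load_dataset("wikimedia/wikipedia", "20231101.en", split="train", streaming=True)

def looks_indian(article: dict) -> bool:
    """Crude keyword match over the article title and body text."""
    text = f"{article['title']} {article['text']}"
    return any(kw in text for kw in KEYWORDS)

# Candidate articles would then go through the few-shot classifier, cleaning, and translation steps.
candidates = (a for a in wiki if looks_indian(a))
```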