Update README.md
README.md
CHANGED
@@ -217,115 +217,98 @@ dataset_info:
dataset_size: 543259766
---

# Bhasha Wiki Indic Context

<!-- Provide a quick summary of the dataset. -->

This dataset contains Wikipedia articles pertaining to India and the Indian context.

## Dataset Details

### Dataset Description

<!-- Provide a longer summary of what this dataset is. -->

The dataset is built from Wikipedia articles taken from [wikimedia/wikipedia](https://huggingface.co/datasets/wikimedia/wikipedia).
We filtered, cleaned and translated the English articles related to India and the Indian context out of the full dataset.

Each example contains the contents of a full cleaned Wikipedia article and its translations in six Indian languages.

- **Curated by:** [Soket AI Labs](https://soket.ai/)
- **Language(s) (NLP):** English, Hindi, Bengali, Gujarati, Tamil, Kannada, Urdu
- **License:** cc-by-sa-3.0

## Uses

<!-- Address questions around how the dataset is intended to be used. -->

The dataset is focused on Indian factual content, for pre-training LLMs where Indian knowledge and contextual understanding are required.

## Dataset Structure

<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->

Total number of rows: 200820

The dataset has approximately **1.56 billion** tokens across all languages. The number of tokens per language is roughly the same when tokenized
with the Indic tokenizer we created, which can be found in our model repository [Pragna-1b](https://huggingface.co/soketlabs/pragna-1b).
Here are the token counts for each language (a sketch for reproducing such counts follows the list):
- English: 197.7 million
- Hindi: 227.5 million
- Bengali: 289.1 million
- Gujarati: 206.2 million
- Tamil: 233.8 million
- Kannada: 203.5 million
- Urdu: 207 million
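
The counts above can be reproduced roughly along these lines; a minimal sketch, assuming the `datasets` and `transformers` libraries and a hypothetical dataset ID (the tokenizer repository is the Pragna-1b link above):

```python
# Minimal sketch: counting tokens in one language column with the Pragna-1b tokenizer.
# "soketlabs/bhasha-wiki-indic" is a hypothetical ID; substitute this dataset's actual repository ID.
from datasets import load_dataset
from transformers import AutoTokenizer

ds = load_dataset("soketlabs/bhasha-wiki-indic", split="train")  # hypothetical ID
tokenizer = AutoTokenizer.from_pretrained("soketlabs/pragna-1b")

def count_tokens(column: str) -> int:
    total = 0
    for row in ds:
        # Each language column is a list of sentence chunks; join them before tokenizing.
        text = " ".join(row[column])
        total += len(tokenizer(text)["input_ids"])
    return total

print(count_tokens("hin_Deva"))
```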

Each row corresponds to a Wikipedia article, with the description of the article in the source language (English) and its translations in six Indian languages.
The title is in English, and the description in each language is stored in a column whose name has the format "language_code"_"script".
Each description column is a list of sentences/multiple sentences and can be concatenated to get the cleaned article description.

Each row is of the format:
```python
{'id': '1',
 'url': 'https://simple.wikipedia.org/sample_article',
 'title': 'Sample article',
 'eng_Latn': ['This is a sample...', 'and more information'],
 'hin_Deva': ['यह एक नमूना है...', 'और अधिक जानकारी'],
 'kan_Knda': ['ಇದು ಒಂದು ಮಾದರಿ...', 'ಮತ್ತು ಹೆಚ್ಚಿನ ಮಾಹಿತಿ'],
 'ben_Beng': ['এটি একটি নমুনা...', 'এবং আরও তথ্য'],
 'guj_Gujr': ['આ એક નમૂનો છે...', 'અને વધુ માહિતી'],
 'tam_Taml': ['இது ஒரு மாதிரி...', 'மேலும் தகவல்'],
 'urd_Arab': ['...یہ ایک نمونہ ہے۔', 'اور مزید معلومات']
}
```
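
To reconstruct a full article in a given language, the sentence chunks in its column can simply be joined; a minimal sketch, again assuming the `datasets` library and the same hypothetical dataset ID:

```python
# Minimal sketch: rebuilding one article's Hindi text from its sentence chunks.
from datasets import load_dataset

ds = load_dataset("soketlabs/bhasha-wiki-indic", split="train")  # hypothetical ID
row = ds[0]
hindi_article = " ".join(row["hin_Deva"])  # concatenate the list of chunks
print(row["title"])
print(hindi_article[:200])
```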

## Dataset Creation

### Curation Rationale

<!-- Motivation for the creation of this dataset. -->

We needed to instil knowledge about India and the Indian context while training our LLM, for which we gathered the available Indic
content data and also filtered factual data from Wikipedia.

### Source Data

<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->

English Wikipedia articles from [wikimedia/wikipedia](https://huggingface.co/datasets/wikimedia/wikipedia).

#### Data Collection and Processing

<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->

We filtered Indian-context data out of the English articles of the [wikimedia/wikipedia](https://huggingface.co/datasets/wikimedia/wikipedia) dataset
by selected keywords (a sketch of this step follows below).
Further, we trained a few-shot classification model to classify Indian vs. non-Indian content and narrow down the filtered English
articles.
We cleaned the articles and removed unwanted paragraphs such as the References section.
We then translated these articles into six Indian languages (Hindi, Bengali, Gujarati, Tamil, Kannada, Urdu) using AI4Bharat's [IndicTrans2](https://huggingface.co/ai4bharat/indictrans2-en-indic-1B). The dataset has been cleaned and can be used for pre-training multilingual LLMs.
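
A rough sketch of the keyword-filtering step; the keyword list and dump date here are purely illustrative, not the actual values used to build this dataset:

```python
# Hedged sketch of keyword filtering over the wikimedia/wikipedia English split.
# INDIA_KEYWORDS is a hypothetical, illustrative list; "20231101.en" is one of the
# dated configs of wikimedia/wikipedia, chosen here as an assumption.
from datasets import load_dataset

INDIA_KEYWORDS = ["India", "Indian", "Delhi", "Mumbai", "Bollywood"]

wiki = load_dataset("wikimedia/wikipedia", "20231101.en", split="train")

def mentions_india(example):
    # Keep articles whose title or body mentions any keyword.
    text = example["title"] + " " + example["text"]
    return any(kw in text for kw in INDIA_KEYWORDS)

candidates = wiki.filter(mentions_india)
```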
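The translation step can be sketched along the lines of the IndicTrans2 model card's usage pattern; everything below (toolkit import path, generation settings) should be treated as an assumption rather than the exact pipeline used for this dataset:

```python
# Hedged sketch: translating English sentences to Hindi with IndicTrans2.
# Follows the general pattern from the ai4bharat/indictrans2-en-indic-1B model card;
# the generation settings here are illustrative assumptions.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
from IndicTransToolkit import IndicProcessor  # AI4Bharat's pre/post-processing toolkit

model_id = "ai4bharat/indictrans2-en-indic-1B"
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id, trust_remote_code=True)
ip = IndicProcessor(inference=True)

sentences = ["This is a sample sentence about India."]
batch = ip.preprocess_batch(sentences, src_lang="eng_Latn", tgt_lang="hin_Deva")
inputs = tokenizer(batch, padding="longest", truncation=True, return_tensors="pt")
outputs = model.generate(**inputs, max_length=256, num_beams=5)
decoded = tokenizer.batch_decode(outputs, skip_special_tokens=True)
translations = ip.postprocess_batch(decoded, lang="hin_Deva")
print(translations)
```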
### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

Though we tried to filter in as many Indic-context articles as possible with high recall, some non-Indic articles may be mixed in as well.
### Citation Information