|
--- |
|
license: cc-by-sa-3.0 |
|
dataset_info: |
|
- config_name: wiki_indic_cleaned |
|
configs: |
|
- config_name: wiki_indic_cleaned |
|
data_files: |
|
- wiki_indic_cleaned/* |
|
language: |
|
- hi |
|
- en |
|
- gu |
|
- bn |
|
- kn
|
- ta |
|
- ur |
|
--- |
|
# Bhasha Wiki Indic Context |
|
|
|
<!-- Provide a quick summary of the dataset. --> |
|
This dataset contains Wikipedia articles pertaining to the Indian context, along with their translations into six Indian languages.
|
## Dataset Details |
|
|
|
### Dataset Description |
|
|
|
<!-- Provide a longer summary of what this dataset is. --> |
|
The dataset is built from Wikipedia articles taken from [wikimedia/wikipedia](https://huggingface.co/datasets/wikimedia/wikipedia). |
|
We filtered English articles related to India and the Indian context out of the full dataset, then cleaned and translated them.
|
|
|
Each example contains the full cleaned text of a Wikipedia article and its translations into six Indian languages.
|
|
|
|
|
- **Curated by:** [Soket AI Labs](https://soket.ai/) |
|
- **Language(s) (NLP):** English, Hindi, Bengali, Gujarati, Tamil, Kannada, Urdu

- **License:** cc-by-sa-3.0
|
|
|
## Uses |
|
|
|
<!-- Address questions around how the dataset is intended to be used. --> |
|
The dataset focuses on factual Indian content for pre-training LLMs where Indian knowledge and contextual understanding are required.
|
|
|
## Dataset Structure |
|
|
|
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. --> |
|
Total number of rows: 200,820

It has approximately **1.54** billion tokens across all languages, with a roughly equal number of tokens per language when tokenized with the Indic tokenizer we created, which can be found in our model repository [Pragna-1b](https://huggingface.co/soketlabs/pragna-1b).

Here are the token counts for each language:
|
- English: 196.2 million

- Hindi: 225 million

- Bengali: 286.2 million

- Gujarati: 204 million

- Tamil: 231.3 million

- Kannada: 201.3 million

- Urdu: 204.9 million
|
|
|
These numbers were extrapolated from calculations on a randomly sampled 10% of the dataset.
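For illustration, the sketch below reproduces this kind of estimate: load the dataset, take a 10% random sample, tokenize each language column with the Pragna-1b tokenizer, and scale the counts by 10. The dataset repository ID and the assumption that the tokenizer loads via `transformers.AutoTokenizer` are illustrative, not confirmed.

```python
# Illustrative sketch only: the dataset repository ID below is an assumption,
# and loading the Pragna-1b tokenizer with AutoTokenizer is not guaranteed.
from datasets import load_dataset
from transformers import AutoTokenizer

ds = load_dataset("soketlabs/bhasha-wiki-indic", "wiki_indic_cleaned", split="train")
tokenizer = AutoTokenizer.from_pretrained("soketlabs/pragna-1b")

# 10% random sample, as used for the extrapolated counts above.
sample = ds.shuffle(seed=42).select(range(len(ds) // 10))

lang_columns = ["eng_Latn", "hin_Deva", "ben_Beng", "guj_Gujr",
                "tam_Taml", "kan_Knda", "urd_Arab"]
for col in lang_columns:
    # Each cell is a list of sentence chunks; join them into one article string.
    texts = [" ".join(chunks) for chunks in sample[col]]
    n_tokens = sum(len(ids) for ids in tokenizer(texts)["input_ids"])
    print(f"{col}: ~{n_tokens * 10 / 1e6:.1f} million tokens (extrapolated)")
```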
|
|
|
Each row corresponds to a Wikipedia article, with the article text in the source language (English) and translations into six Indian languages.

The title is in English, and each translated description column is named in the format "language_code"_"script" (for example, hin_Deva).

Each description column is a list of sentences (or groups of sentences) that can be concatenated to recover the cleaned article text, as shown in the loading sketch after the example below.

Each row has the following format:
|
|
|
```python
{'id': '1',
 'url': 'https://simple.wikipedia.org/sample_article',
 'title': 'Sample article',
 'eng_Latn': ['This is a sample...', 'and more information'],
 'hin_Deva': ['यह एक नमूना है...', 'और अधिक जानकारी'],
 'kan_Knda': ['ಇದು ಒಂದು ಮಾದರಿ...', 'ಮತ್ತು ಹೆಚ್ಚಿನ ಮಾಹಿತಿ'],
 'ben_Beng': ['এটি একটি নমুনা...', 'এবং আরও তথ্য'],
 'guj_Gujr': ['આ એક નમૂનો છે...', 'અને વધુ માહિતી'],
 'tam_Taml': ['இது ஒரு மாதிரி...', 'மேலும் தகவல்'],
 'urd_Arab': ['...یہ ایک نمونہ ہے۔', 'اور مزید معلومات']}
```
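A minimal loading sketch follows; the repository ID and split name are assumptions (adjust them to the actual values). It loads the `wiki_indic_cleaned` config and joins the sentence lists back into full article text.

```python
# Minimal usage sketch; the repository ID and split name are assumptions.
from datasets import load_dataset

ds = load_dataset("soketlabs/bhasha-wiki-indic", "wiki_indic_cleaned", split="train")

row = ds[0]
print(row["title"], row["url"])

# Join the sentence chunks to reconstruct the cleaned article in any language.
english_article = " ".join(row["eng_Latn"])
hindi_article = " ".join(row["hin_Deva"])
print(english_article[:200])
```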
|
|
|
|
|
## Dataset Creation |
|
|
|
### Curation Rationale |
|
|
|
<!-- Motivation for the creation of this dataset. --> |
|
We needed to induce knowledge about India and the Indian context while training our LLM, for which we gathered available Indic content and also filtered factual data from Wikipedia.
|
|
|
|
|
### Source Data |
|
|
|
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). --> |
|
Wikipedia English articles from [wikimedia/wikipedia](https://huggingface.co/datasets/wikimedia/wikipedia).
|
|
|
#### Data Collection and Processing |
|
|
|
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. --> |
|
We filtered Indian-context data from the English articles of the [wikimedia/wikipedia](https://huggingface.co/datasets/wikimedia/wikipedia) dataset using a set of selected keywords (an illustrative sketch is shown below).

We then trained a few-shot classification model to distinguish Indian from non-Indian content and further narrow down the filtered English articles.

We cleaned the articles and removed unwanted sections such as References.

Finally, we translated the articles into six Indian languages (Hindi, Bengali, Gujarati, Tamil, Kannada, Urdu) using AI4Bharat's [IndicTrans2](https://huggingface.co/ai4bharat/indictrans2-en-indic-1B). The resulting dataset has been cleaned and can be used for pre-training multilingual LLMs.
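As an illustration of the keyword-based first pass, a filter over the English Wikipedia dump might look like the sketch below. The keyword list and snapshot date are invented for the example and are not the ones actually used.

```python
# Illustrative sketch of the keyword-based filtering pass.
# The keyword list and snapshot date are examples, not the actual ones used.
from datasets import load_dataset

INDIA_KEYWORDS = ["India", "Indian", "Delhi", "Mumbai", "Bollywood", "Kerala"]

wiki_en = load_dataset("wikimedia/wikipedia", "20231101.en", split="train")

def mentions_india(example):
    text = example["title"] + " " + example["text"]
    return any(keyword in text for keyword in INDIA_KEYWORDS)

# Candidate articles are then passed to a few-shot classifier for a second pass.
candidates = wiki_en.filter(mentions_india, num_proc=8)
print(f"{len(candidates)} candidate articles selected")
```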
|
|
|
|
|
|
|
|
|
### Recommendations |
|
|
|
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> |
|
|
|
Although we tried to filter Indic-context articles with high recall, some non-Indic articles may still be present in the dataset.
|
|
|
## Citation
|
|
|
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. --> |
|
|
|
**BibTeX:** |
|
|
|
[More Information Needed] |
|
|
|
**APA:** |
|
|
|
[More Information Needed] |
|
|
|
|
|
## Dataset Card Authors
|
|
|
[More Information Needed] |
|
|
|
## Dataset Card Contact |
|
|
|
[More Information Needed] |