---
license: cc-by-sa-3.0
dataset_info:
- config_name: wiki_indic_cleaned
configs:
- config_name: wiki_indic_cleaned
  data_files:
  - wiki_indic_cleaned/*
language:
- hi
- en
- gu
- bn
- kn
- ta
- ur
---
|
# Bhasha Wiki Indic Context
|
|
|
<!-- Provide a quick summary of the dataset. -->
|
This dataset contains Wikipedia articles pertaining to the Indian context, translated into six Indian languages.
|
## Dataset Details



### Dataset Description



<!-- Provide a longer summary of what this dataset is. -->
|
We filtered Indian-context articles out of the English portion of the [wikimedia/wikipedia](https://huggingface.co/datasets/wikimedia/wikipedia) dataset using keyword matching.

We then trained a classifier to distinguish Indian from non-Indian content and used it to further narrow down the keyword-filtered English articles.

Finally, we translated these articles into six Indian languages (Hindi, Bengali, Gujarati, Tamil, Kannada, Urdu) using AI4Bharat's [IndicTrans2](https://huggingface.co/ai4bharat/indictrans2-en-indic-1B). The dataset has been cleaned and can be used for pre-training multilingual LLMs.
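For illustration, the snippet below sketches how a single English sentence might be translated with IndicTrans2, loosely following the usage documented on the model card. The exact pipeline, batching, and generation settings used to build this dataset are not specified here, so treat the toolkit import, language codes, and generation parameters as assumptions rather than the authors' actual setup.

```python
# Minimal sketch of English -> Hindi translation with IndicTrans2.
# Generation settings are illustrative, not the ones used for this dataset.
import torch
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
from IndicTransToolkit import IndicProcessor  # pip install IndicTransToolkit (assumed helper)

model_name = "ai4bharat/indictrans2-en-indic-1B"
tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name, trust_remote_code=True)
ip = IndicProcessor(inference=True)

sentences = ["The Ganges flows through northern India."]

# Preprocess adds the source/target language tags IndicTrans2 expects.
batch = ip.preprocess_batch(sentences, src_lang="eng_Latn", tgt_lang="hin_Deva")
inputs = tokenizer(batch, padding=True, truncation=True, return_tensors="pt")

with torch.no_grad():
    generated = model.generate(**inputs, max_length=256, num_beams=5)

decoded = tokenizer.batch_decode(generated, skip_special_tokens=True)
print(ip.postprocess_batch(decoded, lang="hin_Deva"))
```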
|
|
|
|
|
- **Curated by:** [Soket AI Labs](https://soket.ai/)

- **Language(s) (NLP):** English, Hindi, Bengali, Gujarati, Tamil, Kannada, Urdu

- **License:** cc-by-sa-3.0
|
|
|
## Uses



<!-- Address questions around how the dataset is intended to be used. -->

The dataset focuses on Indian factual content and is intended for pre-training LLMs where Indian knowledge and contextual understanding are required.
|
|
|
## Dataset Structure



<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->

Each row corresponds to a Wikipedia article, with the article's description in the source language (English) and translations into six Indian languages.

The title is in English; the description in each language is stored in a column named in the format `<language_code>_<script>`.

Each per-language description column holds a list of sentences (or groups of sentences) that can be concatenated to reconstruct the cleaned article description.
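For concreteness, here is a minimal sketch of loading the dataset and reassembling one article's text. The repository id and the `hin_Deva` column name are assumptions based on the naming convention above; inspect the dataset's actual features before relying on them.

```python
# Minimal sketch: load the cleaned config and rebuild one article's Hindi text.
# The repo id and the "hin_Deva" column name are assumptions following the
# "<language_code>_<script>" convention; check ds.features for the real names.
from datasets import load_dataset

ds = load_dataset("soketlabs/bhasha-wiki-indic", "wiki_indic_cleaned", split="train")

row = ds[0]
print(row["title"])                      # article title, in English

hindi_sentences = row["hin_Deva"]        # list of sentences / sentence groups
hindi_text = " ".join(hindi_sentences)   # concatenate to get the full description
print(hindi_text[:500])
```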
|
|
|
## Dataset Creation



### Curation Rationale



<!-- Motivation for the creation of this dataset. -->



[More Information Needed]
|
|
|
### Source Data



<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->

Wikipedia English articles from [wikimedia/wikipedia](https://huggingface.co/datasets/wikimedia/wikipedia).
|
|
|
#### Data Collection and Processing



<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->



[More Information Needed]
|
|
|
|
|
|
|
### Recommendations



<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->



Although we filtered for Indic-context articles with an emphasis on high recall, some non-Indic articles may still be present in the dataset.
|
|
|
## Citation [optional]



<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->



**BibTeX:**



[More Information Needed]



**APA:**



[More Information Needed]




## Dataset Card Authors [optional]



[More Information Needed]



## Dataset Card Contact



[More Information Needed]