---
license: cc-by-sa-3.0
dataset_info:
- config_name: wiki_indic_cleaned
configs:
- config_name: wiki_indic_cleaned
  data_files:
  - wiki_indic_cleaned/*
language:
- hi
- en
- gu
- bn
- kn
- ta
- ur
---
# Bhasha Wiki Indic Context
<!-- Provide a quick summary of the dataset. -->
This dataset contains Wikipedia articles pertaining to the Indian context, in English and six Indian languages.
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
We filtered Indian-context articles out of the English split of the [wikimedia/wikipedia](https://huggingface.co/datasets/wikimedia/wikipedia) dataset by keyword matching.
We then trained a classifier to distinguish Indian from non-Indian content and used it to narrow down the keyword-filtered English articles.
Finally, we translated the selected articles into six Indian languages (Hindi, Bengali, Gujarati, Tamil, Kannada, Urdu) using AI4Bharat's [IndicTrans2](https://huggingface.co/ai4bharat/indictrans2-en-indic-1B). The dataset has been cleaned and can be used for pre-training multilingual LLMs.
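The keyword list and classifier used in this pipeline are not published in this card, so the following is only a minimal sketch of the two-stage filter, with an invented keyword list and a placeholder classifier:

```python
import re
from typing import Callable

# Invented keyword list for illustration only; the actual list used
# for the first pass is not published in this card.
INDIC_KEYWORDS = ["India", "Indian", "Delhi", "Mumbai", "Bollywood", "Ganges"]

KEYWORD_RE = re.compile(
    r"\b(" + "|".join(map(re.escape, INDIC_KEYWORDS)) + r")\b",
    re.IGNORECASE,
)

def keyword_match(text: str) -> bool:
    """First pass: keep articles that mention at least one keyword."""
    return KEYWORD_RE.search(text) is not None

def filter_articles(
    articles: list[dict],
    classifier: Callable[[str], bool],
) -> list[dict]:
    """Second pass: a trained Indian-vs-non-Indian classifier (stood in
    for here by any callable) narrows down the keyword hits."""
    return [a for a in articles
            if keyword_match(a["text"]) and classifier(a["text"])]

# Usage with a placeholder classifier that accepts every keyword hit:
selected = filter_articles(
    [{"title": "Taj Mahal", "text": "The Taj Mahal is in Agra, India."}],
    classifier=lambda text: True,
)
```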
- **Curated by:** [Soket AI Labs](https://soket.ai/)
- **Language(s) (NLP):** English, Hindi, Bengali, Gujarati, Tamil, Kannada, Urdu
- **License:** cc-by-sa-3.0
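The cleaned config can be loaded with the `datasets` library. The repository id below is an assumption for illustration; substitute the actual Hub path of this dataset:

```python
from datasets import load_dataset

# "soketlabs/bhasha-wiki-indic-context" is a hypothetical repo id;
# "wiki_indic_cleaned" is the config declared in this card's metadata.
ds = load_dataset(
    "soketlabs/bhasha-wiki-indic-context",
    "wiki_indic_cleaned",
    split="train",  # assumes a single default train split
)
print(ds)
```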
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
The dataset focuses on factual Indian content and is intended for pre-training LLMs where Indian knowledge and contextual understanding are required.
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
Each row corresponds to a Wikipedia article, with the article's description in the source language (English) and its translations into the six Indian languages.
The title is in English; the description in each language is stored in a column named with the format `<language_code>_<script>`.
Each description column holds a list of sentences (or multi-sentence chunks) that can be concatenated to recover the cleaned article description, as in the sketch below.
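As a minimal sketch, assuming Hindi is stored under a `hin_Deva` column (an invented example of the `<language_code>_<script>` pattern; check `ds.column_names` for the real ones):

```python
from datasets import load_dataset

# Hypothetical repo id, as in the loading example above.
ds = load_dataset("soketlabs/bhasha-wiki-indic-context",
                  "wiki_indic_cleaned", split="train")

def article_text(row: dict, column: str) -> str:
    """Join a column's list of sentence chunks into one description."""
    return " ".join(row[column])

row = ds[0]
print(row["title"])                   # English title
print(article_text(row, "hin_Deva"))  # assumed Hindi (Devanagari) column
```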
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
Wikipedia English articles from the [wikimedia/wikipedia](https://huggingface.co/datasets/wikimedia/wikipedia) dataset.
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Although we tuned the filtering for high recall to capture as many Indic-context articles as possible, some non-Indic articles may still be mixed in.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed]