license: cc-by-sa-3.0
dataset_info:
- config_name: wiki_indic_cleaned
configs:
- config_name: wiki_indic_cleaned
data_files:
- wiki_indic_cleaned/*
language:
- hi
- en
- gu
- bn
- kn
- ta
- ur
Bhasha Wiki Indic Context
This dataset contains Wikipedia articles pertaining to the Indian context.
Dataset Details
Dataset Description
We filtered Indian-context data out of the English articles in the wikimedia/wikipedia dataset using keyword matching, then trained a classifier to separate Indian from non-Indian content and used it to narrow down the keyword-filtered articles. We then translated the resulting articles into 6 Indian languages (Hindi, Bengali, Gujarati, Tamil, Kannada, Urdu) using AI4Bharat's IndicTrans2. The dataset has been cleaned and can be used for pre-training multilingual LLMs. The sketches below illustrate the filtering and translation steps.
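As a rough illustration of the first-pass keyword filter, the sketch below streams the English Wikipedia dump and keeps articles matching a small keyword list. The keyword list here is hypothetical; the actual keywords and the follow-up classifier are not published in this card.

```python
from datasets import load_dataset

# Hypothetical keyword list; the actual list used for filtering is not published here.
INDIA_KEYWORDS = {"india", "indian", "delhi", "mumbai", "bollywood", "ganges"}

def looks_indian(text: str) -> bool:
    """Cheap, recall-oriented check: keep any article mentioning a keyword."""
    lowered = text.lower()
    return any(kw in lowered for kw in INDIA_KEYWORDS)

# Stream a recent English snapshot to avoid downloading the full dump.
wiki = load_dataset("wikimedia/wikipedia", "20231101.en", split="train", streaming=True)
candidates = (article for article in wiki if looks_indian(article["text"]))
```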
- Curated by: Soket AI Labs
- Language(s) (NLP): English, Hindi, Bengali, Gujarati, Tamil, Kannada, Urdu
- License: cc-by-sa-3.0
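For the translation step, the sketch below follows the usage pattern published with AI4Bharat's IndicTrans2 checkpoints. The IndicProcessor helper comes from AI4Bharat's IndicTransToolkit package, whose module layout has varied across releases, so treat this as indicative rather than exact.

```python
import torch
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
from IndicTransToolkit import IndicProcessor  # import path may differ by toolkit release

model_name = "ai4bharat/indictrans2-en-indic-1B"
tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name, trust_remote_code=True)
ip = IndicProcessor(inference=True)

sentences = ["The Ganges is the longest river in India."]
# Normalize and tag the batch for English -> Hindi (Devanagari) translation.
batch = ip.preprocess_batch(sentences, src_lang="eng_Latn", tgt_lang="hin_Deva")
inputs = tokenizer(batch, padding="longest", truncation=True, return_tensors="pt")

with torch.no_grad():
    generated = model.generate(**inputs, num_beams=5, max_length=256)

decoded = tokenizer.batch_decode(generated, skip_special_tokens=True)
print(ip.postprocess_batch(decoded, lang="hin_Deva"))
```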
Uses
The dataset is focused on Indian factual content for pre-training LLMs where Indian knowledge and contextual understanding are required.
Dataset Structure
Each row corresponds to a Wikipedia article, with the article's description in the source language (English) and its translations into 6 Indian languages.
The title is in English; the descriptions in the other languages are stored in columns named in the format "language_code"_"script" (e.g. hin_Deva for Hindi in Devanagari script).
Each description column holds a list of sentences (or groups of sentences) that can be concatenated to recover the cleaned article description.
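A minimal sketch of reading one article back with the datasets library. The Hub id used here is a placeholder, and the hin_Deva column name is an assumption based on the naming convention above.

```python
from datasets import load_dataset

# Placeholder Hub id; substitute the actual repository path of this dataset.
ds = load_dataset("soketlabs/bhasha-wiki-indic", "wiki_indic_cleaned", split="train")

row = ds[0]
print(row["title"])  # the title is always in English

# Assumed column name following the "language_code"_"script" convention.
hindi_sentences = row["hin_Deva"]
article_hi = " ".join(hindi_sentences)  # concatenate to recover the cleaned description
print(article_hi[:200])
```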
Dataset Creation
Curation Rationale
[More Information Needed]
Source Data
English Wikipedia articles from wikimedia/wikipedia.
Data Collection and Processing
[More Information Needed]
Recommendations
Though we tuned the filtering for high recall so as to capture as many Indic-context articles as possible, some non-Indic articles may be mixed in as well.
Citation [optional]
BibTeX:
[More Information Needed]
APA:
[More Information Needed]
Dataset Card Authors [optional]
[More Information Needed]
Dataset Card Contact
[More Information Needed]