---
license: cc-by-sa-3.0
dataset_info:
- config_name: wiki_indic_cleaned
configs:
- config_name: wiki_indic_cleaned
  data_files:
  - wiki_indic_cleaned/*
language:
- hi
- en
- gu
- bn
- kn
- ta
- ur
---
# Bhasha Wiki Indic Context

This dataset contains Wikipedia articles pertaining to the Indian context.

## Dataset Details

### Dataset Description
We filtered Indian-context data from the English articles of the wikimedia/wikipedia dataset using keyword matching, then trained a classifier to separate Indian from non-Indian content and used it to narrow down the filtered English articles. These articles were translated into six Indian languages (Hindi, Bengali, Gujarati, Tamil, Kannada, Urdu) using AI4Bharat's IndicTrans2. The dataset has been cleaned and can be used for pre-training multilingual LLMs.
- **Curated by:** Soket AI Labs
- **Language(s) (NLP):** English, Hindi, Bengali, Gujarati, Tamil, Kannada, Urdu
- **License:** cc-by-sa-3.0
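
A minimal loading sketch for the `wiki_indic_cleaned` config declared in the metadata above. The repository id `soketlabs/bhasha-wiki-indic` and the `train` split are assumptions for illustration, not values confirmed by this card:

```python
from datasets import load_dataset

# Repository id and split name are assumptions; replace them if they differ.
ds = load_dataset(
    "soketlabs/bhasha-wiki-indic",   # hypothetical repository id
    "wiki_indic_cleaned",            # config declared in this card's metadata
    split="train",
)

print(ds)              # row count and column names
print(ds[0].keys())    # per-language description columns of the first article
```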
## Uses

The dataset focuses on Indian factual content and is intended for pre-training LLMs where Indian knowledge and contextual understanding are required.
## Dataset Structure

Each row corresponds to a Wikipedia article, with the article's description in the source language (English) and its translations in the six Indian languages. The title is in English, and the description in each language is stored in a column whose name follows the format "_
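
As a rough illustration of the row layout described above, the sketch below reads one article and prints each language column without hard-coding the column names. The `title` field is assumed from the description above, and the repository id is the same assumption as in the earlier loading sketch:

```python
from datasets import load_dataset

# Repository id is an assumption (see the loading sketch above).
ds = load_dataset("soketlabs/bhasha-wiki-indic", "wiki_indic_cleaned", split="train")

row = ds[0]
print("Title:", row.get("title"))   # the title is kept in English

# Every other column holds the article description in one language.
for column, value in row.items():
    if column == "title":
        continue
    preview = value[:120] if isinstance(value, str) else value
    print(f"{column}: {preview}")
```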