|
--- |
|
language: |
|
- bn |
|
- en |
|
- gu |
|
- hi |
|
- kn |
|
- ta |
|
- ur |
|
license: cc-by-sa-3.0
|
size_categories: |
|
- 1M<n<10M |
|
task_categories: |
|
- text-generation |
|
- fill-mask |
|
task_ids: |
|
- language-modeling |
|
- masked-language-modeling |
|
configs: |
|
- config_name: 20231101.bn |
|
data_files: |
|
- split: train |
|
path: ben_Beng/train-* |
|
- config_name: 20231101.en |
|
data_files: |
|
- split: train |
|
path: eng_Latn/train-* |
|
- config_name: 20231101.gu |
|
data_files: |
|
- split: train |
|
path: guj_Gujr/train-* |
|
- config_name: 20231101.hi |
|
data_files: |
|
- split: train |
|
path: hin_Deva/train-* |
|
- config_name: 20231101.kn |
|
data_files: |
|
- split: train |
|
path: kan_Knda/train-* |
|
- config_name: 20231101.ta |
|
data_files: |
|
- split: train |
|
path: tam_Taml/train-* |
|
- config_name: 20231101.ur |
|
data_files: |
|
- split: train |
|
path: urd_Arab/train-* |
|
dataset_info: |
|
- config_name: 20231101.bn |
|
features: |
|
- name: id |
|
dtype: string |
|
- name: url |
|
dtype: string |
|
- name: title |
|
dtype: string |
|
- name: text |
|
dtype: string |
|
- name: sents |
|
dtype: int32 |
|
- name: chars |
|
dtype: int32 |
|
- name: words |
|
dtype: int32 |
|
- name: tokens |
|
dtype: int32 |
|
splits: |
|
- name: train |
|
num_bytes: 674539757 |
|
num_examples: 200820 |
|
download_size: 652782434 |
|
dataset_size: 652782434 |
|
- config_name: 20231101.en |
|
features: |
|
- name: id |
|
dtype: string |
|
- name: url |
|
dtype: string |
|
- name: title |
|
dtype: string |
|
- name: text |
|
dtype: string |
|
- name: sents |
|
dtype: int32 |
|
- name: chars |
|
dtype: int32 |
|
- name: words |
|
dtype: int32 |
|
- name: tokens |
|
dtype: int32 |
|
splits: |
|
- name: train |
|
num_bytes: 703955598 |
|
num_examples: 200820 |
|
download_size: 426488108 |
|
dataset_size: 426488108 |
|
- config_name: 20231101.gu |
|
features: |
|
- name: id |
|
dtype: string |
|
- name: url |
|
dtype: string |
|
- name: title |
|
dtype: string |
|
- name: text |
|
dtype: string |
|
- name: sents |
|
dtype: int32 |
|
- name: chars |
|
dtype: int32 |
|
- name: words |
|
dtype: int32 |
|
- name: tokens |
|
dtype: int32 |
|
splits: |
|
- name: train |
|
num_bytes: 668666407 |
|
num_examples: 200820 |
|
download_size: 658661502 |
|
dataset_size: 658661502 |
|
- config_name: 20231101.hi |
|
features: |
|
- name: id |
|
dtype: string |
|
- name: url |
|
dtype: string |
|
- name: title |
|
dtype: string |
|
- name: text |
|
dtype: string |
|
- name: sents |
|
dtype: int32 |
|
- name: chars |
|
dtype: int32 |
|
- name: words |
|
dtype: int32 |
|
- name: tokens |
|
dtype: int32 |
|
splits: |
|
- name: train |
|
num_bytes: 678769726 |
|
num_examples: 200820 |
|
download_size: 640983312 |
|
dataset_size: 640983312 |
|
- config_name: 20231101.kn |
|
features: |
|
- name: id |
|
dtype: string |
|
- name: url |
|
dtype: string |
|
- name: title |
|
dtype: string |
|
- name: text |
|
dtype: string |
|
- name: sents |
|
dtype: int32 |
|
- name: chars |
|
dtype: int32 |
|
- name: words |
|
dtype: int32 |
|
- name: tokens |
|
dtype: int32 |
|
splits: |
|
- name: train |
|
num_bytes: 708769566 |
|
num_examples: 200820 |
|
download_size: 689888426 |
|
dataset_size: 689888426 |
|
- config_name: 20231101.ta |
|
features: |
|
- name: id |
|
dtype: string |
|
- name: url |
|
dtype: string |
|
- name: title |
|
dtype: string |
|
- name: text |
|
dtype: string |
|
- name: sents |
|
dtype: int32 |
|
- name: chars |
|
dtype: int32 |
|
- name: words |
|
dtype: int32 |
|
- name: tokens |
|
dtype: int32 |
|
splits: |
|
- name: train |
|
num_bytes: 781041863 |
|
num_examples: 200820 |
|
download_size: 721062888 |
|
dataset_size: 721062888 |
|
- config_name: 20231101.ur |
|
features: |
|
- name: id |
|
dtype: string |
|
- name: url |
|
dtype: string |
|
- name: title |
|
dtype: string |
|
- name: text |
|
dtype: string |
|
- name: sents |
|
dtype: int32 |
|
- name: chars |
|
dtype: int32 |
|
- name: words |
|
dtype: int32 |
|
- name: tokens |
|
dtype: int32 |
|
splits: |
|
- name: train |
|
num_bytes: 655510379 |
|
num_examples: 200820 |
|
download_size: 543259766 |
|
dataset_size: 543259766 |
|
--- |
|
|
|
# Bhasha Wiki Indic |
|
|
|
<!-- Provide a quick summary of the dataset. --> |
|
This dataset contains Wikipedia articles pertaining to the Indian context.
|
## Dataset Details |
|
|
|
### Dataset Description |
|
|
|
<!-- Provide a longer summary of what this dataset is. --> |
|
The dataset is built from Wikipedia articles taken from [wikimedia/wikipedia](https://huggingface.co/datasets/wikimedia/wikipedia).

We filtered, cleaned, and translated the English articles related to India and the Indian context out of the full dataset.



Each example contains the contents of a full cleaned Wikipedia article and its translations into 6 Indian languages.
|
|
|
|
|
- **Curated by:** [Soket AI Labs](https://soket.ai/) |
|
- **Language(s) (NLP):** [English, Hindi, Bengali, Gujarati, Tamil, Kannada, Urdu] |
|
- **License:** [cc-by-sa-3.0] |
|
|
|
## Uses |
|
|
|
<!-- Address questions around how the dataset is intended to be used. --> |
|
The dataset focuses on factual content about India for pre-training LLMs where Indian knowledge and contextual understanding are required.
|
|
|
## Dataset Structure |
|
|
|
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. --> |
|
Total number of rows: 200,820

The dataset contains approximately **1.56** billion tokens across all languages. The per-language token counts are roughly equal when tokenized with the Indic tokenizer we created, which is available in our model repository [Pragna-1b](https://huggingface.co/soketlabs/pragna-1b); a counting sketch follows the list below.

Here are the token counts for each language:
|
- English: 197.7 million

- Hindi: 227.5 million

- Bengali: 289.1 million

- Gujarati: 206.2 million

- Tamil: 233.8 million

- Kannada: 203.5 million

- Urdu: 207 million
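
Below is a minimal token-counting sketch. It assumes the Pragna-1b tokenizer can be loaded with `transformers`' `AutoTokenizer`; adjust if the repository ships a custom tokenizer class.

```python
# Hedged sketch: reproduce a per-language token count for one article.
# Assumes the Pragna-1b tokenizer loads via AutoTokenizer (not verified here).
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("soketlabs/pragna-1b")

def count_tokens(sentences):
    """Sum token counts over a list of sentences from one language column."""
    return sum(len(tokenizer.encode(s)) for s in sentences)

# Example: count_tokens(row["hin_Deva"]) for a row in the format shown below.
```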
|
|
|
Each row corresponds to a Wikipedia article, with the article's description in the source language (English) and translations in 6 Indian languages.

The title is in English, and the description in each language is stored in a column named in the format "language_code"_"script".

Each description column holds a list of sentences (or multi-sentence chunks) that can be concatenated to recover the cleaned article description.
|
|
|
Each row is of the format: |
|
```python
|
{'id': '1', |
|
'url': 'https://simple.wikipedia.org/sample_article', |
|
'title': 'Sample article', |
|
'eng_Latn': ['This is a sample...', 'and more information'], |
|
'hin_Deva': ['यह एक नमूना है...', 'और अधिक जानकारी'],
|
'kan_Knda': ['ಇದು ಒಂದು ಮಾದರಿ...', 'ಮತ್ತು ಹೆಚ್ಚಿನ ಮಾಹಿತಿ'], |
|
'ben_Beng': ['এটি একটি নমুনা...', 'এবং আরও তথ্য'], |
|
'guj_Gujr': ['આ એક નમૂનો છે...', 'અને વધુ માહિતી'], |
|
'tam_Taml': ['இது ஒரு மாதிரி...', 'மேலும் தகவல்'], |
|
'urd_Arab': ['...یہ ایک نمونہ ہے۔', 'اور مزید معلومات'] |
|
} |
|
``` |
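
A minimal loading sketch is shown below. The repository id is an assumption, and the config names and fields follow the YAML metadata at the top of this card (one `text` column per per-language config) rather than the multi-column row layout illustrated above.

```python
# Hedged sketch: load one per-language config and inspect a row.
# "soketlabs/bhasha-wiki-indic" is an assumed repository id; replace as needed.
from datasets import load_dataset

ds = load_dataset("soketlabs/bhasha-wiki-indic", "20231101.hi", split="train")

row = ds[0]
print(row["title"])        # article title (in English)
print(row["text"][:200])   # first 200 characters of the cleaned Hindi article
```

The `20231101.en` config can be loaded the same way to access the English source articles.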
|
|
|
|
|
## Dataset Creation |
|
|
|
### Curation Rationale |
|
|
|
<!-- Motivation for the creation of this dataset. --> |
|
We needed to instill knowledge about India and the Indian context while training our LLM, so we gathered the available Indic content data and also filtered factual data from Wikipedia.
|
|
|
|
|
### Source Data |
|
|
|
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). --> |
|
English Wikipedia articles from [wikimedia/wikipedia](https://huggingface.co/datasets/wikimedia/wikipedia).
|
|
|
#### Data Collection and Processing |
|
|
|
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. --> |
|
We filtered Indian-context data out of the English articles of the [wikimedia/wikipedia](https://huggingface.co/datasets/wikimedia/wikipedia) dataset using a set of selected keywords.

We then trained a few-shot classification model to distinguish Indian from non-Indian content and further narrow down the filtered English articles.

We cleaned the articles and removed unwanted paragraphs such as the References section.

Finally, we translated these articles into 6 Indian languages (Hindi, Bengali, Gujarati, Tamil, Kannada, Urdu) using AI4Bharat's [IndicTrans2](https://huggingface.co/ai4bharat/indictrans2-en-indic-1B). The resulting dataset is cleaned and can be used for pre-training multilingual LLMs. An illustrative sketch of the keyword-based first pass follows.
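
The sketch below is purely illustrative: the actual keyword list and matching rules used for the first filtering pass are not published here, so the keywords shown are hypothetical placeholders.

```python
# Hedged sketch of a keyword-based first-pass filter (hypothetical keywords).
INDIA_KEYWORDS = {"india", "indian", "delhi", "mumbai", "bollywood"}  # placeholder subset

def looks_indian(article_text: str) -> bool:
    """Rough first pass: keep articles mentioning any India-related keyword."""
    text = article_text.lower()
    return any(keyword in text for keyword in INDIA_KEYWORDS)
```

Articles passing this first pass would then go to the few-shot classifier for a more precise decision.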
|
|
|
|
|
|
|
|
|
### Recommendations |
|
|
|
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> |
|
|
|
Although we tried to filter Indian-context articles with high recall, some non-Indian articles may be mixed in as well.
|
|
|
|
|
### Citation Information |
|
|
|
``` |
|
@ONLINE{bhasha-wiki-indic, |
|
author = "Soket Labs Technology and Research Private Limited", |
|
title = "Bhasha-Wiki-Indic", |
|
url = "https://soket.ai" |
|
} |
|
``` |