---
dataset_info:
  features:
  - name: text
    dtype: large_string
  - name: input_ids
    sequence: int32
  - name: attention_mask
    sequence: int8
  - name: labels
    sequence: int64
  splits:
  - name: train
    num_bytes: 29474299014
    num_examples: 1905072
  download_size: 9941967601
  dataset_size: 29474299014
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
---
This dataset is a reupload of the ai4bharat/sangraha dataset, specifically the 1.9 million rows of verified Hindi data. It has been tokenized with the Hindi tokenizer atharvanighot/hindi-tokenizer, so it is pretokenized and can be fed directly into a training pipeline.
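Because each row already carries `input_ids`, `attention_mask`, and `labels`, a training collator only needs to pad sequences to a common length before batching. Below is a minimal pure-Python sketch of such a pad-to-longest collator; the pad token id of 0 and the `-100` label ignore index are assumptions (conventional defaults, not confirmed by this card), and real training code would convert the padded lists to framework tensors.

```python
PAD_ID = 0           # assumption: the tokenizer's pad token id
IGNORE_INDEX = -100  # conventional label value ignored by cross-entropy loss

def collate(batch):
    """Pad a list of pretokenized rows to the batch's longest sequence."""
    max_len = max(len(row["input_ids"]) for row in batch)
    padded = {"input_ids": [], "attention_mask": [], "labels": []}
    for row in batch:
        pad = max_len - len(row["input_ids"])
        padded["input_ids"].append(row["input_ids"] + [PAD_ID] * pad)
        padded["attention_mask"].append(row["attention_mask"] + [0] * pad)
        padded["labels"].append(row["labels"] + [IGNORE_INDEX] * pad)
    return padded

# Two toy rows shaped like this dataset's features
rows = [
    {"input_ids": [5, 9, 13], "attention_mask": [1, 1, 1], "labels": [5, 9, 13]},
    {"input_ids": [7, 2], "attention_mask": [1, 1], "labels": [7, 2]},
]
batch = collate(rows)
print(batch["input_ids"])       # [[5, 9, 13], [7, 2, 0]]
print(batch["attention_mask"])  # [[1, 1, 1], [1, 1, 0]]
print(batch["labels"])          # [[5, 9, 13], [7, 2, -100]]
```

Since the padding happens per batch, short batches stay short; this is the same behavior you would get from a dynamic-padding collator in common training frameworks.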