
KoPI (Korpus Perayapan Indonesia) is an Indonesian general-domain corpus for sequence language modelling.

KoPI is built from the following subsets: KoPI-CC + KoPI-CC-NEWS + KoPI-Mc4 + KoPI-Wiki + KoPI-Leipzig + KoPI-Paper + KoPI-Books

Prerequisite

  • Zstandard
    • Install it first with pip install zstandard (the data shards are zstd-compressed; see the sketch below).
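Zstandard is required because the data shards are zstd-compressed. Assuming the shards are JSON Lines files (.json.zst) whose records follow the schema shown under Usage, a minimal sketch of reading one shard by hand looks like this (the file name below is hypothetical):

import io
import json

import zstandard as zstd

# Hypothetical shard name, for illustration only; the real shard names in the repo differ.
SHARD_PATH = 'kopi_shard_00.json.zst'

with open(SHARD_PATH, 'rb') as fh:
    # Stream-decompress so the whole shard never has to fit in memory.
    reader = zstd.ZstdDecompressor().stream_reader(fh)
    for line in io.TextIOWrapper(reader, encoding='utf-8'):
        record = json.loads(line)     # one JSON object per line
        print(record['text'][:80])    # assumes each record carries a 'text' field
        break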

Usage

from datasets import load_dataset

tiny = load_dataset('acul3/KoPI', 'tiny')       # loads only 10 files
#small = load_dataset('acul3/KoPI', 'small')    # loads only 30 files
#medium = load_dataset('acul3/KoPI', 'medium')  # loads only 55 files
#large = load_dataset('acul3/KoPI', 'large')    # loads only 75 files
#full = load_dataset('acul3/KoPI', 'full')      # loads all 107 files
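To peek at a few records without downloading a whole configuration, the datasets library's streaming mode can also be tried; whether it works here depends on this repo's loading script, so treat the following as a sketch:

from datasets import load_dataset

# Streaming returns an IterableDataset instead of the DatasetDict shown below.
stream = load_dataset('acul3/KoPI', 'tiny', split='train', streaming=True)

for example in stream:
    print(example['url'], example['text'][:100])
    break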

With the non-streaming calls above, the loaded dataset will look like:

DatasetDict({
    train: Dataset({
        features: ['text', 'url', 'timestamp', 'meta'],
        num_rows: 2000000
    })
    validation: Dataset({
        features: ['text', 'url', 'timestamp', 'meta'],
        num_rows: 200000
    })
})
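Once loaded, the splits behave like regular datasets objects. Continuing from the tiny variable in the Usage example:

# Inspect the splits and a single record; field names follow the schema above.
print(tiny['train'].features)      # text, url, timestamp, meta

sample = tiny['train'][0]
print(sample['url'], sample['timestamp'])
print(sample['text'][:200])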