---
dataset_info:
  features:
    - name: text
      dtype: string
  splits:
    - name: train
      num_bytes: 26076989556
      num_examples: 33536113
  download_size: 15221565467
  dataset_size: 26076989556
---

# Dataset Card for "chunked-wikipedia20220301en-bookcorpusopen"

This dataset combines wikipedia20220301.en and bookcorpusopen, and splits the data into chunks of ~820 characters (so that each item is at least ~128 tokens). The splitting logic only breaks on spaces, so chunks may be slightly larger than 820 characters. The text has been normalized to lower case, with accents and non-English characters removed. Items shorter than 200 characters or longer than 1000 characters are discarded.
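The chunking and normalization described above can be sketched roughly as follows. This is an illustrative reconstruction, not the actual build script: the function names, and the exact way the thresholds are applied, are assumptions.

```python
import unicodedata


def normalize(text: str) -> str:
    """Lower-case the text and strip accents / non-English characters.

    Illustrative sketch: decompose accented characters (NFKD), then drop
    anything outside ASCII.
    """
    text = unicodedata.normalize("NFKD", text.lower())
    return text.encode("ascii", "ignore").decode("ascii")


def chunk(text: str, target: int = 820, min_len: int = 200, max_len: int = 1000) -> list[str]:
    """Split `text` into ~`target`-char chunks, breaking only on spaces.

    Because splits happen only at word boundaries, chunks can slightly
    exceed `target`. Chunks outside [min_len, max_len] are dropped.
    """
    chunks: list[str] = []
    current: list[str] = []
    size = 0
    for word in text.split(" "):
        current.append(word)
        size += len(word) + 1  # +1 for the joining space
        if size >= target:
            chunks.append(" ".join(current))
            current, size = [], 0
    if current:  # trailing remainder
        chunks.append(" ".join(current))
    return [c for c in chunks if min_len <= len(c) <= max_len]
```

For example, `normalize("Héllo Wörld")` returns `"hello world"`, and a long article passed through `chunk` yields pieces of roughly 820 characters each, with any short trailing remainder filtered out.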