---
license: apache-2.0
language:
  - en
task_categories:
  - feature-extraction
tags:
  - t5
  - flan
size_categories:
  - 100K<n<1M
---

All of the data together is around 41 GB. It consists of the last encoder hidden states of 131,072 samples from RefinedWeb, each padded/truncated to 512 tokens on the left and fed through google/flan-t5-small.
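
For reference, here is a minimal sketch of how encodings like these can be produced with `transformers`. This is an assumption about the process, not the exact script used to build the dataset:

```python
import torch
from transformers import AutoTokenizer, T5EncoderModel

# Left-side padding/truncation to 512 tokens, as described above.
tokenizer = AutoTokenizer.from_pretrained("google/flan-t5-small")
tokenizer.padding_side = "left"
tokenizer.truncation_side = "left"
model = T5EncoderModel.from_pretrained("google/flan-t5-small")
model.eval()

text = "an example document"  # placeholder for a RefinedWeb sample
batch = tokenizer(text, padding="max_length", truncation=True,
                  max_length=512, return_tensors="pt")
with torch.no_grad():
    out = model(**batch)

encoding = out.last_hidden_state[0]          # shape (512, 512) == (tokens, d_model)
attention_mask = batch["attention_mask"][0]  # binary mask, 0 on pad positions
```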

Structure:

```
{
  "encoding": List, shaped (512, 512), i.e. (tokens, d_model),
  "text": String, the original text that was encoded,
  "attention_mask": List, a binary mask to pass to your model alongside "encoding" so that pad tokens are not attended to
}
```
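
As a minimal sketch of how one record might be used downstream (assuming `sample` is a single loaded record with the fields above; the mean-pooling step is only an illustration, not part of the dataset):

```python
import torch

# Rebuild tensors from one record.
encoding = torch.tensor(sample["encoding"])                          # (512, 512)
mask = torch.tensor(sample["attention_mask"], dtype=encoding.dtype)  # (512,)

# Mean-pool over the real (non-pad) tokens only.
pooled = (encoding * mask.unsqueeze(-1)).sum(dim=0) / mask.sum()
print(pooled.shape)  # torch.Size([512])
```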

Just a tip: you cannot load this with the RAM in the free version of Google Colab, not even a single file, and streaming won't work either.
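
For scale, a quick back-of-the-envelope estimate of the in-memory footprint (assuming the encodings are materialized as float32; the 41 GB on disk is presumably stored more compactly):

```python
# Each encoding is 512 tokens x 512 dims x 4 bytes (float32 assumed).
bytes_per_sample = 512 * 512 * 4            # ~1 MiB per sample
total_bytes = bytes_per_sample * 131_072    # ~128 GiB for all samples, uncompressed
print(f"{bytes_per_sample / 2**20:.0f} MiB per sample, "
      f"{total_bytes / 2**30:.0f} GiB total")
```

That is roughly ten times the ~12 GB of RAM a free Colab instance provides, which is why even a subset won't fit.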