---
task_categories:
- text-generation
language:
- en
tags:
- fine-tuning
- shakespeare
size_categories:
- n<1K
---
## Data source
Downloaded via Andrej Karpathy's nanoGPT repo from this link.
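
The preparation details are not shown here, but one plausible way to fetch the raw text is sketched below. The URL is the tiny Shakespeare file downloaded by nanoGPT's `data/shakespeare/prepare.py` and is an assumption about which link is meant:

```python
# Sketch: download the raw tiny Shakespeare text.
# The URL is the file fetched by nanoGPT's data/shakespeare/prepare.py;
# it is assumed (not confirmed) to be the link referenced above.
import requests

url = "https://raw.githubusercontent.com/karpathy/char-rnn/master/data/tinyshakespeare/input.txt"
with open("input.txt", "w", encoding="utf-8") as f:
    f.write(requests.get(url, timeout=30).text)
```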
## Data Format
- The entire dataset is split into train (90%) and test (10%).
- All rows are at most 1024 tokens long, as measured by the Llama 2 tokenizer.
- Rows are split on sentence boundaries, so sentences are whole and unbroken (see the sketch below).
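
The packing script itself is not included, so the following is only a minimal sketch of how rows satisfying the constraints above could be built. The checkpoint name `meta-llama/Llama-2-7b-hf` (a gated model requiring access), the file name `input.txt`, and the sentence-splitting regex are assumptions, not the actual preparation code:

```python
# Sketch: pack whole sentences into rows of at most 1024 Llama 2 tokens,
# then take the first 90% of rows as train and the last 10% as test.
import re

from transformers import AutoTokenizer

# Assumes access to the gated Llama 2 checkpoint; any Llama 2 tokenizer works.
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")


def pack_sentences(text: str, max_tokens: int = 1024) -> list[str]:
    """Greedily group whole sentences so that no row exceeds max_tokens."""
    sentences = re.split(r"(?<=[.!?])\s+", text)  # assumed sentence splitter
    rows, current = [], ""
    for sentence in sentences:
        candidate = f"{current} {sentence}".strip() if current else sentence
        if len(tokenizer.encode(candidate)) <= max_tokens:
            current = candidate
        else:
            if current:
                rows.append(current)
            current = sentence  # assumes a single sentence fits within max_tokens
    if current:
        rows.append(current)
    return rows


with open("input.txt", encoding="utf-8") as f:
    rows = pack_sentences(f.read())

split = int(0.9 * len(rows))  # 90% train, 10% test
train_rows, test_rows = rows[:split], rows[split:]
```

Greedy packing keeps each row as full as possible while still splitting only on sentence boundaries, which matches the description above; the actual script may differ in its splitting heuristics.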