---
license: apache-2.0
size_categories:
- 10M<n<100M
---

# Dataset Card for Python Text-to-Code

## Dataset Description

This dataset consists of pairs of Python code and their corresponding natural-language docstrings, and was used for the pre-training experiments of the paper [Text-to-Code Generation with Modality-relative Pre-training](https://aclanthology.org/2024.eacl-long.72).

During preprocessing, consecutive occurrences of whitespace characters (newlines, tabs and spaces) were replaced with dedicated special tokens to normalise spacing, which effectively reduced the length of the sequences. Finally, only instances with a maximum length of 1024 tokens (docstring + code) were kept.

The final dataset contains 23,526,586 text-to-code pairs in Python. Check the paper for additional details!

## Data Fields

Each instance contains 3 fields:

- `id`: Unique ID of each pair
- `code`: The Python code
- `docstring`: The docstring/problem description associated with this code

## Data Splits

There is a single data split in the dataset. We randomly sampled 0.1% of the dataset to serve as a validation set.

## Citation

**BibTeX:**

```bibtex
@inproceedings{christopoulou-etal-2024-text,
    title = "Text-to-Code Generation with Modality-relative Pre-training",
    author = "Christopoulou, Fenia and
      Zhang, Guchun and
      Lampouras, Gerasimos",
    editor = "Graham, Yvette and
      Purver, Matthew",
    booktitle = "Proceedings of the 18th Conference of the European Chapter of the Association for Computational Linguistics (Volume 1: Long Papers)",
    month = mar,
    year = "2024",
    address = "St. Julian{'}s, Malta",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2024.eacl-long.72",
    pages = "1194--1208"
}
```
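## Preprocessing Sketch

To make the preprocessing concrete, below is a minimal sketch of the whitespace normalisation and length filtering described in the Dataset Description. The special-token names and the `tokenize` callable are illustrative placeholders, not the exact ones used in the paper.

```python
import re

# Illustrative special tokens; the actual token names used in the paper may differ.
WHITESPACE_TOKENS = [
    (re.compile(r"\n+"), "<NEW_LINE>"),  # collapse runs of newlines
    (re.compile(r"\t+"), "<TAB>"),       # collapse runs of tabs
    (re.compile(r" {2,}"), "<SPACE>"),   # collapse runs of spaces
]

def normalise_whitespace(text: str) -> str:
    """Replace consecutive whitespace characters with single special tokens."""
    for pattern, token in WHITESPACE_TOKENS:
        text = pattern.sub(token, text)
    return text

def keep_pair(docstring: str, code: str, tokenize, max_len: int = 1024) -> bool:
    """Keep only pairs of at most `max_len` tokens (docstring + code).

    `tokenize` is any callable returning a list of tokens; the paper's
    actual tokenizer is not reproduced here.
    """
    return len(tokenize(docstring)) + len(tokenize(code)) <= max_len
```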
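## How to Use

The dataset can be loaded and inspected with the Hugging Face `datasets` library. In the sketch below, `REPO_ID` is a placeholder for this repository's name on the Hub, and the holdout at the end simply mirrors the 0.1% validation sampling described in the Data Splits section.

```python
from datasets import load_dataset

# "REPO_ID" is a placeholder; substitute this dataset's repository name on the Hub.
ds = load_dataset("REPO_ID", split="train")

# Each instance exposes the three fields documented above.
example = ds[0]
print(example["id"])         # unique ID of the pair
print(example["docstring"])  # natural-language problem description
print(example["code"])       # the associated Python code

# Mirror the 0.1% random validation sampling described in "Data Splits"
# (the seed is arbitrary, chosen here only for reproducibility).
splits = ds.train_test_split(test_size=0.001, seed=42)
train_set, valid_set = splits["train"], splits["test"]
```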