---
task_categories:
- text-classification
- text2text-generation
- summarization
language:
- en
---

# typescript-chunks

A dataset of TypeScript snippets, processed from the `typescript` subset of [the-stack-smol](https://huggingface.co/datasets/bigcode/the-stack-smol).

# Processing

- Each source file is parsed with the TypeScript AST and queried for 'semantic chunks' of the following node types (see the sketch after this list):

```
FunctionDeclaration ---- 8327
ArrowFunction --------- 37971
ClassDeclaration ------- 5864
InterfaceDeclaration -- 13426
EnumDeclaration --------- 531
TypeAliasDeclaration --- 3657
MethodDeclaration ----- 27079
ModuleDeclaration ------ 1174
```

- Leading comments are added to the front of `content`
- Chunks over the max sequence length (2048) are removed
- Chunks are deduplicated and cleaned up
- The `instruction` and `summary` fields are generated with `gpt-3.5-turbo`
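
The extraction script is not included in this card, so the snippet below is only a minimal sketch of the chunking step, written against the `typescript` compiler package; the `extractChunks` helper and its exact shape are illustrative assumptions, not the actual pipeline code.

```typescript
import * as ts from "typescript";

// Node kinds collected as 'semantic chunks'
const CHUNK_KINDS = new Set<ts.SyntaxKind>([
  ts.SyntaxKind.FunctionDeclaration,
  ts.SyntaxKind.ArrowFunction,
  ts.SyntaxKind.ClassDeclaration,
  ts.SyntaxKind.InterfaceDeclaration,
  ts.SyntaxKind.EnumDeclaration,
  ts.SyntaxKind.TypeAliasDeclaration,
  ts.SyntaxKind.MethodDeclaration,
  ts.SyntaxKind.ModuleDeclaration,
]);

interface Chunk {
  type: string;    // e.g. "FunctionDeclaration"
  content: string; // node text, with leading comments kept at the front
}

// Hypothetical helper: parse one source file and collect matching nodes
function extractChunks(fileName: string, source: string): Chunk[] {
  const sourceFile = ts.createSourceFile(
    fileName,
    source,
    ts.ScriptTarget.Latest,
    /* setParentNodes */ true
  );
  const chunks: Chunk[] = [];

  const visit = (node: ts.Node): void => {
    if (CHUNK_KINDS.has(node.kind)) {
      // getFullText() includes leading trivia, so comments written directly
      // above a declaration end up at the front of `content`
      chunks.push({
        type: ts.SyntaxKind[node.kind],
        content: node.getFullText(sourceFile).trim(),
      });
    }
    ts.forEachChild(node, visit);
  };

  visit(sourceFile);
  return chunks;
}
```

In a pass like this, nested declarations (e.g. methods inside a class) are collected both as part of their parent and as chunks of their own, which would be consistent with the per-kind counts above; the later steps (length filtering, deduplication, and generating `summary`/`instruction` with `gpt-3.5-turbo`) happen outside the parsing pass.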

# Dataset Structure

```python
from datasets import load_dataset

load_dataset("bleugreen/typescript-chunks")

DatasetDict({
    train: Dataset({
        features: ['repo', 'path', 'type', 'content', 'summary', 'instruction'],
        num_rows:
    })
})
```