---
task_categories:
- text-classification
- text2text-generation
- summarization
language:
- en
---
# typescript-chunks
A dataset of TypeScript snippets, processed from the typescript subset of the-stack-smol.
## Processing
- Each source file is parsed with the TypeScript AST and queried for 'semantic chunks' of the following types:

```
FunctionDeclaration ---- 8205
ArrowFunction --------- 33890
ClassDeclaration ------- 5325
InterfaceDeclaration -- 12884
EnumDeclaration --------- 518
TypeAliasDeclaration --- 3580
MethodDeclaration ----- 24713
```
- Leading comments are added to the front of `content`
- Removed all chunks over the max sequence length (2048)
- Deduplicated and cleaned up the remaining chunks
- Generated instructions / summaries with `gpt-3.5-turbo` (in progress)
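The length cap and deduplication steps above can be sketched roughly as follows. This is a minimal illustration, not the actual pipeline: `clean_chunks` is a hypothetical helper, and it measures length in characters, whereas the real filter presumably counts tokens against the 2048 limit.

```python
def clean_chunks(chunks, max_len=2048):
    """Drop over-length chunks and exact duplicates, keeping the first occurrence.

    `chunks` is assumed to be a list of dicts with a "content" field,
    mirroring the dataset's schema. Length is checked in characters here
    as a stand-in for the pipeline's token-based limit.
    """
    seen = set()
    cleaned = []
    for chunk in chunks:
        content = chunk["content"].strip()
        if len(content) > max_len:
            continue  # over the max sequence length
        if content in seen:
            continue  # exact duplicate
        seen.add(content)
        cleaned.append({**chunk, "content": content})
    return cleaned
```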
## Dataset Structure
```python
from datasets import load_dataset

load_dataset("bleugreen/typescript-chunks")
```

```
DatasetDict({
    train: Dataset({
        features: ['repo', 'path', 'type', 'content', 'summary', 'instruction'],
        num_rows: 89115
    })
})
```
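Since every row carries its chunk's declaration kind in `type`, a single kind can be selected after loading. A small sketch using stand-in rows with the same features (the values below are illustrative, not real dataset rows):

```python
# Rows shaped like the dataset's features (illustrative stand-ins, not real data)
rows = [
    {"repo": "example/repo", "path": "src/math.ts", "type": "FunctionDeclaration",
     "content": "function add(a: number, b: number) { return a + b; }",
     "summary": "Adds two numbers.", "instruction": "Write an add function."},
    {"repo": "example/repo", "path": "src/point.ts", "type": "InterfaceDeclaration",
     "content": "interface Point { x: number; y: number; }",
     "summary": "A 2D point.", "instruction": "Define a Point interface."},
]

# Keep only interface chunks; with the real dataset the equivalent is
# ds["train"].filter(lambda row: row["type"] == "InterfaceDeclaration")
interfaces = [row for row in rows if row["type"] == "InterfaceDeclaration"]
```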