## Summary of the Dataset

### Description
Stack-Repo is a dataset of 200 Java repositories from GitHub with permissive licenses and near-deduplicated files that are augmented with three types of repository contexts.
- Prompt Proposal (PP) Contexts: These contexts are based on the prompt proposals from the paper Repository-Level Prompt Generation for Large Language Models of Code.
- BM25 Contexts: These contexts are obtained based on the BM25 similarity scores.
- RandomNN Contexts: These contexts are obtained using the nearest neighbors in the representation space of an embedding model.
For more details, please check our paper RepoFusion: Training Code Models to Understand Your Repository.
The original Java source files are obtained using a modified version of The Stack.
### Data Splits

The dataset consists of three splits: `train`, `validation`, and `test`, comprising 100, 50, and 50 repositories, respectively.
### Data Organization

Each split contains a separate folder per repository, where each repository contains all `.java` source code files in their original directory structure, along with three `.json` files corresponding to the PP, BM25, and RandomNN repo contexts. In terms of the HuggingFace Datasets terminology, we have four subdatasets or configurations:

- `PP_contexts`: Prompt Proposal repo contexts
- `bm25_contexts`: BM25 repo contexts
- `randomNN_contexts`: RandomNN repo contexts
- `sources`: actual Java (`.java`) source code files
### Dataset Usage

To clone the dataset locally:

```shell
git clone https://huggingface.co/datasets/RepoFusion/Stack-Repo <local_path>
```
To load the desired configuration and split of the dataset:

```python
import datasets

ds = datasets.load_dataset(
    "RepoFusion/Stack-Repo",
    name="<configuration_name>",
    split="<split_name>",
    data_dir="<local_path>",
)
```
NOTE: The repo-context configurations `bm25_contexts`, `PP_contexts`, and `randomNN_contexts` can be loaded directly by specifying the corresponding `<configuration_name>` along with the `<split_name>` in the `load_dataset` command listed above, without cloning the repo locally. For `sources`, if the repo is not cloned beforehand or `data_dir` is not specified, a `ManualDownloadError` will be raised.
### Data Format

The expected data format of the `.json` files is a list of target holes and corresponding repo contexts, where each entry corresponds to a target hole and consists of the location of the hole, the target hole as a string, the surrounding context as a string, and a list of repo contexts as strings. Specifically, each row is a dictionary containing:

- `id`: hole_id (location of the target hole)
- `question`: surrounding context
- `target`: target hole
- `ctxs`: a list of repo contexts, where each item is a dictionary containing
  - `title`: name of the repo context
  - `text`: content of the repo context
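As an illustration of this schema, the sketch below builds a hypothetical row (field values are invented, not taken from the dataset) and assembles a simple prompt by prepending retrieved repo contexts to the surrounding context, in the spirit of how such rows are consumed:

```python
# Hypothetical example row following the schema above; all values are invented.
row = {
    "id": "com/example/App.java_42",
    "question": "public static int add(int a, int b) {\n    return ",
    "target": "a + b;",
    "ctxs": [
        {"title": "com/example/MathUtils.java", "text": "public final class MathUtils { }"},
        {"title": "com/example/AppTest.java", "text": "assertEquals(3, App.add(1, 2));"},
    ],
}

def build_prompt(row, max_ctxs=2):
    """Concatenate up to `max_ctxs` repo contexts before the surrounding context."""
    ctx_blocks = [
        f"// context: {c['title']}\n{c['text']}" for c in row["ctxs"][:max_ctxs]
    ]
    return "\n\n".join(ctx_blocks + [row["question"]])

prompt = build_prompt(row)
print(prompt.splitlines()[0])  # prints: // context: com/example/MathUtils.java
```

The `build_prompt` helper is only a sketch; how contexts are ordered and truncated in practice is described in the RepoFusion paper.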
The actual Java sources can be accessed directly via the file system, laid out as `<data_set_root>/data/<split_name>/<github_user>/<repo_name>/<path/to/every/java/file/in/the/repo>.java`. When accessed through `datasets.load_dataset`, the data fields for `sources` can be specified as below.
```python
features = datasets.Features({
    'file': datasets.Value('string'),
    'content': datasets.Value('string')
})
```
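If the repository has been cloned locally, the directory layout described above can also be walked directly. The sketch below builds a tiny mock tree mimicking that layout (the user, repo, and file names are invented) and collects `(file, content)` records analogous to the `sources` configuration:

```python
import pathlib
import tempfile

# Build a tiny mock tree mimicking <data_set_root>/data/<split>/<user>/<repo>/...
# The user, repo, and file names here are invented for illustration.
root = pathlib.Path(tempfile.mkdtemp())
src = root / "data" / "train" / "some_user" / "some_repo" / "src" / "Main.java"
src.parent.mkdir(parents=True)
src.write_text("public class Main {}")

# Collect every .java file in a split as (file, content) pairs, like `sources`.
records = [
    {"file": str(p.relative_to(root)), "content": p.read_text()}
    for p in sorted((root / "data" / "train").rglob("*.java"))
]
print(len(records))  # prints: 1
```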
When accessed through `datasets.load_dataset`, the data fields for the repo contexts can be specified as below.
```python
features = datasets.Features({
    'id': datasets.Value('string'),
    'hole_file': datasets.Value('string'),
    'hole_line': datasets.Value('int32'),
    'hole_pos': datasets.Value('int32'),
    'question': datasets.Value('string'),
    'target': datasets.Value('string'),
    'answers': datasets.Sequence(
        datasets.Value('string')
    ),
    'ctxs': [{
        'title': datasets.Value('string'),
        'text': datasets.Value('string'),
        'score': datasets.Value('float64')
    }]
})
```
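A lightweight way to sanity-check a repo-context entry against this schema, without loading the full dataset, is a plain-Python type check. The sketch below mirrors the `Features` declaration above; the sample entry is invented for illustration:

```python
# Expected scalar types per field, mirroring the Features declaration above.
SCALAR_FIELDS = {
    "id": str, "hole_file": str, "hole_line": int,
    "hole_pos": int, "question": str, "target": str,
}

def check_entry(entry):
    """Return True if `entry` matches the declared repo-context schema."""
    ok = all(isinstance(entry[k], t) for k, t in SCALAR_FIELDS.items())
    ok &= all(isinstance(a, str) for a in entry["answers"])
    ok &= all(
        isinstance(c["title"], str)
        and isinstance(c["text"], str)
        and isinstance(c["score"], float)
        for c in entry["ctxs"]
    )
    return ok

# Invented sample entry, for illustration only.
sample = {
    "id": "h0", "hole_file": "A.java", "hole_line": 3, "hole_pos": 17,
    "question": "int x = ", "target": "0;", "answers": ["0;"],
    "ctxs": [{"title": "B.java", "text": "class B {}", "score": 1.0}],
}
print(check_entry(sample))  # prints: True
```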
## Additional Information

### Dataset Curators
- Disha Shrivastava, [email protected]
- Denis Kocetkov, [email protected]
### Licensing Information
Stack-Repo is derived from a modified version of The Stack. The Stack is a collection of source code from repositories with various licenses. Any use of all or part of the code gathered in The Stack must abide by the terms of the original licenses, including attribution clauses when relevant. We facilitate this by providing provenance information for each data point. The list of SPDX license identifiers included in the dataset can be found here.
### Citation

```bibtex
@article{shrivastava2023repofusion,
  title={RepoFusion: Training Code Models to Understand Your Repository},
  author={Shrivastava, Disha and Kocetkov, Denis and de Vries, Harm and Bahdanau, Dzmitry and Scholak, Torsten},
  journal={arXiv preprint arXiv:2306.10998},
  year={2023}
}
```