---
language:
  - code
license: other
size_categories:
  - 100M<n<1B
task_categories:
  - text-generation
pretty_name: KStack
dataset_info:
  features:
    - name: path
      dtype: string
    - name: owner
      dtype: string
    - name: repo_id
      dtype: int64
    - name: is_fork
      dtype: bool
    - name: languages_distribution
      dtype: string
    - name: content
      dtype: string
    - name: issues
      dtype: float64
    - name: main_language
      dtype: string
    - name: forks
      dtype: int64
    - name: stars
      dtype: int64
    - name: commit_sha
      dtype: string
    - name: size
      dtype: int64
    - name: name
      dtype: string
    - name: license
      dtype: string
  splits:
    - name: train
      num_bytes: 11547281532
      num_examples: 4000514
  download_size: 4257401542
  dataset_size: 11547281532
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
---

Dataset Summary

KStack is the largest collection of permissively licensed Kotlin code.
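
The dataset can be loaded with the Hugging Face datasets library. A minimal sketch, assuming the dataset is hosted under the JetBrains/KStack repository id, streaming the train split so the full multi-gigabyte download is not required up front:

```python
from itertools import islice

from datasets import load_dataset

# Assumes the dataset is published under the "JetBrains/KStack" hub id.
# Streaming avoids materialising the full parquet download.
ds = load_dataset("JetBrains/KStack", split="train", streaming=True)

for example in islice(ds, 3):
    print(example["owner"], example["name"], example["stars"], example["license"])
```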

Comparison with The Stack v2

The table below compares the Kotlin part of The Stack v2 with KStack:

|                        | Files | Repositories | Lines | Tokens |
|------------------------|-------|--------------|-------|--------|
| Kotlin in The Stack v2 | 2M    | 109,457      | 162M  | 1.7B   |
| KStack                 | 4M    | 168,902      | 292M  | 3.1B   |

Dataset Creation

Collection procedure

We collected GitHub repositories whose main language is Kotlin, as well as any repositories with 10 or more stars that contain Kotlin files (as of February 2024). Additionally, we gathered repositories with Kotlin files from The Stack v1.2. Kotlin files were identified using go-enry and include files with extensions such as .kt, .kts, and .gradle.kts. We estimate that this covers 97% of the Kotlin repositories available as of February 2024.
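
For illustration only, a minimal extension-based check mirroring the rule above (the actual pipeline used go-enry for language detection):

```python
from pathlib import PurePosixPath

# Extensions treated as Kotlin during collection (per the description above).
KOTLIN_SUFFIXES = (".kt", ".kts", ".gradle.kts")

def is_kotlin_file(path: str) -> bool:
    """Rough stand-in for the go-enry check used in the actual pipeline."""
    return PurePosixPath(path).name.lower().endswith(KOTLIN_SUFFIXES)

assert is_kotlin_file("app/src/main/kotlin/Main.kt")
assert is_kotlin_file("build.gradle.kts")
assert not is_kotlin_file("settings.gradle")
```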

Initial filtering

We performed exact deduplication using the hash of the file content, as well as near-deduplication using the same method as The Stack v1.2. For each cluster of near-duplicates, we kept a single file, taken from the repository with the most stars.
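
As a rough sketch of the exact-deduplication step only (near-deduplication follows The Stack v1.2 and is not reproduced here), one could hash file contents and keep, per hash, the file from the most-starred repository:

```python
import hashlib

def exact_dedup(files):
    """files: iterable of dicts with at least "content" and "stars" keys.
    Keeps one file per identical content, preferring the most-starred repository."""
    kept = {}
    for f in files:
        key = hashlib.sha256(f["content"].encode("utf-8")).hexdigest()
        if key not in kept or f["stars"] > kept[key]["stars"]:
            kept[key] = f
    return list(kept.values())
```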

Detecting permissive licenses

We kept repositories with permissive licenses, based on the licenses detected by GitHub, falling back to go-license-detector when GitHub had no licensing information available. The list of permissive licenses used in the dataset can be found here.
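
If a narrower license set is needed for downstream use, the released dataset can be filtered on its license column. A sketch, again assuming the JetBrains/KStack hub id; the license strings below are illustrative examples, not the dataset's actual values:

```python
from datasets import load_dataset

# Illustrative license identifiers only; check the exact strings used in the
# `license` column and the full permissive list linked above.
ALLOWED = {"apache-2.0", "mit", "bsd-3-clause"}

ds = load_dataset("JetBrains/KStack", split="train")
subset = ds.filter(lambda ex: ex["license"] in ALLOWED)
print(f"kept {len(subset)} of {len(ds)} files")
```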

Personal and Sensitive Information

To filter out personal information, we applied the same model that was used for The Stack v2, star-pii.
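
Note that star-pii is a learned model; purely to illustrate the kind of substitution such a step performs (sensitive strings replaced with placeholder tokens), a naive regex-based stand-in for e-mail addresses would look like this:

```python
import re

# NOT the star-pii model: a naive stand-in that only shows the kind of
# substitution applied (e-mail addresses replaced with a placeholder token).
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")

def redact_emails(text: str) -> str:
    return EMAIL_RE.sub("<EMAIL>", text)

print(redact_emails("// contact: jane.doe@example.com"))
```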

Column description

The dataset contains the following columns:

  • size – size of the file in bytes
  • content – text (content) of the file after removing personally identifiable information
  • repo_id – GitHub ID of the repository
  • path – path to the file within the repository
  • owner – repo owner on GitHub
  • name – repo name on GitHub
  • commit_sha – hash of the commit from which the revision of the file is taken
  • stars – number of stars of the repo at the moment of collection
  • forks – number of forks of the repo at the moment of collection
  • issues – number of issues of the repo at the moment of collection
  • is_fork – whether the repo is a fork, as defined by GitHub
  • main_language – main language of the repo, as defined by GitHub
  • languages_distribution – JSON string with the distribution of languages by size in bytes in the repo (see the parsing sketch after this list)
  • license – permissive license of the repository
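
Since languages_distribution is stored as a JSON string, it must be decoded before use. A small sketch, assuming the JetBrains/KStack hub id and that the values are byte counts per language, computing each repository's Kotlin share:

```python
import json
from itertools import islice

from datasets import load_dataset

ds = load_dataset("JetBrains/KStack", split="train", streaming=True)

for example in islice(ds, 5):
    raw = example["languages_distribution"]
    dist = json.loads(raw) if raw else {}          # e.g. {"Kotlin": 123456, "Java": 7890}
    total = sum(dist.values()) or 1
    share = dist.get("Kotlin", 0) / total
    print(f"{example['owner']}/{example['name']}: {share:.1%} Kotlin by bytes")
```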

Opt-out

If you want your data to be removed from the dataset, or have any other questions, please reach out to Sergey Titov: [email protected]