---
license: mit
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
dataset_info:
  features:
  - name: instance_id
    dtype: string
  - name: patch
    dtype: string
  - name: repo
    dtype: string
  - name: base_commit
    dtype: string
  - name: hints_text
    dtype: string
  - name: created_at
    dtype: string
  - name: test_patch
    dtype: string
  - name: problem_statement
    dtype: string
  - name: version
    dtype: string
  - name: environment_setup_commit
    dtype: string
  - name: FAIL_TO_PASS
    sequence: string
  - name: PASS_TO_PASS
    sequence: string
  - name: meta
    struct:
    - name: failed_lite_validators
      sequence: string
    - name: has_test_patch
      dtype: bool
    - name: is_lite
      dtype: bool
  splits:
  - name: train
    num_bytes: 101732749
    num_examples: 6426
  download_size: 27722795
  dataset_size: 101732749
---
# Dataset Summary
SWE-bench Extra is a dataset that can be used to train or evaluate agentic systems specializing in resolving GitHub issues. It is based on the methodology used to build the SWE-bench benchmark and includes 6,448 Issue-Pull Request pairs sourced from 2,133 Python repositories.
# Dataset Description
The SWE-bench Extra dataset supports the development of software engineering agents capable of autonomously solving GitHub issues. The data collection process, based on the SWE-bench methodology, involves the following steps:
1. **Issue and Pull Request Collection**: Issues are gathered and linked with pull requests that successfully resolve them.
2. **Filtering**: Instances are filtered based on attributes such as issue descriptions, relevant code paths, and test patches.
3. **Execution-based Validation**: The project environments are set up and tests are run to verify that they execute correctly (a simplified sketch of this check is shown below).
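The validation step can be illustrated with a simplified, hypothetical sketch. It clones a repository at `base_commit`, applies the test patch, checks that the `FAIL_TO_PASS` tests fail, then applies the gold patch and checks that the same tests pass. It assumes the project's dependencies are already installed and that its tests run under pytest; the actual pipeline additionally performs environment setup using `environment_setup_commit` and `version`.
```python
import subprocess
import tempfile

def run(cmd, cwd):
    """Run a shell command in `cwd` and return its exit code."""
    return subprocess.run(cmd, shell=True, cwd=cwd).returncode

def validate(instance):
    """Simplified execution-based check for one dataset instance (sketch only)."""
    with tempfile.TemporaryDirectory() as repo_dir:
        run(f"git clone https://github.com/{instance['repo']} .", repo_dir)
        run(f"git checkout {instance['base_commit']}", repo_dir)

        # Apply the test-only patch contributed by the solution PR.
        with open(f"{repo_dir}/test.patch", "w") as f:
            f.write(instance['test_patch'])
        run("git apply test.patch", repo_dir)

        tests = " ".join(f'"{t}"' for t in instance['FAIL_TO_PASS'])

        # Before the gold patch, the issue-related tests must fail...
        fails_before = run(f"python -m pytest {tests}", repo_dir) != 0

        # ...and after the gold patch is applied, they must pass.
        with open(f"{repo_dir}/gold.patch", "w") as f:
            f.write(instance['patch'])
        run("git apply gold.patch", repo_dir)
        passes_after = run(f"python -m pytest {tests}", repo_dir) == 0

        return fails_before and passes_after
```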
For a more detailed description of the data collection process, please refer to our blog post. [link]
As an example use case of this dataset, we used SWE-bench Extra instances to generate a dataset of 84,480 trajectories (`nebius/swe-agent-trajectories` [link]). We then trained an action generator model that achieves a score of 19.2% on a subset of 50 random instances from the SWE-bench Verified benchmark, a 30% relative improvement over its parent model `Qwen2.5-72B-Instruct`. Further augmenting the action generator with a guided search based on a critic model, also trained on this data, achieves 40.6% on the full SWE-bench Verified benchmark, which is state-of-the-art among agents using solely open-weight models. You can read more about this agent in our blog post, *“Leveraging Training and Search for Better Software Engineering Agents”* [https://nebius.com/blog/posts/training-and-search-for-software-engineering-agents].
# How to Use
```python
from datasets import load_dataset
ds = load_dataset('nebius/SWE-bench-extra')
```
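Each record exposes the fields documented under Dataset Structure below. A quick look at one instance (a minimal sketch; only field names listed on this card are used):
```python
from datasets import load_dataset

# The dataset ships a single `train` split (see the metadata above).
ds = load_dataset('nebius/SWE-bench-extra', split='train')

example = ds[0]
print(example['instance_id'])               # e.g. repo_owner__repo_name-PR-number
print(example['repo'], example['base_commit'])
print(example['problem_statement'][:300])   # issue title and body

# Tests the gold patch turns from failing to passing, and tests that must keep passing.
print(len(example['FAIL_TO_PASS']), 'fail-to-pass tests')
print(len(example['PASS_TO_PASS']), 'pass-to-pass tests')

# `meta` records whether the instance satisfies the "lite" criteria.
print(example['meta']['is_lite'], example['meta']['failed_lite_validators'])
```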
# Dataset Statistics
Average, 75th percentile, and maximum values characterizing various attributes of the collected instances. Statistics are micro-averaged without grouping by repository.
| Data | Type | Mean | p75 | Max |
|---------------|--------------------|----------|----------|-----------|
| Issue text | Length (words) | 111.5 | 146 | 1,294 |
| Code base | Files (Non-test) | 71.71 | 72.00 | 2,264 |
| | Lines (Non-test) | 15,163.38| 13,777 | 1,039,288 |
| Gold patch | Files edited | 2.6 | 3 | 7 |
| | Lines edited | 56 | 76 | 300 |
| Tests | Fail to Pass | 10.94 | 5 | 4,941 |
| | Total | 58.5 | 49 | 7,820 |
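A row such as the issue-text length can be reproduced approximately with a short script (a sketch assuming whitespace tokenization, which may differ from the exact counting used for the table):
```python
import numpy as np
from datasets import load_dataset

ds = load_dataset('nebius/SWE-bench-extra', split='train')

# Word count of the issue title + body, one value per instance.
lengths = np.array([len(ex['problem_statement'].split()) for ex in ds])

print(f"mean: {lengths.mean():.1f}")
print(f"p75:  {np.percentile(lengths, 75):.0f}")
print(f"max:  {lengths.max()}")
```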
# Dataset Structure
The dataset contains the following fields. It includes all fields from SWE-bench and adds a `meta` column, which indicates whether the instance meets the "lite" criteria and, if not, lists the failed validators.
| Field name | Type | Description |
|----------------------------|--------|-------------------------------------------------------------------------------------------------|
| `instance_id` | str | A formatted instance identifier, usually as `repo_owner__repo_name-PR-number`. |
| `patch`                    | str    | The gold patch: the patch generated by the PR that resolved the issue, minus test-related code. |
| `repo` | str | The repository owner/name identifier from GitHub. |
| `base_commit`              | str    | The commit hash representing the HEAD of the repository before the solution PR is applied. |
| `hints_text`               | str    | Comments made on the issue prior to the creation date of the solution PR’s first commit. |
| `created_at` | str | The creation date of the pull request. |
| `test_patch` | str | A test-file patch that was contributed by the solution PR. |
| `problem_statement` | str | The issue title and body. |
| `version` | str | Installation version to use for running evaluation. |
| `environment_setup_commit` | str | Commit hash to use for environment setup and installation. |
| `FAIL_TO_PASS`             | list[str] | The tests, tied to the issue resolution, that fail before the gold patch is applied and pass afterwards. |
| `PASS_TO_PASS`             | list[str] | Tests that should pass both before and after the gold patch is applied. |
| `meta`                     | dict   | Whether the instance meets the "lite" criteria (`is_lite`, `has_test_patch`) and, if not, the list of `failed_lite_validators`. |
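The `meta` column makes it easy to restrict the dataset to instances that satisfy the lite criteria, for example (a minimal sketch using the fields above):
```python
from datasets import load_dataset

ds = load_dataset('nebius/SWE-bench-extra', split='train')

# Keep only instances that pass all "lite" validators.
lite = ds.filter(lambda ex: ex['meta']['is_lite'])
print(f"{len(lite)} of {len(ds)} instances meet the lite criteria")

# For excluded instances, `failed_lite_validators` explains why.
non_lite = ds.filter(lambda ex: not ex['meta']['is_lite'])
print(non_lite[0]['meta']['failed_lite_validators'])
```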
To execute instances within SWE-bench, you need to provide a default recipe for dependency installation. The constants required to run these instances are described at this [link].
# Licensing Information
All dataset contents are available under the MIT license.