tiginamaria committed
Commit 7a0f2f0
1 Parent(s): 8bf46c9
Update README.md

README.md CHANGED
@@ -259,3 +259,91 @@ configs:
  - split: dev
    path: py/dev-*
---

# Bug Localization

This is the data for the **Bug Localization** benchmark.

## How-to

1. Since the dataset is private, if you haven't used the HF Hub before, add your token via `huggingface-cli` first:

   ```
   huggingface-cli login
   ```

2. List all the available configs via [`datasets.get_dataset_config_names`](https://huggingface.co/docs/datasets/v2.14.3/en/package_reference/loading_methods#datasets.get_dataset_config_names) and choose an appropriate one.
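
   A minimal sketch that just prints the available configurations (it only uses the dataset name shown in the snippets below):

   ```py
   from datasets import get_dataset_config_names

   # Print the list of configurations available for this dataset
   print(get_dataset_config_names("JetBrains-Research/lca-bug-localization"))
   ```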

3. Load the data via [`load_dataset`](https://huggingface.co/docs/datasets/v2.14.3/en/package_reference/loading_methods#datasets.load_dataset):

   ```py
   from datasets import load_dataset

   # Select a configuration from ["py", "java", "kt", "mixed"]
   configuration = "py"
   # Select a split from ["dev", "train", "test"]
   split = "dev"
   # Load data
   dataset = load_dataset("JetBrains-Research/lca-bug-localization", configuration, split=split)
   ```
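
   Once loaded, the split is a regular `datasets.Dataset`, so it can be inspected directly; a small sketch, assuming the snippet above has been run (field names follow the table in the Dataset Structure section below):

   ```py
   # Basic sanity checks on the loaded split
   print(dataset.num_rows)
   print(dataset.column_names)

   # Each datapoint is a dict, e.g. the issue title of the first one
   print(dataset[0]["issue_title"])
   ```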

4. Load the repos via [`hf_hub_download`](https://huggingface.co/docs/huggingface_hub/v0.20.3/en/package_reference/file_download#huggingface_hub.hf_hub_download):

   ```py
   import os
   import subprocess

   from huggingface_hub import hf_hub_download
   from datasets import load_dataset

   # Load json with the list of the repos' .tar.gz file paths
   paths_json = load_dataset("JetBrains-Research/lca-bug-localization", data_files="paths.json")

   # Load each repo in .tar.gz format, unpack it, delete the archive
   repos = paths_json["repos"][0]

   for i, repo_tar_path in enumerate(repos):
       local_repo_tars = hf_hub_download(
           "JetBrains-Research/lca-bug-localization",
           filename=repo_tar_path,
           repo_type="dataset",
           local_dir="local/dir"
       )

       result = subprocess.run(["tar", "-xzf", local_repo_tars, "-C", os.path.join("local/dir", "repos")])
       os.remove(local_repo_tars)
   ```
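
   After the loop finishes, the unpacked repositories should be available under the extraction directory used above (a quick check, assuming the same `local/dir` paths as in the sketch):

   ```py
   import os

   # Directory that the tar archives were extracted into in the snippet above
   print(os.listdir(os.path.join("local/dir", "repos")))
   ```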

## Dataset Structure

TODO: some overall structure of the repo

### Bug localization data

This section concerns the configurations with *full data* about each commit (no `-labels` suffix).

Each example has the following fields:

| **Field** | **Description** |
|:------------------:|:----------------------------------------:|
| `repo_owner` | Bug issue repository owner. |
| `repo_name` | Bug issue repository name. |
| `issue_url` | GitHub link to the issue <br> `https://github.com/{repo_owner}/{repo_name}/issues/{issue_id}`. |
| `pull_url` | GitHub link to the pull request <br> `https://github.com/{repo_owner}/{repo_name}/pull/{pull_id}`. |
| `comment_url` | GitHub link to the comment linking the pull request to the issue <br> `https://github.com/{repo_owner}/{repo_name}/pull/{pull_id}#issuecomment-{comment_id}`. |
| `issue_title` | Issue title. |
| `issue_body` | Issue body. |
| `base_sha` | Pull request base SHA. |
| `head_sha` | Pull request head SHA. |
| `diff_url` | Pull request diff URL between the base and head SHA <br> `https://github.com/{repo_owner}/{repo_name}/compare/{base_sha}...{head_sha}`. |
| `diff` | Pull request diff content. |
| `changed_files` | List of changed files parsed from the diff. |
| `changed_files_exts` | Dict from changed file extension to count. |
| `changed_files_count` | Number of changed files. |
| `java_changed_files_count` | Number of changed `.java` files. |
| `kt_changed_files_count` | Number of changed `.kt` files. |
| `py_changed_files_count` | Number of changed `.py` files. |
| `code_changed_files_count` | Number of changed `.java`, `.kt`, or `.py` files. |
| `pull_create_at` | Date of pull request creation in the format `yyyy-mm-ddThh:mm:ssZ`. |
| `stars` | Number of repo stars. |
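
Since all the fields above are plain columns, the usual `datasets` operations apply to them. A minimal sketch, assuming a split has been loaded as in the How-to section above, that keeps only datapoints whose fix touches exactly one `.py` file and rebuilds the diff URL from the SHA fields:

```py
# Keep only datapoints whose fix changes exactly one .py file
single_py_bugs = dataset.filter(lambda dp: dp["py_changed_files_count"] == 1)

# The diff URL can be reconstructed from the repo and SHA fields
dp = single_py_bugs[0]
diff_url = (
    f"https://github.com/{dp['repo_owner']}/{dp['repo_name']}"
    f"/compare/{dp['base_sha']}...{dp['head_sha']}"
)
print(diff_url)
```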

### Repos data

TODO: describe repos data as `.tar.gz` archives with a list of repos' metadata