Schema of the rows that follow:

| column | dtype | values / lengths |
|---|---|---|
| url | string | lengths 58–61 |
| repository_url | string | 1 value |
| labels_url | string | lengths 72–75 |
| comments_url | string | lengths 67–70 |
| events_url | string | lengths 65–68 |
| html_url | string | lengths 46–51 |
| id | int64 | 599M–2.12B |
| node_id | string | lengths 18–32 |
| number | int64 | 1–6.65k |
| title | string | lengths 1–290 |
| user | dict | |
| labels | list | lengths 0–4 |
| state | string | 2 values |
| locked | bool | 1 class |
| assignee | dict | |
| assignees | list | lengths 0–4 |
| milestone | dict | |
| comments | int64 | 0–70 |
| created_at | unknown | |
| updated_at | unknown | |
| closed_at | unknown | |
| author_association | string | 3 values |
| active_lock_reason | float64 | |
| draft | float64 | 0–1 |
| pull_request | dict | |
| body | string | lengths 0–228k |
| reactions | dict | |
| timeline_url | string | lengths 67–70 |
| performed_via_github_app | float64 | |
| state_reason | string | 3 values |
| is_pull_request | bool | 2 classes |
https://api.github.com/repos/huggingface/datasets/issues/17
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/17/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/17/comments
https://api.github.com/repos/huggingface/datasets/issues/17/events
https://github.com/huggingface/datasets/pull/17
605,753,027
MDExOlB1bGxSZXF1ZXN0NDA4MDk3NjM0
17
Add Pandas as format type
{ "avatar_url": "https://avatars.githubusercontent.com/u/959590?v=4", "events_url": "https://api.github.com/users/jplu/events{/privacy}", "followers_url": "https://api.github.com/users/jplu/followers", "following_url": "https://api.github.com/users/jplu/following{/other_user}", "gists_url": "https://api.github.com/users/jplu/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/jplu", "id": 959590, "login": "jplu", "node_id": "MDQ6VXNlcjk1OTU5MA==", "organizations_url": "https://api.github.com/users/jplu/orgs", "received_events_url": "https://api.github.com/users/jplu/received_events", "repos_url": "https://api.github.com/users/jplu/repos", "site_admin": false, "starred_url": "https://api.github.com/users/jplu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jplu/subscriptions", "type": "User", "url": "https://api.github.com/users/jplu" }
[]
closed
false
null
[]
null
0
"2020-04-23T18:20:14Z"
"2020-04-27T18:07:50Z"
"2020-04-27T18:07:48Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/17.diff", "html_url": "https://github.com/huggingface/datasets/pull/17", "merged_at": "2020-04-27T18:07:48Z", "patch_url": "https://github.com/huggingface/datasets/pull/17.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/17" }
As detailed in the title ^^
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/17/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/17/timeline
null
null
true
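The PR above ("Add Pandas as format type") registers `pandas` as an output format. A minimal sketch of how such a format might be used; the `set_format` call mirrors the later `datasets` API and is an assumption here, not something shown in the PR body:

```python
import nlp  # the early name of what is now the `datasets` library

# Hypothetical usage: with "pandas" registered as a format type,
# slicing the dataset returns a pandas.DataFrame instead of a dict of lists.
dataset = nlp.load("squad")
dataset.set_format(type="pandas")
df = dataset[:10]  # first ten rows as a DataFrame
```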
https://api.github.com/repos/huggingface/datasets/issues/16
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/16/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/16/comments
https://api.github.com/repos/huggingface/datasets/issues/16/events
https://github.com/huggingface/datasets/pull/16
605,661,462
MDExOlB1bGxSZXF1ZXN0NDA4MDIyMTUz
16
create our own DownloadManager
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
4
"2020-04-23T16:08:07Z"
"2021-05-05T18:25:24Z"
"2020-04-25T21:25:10Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/16.diff", "html_url": "https://github.com/huggingface/datasets/pull/16", "merged_at": "2020-04-25T21:25:10Z", "patch_url": "https://github.com/huggingface/datasets/pull/16.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/16" }
I tried to create our own - and way simpler - download manager, by replacing all the complicated stuff with our own `cached_path` solution. With this implementation, I tried `dataset = nlp.load('squad')` and it seems to work fine.

For the implementation, here is exactly what I did:
- I copied the old download manager
- I removed all the dependencies on the old `download` files
- I replaced all the download + extract calls by calls to `cached_path`
- I removed unused parameters (`extract_dir`, `compute_stats`); maybe `compute_stats` could be re-added later if we want to compute stats
- I left some functions unimplemented for now. We will probably have to implement them because they are used by some dataset scripts (`download_kaggle_data`, `iter_archive`) or because we may need them at some point (`download_checksums`, `_record_sizes_checksums`)

Let me know if you think that this is going in the right direction or if you have remarks.

Note: I didn't write any tests yet as I wanted to read your remarks first.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/16/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/16/timeline
null
null
true
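The PR body above describes the whole trick: every download + extract call collapses into `cached_path`. A minimal sketch of that shape, with `cached_path` stubbed out and all signatures assumed:

```python
from typing import Dict, List, Union

def cached_path(url_or_filename: str) -> str:
    """Stand-in for the library's caching helper: download (and extract)
    the given URL into a local cache and return the local path."""
    raise NotImplementedError

class DownloadManager:
    """Sketch of a download manager that delegates everything to cached_path."""

    def download_and_extract(self, url_or_urls: Union[str, List, Dict]):
        # Preserve the nesting of the input: a str maps to a path,
        # a dict/list maps to a dict/list of paths.
        if isinstance(url_or_urls, str):
            return cached_path(url_or_urls)
        if isinstance(url_or_urls, dict):
            return {k: self.download_and_extract(v) for k, v in url_or_urls.items()}
        return [self.download_and_extract(u) for u in url_or_urls]
```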
https://api.github.com/repos/huggingface/datasets/issues/15
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/15/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/15/comments
https://api.github.com/repos/huggingface/datasets/issues/15/events
https://github.com/huggingface/datasets/pull/15
604,906,708
MDExOlB1bGxSZXF1ZXN0NDA3NDEwOTk3
15
[Tests] General Test Design for all dataset scripts
{ "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/patrickvonplaten", "id": 23423619, "login": "patrickvonplaten", "node_id": "MDQ6VXNlcjIzNDIzNjE5", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "site_admin": false, "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "type": "User", "url": "https://api.github.com/users/patrickvonplaten" }
[]
closed
false
null
[]
null
10
"2020-04-22T16:46:01Z"
"2022-10-04T09:31:54Z"
"2020-04-27T14:48:02Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/15.diff", "html_url": "https://github.com/huggingface/datasets/pull/15", "merged_at": "2020-04-27T14:48:02Z", "patch_url": "https://github.com/huggingface/datasets/pull/15.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/15" }
The general idea is similar to how testing is done in `transformers`. There is one general `test_dataset_common.py` file which has a `DatasetTesterMixin` class. This class implements all of the logic that can be used in a generic way for all dataset classes. The idea is to keep each individual dataset test file as minimal as possible.

In order to test whether a specific dataset class can download the data and generate the examples **without** downloading the actual data all the time, a `MockDataLoaderManager` class is used. It receives a `mock_folder_structure_fn` function from each individual dataset test file that creates "fake" data and returns the same folder structure that would have been created when using the real data downloader.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/15/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/15/timeline
null
null
true
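A rough sketch of the test design the PR describes: one mixin with the shared logic, plus a mock download manager fed by a per-dataset `mock_folder_structure_fn`. The class and attribute names follow the PR body; everything else is an assumption:

```python
class MockDataLoaderManager:
    """Replaces the real download manager in tests: instead of downloading,
    it calls a per-dataset function that writes fake files and returns the
    folder structure the real downloader would have produced."""

    def __init__(self, mock_folder_structure_fn):
        self.mock_folder_structure_fn = mock_folder_structure_fn

    def download_and_extract(self, *args, **kwargs):
        return self.mock_folder_structure_fn()


class DatasetTesterMixin:
    """Generic checks inherited by every per-dataset test class."""

    dataset_builder_cls = None       # set by the concrete test class
    mock_folder_structure_fn = None  # set by the concrete test class

    def test_generate_examples(self):
        builder = self.dataset_builder_cls()
        dl_manager = MockDataLoaderManager(self.mock_folder_structure_fn)
        for split_generator in builder._split_generators(dl_manager):
            # Every split must yield at least one example from the fake data.
            examples = list(builder._generate_examples(**split_generator.gen_kwargs))
            assert len(examples) > 0
```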
https://api.github.com/repos/huggingface/datasets/issues/14
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/14/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/14/comments
https://api.github.com/repos/huggingface/datasets/issues/14/events
https://github.com/huggingface/datasets/pull/14
604,761,315
MDExOlB1bGxSZXF1ZXN0NDA3MjkzNjU5
14
[Download] Only create dir if not already exist
{ "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/patrickvonplaten", "id": 23423619, "login": "patrickvonplaten", "node_id": "MDQ6VXNlcjIzNDIzNjE5", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "site_admin": false, "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "type": "User", "url": "https://api.github.com/users/patrickvonplaten" }
[]
closed
false
null
[]
null
0
"2020-04-22T13:32:51Z"
"2022-10-04T09:31:50Z"
"2020-04-23T08:27:33Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/14.diff", "html_url": "https://github.com/huggingface/datasets/pull/14", "merged_at": "2020-04-23T08:27:33Z", "patch_url": "https://github.com/huggingface/datasets/pull/14.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/14" }
This was quite annoying to find out :D. Some datasets save into the same directory, so we should only create a new directory if it doesn't already exist.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/14/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/14/timeline
null
null
true
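The fix the PR above describes is the standard idempotent directory creation, which in Python is a single call; a sketch (the path is just an example):

```python
import os

download_dir = "/tmp/nlp_downloads"  # example path

# Only create the directory if it does not already exist; exist_ok=True
# makes the call a no-op when another dataset has already saved into
# the same directory.
os.makedirs(download_dir, exist_ok=True)
```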
https://api.github.com/repos/huggingface/datasets/issues/13
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/13/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/13/comments
https://api.github.com/repos/huggingface/datasets/issues/13/events
https://github.com/huggingface/datasets/pull/13
604,547,951
MDExOlB1bGxSZXF1ZXN0NDA3MTIxMjkw
13
[Make style]
{ "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/patrickvonplaten", "id": 23423619, "login": "patrickvonplaten", "node_id": "MDQ6VXNlcjIzNDIzNjE5", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "site_admin": false, "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "type": "User", "url": "https://api.github.com/users/patrickvonplaten" }
[]
closed
false
null
[]
null
3
"2020-04-22T08:10:06Z"
"2022-10-04T09:31:51Z"
"2020-04-23T13:02:22Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/13.diff", "html_url": "https://github.com/huggingface/datasets/pull/13", "merged_at": "2020-04-23T13:02:22Z", "patch_url": "https://github.com/huggingface/datasets/pull/13.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/13" }
Added a Makefile and applied `make style` to everything. `make style` runs the following code:

```
style:
	black --line-length 119 --target-version py35 src
	isort --recursive src
```

It's the same code that is run in `transformers`.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/13/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/13/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/12
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/12/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/12/comments
https://api.github.com/repos/huggingface/datasets/issues/12/events
https://github.com/huggingface/datasets/pull/12
604,518,583
MDExOlB1bGxSZXF1ZXN0NDA3MDk3MzA4
12
[Map Function] add assert statement if map function does not return dict or None
{ "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/patrickvonplaten", "id": 23423619, "login": "patrickvonplaten", "node_id": "MDQ6VXNlcjIzNDIzNjE5", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "site_admin": false, "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "type": "User", "url": "https://api.github.com/users/patrickvonplaten" }
[]
closed
false
null
[]
null
3
"2020-04-22T07:21:24Z"
"2022-10-04T09:31:53Z"
"2020-04-24T06:29:03Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/12.diff", "html_url": "https://github.com/huggingface/datasets/pull/12", "merged_at": "2020-04-24T06:29:03Z", "patch_url": "https://github.com/huggingface/datasets/pull/12.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/12" }
IMO, if a function is provided that neither acts like a print statement (-> returns a variable of type `None`) nor updates the dataset (-> returns a variable of type `dict`), then a `TypeError` should be raised. Not sure whether you had cases in mind where the user should do something else @thomwolf, but I think a lot of silent errors can be avoided with this assert statement.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/12/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/12/timeline
null
null
true
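A sketch of the guard the PR above proposes, written as a standalone check (the surrounding `map` machinery is elided and the error message is an assumption):

```python
def check_map_function_output(processed_example):
    """A mapped function may update the dataset (return a dict) or run purely
    for side effects, e.g. printing (return None); anything else is an error."""
    if processed_example is not None and not isinstance(processed_example, dict):
        raise TypeError(
            "The function applied to all elements of the dataset returned a "
            f"variable of type {type(processed_example)}; it must return a "
            "`dict` (to update the examples) or `None` (for side effects only)."
        )
```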
https://api.github.com/repos/huggingface/datasets/issues/11
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/11/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/11/comments
https://api.github.com/repos/huggingface/datasets/issues/11/events
https://github.com/huggingface/datasets/pull/11
603,921,624
MDExOlB1bGxSZXF1ZXN0NDA2NjExODk2
11
[Convert TFDS to HFDS] Extend script to also allow just converting a single file
{ "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/patrickvonplaten", "id": 23423619, "login": "patrickvonplaten", "node_id": "MDQ6VXNlcjIzNDIzNjE5", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "site_admin": false, "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "type": "User", "url": "https://api.github.com/users/patrickvonplaten" }
[]
closed
false
null
[]
null
0
"2020-04-21T11:25:33Z"
"2022-10-04T09:31:46Z"
"2020-04-21T20:47:00Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/11.diff", "html_url": "https://github.com/huggingface/datasets/pull/11", "merged_at": "2020-04-21T20:47:00Z", "patch_url": "https://github.com/huggingface/datasets/pull/11.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/11" }
Adds another argument to be able to convert only a single file
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/11/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/11/timeline
null
null
true
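The change above is just an extra CLI argument; a hedged sketch of what such an option might look like in the conversion script (both flag names are illustrative assumptions):

```python
import argparse

parser = argparse.ArgumentParser(description="Convert TFDS dataset scripts to HFDS")
parser.add_argument("--tfds_directory", type=str, default=None,
                    help="convert every dataset script found in this directory")
# New: allow pointing at a single script instead of a whole directory.
parser.add_argument("--tfds_file", type=str, default=None,
                    help="convert only this one dataset script")
args = parser.parse_args()
```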
https://api.github.com/repos/huggingface/datasets/issues/10
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/10/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/10/comments
https://api.github.com/repos/huggingface/datasets/issues/10/events
https://github.com/huggingface/datasets/pull/10
603,909,327
MDExOlB1bGxSZXF1ZXN0NDA2NjAxNzQ2
10
Name json file "squad.json" instead of "squad.py.json"
{ "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/patrickvonplaten", "id": 23423619, "login": "patrickvonplaten", "node_id": "MDQ6VXNlcjIzNDIzNjE5", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "site_admin": false, "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "type": "User", "url": "https://api.github.com/users/patrickvonplaten" }
[]
closed
false
null
[]
null
0
"2020-04-21T11:04:28Z"
"2022-10-04T09:31:44Z"
"2020-04-21T20:48:06Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/10.diff", "html_url": "https://github.com/huggingface/datasets/pull/10", "merged_at": "2020-04-21T20:48:06Z", "patch_url": "https://github.com/huggingface/datasets/pull/10.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/10" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/10/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/10/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/9
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/9/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/9/comments
https://api.github.com/repos/huggingface/datasets/issues/9/events
https://github.com/huggingface/datasets/pull/9
603,894,874
MDExOlB1bGxSZXF1ZXN0NDA2NTkwMDQw
9
[Clean up] Datasets
{ "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/patrickvonplaten", "id": 23423619, "login": "patrickvonplaten", "node_id": "MDQ6VXNlcjIzNDIzNjE5", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "site_admin": false, "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "type": "User", "url": "https://api.github.com/users/patrickvonplaten" }
[]
closed
false
null
[]
null
1
"2020-04-21T10:39:56Z"
"2022-10-04T09:31:42Z"
"2020-04-21T20:49:58Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/9.diff", "html_url": "https://github.com/huggingface/datasets/pull/9", "merged_at": "2020-04-21T20:49:58Z", "patch_url": "https://github.com/huggingface/datasets/pull/9.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/9" }
Clean up the `nlp/datasets` folder.

As I understood it, eventually the `nlp/datasets` folder shall not exist anymore at all. The folder `nlp/datasets/nlp` is kept for the moment, but won't be needed in the future, since it will live on S3 (actually it already does) at: `https://s3.console.aws.amazon.com/s3/buckets/datasets.huggingface.co/nlp/?region=us-east-1`, and the different dataset downloader scripts will be added to `nlp/src/nlp` when downloaded by the user.

The folder `nlp/datasets/checksums` is kept for now, but won't be needed anymore in the future.

The remaining folders/files are leftovers from tensorflow-datasets and are not needed. They can be looked up in the private tensorflow-datasets repo.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/9/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/9/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/8
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/8/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/8/comments
https://api.github.com/repos/huggingface/datasets/issues/8/events
https://github.com/huggingface/datasets/pull/8
601,783,243
MDExOlB1bGxSZXF1ZXN0NDA0OTg0NDUz
8
Fix issue 6: error when the citation is missing in the DatasetInfo
{ "avatar_url": "https://avatars.githubusercontent.com/u/959590?v=4", "events_url": "https://api.github.com/users/jplu/events{/privacy}", "followers_url": "https://api.github.com/users/jplu/followers", "following_url": "https://api.github.com/users/jplu/following{/other_user}", "gists_url": "https://api.github.com/users/jplu/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/jplu", "id": 959590, "login": "jplu", "node_id": "MDQ6VXNlcjk1OTU5MA==", "organizations_url": "https://api.github.com/users/jplu/orgs", "received_events_url": "https://api.github.com/users/jplu/received_events", "repos_url": "https://api.github.com/users/jplu/repos", "site_admin": false, "starred_url": "https://api.github.com/users/jplu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jplu/subscriptions", "type": "User", "url": "https://api.github.com/users/jplu" }
[]
closed
false
null
[]
null
0
"2020-04-17T08:04:26Z"
"2020-04-29T09:27:11Z"
"2020-04-20T13:24:12Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/8.diff", "html_url": "https://github.com/huggingface/datasets/pull/8", "merged_at": "2020-04-20T13:24:12Z", "patch_url": "https://github.com/huggingface/datasets/pull/8.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/8" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/8/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/8/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/7
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7/comments
https://api.github.com/repos/huggingface/datasets/issues/7/events
https://github.com/huggingface/datasets/pull/7
601,780,534
MDExOlB1bGxSZXF1ZXN0NDA0OTgyMzA2
7
Fix issue 5: allow empty datasets
{ "avatar_url": "https://avatars.githubusercontent.com/u/959590?v=4", "events_url": "https://api.github.com/users/jplu/events{/privacy}", "followers_url": "https://api.github.com/users/jplu/followers", "following_url": "https://api.github.com/users/jplu/following{/other_user}", "gists_url": "https://api.github.com/users/jplu/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/jplu", "id": 959590, "login": "jplu", "node_id": "MDQ6VXNlcjk1OTU5MA==", "organizations_url": "https://api.github.com/users/jplu/orgs", "received_events_url": "https://api.github.com/users/jplu/received_events", "repos_url": "https://api.github.com/users/jplu/repos", "site_admin": false, "starred_url": "https://api.github.com/users/jplu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jplu/subscriptions", "type": "User", "url": "https://api.github.com/users/jplu" }
[]
closed
false
null
[]
null
0
"2020-04-17T07:59:56Z"
"2020-04-29T09:27:13Z"
"2020-04-20T13:23:48Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/7.diff", "html_url": "https://github.com/huggingface/datasets/pull/7", "merged_at": "2020-04-20T13:23:47Z", "patch_url": "https://github.com/huggingface/datasets/pull/7.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/6
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6/comments
https://api.github.com/repos/huggingface/datasets/issues/6/events
https://github.com/huggingface/datasets/issues/6
600,330,836
MDU6SXNzdWU2MDAzMzA4MzY=
6
Error when citation is not given in the DatasetInfo
{ "avatar_url": "https://avatars.githubusercontent.com/u/959590?v=4", "events_url": "https://api.github.com/users/jplu/events{/privacy}", "followers_url": "https://api.github.com/users/jplu/followers", "following_url": "https://api.github.com/users/jplu/following{/other_user}", "gists_url": "https://api.github.com/users/jplu/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/jplu", "id": 959590, "login": "jplu", "node_id": "MDQ6VXNlcjk1OTU5MA==", "organizations_url": "https://api.github.com/users/jplu/orgs", "received_events_url": "https://api.github.com/users/jplu/received_events", "repos_url": "https://api.github.com/users/jplu/repos", "site_admin": false, "starred_url": "https://api.github.com/users/jplu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jplu/subscriptions", "type": "User", "url": "https://api.github.com/users/jplu" }
[]
closed
false
null
[]
null
3
"2020-04-15T14:14:54Z"
"2020-04-29T09:23:22Z"
"2020-04-29T09:23:22Z"
CONTRIBUTOR
null
null
null
The following error is raised when the `citation` parameter is missing when we instantiate a `DatasetInfo`:

```
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/home/jplu/dev/jplu/datasets/src/nlp/info.py", line 338, in __repr__
    citation_pprint = _indent('"""{}"""'.format(self.citation.strip()))
AttributeError: 'NoneType' object has no attribute 'strip'
```

I propose the following change in the `info.py` file. The method:

```python
def __repr__(self):
    splits_pprint = _indent("\n".join(["{"] + [
        "    '{}': {},".format(k, split.num_examples)
        for k, split in sorted(self.splits.items())
    ] + ["}"]))
    features_pprint = _indent(repr(self.features))
    citation_pprint = _indent('"""{}"""'.format(self.citation.strip()))
    return INFO_STR.format(
        name=self.name,
        version=self.version,
        description=self.description,
        total_num_examples=self.splits.total_num_examples,
        features=features_pprint,
        splits=splits_pprint,
        citation=citation_pprint,
        homepage=self.homepage,
        supervised_keys=self.supervised_keys,
        # Proto add a \n that we strip.
        license=str(self.license).strip())
```

becomes:

```python
def __repr__(self):
    splits_pprint = _indent("\n".join(["{"] + [
        "    '{}': {},".format(k, split.num_examples)
        for k, split in sorted(self.splits.items())
    ] + ["}"]))
    features_pprint = _indent(repr(self.features))
    ## the strip is done only if the citation is given
    citation_pprint = self.citation
    if self.citation:
        citation_pprint = _indent('"""{}"""'.format(self.citation.strip()))
    return INFO_STR.format(
        name=self.name,
        version=self.version,
        description=self.description,
        total_num_examples=self.splits.total_num_examples,
        features=features_pprint,
        splits=splits_pprint,
        citation=citation_pprint,
        homepage=self.homepage,
        supervised_keys=self.supervised_keys,
        # Proto add a \n that we strip.
        license=str(self.license).strip())
```

And now it is ok. @thomwolf are you ok with this fix?
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/5
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5/comments
https://api.github.com/repos/huggingface/datasets/issues/5/events
https://github.com/huggingface/datasets/issues/5
600,295,889
MDU6SXNzdWU2MDAyOTU4ODk=
5
ValueError when a split is empty
{ "avatar_url": "https://avatars.githubusercontent.com/u/959590?v=4", "events_url": "https://api.github.com/users/jplu/events{/privacy}", "followers_url": "https://api.github.com/users/jplu/followers", "following_url": "https://api.github.com/users/jplu/following{/other_user}", "gists_url": "https://api.github.com/users/jplu/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/jplu", "id": 959590, "login": "jplu", "node_id": "MDQ6VXNlcjk1OTU5MA==", "organizations_url": "https://api.github.com/users/jplu/orgs", "received_events_url": "https://api.github.com/users/jplu/received_events", "repos_url": "https://api.github.com/users/jplu/repos", "site_admin": false, "starred_url": "https://api.github.com/users/jplu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jplu/subscriptions", "type": "User", "url": "https://api.github.com/users/jplu" }
[]
closed
false
null
[]
null
3
"2020-04-15T13:25:13Z"
"2020-04-29T09:23:05Z"
"2020-04-29T09:23:05Z"
CONTRIBUTOR
null
null
null
When a split is empty, either TEST, VALIDATION or TRAIN, I get the following error:

```
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/home/jplu/dev/jplu/datasets/src/nlp/load.py", line 295, in load
    ds = dbuilder.as_dataset(**as_dataset_kwargs)
  File "/home/jplu/dev/jplu/datasets/src/nlp/builder.py", line 587, in as_dataset
    datasets = utils.map_nested(build_single_dataset, split, map_tuple=True)
  File "/home/jplu/dev/jplu/datasets/src/nlp/utils/py_utils.py", line 158, in map_nested
    for k, v in data_struct.items()
  File "/home/jplu/dev/jplu/datasets/src/nlp/utils/py_utils.py", line 158, in <dictcomp>
    for k, v in data_struct.items()
  File "/home/jplu/dev/jplu/datasets/src/nlp/utils/py_utils.py", line 172, in map_nested
    return function(data_struct)
  File "/home/jplu/dev/jplu/datasets/src/nlp/builder.py", line 601, in _build_single_dataset
    split=split,
  File "/home/jplu/dev/jplu/datasets/src/nlp/builder.py", line 625, in _as_dataset
    split_infos=self.info.splits.values(),
  File "/home/jplu/dev/jplu/datasets/src/nlp/arrow_reader.py", line 200, in read
    return py_utils.map_nested(_read_instruction_to_ds, instructions)
  File "/home/jplu/dev/jplu/datasets/src/nlp/utils/py_utils.py", line 172, in map_nested
    return function(data_struct)
  File "/home/jplu/dev/jplu/datasets/src/nlp/arrow_reader.py", line 191, in _read_instruction_to_ds
    file_instructions = make_file_instructions(name, split_infos, instruction)
  File "/home/jplu/dev/jplu/datasets/src/nlp/arrow_reader.py", line 104, in make_file_instructions
    absolute_instructions=absolute_instructions,
  File "/home/jplu/dev/jplu/datasets/src/nlp/arrow_reader.py", line 122, in _make_file_instructions_from_absolutes
    'Split empty. This might means that dataset hasn\'t been generated '
ValueError: Split empty. This might means that dataset hasn't been generated yet and info not restored from GCS, or that legacy dataset is used.
```

How to reproduce:

```python
import csv

import nlp


class Bbc(nlp.GeneratorBasedBuilder):
    VERSION = nlp.Version("1.0.0")

    def __init__(self, **config):
        self.train = config.pop("train", None)
        self.validation = config.pop("validation", None)
        super(Bbc, self).__init__(**config)

    def _info(self):
        return nlp.DatasetInfo(builder=self,
                               description="bla",
                               features=nlp.features.FeaturesDict({"id": nlp.int32,
                                                                   "text": nlp.string,
                                                                   "label": nlp.string}))

    def _split_generators(self, dl_manager):
        return [nlp.SplitGenerator(name=nlp.Split.TRAIN, gen_kwargs={"filepath": self.train}),
                nlp.SplitGenerator(name=nlp.Split.VALIDATION, gen_kwargs={"filepath": self.validation}),
                nlp.SplitGenerator(name=nlp.Split.TEST, gen_kwargs={"filepath": None})]

    def _generate_examples(self, filepath):
        if not filepath:
            return None, {}

        with open(filepath) as f:
            reader = csv.reader(f, delimiter=',', quotechar="\"")
            lines = list(reader)[1:]

            for idx, line in enumerate(lines):
                yield idx, {"id": idx, "text": line[1], "label": line[0]}
```

```python
import nlp

dataset = nlp.load("bbc", builder_kwargs={"train": "bbc/data/train.csv",
                                          "validation": "bbc/data/test.csv"})
```
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/4
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4/comments
https://api.github.com/repos/huggingface/datasets/issues/4/events
https://github.com/huggingface/datasets/issues/4
600,185,417
MDU6SXNzdWU2MDAxODU0MTc=
4
[Feature] Keep the list of labels of a dataset as metadata
{ "avatar_url": "https://avatars.githubusercontent.com/u/959590?v=4", "events_url": "https://api.github.com/users/jplu/events{/privacy}", "followers_url": "https://api.github.com/users/jplu/followers", "following_url": "https://api.github.com/users/jplu/following{/other_user}", "gists_url": "https://api.github.com/users/jplu/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/jplu", "id": 959590, "login": "jplu", "node_id": "MDQ6VXNlcjk1OTU5MA==", "organizations_url": "https://api.github.com/users/jplu/orgs", "received_events_url": "https://api.github.com/users/jplu/received_events", "repos_url": "https://api.github.com/users/jplu/repos", "site_admin": false, "starred_url": "https://api.github.com/users/jplu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jplu/subscriptions", "type": "User", "url": "https://api.github.com/users/jplu" }
[]
closed
false
null
[]
null
6
"2020-04-15T10:17:10Z"
"2020-07-08T16:59:46Z"
"2020-05-04T06:11:57Z"
CONTRIBUTOR
null
null
null
It would be useful to keep the list of the labels of a dataset as metadata, either directly in the `DatasetInfo` or in the Arrow metadata.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4/timeline
null
completed
false
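This is essentially what the `ClassLabel` feature ended up providing in the `datasets` library: the label names live in the feature definition and travel with the dataset as metadata. A short example using today's API (the BBC-style label names are just an illustration):

```python
from datasets import ClassLabel, Features, Value

features = Features({
    "text": Value("string"),
    # The list of labels is stored in the feature itself, i.e. as metadata.
    "label": ClassLabel(names=["business", "entertainment", "politics", "sport", "tech"]),
})

print(features["label"].names)             # ['business', 'entertainment', ...]
print(features["label"].str2int("sport"))  # 3
```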
https://api.github.com/repos/huggingface/datasets/issues/3
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3/comments
https://api.github.com/repos/huggingface/datasets/issues/3/events
https://github.com/huggingface/datasets/issues/3
600,180,050
MDU6SXNzdWU2MDAxODAwNTA=
3
[Feature] More dataset outputs
{ "avatar_url": "https://avatars.githubusercontent.com/u/959590?v=4", "events_url": "https://api.github.com/users/jplu/events{/privacy}", "followers_url": "https://api.github.com/users/jplu/followers", "following_url": "https://api.github.com/users/jplu/following{/other_user}", "gists_url": "https://api.github.com/users/jplu/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/jplu", "id": 959590, "login": "jplu", "node_id": "MDQ6VXNlcjk1OTU5MA==", "organizations_url": "https://api.github.com/users/jplu/orgs", "received_events_url": "https://api.github.com/users/jplu/received_events", "repos_url": "https://api.github.com/users/jplu/repos", "site_admin": false, "starred_url": "https://api.github.com/users/jplu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jplu/subscriptions", "type": "User", "url": "https://api.github.com/users/jplu" }
[]
closed
false
null
[]
null
3
"2020-04-15T10:08:14Z"
"2020-05-04T06:12:27Z"
"2020-05-04T06:12:27Z"
CONTRIBUTOR
null
null
null
Add the following dataset outputs:
- Spark
- Pandas
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/2
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2/comments
https://api.github.com/repos/huggingface/datasets/issues/2/events
https://github.com/huggingface/datasets/issues/2
599,767,671
MDU6SXNzdWU1OTk3Njc2NzE=
2
Issue to read a local dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/959590?v=4", "events_url": "https://api.github.com/users/jplu/events{/privacy}", "followers_url": "https://api.github.com/users/jplu/followers", "following_url": "https://api.github.com/users/jplu/following{/other_user}", "gists_url": "https://api.github.com/users/jplu/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/jplu", "id": 959590, "login": "jplu", "node_id": "MDQ6VXNlcjk1OTU5MA==", "organizations_url": "https://api.github.com/users/jplu/orgs", "received_events_url": "https://api.github.com/users/jplu/received_events", "repos_url": "https://api.github.com/users/jplu/repos", "site_admin": false, "starred_url": "https://api.github.com/users/jplu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jplu/subscriptions", "type": "User", "url": "https://api.github.com/users/jplu" }
[]
closed
false
null
[]
null
5
"2020-04-14T18:18:51Z"
"2020-05-11T18:55:23Z"
"2020-05-11T18:55:22Z"
CONTRIBUTOR
null
null
null
Hello,

As proposed by @thomwolf, I open an issue to explain what I'm trying to do without success. What I want to do is to create and load a local dataset. The script I have written is the following:

```python
import os
import csv

import nlp


class BbcConfig(nlp.BuilderConfig):
    def __init__(self, **kwargs):
        super(BbcConfig, self).__init__(**kwargs)


class Bbc(nlp.GeneratorBasedBuilder):
    _DIR = "./data"
    _DEV_FILE = "test.csv"
    _TRAINING_FILE = "train.csv"

    BUILDER_CONFIGS = [BbcConfig(name="bbc", version=nlp.Version("1.0.0"))]

    def _info(self):
        return nlp.DatasetInfo(builder=self,
                               features=nlp.features.FeaturesDict({"id": nlp.string,
                                                                   "text": nlp.string,
                                                                   "label": nlp.string}))

    def _split_generators(self, dl_manager):
        files = {"train": os.path.join(self._DIR, self._TRAINING_FILE),
                 "dev": os.path.join(self._DIR, self._DEV_FILE)}

        return [nlp.SplitGenerator(name=nlp.Split.TRAIN, gen_kwargs={"filepath": files["train"]}),
                nlp.SplitGenerator(name=nlp.Split.VALIDATION, gen_kwargs={"filepath": files["dev"]})]

    def _generate_examples(self, filepath):
        with open(filepath) as f:
            reader = csv.reader(f, delimiter=',', quotechar="\"")
            lines = list(reader)[1:]

            for idx, line in enumerate(lines):
                yield idx, {"idx": idx, "text": line[1], "label": line[0]}
```

The dataset is attached to this issue as well: [data.zip](https://github.com/huggingface/datasets/files/4476928/data.zip)

Now the steps to reproduce what I would like to do:
1. unzip the data locally (I know the nlp lib can detect and extract archives, but I want to make the reproduction as simple as possible)
2. create the `bbc.py` script as above, at the same location as the unzipped `data` folder

I then try to load the dataset in three different ways and none of them works. The first one is with the name of the dataset, as I would do with TFDS:

```python
import nlp
from bbc import Bbc
dataset = nlp.load("bbc")
```

I get:

```
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/opt/anaconda3/envs/transformers/lib/python3.7/site-packages/nlp/load.py", line 280, in load
    dbuilder: DatasetBuilder = builder(path, name, data_dir=data_dir, **builder_kwargs)
  File "/opt/anaconda3/envs/transformers/lib/python3.7/site-packages/nlp/load.py", line 166, in builder
    builder_cls = load_dataset(path, name=name, **builder_kwargs)
  File "/opt/anaconda3/envs/transformers/lib/python3.7/site-packages/nlp/load.py", line 88, in load_dataset
    local_files_only=local_files_only,
  File "/opt/anaconda3/envs/transformers/lib/python3.7/site-packages/nlp/utils/file_utils.py", line 214, in cached_path
    if not is_zipfile(output_path) and not tarfile.is_tarfile(output_path):
  File "/opt/anaconda3/envs/transformers/lib/python3.7/zipfile.py", line 203, in is_zipfile
    with open(filename, "rb") as fp:
TypeError: expected str, bytes or os.PathLike object, not NoneType
```

But @thomwolf told me there is no need to import the script, just give its path. So I tried three different ways:

```python
import nlp
dataset = nlp.load("bbc.py")
```

and

```python
import nlp
dataset = nlp.load("./bbc.py")
```

and

```python
import nlp
dataset = nlp.load("/absolute/path/to/bbc.py")
```

These three ways give me:

```
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/opt/anaconda3/envs/transformers/lib/python3.7/site-packages/nlp/load.py", line 280, in load
    dbuilder: DatasetBuilder = builder(path, name, data_dir=data_dir, **builder_kwargs)
  File "/opt/anaconda3/envs/transformers/lib/python3.7/site-packages/nlp/load.py", line 166, in builder
    builder_cls = load_dataset(path, name=name, **builder_kwargs)
  File "/opt/anaconda3/envs/transformers/lib/python3.7/site-packages/nlp/load.py", line 124, in load_dataset
    dataset_module = importlib.import_module(module_path)
  File "/opt/anaconda3/envs/transformers/lib/python3.7/importlib/__init__.py", line 127, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
  File "<frozen importlib._bootstrap>", line 1006, in _gcd_import
  File "<frozen importlib._bootstrap>", line 983, in _find_and_load
  File "<frozen importlib._bootstrap>", line 965, in _find_and_load_unlocked
ModuleNotFoundError: No module named 'nlp.datasets.2fd72627d92c328b3e9c4a3bf7ec932c48083caca09230cebe4c618da6e93688.bbc'
```

Any idea of what I'm missing? Or I might have spotted a bug :)
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/2/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/1
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1/comments
https://api.github.com/repos/huggingface/datasets/issues/1/events
https://github.com/huggingface/datasets/pull/1
599,457,467
MDExOlB1bGxSZXF1ZXN0NDAzMDk1NDYw
1
changing nlp.bool to nlp.bool_
{ "avatar_url": "https://avatars.githubusercontent.com/u/38249783?v=4", "events_url": "https://api.github.com/users/mariamabarham/events{/privacy}", "followers_url": "https://api.github.com/users/mariamabarham/followers", "following_url": "https://api.github.com/users/mariamabarham/following{/other_user}", "gists_url": "https://api.github.com/users/mariamabarham/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariamabarham", "id": 38249783, "login": "mariamabarham", "node_id": "MDQ6VXNlcjM4MjQ5Nzgz", "organizations_url": "https://api.github.com/users/mariamabarham/orgs", "received_events_url": "https://api.github.com/users/mariamabarham/received_events", "repos_url": "https://api.github.com/users/mariamabarham/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariamabarham/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariamabarham/subscriptions", "type": "User", "url": "https://api.github.com/users/mariamabarham" }
[]
closed
false
null
[]
null
0
"2020-04-14T10:18:02Z"
"2022-10-04T09:31:40Z"
"2020-04-14T12:01:40Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/1.diff", "html_url": "https://github.com/huggingface/datasets/pull/1", "merged_at": "2020-04-14T12:01:40Z", "patch_url": "https://github.com/huggingface/datasets/pull/1.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1/timeline
null
null
true