url: stringlengths (58-61)
repository_url: stringclasses (1 value)
labels_url: stringlengths (72-75)
comments_url: stringlengths (67-70)
events_url: stringlengths (65-68)
html_url: stringlengths (46-51)
id: int64 (599M-2.12B)
node_id: stringlengths (18-32)
number: int64 (1-6.65k)
title: stringlengths (1-290)
user: dict
labels: listlengths (0-4)
state: stringclasses (2 values)
locked: bool (1 class)
assignee: dict
assignees: listlengths (0-4)
milestone: dict
comments: int64 (0-70)
created_at: unknown
updated_at: unknown
closed_at: unknown
author_association: stringclasses (3 values)
active_lock_reason: float64
draft: float64 (0-1)
pull_request: dict
body: stringlengths (0-228k)
reactions: dict
timeline_url: stringlengths (67-70)
performed_via_github_app: float64
state_reason: stringclasses (3 values)
is_pull_request: bool (2 classes)
https://api.github.com/repos/huggingface/datasets/issues/522
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/522/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/522/comments
https://api.github.com/repos/huggingface/datasets/issues/522/events
https://github.com/huggingface/datasets/issues/522
682,478,833
MDU6SXNzdWU2ODI0Nzg4MzM=
522
dictionnary typo in docs
{ "avatar_url": "https://avatars.githubusercontent.com/u/4004127?v=4", "events_url": "https://api.github.com/users/yonigottesman/events{/privacy}", "followers_url": "https://api.github.com/users/yonigottesman/followers", "following_url": "https://api.github.com/users/yonigottesman/following{/other_user}", "gists_url": "https://api.github.com/users/yonigottesman/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/yonigottesman", "id": 4004127, "login": "yonigottesman", "node_id": "MDQ6VXNlcjQwMDQxMjc=", "organizations_url": "https://api.github.com/users/yonigottesman/orgs", "received_events_url": "https://api.github.com/users/yonigottesman/received_events", "repos_url": "https://api.github.com/users/yonigottesman/repos", "site_admin": false, "starred_url": "https://api.github.com/users/yonigottesman/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yonigottesman/subscriptions", "type": "User", "url": "https://api.github.com/users/yonigottesman" }
[]
closed
false
null
[]
null
1
"2020-08-20T07:11:05Z"
"2020-08-20T07:52:14Z"
"2020-08-20T07:52:13Z"
CONTRIBUTOR
null
null
null
In many places, dictionary is spelled dictionnary; not sure if it's on purpose or not. Fixed in this PR: https://github.com/huggingface/nlp/pull/521
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/522/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/522/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/521
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/521/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/521/comments
https://api.github.com/repos/huggingface/datasets/issues/521/events
https://github.com/huggingface/datasets/pull/521
682,477,648
MDExOlB1bGxSZXF1ZXN0NDcwNzEyNzgz
521
Fix dictionnary (dictionary) typo
{ "avatar_url": "https://avatars.githubusercontent.com/u/4004127?v=4", "events_url": "https://api.github.com/users/yonigottesman/events{/privacy}", "followers_url": "https://api.github.com/users/yonigottesman/followers", "following_url": "https://api.github.com/users/yonigottesman/following{/other_user}", "gists_url": "https://api.github.com/users/yonigottesman/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/yonigottesman", "id": 4004127, "login": "yonigottesman", "node_id": "MDQ6VXNlcjQwMDQxMjc=", "organizations_url": "https://api.github.com/users/yonigottesman/orgs", "received_events_url": "https://api.github.com/users/yonigottesman/received_events", "repos_url": "https://api.github.com/users/yonigottesman/repos", "site_admin": false, "starred_url": "https://api.github.com/users/yonigottesman/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yonigottesman/subscriptions", "type": "User", "url": "https://api.github.com/users/yonigottesman" }
[]
closed
false
null
[]
null
1
"2020-08-20T07:09:02Z"
"2020-08-20T07:52:04Z"
"2020-08-20T07:52:04Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/521.diff", "html_url": "https://github.com/huggingface/datasets/pull/521", "merged_at": "2020-08-20T07:52:04Z", "patch_url": "https://github.com/huggingface/datasets/pull/521.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/521" }
This error happens many times; I'm thinking maybe it's spelled like this on purpose?
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/521/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/521/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/520
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/520/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/520/comments
https://api.github.com/repos/huggingface/datasets/issues/520/events
https://github.com/huggingface/datasets/pull/520
682,264,839
MDExOlB1bGxSZXF1ZXN0NDcwNTI4MDE0
520
Transform references for sacrebleu
{ "avatar_url": "https://avatars.githubusercontent.com/u/2238344?v=4", "events_url": "https://api.github.com/users/jbragg/events{/privacy}", "followers_url": "https://api.github.com/users/jbragg/followers", "following_url": "https://api.github.com/users/jbragg/following{/other_user}", "gists_url": "https://api.github.com/users/jbragg/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/jbragg", "id": 2238344, "login": "jbragg", "node_id": "MDQ6VXNlcjIyMzgzNDQ=", "organizations_url": "https://api.github.com/users/jbragg/orgs", "received_events_url": "https://api.github.com/users/jbragg/received_events", "repos_url": "https://api.github.com/users/jbragg/repos", "site_admin": false, "starred_url": "https://api.github.com/users/jbragg/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jbragg/subscriptions", "type": "User", "url": "https://api.github.com/users/jbragg" }
[]
closed
false
null
[]
null
1
"2020-08-20T00:26:55Z"
"2020-08-20T09:30:54Z"
"2020-08-20T09:30:53Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/520.diff", "html_url": "https://github.com/huggingface/datasets/pull/520", "merged_at": "2020-08-20T09:30:53Z", "patch_url": "https://github.com/huggingface/datasets/pull/520.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/520" }
Currently it is impossible to use sacrebleu when len(predictions) != the number of references per prediction (very uncommon), due to a strange format expected by sacrebleu. If one passes in the data to `nlp.metric.compute()` in sacrebleu format, `nlp` throws an error due to mismatching lengths between predictions and references. If one uses a more standard format where predictions and references are lists of the same length, sacrebleu throws an error. This PR transforms reference data in a more standard format into the [unusual format](https://github.com/mjpost/sacreBLEU#using-sacrebleu-from-python) expected by sacrebleu.
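A minimal sketch of the transformation this PR describes, with illustrative data (not the PR's actual code): sacrebleu expects references as a list of reference streams (one list per reference position), so a list of per-prediction reference lists gets transposed.

```python
# Standard format: one list of references per prediction.
predictions = ["the cat sat on the mat", "hello there"]
references = [
    ["the cat sat on the mat", "a cat sat on the mat"],  # refs for prediction 0
    ["hello there", "hi there"],                          # refs for prediction 1
]

# Sacrebleu format: one list per reference position, covering all predictions.
transformed_references = [list(refs) for refs in zip(*references)]
# [['the cat sat on the mat', 'hello there'], ['a cat sat on the mat', 'hi there']]
```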
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/520/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/520/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/519
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/519/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/519/comments
https://api.github.com/repos/huggingface/datasets/issues/519/events
https://github.com/huggingface/datasets/issues/519
682,193,882
MDU6SXNzdWU2ODIxOTM4ODI=
519
[BUG] Metrics throwing new error on master since 0.4.0
{ "avatar_url": "https://avatars.githubusercontent.com/u/2238344?v=4", "events_url": "https://api.github.com/users/jbragg/events{/privacy}", "followers_url": "https://api.github.com/users/jbragg/followers", "following_url": "https://api.github.com/users/jbragg/following{/other_user}", "gists_url": "https://api.github.com/users/jbragg/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/jbragg", "id": 2238344, "login": "jbragg", "node_id": "MDQ6VXNlcjIyMzgzNDQ=", "organizations_url": "https://api.github.com/users/jbragg/orgs", "received_events_url": "https://api.github.com/users/jbragg/received_events", "repos_url": "https://api.github.com/users/jbragg/repos", "site_admin": false, "starred_url": "https://api.github.com/users/jbragg/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jbragg/subscriptions", "type": "User", "url": "https://api.github.com/users/jbragg" }
[]
closed
false
null
[]
null
2
"2020-08-19T21:29:15Z"
"2022-06-02T16:41:01Z"
"2020-08-19T22:04:40Z"
CONTRIBUTOR
null
null
null
The following error occurs when passing in references of type `List[List[str]]` to metrics like bleu. This wasn't happening on 0.4.0 but is happening now on master.

```
File "/usr/local/lib/python3.7/site-packages/nlp/metric.py", line 226, in compute
    self.add_batch(predictions=predictions, references=references)
File "/usr/local/lib/python3.7/site-packages/nlp/metric.py", line 242, in add_batch
    batch = self.info.features.encode_batch(batch)
File "/usr/local/lib/python3.7/site-packages/nlp/features.py", line 527, in encode_batch
    encoded_batch[key] = [encode_nested_example(self[key], cast_to_python_objects(obj)) for obj in column]
File "/usr/local/lib/python3.7/site-packages/nlp/features.py", line 527, in <listcomp>
    encoded_batch[key] = [encode_nested_example(self[key], cast_to_python_objects(obj)) for obj in column]
File "/usr/local/lib/python3.7/site-packages/nlp/features.py", line 456, in encode_nested_example
    raise ValueError("Got a string but expected a list instead: '{}'".format(obj))
```
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/519/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/519/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/518
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/518/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/518/comments
https://api.github.com/repos/huggingface/datasets/issues/518/events
https://github.com/huggingface/datasets/pull/518
682,131,165
MDExOlB1bGxSZXF1ZXN0NDcwNDE0ODE1
518
[METRICS, breaking] Refactor caching behavior, pickle/cloudpickle metrics and dataset, add tests on metrics
{ "avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4", "events_url": "https://api.github.com/users/thomwolf/events{/privacy}", "followers_url": "https://api.github.com/users/thomwolf/followers", "following_url": "https://api.github.com/users/thomwolf/following{/other_user}", "gists_url": "https://api.github.com/users/thomwolf/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/thomwolf", "id": 7353373, "login": "thomwolf", "node_id": "MDQ6VXNlcjczNTMzNzM=", "organizations_url": "https://api.github.com/users/thomwolf/orgs", "received_events_url": "https://api.github.com/users/thomwolf/received_events", "repos_url": "https://api.github.com/users/thomwolf/repos", "site_admin": false, "starred_url": "https://api.github.com/users/thomwolf/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/thomwolf/subscriptions", "type": "User", "url": "https://api.github.com/users/thomwolf" }
[]
closed
false
null
[]
null
2
"2020-08-19T19:43:08Z"
"2020-08-24T16:01:40Z"
"2020-08-24T16:01:39Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/518.diff", "html_url": "https://github.com/huggingface/datasets/pull/518", "merged_at": "2020-08-24T16:01:39Z", "patch_url": "https://github.com/huggingface/datasets/pull/518.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/518" }
Move the acquisition of the filelock to a later stage during metrics processing so it can be pickled/cloudpickled after instantiation. Also add some tests on pickling, on concurrent but separate metric instances, and on concurrent and distributed metric instances. This significantly changes the caching behavior for the metrics:

- if the metric is used in a non-distributed setup (the most common case), we try to find a free cache file using a UUID instead of asking for an `experiment_id` when we can't lock the cache file; this allows using several instances of the same metric in parallel.
- if the metric is used in a distributed setup, we ask for an `experiment_id` if we can't lock the cache file (because all the nodes need to have related cache file names for the final sync).
- after the computation, we free the locks and delete all the cache files.

Breaking: Some arguments for Metrics initialization have been removed for simplicity (`version`...) and some have been renamed for consistency with the rest of the library (`in_memory` => `keep_in_memory`). Also remove the `_has_transformers` detection in utils to avoid importing transformers every time during loading.
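A minimal sketch of the non-distributed cache-file strategy described above (the helper name and file layout are assumptions, not the library's actual internals):

```python
import os
import uuid
from filelock import FileLock, Timeout

def acquire_free_cache_file(cache_dir: str, metric_name: str, max_tries: int = 100):
    """Try UUID-suffixed cache files until one can be locked, instead of
    requiring an explicit experiment_id."""
    for _ in range(max_tries):
        path = os.path.join(cache_dir, f"{metric_name}-{uuid.uuid4().hex}.arrow")
        lock = FileLock(path + ".lock")
        try:
            lock.acquire(timeout=0)  # non-blocking: skip files another process holds
            return path, lock
        except Timeout:
            continue
    raise RuntimeError("No free cache file found; please provide an experiment_id.")
```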
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/518/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/518/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/517
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/517/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/517/comments
https://api.github.com/repos/huggingface/datasets/issues/517/events
https://github.com/huggingface/datasets/issues/517
681,896,944
MDU6SXNzdWU2ODE4OTY5NDQ=
517
add MLDoc dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/13238952?v=4", "events_url": "https://api.github.com/users/jxmorris12/events{/privacy}", "followers_url": "https://api.github.com/users/jxmorris12/followers", "following_url": "https://api.github.com/users/jxmorris12/following{/other_user}", "gists_url": "https://api.github.com/users/jxmorris12/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/jxmorris12", "id": 13238952, "login": "jxmorris12", "node_id": "MDQ6VXNlcjEzMjM4OTUy", "organizations_url": "https://api.github.com/users/jxmorris12/orgs", "received_events_url": "https://api.github.com/users/jxmorris12/received_events", "repos_url": "https://api.github.com/users/jxmorris12/repos", "site_admin": false, "starred_url": "https://api.github.com/users/jxmorris12/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jxmorris12/subscriptions", "type": "User", "url": "https://api.github.com/users/jxmorris12" }
[ { "color": "e99695", "default": false, "description": "Requesting to add a new dataset", "id": 2067376369, "name": "dataset request", "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request" } ]
open
false
null
[]
null
2
"2020-08-19T14:41:59Z"
"2021-08-03T05:59:33Z"
null
CONTRIBUTOR
null
null
null
Hi, I am recommending that someone add MLDoc, a multilingual news topic classification dataset.

- Here's a link to the GitHub: https://github.com/facebookresearch/MLDoc
- and the paper: http://www.lrec-conf.org/proceedings/lrec2018/pdf/658.pdf

Looks like the dataset contains news stories in multiple languages that can be classified into four hierarchical groups: CCAT (Corporate/Industrial), ECAT (Economics), GCAT (Government/Social) and MCAT (Markets). There are 13 languages: Dutch, French, German, Chinese, Japanese, Russian, Portuguese, Spanish, Latin American Spanish, Italian, Danish, Norwegian, and Swedish.
{ "+1": 4, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 4, "url": "https://api.github.com/repos/huggingface/datasets/issues/517/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/517/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/516
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/516/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/516/comments
https://api.github.com/repos/huggingface/datasets/issues/516/events
https://github.com/huggingface/datasets/pull/516
681,846,032
MDExOlB1bGxSZXF1ZXN0NDcwMTY5NTA0
516
[Breaking] Rename formated to formatted
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
0
"2020-08-19T13:35:23Z"
"2020-08-20T08:41:17Z"
"2020-08-20T08:41:16Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/516.diff", "html_url": "https://github.com/huggingface/datasets/pull/516", "merged_at": "2020-08-20T08:41:16Z", "patch_url": "https://github.com/huggingface/datasets/pull/516.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/516" }
`formated` is not correct but `formatted` is
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/516/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/516/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/515
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/515/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/515/comments
https://api.github.com/repos/huggingface/datasets/issues/515/events
https://github.com/huggingface/datasets/pull/515
681,845,619
MDExOlB1bGxSZXF1ZXN0NDcwMTY5MTQ0
515
Fix batched map for formatted dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
0
"2020-08-19T13:34:50Z"
"2020-08-20T20:30:43Z"
"2020-08-20T20:30:42Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/515.diff", "html_url": "https://github.com/huggingface/datasets/pull/515", "merged_at": "2020-08-20T20:30:42Z", "patch_url": "https://github.com/huggingface/datasets/pull/515.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/515" }
If you had a dataset formatted as numpy, for example, and tried to do a batched map, it would crash because one of the elements from the inputs was missing for unchanged columns (e.g. a batch of length 999 instead of 1000). This happened during the creation of the `pa.Table`, since the columns had different lengths.
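A minimal sketch of the failure mode described above (column names are illustrative): building a `pa.Table` from columns of different lengths fails.

```python
import pyarrow as pa

unchanged_column = ["..."] * 999   # one element went missing for an unchanged column
mapped_column = ["..."] * 1000     # output of the batched map

try:
    pa.Table.from_pydict({"text": unchanged_column, "input_ids": mapped_column})
except pa.lib.ArrowInvalid as err:
    print(err)  # the columns must all have the same length
```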
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/515/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/515/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/514
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/514/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/514/comments
https://api.github.com/repos/huggingface/datasets/issues/514/events
https://github.com/huggingface/datasets/issues/514
681,256,348
MDU6SXNzdWU2ODEyNTYzNDg=
514
dataset.shuffle(keep_in_memory=True) is never allowed
{ "avatar_url": "https://avatars.githubusercontent.com/u/24683907?v=4", "events_url": "https://api.github.com/users/vegarab/events{/privacy}", "followers_url": "https://api.github.com/users/vegarab/followers", "following_url": "https://api.github.com/users/vegarab/following{/other_user}", "gists_url": "https://api.github.com/users/vegarab/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/vegarab", "id": 24683907, "login": "vegarab", "node_id": "MDQ6VXNlcjI0NjgzOTA3", "organizations_url": "https://api.github.com/users/vegarab/orgs", "received_events_url": "https://api.github.com/users/vegarab/received_events", "repos_url": "https://api.github.com/users/vegarab/repos", "site_admin": false, "starred_url": "https://api.github.com/users/vegarab/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/vegarab/subscriptions", "type": "User", "url": "https://api.github.com/users/vegarab" }
[ { "color": "7057ff", "default": true, "description": "Good for newcomers", "id": 1935892877, "name": "good first issue", "node_id": "MDU6TGFiZWwxOTM1ODkyODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/good%20first%20issue" }, { "color": "DF8D62", "default": false, "description": "", "id": 4614514401, "name": "hacktoberfest", "node_id": "LA_kwDODunzps8AAAABEwvm4Q", "url": "https://api.github.com/repos/huggingface/datasets/labels/hacktoberfest" } ]
closed
false
null
[]
null
10
"2020-08-18T18:47:40Z"
"2022-10-10T12:21:58Z"
"2022-10-10T12:21:58Z"
CONTRIBUTOR
null
null
null
As of commit ef4aac2, the usage of the parameter `keep_in_memory=True` is never possible: `dataset.select(keep_in_memory=True)`. The commit added the lines

```python
# lines 994-996 in src/nlp/arrow_dataset.py
assert (
    not keep_in_memory or cache_file_name is None
), "Please use either `keep_in_memory` or `cache_file_name` but not both."
```

This affects both `shuffle()`, as `select()` is a sub-routine, and `map()`, which has the same check. I'd love to fix this myself, but I'm unsure what the intention of the assert is, given the rest of the logic in the function concerning `cache_file_name` and `keep_in_memory`.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/514/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/514/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/513
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/513/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/513/comments
https://api.github.com/repos/huggingface/datasets/issues/513/events
https://github.com/huggingface/datasets/pull/513
681,215,612
MDExOlB1bGxSZXF1ZXN0NDY5NjQxMjg1
513
[speedup] Use indices mappings instead of deepcopy for all the samples reordering methods
{ "avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4", "events_url": "https://api.github.com/users/thomwolf/events{/privacy}", "followers_url": "https://api.github.com/users/thomwolf/followers", "following_url": "https://api.github.com/users/thomwolf/following{/other_user}", "gists_url": "https://api.github.com/users/thomwolf/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/thomwolf", "id": 7353373, "login": "thomwolf", "node_id": "MDQ6VXNlcjczNTMzNzM=", "organizations_url": "https://api.github.com/users/thomwolf/orgs", "received_events_url": "https://api.github.com/users/thomwolf/received_events", "repos_url": "https://api.github.com/users/thomwolf/repos", "site_admin": false, "starred_url": "https://api.github.com/users/thomwolf/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/thomwolf/subscriptions", "type": "User", "url": "https://api.github.com/users/thomwolf" }
[]
closed
false
null
[]
null
4
"2020-08-18T17:36:02Z"
"2020-08-28T08:41:51Z"
"2020-08-28T08:41:50Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/513.diff", "html_url": "https://github.com/huggingface/datasets/pull/513", "merged_at": "2020-08-28T08:41:50Z", "patch_url": "https://github.com/huggingface/datasets/pull/513.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/513" }
Use an indices mapping instead of rewriting the dataset for all the samples re-ordering/selection methods (`select`, `sort`, `shuffle`, `shard`, `train_test_split`). Added a `flatten_indices` method, which copies the dataset to a new table to remove the indices mapping, with tests. All the samples re-ordering/selection methods should be a lot faster. The downside is that iterating over very large batches of the dataset might be a little slower when we have changed the order of the samples, since in that case we use `pyarrow.Table.take` instead of `pyarrow.Table.slice`. There is no free lunch, but the speed of iterating over the dataset is rarely the bottleneck. *Backward breaking change*: the `cache_file_name` argument in all the samples re-ordering/selection methods (`select`, `sort`, `shuffle`, `shard`, `train_test_split`) is now called `indices_cache_file_name`, on purpose, to make it explicit to the user that this caching file is used for caching the indices mapping and not the dataset itself.
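A minimal sketch of the idea (illustrative only, not the PR's implementation): re-ordering keeps an indices mapping and gathers rows with `Table.take`, while contiguous reads can keep using `Table.slice`.

```python
import pyarrow as pa

table = pa.Table.from_pydict({"text": ["a", "b", "c", "d"]})

# Contiguous access: cheap zero-copy slice.
first_two = table.slice(0, 2)

# Shuffled access: only the permutation is stored; rows are gathered on read.
shuffled_indices = [3, 0, 2, 1]
shuffled_view = table.take(shuffled_indices)
print(shuffled_view.column("text").to_pylist())  # ['d', 'a', 'c', 'b']
```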
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 1, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/513/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/513/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/512
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/512/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/512/comments
https://api.github.com/repos/huggingface/datasets/issues/512/events
https://github.com/huggingface/datasets/pull/512
681,137,164
MDExOlB1bGxSZXF1ZXN0NDY5NTc2NzE3
512
Delete CONTRIBUTING.md
{ "avatar_url": "https://avatars.githubusercontent.com/u/56394989?v=4", "events_url": "https://api.github.com/users/ChenZehong13/events{/privacy}", "followers_url": "https://api.github.com/users/ChenZehong13/followers", "following_url": "https://api.github.com/users/ChenZehong13/following{/other_user}", "gists_url": "https://api.github.com/users/ChenZehong13/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/ChenZehong13", "id": 56394989, "login": "ChenZehong13", "node_id": "MDQ6VXNlcjU2Mzk0OTg5", "organizations_url": "https://api.github.com/users/ChenZehong13/orgs", "received_events_url": "https://api.github.com/users/ChenZehong13/received_events", "repos_url": "https://api.github.com/users/ChenZehong13/repos", "site_admin": false, "starred_url": "https://api.github.com/users/ChenZehong13/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ChenZehong13/subscriptions", "type": "User", "url": "https://api.github.com/users/ChenZehong13" }
[]
closed
false
null
[]
null
2
"2020-08-18T15:33:25Z"
"2020-08-18T15:48:21Z"
"2020-08-18T15:39:07Z"
NONE
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/512.diff", "html_url": "https://github.com/huggingface/datasets/pull/512", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/512.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/512" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/512/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/512/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/511
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/511/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/511/comments
https://api.github.com/repos/huggingface/datasets/issues/511/events
https://github.com/huggingface/datasets/issues/511
681,055,553
MDU6SXNzdWU2ODEwNTU1NTM=
511
dataset.shuffle() and select() resets format. Intended?
{ "avatar_url": "https://avatars.githubusercontent.com/u/24683907?v=4", "events_url": "https://api.github.com/users/vegarab/events{/privacy}", "followers_url": "https://api.github.com/users/vegarab/followers", "following_url": "https://api.github.com/users/vegarab/following{/other_user}", "gists_url": "https://api.github.com/users/vegarab/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/vegarab", "id": 24683907, "login": "vegarab", "node_id": "MDQ6VXNlcjI0NjgzOTA3", "organizations_url": "https://api.github.com/users/vegarab/orgs", "received_events_url": "https://api.github.com/users/vegarab/received_events", "repos_url": "https://api.github.com/users/vegarab/repos", "site_admin": false, "starred_url": "https://api.github.com/users/vegarab/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/vegarab/subscriptions", "type": "User", "url": "https://api.github.com/users/vegarab" }
[]
closed
false
null
[]
null
5
"2020-08-18T13:46:01Z"
"2020-09-14T08:45:38Z"
"2020-09-14T08:45:38Z"
CONTRIBUTOR
null
null
null
Calling `dataset.shuffle()` or `dataset.select()` on a dataset resets its format set by `dataset.set_format()`. Is this intended or an oversight?

When working on quite large datasets that require a lot of preprocessing, I find it convenient to save the processed dataset to file using `torch.save("dataset.pt")` and later load the dataset object using `torch.load("dataset.pt")`, which preserves the format defined before saving. I do shuffling and selecting (for controlling dataset size) after loading the data from the .pt-file, as it's convenient whenever you train multiple models with varying sizes of the same dataset. The obvious workaround for this is to set the format again after using `dataset.select()` or `dataset.shuffle()`; a snippet follows the example below.

_I guess this is more of a discussion on the design philosophy of the functions. Please let me know if this is not the right channel for these kinds of discussions or if they are not wanted at all!_

#### How to reproduce:

```python
import nlp
from transformers import T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-base")

def create_features(batch):
    context_encoding = tokenizer.batch_encode_plus(batch["context"])
    return {"input_ids": context_encoding["input_ids"]}

dataset = nlp.load_dataset("cosmos_qa", split="train")
dataset = dataset.map(create_features, batched=True)

dataset.set_format(type="torch", columns=["input_ids"])
dataset[0]
# {'input_ids': tensor([ 1804, 3525, 1602, ... 0, 0])}

dataset = dataset.shuffle()
dataset[0]
# {'id': '3Q9(...)20', 'context': "Good Old War an (...) play ?', 'answer0': 'None of the above choices .', 'answer1': 'This person likes music and likes to see the show , they will see other bands play .', (...) 'input_ids': [1804, 3525, 1602, ... , 0, 0]}
```
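The workaround mentioned above, spelled out as a short sketch continuing the reproduction code:

```python
# Re-apply the format after shuffling/selecting to get tensors back.
dataset = dataset.shuffle()
dataset.set_format(type="torch", columns=["input_ids"])
dataset[0]  # {'input_ids': tensor([...])} again
```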
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/511/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/511/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/510
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/510/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/510/comments
https://api.github.com/repos/huggingface/datasets/issues/510/events
https://github.com/huggingface/datasets/issues/510
680,823,644
MDU6SXNzdWU2ODA4MjM2NDQ=
510
Version of numpy to use the library
{ "avatar_url": "https://avatars.githubusercontent.com/u/6966175?v=4", "events_url": "https://api.github.com/users/isspek/events{/privacy}", "followers_url": "https://api.github.com/users/isspek/followers", "following_url": "https://api.github.com/users/isspek/following{/other_user}", "gists_url": "https://api.github.com/users/isspek/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/isspek", "id": 6966175, "login": "isspek", "node_id": "MDQ6VXNlcjY5NjYxNzU=", "organizations_url": "https://api.github.com/users/isspek/orgs", "received_events_url": "https://api.github.com/users/isspek/received_events", "repos_url": "https://api.github.com/users/isspek/repos", "site_admin": false, "starred_url": "https://api.github.com/users/isspek/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/isspek/subscriptions", "type": "User", "url": "https://api.github.com/users/isspek" }
[]
closed
false
null
[]
null
2
"2020-08-18T08:59:13Z"
"2020-08-19T18:35:56Z"
"2020-08-19T18:35:56Z"
NONE
null
null
null
Thank you so much for your excellent work! I would like to use the nlp library in my project. While importing nlp, I am receiving the following error: `AttributeError: module 'numpy.random' has no attribute 'Generator'`. The numpy version in my project is 1.16.0. May I learn which numpy version is used for the nlp library? Thanks in advance.
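`numpy.random.Generator` was introduced in NumPy 1.17, so a quick version check like the sketch below (not part of the issue) shows whether an upgrade is needed:

```python
# Check that the installed NumPy is recent enough for numpy.random.Generator.
import numpy as np
from packaging.version import Version

if Version(np.__version__) < Version("1.17"):
    print("numpy too old, upgrade with: pip install -U 'numpy>=1.17'")
```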
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/510/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/510/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/509
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/509/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/509/comments
https://api.github.com/repos/huggingface/datasets/issues/509/events
https://github.com/huggingface/datasets/issues/509
679,711,585
MDU6SXNzdWU2Nzk3MTE1ODU=
509
Converting TensorFlow dataset example
{ "avatar_url": "https://avatars.githubusercontent.com/u/22762845?v=4", "events_url": "https://api.github.com/users/saareliad/events{/privacy}", "followers_url": "https://api.github.com/users/saareliad/followers", "following_url": "https://api.github.com/users/saareliad/following{/other_user}", "gists_url": "https://api.github.com/users/saareliad/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/saareliad", "id": 22762845, "login": "saareliad", "node_id": "MDQ6VXNlcjIyNzYyODQ1", "organizations_url": "https://api.github.com/users/saareliad/orgs", "received_events_url": "https://api.github.com/users/saareliad/received_events", "repos_url": "https://api.github.com/users/saareliad/repos", "site_admin": false, "starred_url": "https://api.github.com/users/saareliad/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/saareliad/subscriptions", "type": "User", "url": "https://api.github.com/users/saareliad" }
[]
closed
false
null
[]
null
2
"2020-08-16T08:05:20Z"
"2021-08-03T06:01:18Z"
"2021-08-03T06:01:17Z"
NONE
null
null
null
Hi, I want to use TensorFlow datasets with this repo. I noticed you made some conversion script; can you give a simple example of using it? Thanks
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/509/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/509/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/508
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/508/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/508/comments
https://api.github.com/repos/huggingface/datasets/issues/508/events
https://github.com/huggingface/datasets/issues/508
679,705,734
MDU6SXNzdWU2Nzk3MDU3MzQ=
508
TypeError: Receiver() takes no arguments
{ "avatar_url": "https://avatars.githubusercontent.com/u/1225851?v=4", "events_url": "https://api.github.com/users/sebastiantomac/events{/privacy}", "followers_url": "https://api.github.com/users/sebastiantomac/followers", "following_url": "https://api.github.com/users/sebastiantomac/following{/other_user}", "gists_url": "https://api.github.com/users/sebastiantomac/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/sebastiantomac", "id": 1225851, "login": "sebastiantomac", "node_id": "MDQ6VXNlcjEyMjU4NTE=", "organizations_url": "https://api.github.com/users/sebastiantomac/orgs", "received_events_url": "https://api.github.com/users/sebastiantomac/received_events", "repos_url": "https://api.github.com/users/sebastiantomac/repos", "site_admin": false, "starred_url": "https://api.github.com/users/sebastiantomac/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sebastiantomac/subscriptions", "type": "User", "url": "https://api.github.com/users/sebastiantomac" }
[]
closed
false
null
[]
null
5
"2020-08-16T07:18:16Z"
"2020-09-01T14:53:33Z"
"2020-09-01T14:49:03Z"
NONE
null
null
null
I am trying to load a wikipedia data set

```
import nlp
from nlp import load_dataset
dataset = load_dataset("wikipedia", "20200501.en", split="train", cache_dir=data_path, beam_runner='DirectRunner')
#dataset = load_dataset('wikipedia', '20200501.sv', cache_dir=data_path, beam_runner='DirectRunner')
```

This fails in the Apache Beam runner.

```
Traceback (most recent call last):
  File "D:/ML/wikiembedding/gpt2_sv.py", line 36, in <module>
    dataset = load_dataset("wikipedia", "20200501.en", split="train", cache_dir=my_cache_dir, beam_runner='DirectRunner')
  File "C:\Users\seto\AppData\Local\Programs\Python\Python38\lib\site-packages\nlp\load.py", line 548, in load_dataset
    builder_instance.download_and_prepare(
  File "C:\Users\seto\AppData\Local\Programs\Python\Python38\lib\site-packages\nlp\builder.py", line 462, in download_and_prepare
    self._download_and_prepare(
  File "C:\Users\seto\AppData\Local\Programs\Python\Python38\lib\site-packages\nlp\builder.py", line 969, in _download_and_prepare
    pipeline_results = pipeline.run()
  File "C:\Users\seto\AppData\Local\Programs\Python\Python38\lib\site-packages\apache_beam\pipeline.py", line 534, in run
    return self.runner.run_pipeline(self, self._options)
  ....
  File "C:\Users\seto\AppData\Local\Programs\Python\Python38\lib\site-packages\apache_beam\runners\worker\bundle_processor.py", line 218, in process_encoded
    self.output(decoded_value)
  File "C:\Users\seto\AppData\Local\Programs\Python\Python38\lib\site-packages\apache_beam\runners\worker\operations.py", line 332, in output
    cython.cast(Receiver, self.receivers[output_index]).receive(windowed_value)
  File "C:\Users\seto\AppData\Local\Programs\Python\Python38\lib\site-packages\Cython\Shadow.py", line 167, in cast
    return type(*args)
TypeError: Receiver() takes no arguments
```

This is run on a Windows 10 machine with Python 3.8. I get the same error loading the Swedish wikipedia dump.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/508/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/508/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/507
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/507/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/507/comments
https://api.github.com/repos/huggingface/datasets/issues/507/events
https://github.com/huggingface/datasets/issues/507
679,400,683
MDU6SXNzdWU2Nzk0MDA2ODM=
507
Errors when I use
{ "avatar_url": "https://avatars.githubusercontent.com/u/30506151?v=4", "events_url": "https://api.github.com/users/mchari/events{/privacy}", "followers_url": "https://api.github.com/users/mchari/followers", "following_url": "https://api.github.com/users/mchari/following{/other_user}", "gists_url": "https://api.github.com/users/mchari/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mchari", "id": 30506151, "login": "mchari", "node_id": "MDQ6VXNlcjMwNTA2MTUx", "organizations_url": "https://api.github.com/users/mchari/orgs", "received_events_url": "https://api.github.com/users/mchari/received_events", "repos_url": "https://api.github.com/users/mchari/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mchari/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mchari/subscriptions", "type": "User", "url": "https://api.github.com/users/mchari" }
[]
closed
false
null
[]
null
1
"2020-08-14T21:03:57Z"
"2020-08-14T21:39:10Z"
"2020-08-14T21:39:10Z"
NONE
null
null
null
I tried the following example code from https://huggingface.co/deepset/roberta-base-squad2 and got errors. I am using **transformers 3.0.2**.

```python
from transformers.pipelines import pipeline
from transformers.modeling_auto import AutoModelForQuestionAnswering
from transformers.tokenization_auto import AutoTokenizer

model_name = "deepset/roberta-base-squad2"
nlp = pipeline('question-answering', model=model_name, tokenizer=model_name)
QA_input = {
    'question': 'Why is model conversion important?',
    'context': 'The option to convert models between FARM and transformers gives freedom to the user and let people easily switch between frameworks.'
}
res = nlp(QA_input)
```

The errors are:

```
res = nlp(QA_input)
  File ".local/lib/python3.6/site-packages/transformers/pipelines.py", line 1316, in __call__
    for s, e, score in zip(starts, ends, scores)
  File ".local/lib/python3.6/site-packages/transformers/pipelines.py", line 1316, in <listcomp>
    for s, e, score in zip(starts, ends, scores)
KeyError: 0
```
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/507/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/507/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/506
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/506/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/506/comments
https://api.github.com/repos/huggingface/datasets/issues/506/events
https://github.com/huggingface/datasets/pull/506
679,164,788
MDExOlB1bGxSZXF1ZXN0NDY3OTkwNjc2
506
fix dataset.map for function without outputs
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
0
"2020-08-14T13:40:22Z"
"2020-08-17T11:24:39Z"
"2020-08-17T11:24:38Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/506.diff", "html_url": "https://github.com/huggingface/datasets/pull/506", "merged_at": "2020-08-17T11:24:38Z", "patch_url": "https://github.com/huggingface/datasets/pull/506.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/506" }
As noticed in #505, giving a function that doesn't return anything to `.map` raises an error because of an unreferenced variable. I fixed that and added tests. Thanks @avloss for reporting.
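A minimal sketch of the case this fixes (dataset choice and function are illustrative): passing a function with no return value to `.map` should simply run over the examples without altering the dataset.

```python
import nlp

dataset = nlp.load_dataset("squad", split="validation[:10]")

def inspect(example):
    # Side effect only, no return value.
    assert len(example["question"]) > 0

dataset = dataset.map(inspect)  # previously failed with an unreferenced-variable error
```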
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/506/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/506/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/505
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/505/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/505/comments
https://api.github.com/repos/huggingface/datasets/issues/505/events
https://github.com/huggingface/datasets/pull/505
678,791,400
MDExOlB1bGxSZXF1ZXN0NDY3NjgxMjY4
505
tmp_file referenced before assignment
{ "avatar_url": "https://avatars.githubusercontent.com/u/17853685?v=4", "events_url": "https://api.github.com/users/avloss/events{/privacy}", "followers_url": "https://api.github.com/users/avloss/followers", "following_url": "https://api.github.com/users/avloss/following{/other_user}", "gists_url": "https://api.github.com/users/avloss/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/avloss", "id": 17853685, "login": "avloss", "node_id": "MDQ6VXNlcjE3ODUzNjg1", "organizations_url": "https://api.github.com/users/avloss/orgs", "received_events_url": "https://api.github.com/users/avloss/received_events", "repos_url": "https://api.github.com/users/avloss/repos", "site_admin": false, "starred_url": "https://api.github.com/users/avloss/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/avloss/subscriptions", "type": "User", "url": "https://api.github.com/users/avloss" }
[]
closed
false
null
[]
null
2
"2020-08-13T23:27:33Z"
"2020-08-14T13:42:46Z"
"2020-08-14T13:42:46Z"
NONE
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/505.diff", "html_url": "https://github.com/huggingface/datasets/pull/505", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/505.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/505" }
Just learning about this library, so I might not have set up all the flags correctly, but I was getting this error about "tmp_file".
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/505/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/505/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/504
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/504/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/504/comments
https://api.github.com/repos/huggingface/datasets/issues/504/events
https://github.com/huggingface/datasets/pull/504
678,756,211
MDExOlB1bGxSZXF1ZXN0NDY3NjUxOTA5
504
Added downloading to Hyperpartisan news detection
{ "avatar_url": "https://avatars.githubusercontent.com/u/13795113?v=4", "events_url": "https://api.github.com/users/ghomasHudson/events{/privacy}", "followers_url": "https://api.github.com/users/ghomasHudson/followers", "following_url": "https://api.github.com/users/ghomasHudson/following{/other_user}", "gists_url": "https://api.github.com/users/ghomasHudson/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/ghomasHudson", "id": 13795113, "login": "ghomasHudson", "node_id": "MDQ6VXNlcjEzNzk1MTEz", "organizations_url": "https://api.github.com/users/ghomasHudson/orgs", "received_events_url": "https://api.github.com/users/ghomasHudson/received_events", "repos_url": "https://api.github.com/users/ghomasHudson/repos", "site_admin": false, "starred_url": "https://api.github.com/users/ghomasHudson/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ghomasHudson/subscriptions", "type": "User", "url": "https://api.github.com/users/ghomasHudson" }
[]
closed
false
null
[]
null
2
"2020-08-13T21:53:46Z"
"2020-08-27T08:18:41Z"
"2020-08-27T08:18:41Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/504.diff", "html_url": "https://github.com/huggingface/datasets/pull/504", "merged_at": "2020-08-27T08:18:41Z", "patch_url": "https://github.com/huggingface/datasets/pull/504.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/504" }
Following the discussion on Slack and #349, I've updated the hyperpartisan dataset to pull directly from Zenodo rather than requiring a manual install, which should make this dataset much more accessible. Many thanks to @johanneskiesel!

Currently it doesn't pass `test_load_real_dataset` - I'm using `self.config.name`, which is `default` in this test. Might be related to #474
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/504/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/504/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/503
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/503/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/503/comments
https://api.github.com/repos/huggingface/datasets/issues/503/events
https://github.com/huggingface/datasets/pull/503
678,726,538
MDExOlB1bGxSZXF1ZXN0NDY3NjI3MTEw
503
CompGuessWhat?! 0.2.0
{ "avatar_url": "https://avatars.githubusercontent.com/u/1479733?v=4", "events_url": "https://api.github.com/users/aleSuglia/events{/privacy}", "followers_url": "https://api.github.com/users/aleSuglia/followers", "following_url": "https://api.github.com/users/aleSuglia/following{/other_user}", "gists_url": "https://api.github.com/users/aleSuglia/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/aleSuglia", "id": 1479733, "login": "aleSuglia", "node_id": "MDQ6VXNlcjE0Nzk3MzM=", "organizations_url": "https://api.github.com/users/aleSuglia/orgs", "received_events_url": "https://api.github.com/users/aleSuglia/received_events", "repos_url": "https://api.github.com/users/aleSuglia/repos", "site_admin": false, "starred_url": "https://api.github.com/users/aleSuglia/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/aleSuglia/subscriptions", "type": "User", "url": "https://api.github.com/users/aleSuglia" }
[]
closed
false
null
[]
null
20
"2020-08-13T20:51:26Z"
"2020-10-21T06:54:29Z"
"2020-10-21T06:54:29Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/503.diff", "html_url": "https://github.com/huggingface/datasets/pull/503", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/503.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/503" }
We updated some metadata information associated with the dataset. In addition, we've updated the `create_dummy_data.py` script to generate data samples for the dataset.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/503/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/503/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/502
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/502/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/502/comments
https://api.github.com/repos/huggingface/datasets/issues/502/events
https://github.com/huggingface/datasets/pull/502
678,546,070
MDExOlB1bGxSZXF1ZXN0NDY3NDc1MDg0
502
Fix tokenizers caching
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
1
"2020-08-13T15:53:37Z"
"2020-08-19T13:37:19Z"
"2020-08-19T13:37:18Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/502.diff", "html_url": "https://github.com/huggingface/datasets/pull/502", "merged_at": "2020-08-19T13:37:17Z", "patch_url": "https://github.com/huggingface/datasets/pull/502.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/502" }
I've found some cases where the caching didn't work properly for tokenizers:

1. if a tokenizer has a regex pattern, then the caching would be inconsistent across sessions
2. if a tokenizer has a cache attribute that changes after some calls, then the caching would not work after cache updates
3. if a tokenizer is used inside a function, the caching of this function would result in the same cache file for different tokenizers
4. if `unique_no_split_tokens`'s attribute is not the same across sessions (after loading a tokenizer), then the caching could be inconsistent

To fix that, this is what I did:

1. register a specific `save_regex` function for pickle that makes regex dumps deterministic
2. ignore the cache attribute of some tokenizers before dumping
3. enable recursive dump by default for all dumps
4. make `unique_no_split_tokens` deterministic in https://github.com/huggingface/transformers/pull/6461

I also added tests to make sure that tokenizer hashing works as expected. In the future we should find a way to test if hashing also works across sessions (maybe using two CI jobs? or by hardcoding a tokenizer's hash?)
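A minimal sketch of the first fix described above (the registration mechanism here uses `copyreg` for illustration; the library's actual implementation may differ): compiled regex objects are reduced to their pattern string and flags so the pickle bytes, and hence the cache hash, stay deterministic across sessions.

```python
import copyreg
import re

def _save_regex(pattern):
    # Rebuild from pattern string + flags only: deterministic across sessions.
    return re.compile, (pattern.pattern, pattern.flags)

copyreg.pickle(type(re.compile("")), _save_regex)
```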
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/502/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/502/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/501
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/501/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/501/comments
https://api.github.com/repos/huggingface/datasets/issues/501/events
https://github.com/huggingface/datasets/issues/501
677,952,893
MDU6SXNzdWU2Nzc5NTI4OTM=
501
Caching doesn't work for map (non-deterministic)
{ "avatar_url": "https://avatars.githubusercontent.com/u/8149933?v=4", "events_url": "https://api.github.com/users/wulu473/events{/privacy}", "followers_url": "https://api.github.com/users/wulu473/followers", "following_url": "https://api.github.com/users/wulu473/following{/other_user}", "gists_url": "https://api.github.com/users/wulu473/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/wulu473", "id": 8149933, "login": "wulu473", "node_id": "MDQ6VXNlcjgxNDk5MzM=", "organizations_url": "https://api.github.com/users/wulu473/orgs", "received_events_url": "https://api.github.com/users/wulu473/received_events", "repos_url": "https://api.github.com/users/wulu473/repos", "site_admin": false, "starred_url": "https://api.github.com/users/wulu473/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/wulu473/subscriptions", "type": "User", "url": "https://api.github.com/users/wulu473" }
[]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" } ]
null
4
"2020-08-12T20:20:07Z"
"2022-08-08T11:02:23Z"
"2020-08-24T16:34:35Z"
NONE
null
null
null
The caching functionality doesn't work reliably when tokenizing a dataset. Here's a small example to reproduce it. ```python import nlp import transformers def main(): ds = nlp.load_dataset("reddit", split="train[:500]") tokenizer = transformers.AutoTokenizer.from_pretrained("gpt2") def convert_to_features(example_batch): input_str = example_batch["body"] encodings = tokenizer(input_str, add_special_tokens=True, truncation=True) return encodings ds = ds.map(convert_to_features, batched=True) if __name__ == "__main__": main() ``` Roughly 3/10 times, this example recomputes the tokenization. Is this expected behaviour?
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/501/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/501/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/500
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/500/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/500/comments
https://api.github.com/repos/huggingface/datasets/issues/500/events
https://github.com/huggingface/datasets/pull/500
677,841,708
MDExOlB1bGxSZXF1ZXN0NDY2ODk0NTk0
500
Use hnsw in wiki_dpr
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
0
"2020-08-12T16:58:07Z"
"2020-08-20T07:59:19Z"
"2020-08-20T07:59:18Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/500.diff", "html_url": "https://github.com/huggingface/datasets/pull/500", "merged_at": "2020-08-20T07:59:18Z", "patch_url": "https://github.com/huggingface/datasets/pull/500.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/500" }
The HNSW faiss index is much faster than the regular Flat index.
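For context, a small sketch of the difference between the two index types, assuming `faiss` is installed; the dimensions and parameters are illustrative, not the ones used for wiki_dpr:

```python
import numpy as np
import faiss

d = 768  # embedding dimension (e.g. DPR passage vectors)
xb = np.random.rand(100_000, d).astype("float32")  # toy corpus of embeddings
xq = np.random.rand(10, d).astype("float32")       # toy queries

# Exact flat index: accurate but scans every vector on each query.
flat = faiss.IndexFlatIP(d)
flat.add(xb)

# HNSW graph index: approximate but much faster on large corpora.
hnsw = faiss.IndexHNSWFlat(d, 32, faiss.METRIC_INNER_PRODUCT)  # 32 links per node
hnsw.hnsw.efSearch = 64  # search-time speed/recall trade-off
hnsw.add(xb)

_, flat_ids = flat.search(xq, 5)
_, hnsw_ids = hnsw.search(xq, 5)
```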
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/500/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/500/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/499
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/499/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/499/comments
https://api.github.com/repos/huggingface/datasets/issues/499/events
https://github.com/huggingface/datasets/pull/499
677,709,938
MDExOlB1bGxSZXF1ZXN0NDY2Nzg1MjAy
499
Narrativeqa (with full text)
{ "avatar_url": "https://avatars.githubusercontent.com/u/13795113?v=4", "events_url": "https://api.github.com/users/ghomasHudson/events{/privacy}", "followers_url": "https://api.github.com/users/ghomasHudson/followers", "following_url": "https://api.github.com/users/ghomasHudson/following{/other_user}", "gists_url": "https://api.github.com/users/ghomasHudson/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/ghomasHudson", "id": 13795113, "login": "ghomasHudson", "node_id": "MDQ6VXNlcjEzNzk1MTEz", "organizations_url": "https://api.github.com/users/ghomasHudson/orgs", "received_events_url": "https://api.github.com/users/ghomasHudson/received_events", "repos_url": "https://api.github.com/users/ghomasHudson/repos", "site_admin": false, "starred_url": "https://api.github.com/users/ghomasHudson/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ghomasHudson/subscriptions", "type": "User", "url": "https://api.github.com/users/ghomasHudson" }
[]
closed
false
null
[]
null
9
"2020-08-12T13:49:43Z"
"2020-12-09T11:21:02Z"
"2020-12-09T11:21:02Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/499.diff", "html_url": "https://github.com/huggingface/datasets/pull/499", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/499.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/499" }
Following the uploading of the full text data in #309, I've added the full text to the narrativeqa dataset. Few notes: - Had some encoding issues using the default `open` so am using `open(encoding="latin-1"...` which seems to fix it. Looks fine. - Can't get the dummy data to work. Currently putting stuff at: ``` dummy |---- 0.0.0 |- dummy_data.zip |-master.zip | |- narrativeqa-master | |- documents.csv | |- qaps.csv | |- third_party ...... | | - narrativeqa_full_text.zip | | - 001.content | | - .... ``` Not sure what I'm messing up here (probably something obvious).
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/499/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/499/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/498
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/498/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/498/comments
https://api.github.com/repos/huggingface/datasets/issues/498/events
https://github.com/huggingface/datasets/pull/498
677,597,479
MDExOlB1bGxSZXF1ZXN0NDY2Njg5NTcy
498
dont use beam fs to save info for local cache dir
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
0
"2020-08-12T11:00:00Z"
"2020-08-14T13:17:21Z"
"2020-08-14T13:17:20Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/498.diff", "html_url": "https://github.com/huggingface/datasets/pull/498", "merged_at": "2020-08-14T13:17:20Z", "patch_url": "https://github.com/huggingface/datasets/pull/498.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/498" }
If the cache dir is local, then we shouldn't use beam's filesystem to save the dataset info. Fix #490
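A minimal sketch of the idea, with hypothetical helper names (the real builder code differs):

```python
from urllib.parse import urlparse

def is_remote(path: str) -> bool:
    # Treat paths with a filesystem scheme (gs://, s3://, hdfs://) as remote.
    return urlparse(path).scheme in ("gs", "s3", "hdfs")

def save_info(cache_dir: str, payload: str) -> None:
    target = cache_dir.rstrip("/") + "/dataset_info.json"
    if is_remote(cache_dir):
        # Only go through Beam's filesystem layer for genuinely remote paths.
        from apache_beam.io.filesystems import FileSystems
        with FileSystems.create(target) as f:
            f.write(payload.encode("utf-8"))
    else:
        # A plain local directory does not need apache_beam at all.
        with open(target, "w", encoding="utf-8") as f:
            f.write(payload)
```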
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 1, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/498/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/498/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/497
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/497/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/497/comments
https://api.github.com/repos/huggingface/datasets/issues/497/events
https://github.com/huggingface/datasets/pull/497
677,057,116
MDExOlB1bGxSZXF1ZXN0NDY2MjQ2NDQ3
497
skip header in PAWS-X
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
0
"2020-08-11T17:26:25Z"
"2020-08-19T09:50:02Z"
"2020-08-19T09:50:01Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/497.diff", "html_url": "https://github.com/huggingface/datasets/pull/497", "merged_at": "2020-08-19T09:50:01Z", "patch_url": "https://github.com/huggingface/datasets/pull/497.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/497" }
This should fix #485. I also updated the `dataset_infos.json` file that is used to verify the integrity of the generated splits (the number of examples was reduced by one). Note that there are new fields in `dataset_infos.json` introduced in the latest release 0.4.0 corresponding to post processing info. I removed them in this case when I ran `nlp-cli ./datasets/xtreme --save_infos` to keep backward compatibility (version 0.3.0 can't load these fields). I think I'll change the logic so that `nlp-cli test` doesn't create these fields for datasets with no post processing.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 1, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/497/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/497/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/496
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/496/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/496/comments
https://api.github.com/repos/huggingface/datasets/issues/496/events
https://github.com/huggingface/datasets/pull/496
677,016,998
MDExOlB1bGxSZXF1ZXN0NDY2MjE1Mjg1
496
fix bad type in overflow check
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
0
"2020-08-11T16:24:58Z"
"2020-08-14T13:29:35Z"
"2020-08-14T13:29:34Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/496.diff", "html_url": "https://github.com/huggingface/datasets/pull/496", "merged_at": "2020-08-14T13:29:34Z", "patch_url": "https://github.com/huggingface/datasets/pull/496.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/496" }
When writing an arrow file and inferring the features, the overflow check could fail if the first example had a `null` field. This is because we were not using the inferred features to do this check, and we could end up with arrays that don't match because of a type mismatch (`null` vs `string` for example). This should fix #482
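To make the failure mode concrete, here is a small pyarrow sketch of the type mismatch and the cast that using the inferred features makes possible (illustrative only, not the library's internal code):

```python
import pyarrow as pa

# A first batch whose field is entirely None is inferred as type `null` ...
first = pa.array([None, None])
assert str(first.type) == "null"

# ... while a later batch with real values is inferred as `string`.
later = pa.array(["some text", None])
assert str(later.type) == "string"

# Casting the null-typed array to the known feature type makes both batches compatible.
first_fixed = first.cast(pa.string())
column = pa.concat_arrays([first_fixed, later])
table = pa.Table.from_arrays([column], names=["text"])
assert str(table.schema.field("text").type) == "string"
```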
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/496/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/496/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/495
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/495/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/495/comments
https://api.github.com/repos/huggingface/datasets/issues/495/events
https://github.com/huggingface/datasets/pull/495
676,959,289
MDExOlB1bGxSZXF1ZXN0NDY2MTY5MTA3
495
stack vectors in pytorch and tensorflow
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
0
"2020-08-11T15:12:53Z"
"2020-08-12T09:30:49Z"
"2020-08-12T09:30:48Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/495.diff", "html_url": "https://github.com/huggingface/datasets/pull/495", "merged_at": "2020-08-12T09:30:48Z", "patch_url": "https://github.com/huggingface/datasets/pull/495.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/495" }
When the format of a dataset is set to pytorch or tensorflow, and if the dataset has vectors in it, they were not stacked together as tensors when calling `dataset[i:i + batch_size][column]` or `dataset[column]`. I added support for stacked tensors for both pytorch and tensorflow. For ragged tensors, they are stacked only for tensorflow as pytorch doesn't support ragged tensors.
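A small sketch of what the stacking means in practice (plain framework calls only, not the library's internal formatting code):

```python
import torch
import tensorflow as tf

rows = [[0.1, 0.2, 0.3], [0.4, 0.5, 0.6]]  # equal-length vectors in a batch

# Equal-length vectors stack into a single 2D tensor in both frameworks.
pt_batch = torch.stack([torch.tensor(r) for r in rows])  # shape (2, 3)
tf_batch = tf.stack([tf.constant(r) for r in rows])      # shape (2, 3)

# Variable-length ("ragged") vectors only stack cleanly in TensorFlow;
# in PyTorch they stay a plain list of 1D tensors.
ragged = [[1, 2, 3], [4, 5]]
tf_ragged = tf.ragged.constant(ragged)         # RaggedTensor with shape (2, None)
pt_ragged = [torch.tensor(r) for r in ragged]  # list of tensors
```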
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/495/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/495/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/494
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/494/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/494/comments
https://api.github.com/repos/huggingface/datasets/issues/494/events
https://github.com/huggingface/datasets/pull/494
676,886,955
MDExOlB1bGxSZXF1ZXN0NDY2MTExOTQz
494
Fix numpy stacking
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
1
"2020-08-11T13:40:30Z"
"2020-08-11T14:56:50Z"
"2020-08-11T13:49:52Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/494.diff", "html_url": "https://github.com/huggingface/datasets/pull/494", "merged_at": "2020-08-11T13:49:52Z", "patch_url": "https://github.com/huggingface/datasets/pull/494.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/494" }
When getting items using a column name as a key, numpy arrays were not stacked. I fixed that and added some tests. There is another issue that still needs to be fixed though: when getting items using a column name as a key, pytorch tensors are not stacked (it outputs a list of tensors). This PR should help to fix this issue.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/494/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/494/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/493
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/493/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/493/comments
https://api.github.com/repos/huggingface/datasets/issues/493/events
https://github.com/huggingface/datasets/pull/493
676,527,351
MDExOlB1bGxSZXF1ZXN0NDY1ODIxOTA0
493
Fix wmt zh-en url
{ "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/sshleifer", "id": 6045025, "login": "sshleifer", "node_id": "MDQ6VXNlcjYwNDUwMjU=", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "repos_url": "https://api.github.com/users/sshleifer/repos", "site_admin": false, "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "type": "User", "url": "https://api.github.com/users/sshleifer" }
[]
closed
false
null
[]
null
1
"2020-08-11T02:14:52Z"
"2020-08-11T02:22:28Z"
"2020-08-11T02:22:12Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/493.diff", "html_url": "https://github.com/huggingface/datasets/pull/493", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/493.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/493" }
I verified that ``` wget https://stuncorpusprod.blob.core.windows.net/corpusfiles/UNv1.0.en-zh.tar.gz.00 ``` runs in 2 minutes.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/493/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/493/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/492
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/492/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/492/comments
https://api.github.com/repos/huggingface/datasets/issues/492/events
https://github.com/huggingface/datasets/issues/492
676,495,064
MDU6SXNzdWU2NzY0OTUwNjQ=
492
nlp.Features does not distinguish between nullable and non-nullable types in PyArrow schema
{ "avatar_url": "https://avatars.githubusercontent.com/u/4564897?v=4", "events_url": "https://api.github.com/users/jarednielsen/events{/privacy}", "followers_url": "https://api.github.com/users/jarednielsen/followers", "following_url": "https://api.github.com/users/jarednielsen/following{/other_user}", "gists_url": "https://api.github.com/users/jarednielsen/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/jarednielsen", "id": 4564897, "login": "jarednielsen", "node_id": "MDQ6VXNlcjQ1NjQ4OTc=", "organizations_url": "https://api.github.com/users/jarednielsen/orgs", "received_events_url": "https://api.github.com/users/jarednielsen/received_events", "repos_url": "https://api.github.com/users/jarednielsen/repos", "site_admin": false, "starred_url": "https://api.github.com/users/jarednielsen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jarednielsen/subscriptions", "type": "User", "url": "https://api.github.com/users/jarednielsen" }
[]
closed
false
null
[]
null
7
"2020-08-11T00:27:46Z"
"2020-08-26T16:17:19Z"
"2020-08-26T16:17:19Z"
CONTRIBUTOR
null
null
null
Here's the code I'm trying to run: ```python dset_wikipedia = nlp.load_dataset("wikipedia", "20200501.en", split="train", cache_dir=args.cache_dir) dset_wikipedia.drop(columns=["title"]) dset_wikipedia.features.pop("title") dset_books = nlp.load_dataset("bookcorpus", split="train", cache_dir=args.cache_dir) dset = nlp.concatenate_datasets([dset_wikipedia, dset_books]) ``` This fails because they have different schemas, despite having identical features. ```python assert dset_wikipedia.features == dset_books.features # True assert dset_wikipedia._data.schema == dset_books._data.schema # False ``` The Wikipedia dataset has 'text: string', while the BookCorpus dataset has 'text: string not null'. Currently I hack together a working schema match with the following line, but it would be better if this was handled in Features themselves. ```python dset_wikipedia._data = dset_wikipedia.data.cast(dset_books._data.schema) ```
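A small pyarrow sketch of the mismatch and one way to normalize it by relaxing nullability before concatenating; this is only an illustration (exact behavior may vary with the pyarrow version), not necessarily how Features should handle it internally:

```python
import pyarrow as pa

a = pa.table({"text": pa.array(["a", "b"])})  # nullable by default
b = pa.Table.from_arrays(
    [pa.array(["c", "d"])],
    schema=pa.schema([pa.field("text", pa.string(), nullable=False)]),
)

assert a.schema.field("text").nullable is True
assert b.schema.field("text").nullable is False
assert a.schema != b.schema  # differs only in nullability

# Relax every field to nullable before concatenating.
relaxed = pa.schema([f.with_nullable(True) for f in b.schema])
combined = pa.concat_tables([a, b.cast(relaxed)])
assert combined.num_rows == 4
```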
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/492/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/492/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/491
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/491/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/491/comments
https://api.github.com/repos/huggingface/datasets/issues/491/events
https://github.com/huggingface/datasets/issues/491
676,486,275
MDU6SXNzdWU2NzY0ODYyNzU=
491
No 0.4.0 release on GitHub
{ "avatar_url": "https://avatars.githubusercontent.com/u/4564897?v=4", "events_url": "https://api.github.com/users/jarednielsen/events{/privacy}", "followers_url": "https://api.github.com/users/jarednielsen/followers", "following_url": "https://api.github.com/users/jarednielsen/following{/other_user}", "gists_url": "https://api.github.com/users/jarednielsen/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/jarednielsen", "id": 4564897, "login": "jarednielsen", "node_id": "MDQ6VXNlcjQ1NjQ4OTc=", "organizations_url": "https://api.github.com/users/jarednielsen/orgs", "received_events_url": "https://api.github.com/users/jarednielsen/received_events", "repos_url": "https://api.github.com/users/jarednielsen/repos", "site_admin": false, "starred_url": "https://api.github.com/users/jarednielsen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jarednielsen/subscriptions", "type": "User", "url": "https://api.github.com/users/jarednielsen" }
[]
closed
false
null
[]
null
2
"2020-08-10T23:59:57Z"
"2020-08-11T16:50:07Z"
"2020-08-11T16:50:07Z"
CONTRIBUTOR
null
null
null
0.4.0 was released on PyPi, but not on GitHub. This means [the documentation](https://huggingface.co/nlp/) is still displaying from 0.3.0, and that there's no tag to easily clone the 0.4.0 version of the repo.
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/491/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/491/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/490
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/490/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/490/comments
https://api.github.com/repos/huggingface/datasets/issues/490/events
https://github.com/huggingface/datasets/issues/490
676,482,242
MDU6SXNzdWU2NzY0ODIyNDI=
490
Loading preprocessed Wikipedia dataset requires apache_beam
{ "avatar_url": "https://avatars.githubusercontent.com/u/4564897?v=4", "events_url": "https://api.github.com/users/jarednielsen/events{/privacy}", "followers_url": "https://api.github.com/users/jarednielsen/followers", "following_url": "https://api.github.com/users/jarednielsen/following{/other_user}", "gists_url": "https://api.github.com/users/jarednielsen/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/jarednielsen", "id": 4564897, "login": "jarednielsen", "node_id": "MDQ6VXNlcjQ1NjQ4OTc=", "organizations_url": "https://api.github.com/users/jarednielsen/orgs", "received_events_url": "https://api.github.com/users/jarednielsen/received_events", "repos_url": "https://api.github.com/users/jarednielsen/repos", "site_admin": false, "starred_url": "https://api.github.com/users/jarednielsen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jarednielsen/subscriptions", "type": "User", "url": "https://api.github.com/users/jarednielsen" }
[]
closed
false
null
[]
null
0
"2020-08-10T23:46:50Z"
"2020-08-14T13:17:20Z"
"2020-08-14T13:17:20Z"
CONTRIBUTOR
null
null
null
Running `nlp.load_dataset("wikipedia", "20200501.en", split="train", dir="/tmp/wikipedia")` gives an error if apache_beam is not installed, stemming from https://github.com/huggingface/nlp/blob/38eb2413de54ee804b0be81781bd65ac4a748ced/src/nlp/builder.py#L981-L988 This succeeded without the dependency in version 0.3.0. This seems like an unnecessary dependency to process some dataset info if you're using the already-preprocessed version. Could it be removed?
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/490/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/490/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/489
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/489/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/489/comments
https://api.github.com/repos/huggingface/datasets/issues/489/events
https://github.com/huggingface/datasets/issues/489
676,456,257
MDU6SXNzdWU2NzY0NTYyNTc=
489
ug
{ "avatar_url": "https://avatars.githubusercontent.com/u/2000204?v=4", "events_url": "https://api.github.com/users/timothyjlaurent/events{/privacy}", "followers_url": "https://api.github.com/users/timothyjlaurent/followers", "following_url": "https://api.github.com/users/timothyjlaurent/following{/other_user}", "gists_url": "https://api.github.com/users/timothyjlaurent/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/timothyjlaurent", "id": 2000204, "login": "timothyjlaurent", "node_id": "MDQ6VXNlcjIwMDAyMDQ=", "organizations_url": "https://api.github.com/users/timothyjlaurent/orgs", "received_events_url": "https://api.github.com/users/timothyjlaurent/received_events", "repos_url": "https://api.github.com/users/timothyjlaurent/repos", "site_admin": false, "starred_url": "https://api.github.com/users/timothyjlaurent/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/timothyjlaurent/subscriptions", "type": "User", "url": "https://api.github.com/users/timothyjlaurent" }
[]
closed
false
null
[]
null
2
"2020-08-10T22:33:03Z"
"2020-08-10T22:55:14Z"
"2020-08-10T22:33:40Z"
NONE
null
null
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/489/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/489/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/488
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/488/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/488/comments
https://api.github.com/repos/huggingface/datasets/issues/488/events
https://github.com/huggingface/datasets/issues/488
676,299,993
MDU6SXNzdWU2NzYyOTk5OTM=
488
issues with downloading datasets for wmt16 and wmt19
{ "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/stas00", "id": 10676103, "login": "stas00", "node_id": "MDQ6VXNlcjEwNjc2MTAz", "organizations_url": "https://api.github.com/users/stas00/orgs", "received_events_url": "https://api.github.com/users/stas00/received_events", "repos_url": "https://api.github.com/users/stas00/repos", "site_admin": false, "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "type": "User", "url": "https://api.github.com/users/stas00" }
[]
closed
false
null
[]
null
3
"2020-08-10T17:32:51Z"
"2022-10-04T17:46:59Z"
"2022-10-04T17:46:58Z"
CONTRIBUTOR
null
null
null
I have encountered multiple issues while trying to: ``` import nlp dataset = nlp.load_dataset('wmt16', 'ru-en') metric = nlp.load_metric('wmt16') ``` 1. I had to do `pip install -e ".[dev]" ` on master, the currently released nlp didn't work (sorry, didn't save the error) - I went back to the released version and now it worked. So it must have been some outdated dependencies that `pip install -e ".[dev]" ` fixed. 2. It was downloading at 60kbs - almost 5 hours to get the dataset. It was downloading all pairs and not just the one I asked for. I tried the same code with `wmt19` in parallel and it took a few secs to download and it only fetched data for the requested pair. (but it failed too, see below) 3. My machine crashed and when I retried I got: ``` Traceback (most recent call last): File "./download.py", line 9, in <module> dataset = nlp.load_dataset('wmt16', 'ru-en') File "/mnt/nvme1/code/huggingface/nlp-master/src/nlp/load.py", line 549, in load_dataset download_config=download_config, download_mode=download_mode, ignore_verifications=ignore_verifications, File "/mnt/nvme1/code/huggingface/nlp-master/src/nlp/builder.py", line 449, in download_and_prepare with incomplete_dir(self._cache_dir) as tmp_data_dir: File "/home/stas/anaconda3/envs/main/lib/python3.7/contextlib.py", line 112, in __enter__ return next(self.gen) File "/mnt/nvme1/code/huggingface/nlp-master/src/nlp/builder.py", line 422, in incomplete_dir os.makedirs(tmp_dir) File "/home/stas/anaconda3/envs/main/lib/python3.7/os.py", line 221, in makedirs mkdir(name, mode) FileExistsError: [Errno 17] File exists: '/home/stas/.cache/huggingface/datasets/wmt16/ru-en/1.0.0/4d8269cdd971ed26984a9c0e4a158e0c7afc8135fac8fb8ee43ceecf38fd422d.incomplete' ``` It can't handle resumes, but it doesn't allow a fresh start either. I had to delete it manually. 4. And finally, when it downloaded the dataset, it then failed to fetch the metrics: ``` Traceback (most recent call last): File "./download.py", line 15, in <module> metric = nlp.load_metric('wmt16') File "/mnt/nvme1/code/huggingface/nlp-master/src/nlp/load.py", line 442, in load_metric module_path, hash = prepare_module(path, download_config=download_config, dataset=False) File "/mnt/nvme1/code/huggingface/nlp-master/src/nlp/load.py", line 258, in prepare_module local_path = cached_path(file_path, download_config=download_config) File "/mnt/nvme1/code/huggingface/nlp-master/src/nlp/utils/file_utils.py", line 198, in cached_path local_files_only=download_config.local_files_only, File "/mnt/nvme1/code/huggingface/nlp-master/src/nlp/utils/file_utils.py", line 356, in get_from_cache raise ConnectionError("Couldn't reach {}".format(url)) ConnectionError: Couldn't reach https://s3.amazonaws.com/datasets.huggingface.co/nlp/metrics/wmt16/wmt16.py ``` 5. If I run the same code with `wmt19`, it fails too: ``` ConnectionError: Couldn't reach https://storage.googleapis.com/tfdataset-data/downloadataset/uncorpus/UNv1.0.en-ru.tar.gz ```
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/488/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/488/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/487
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/487/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/487/comments
https://api.github.com/repos/huggingface/datasets/issues/487/events
https://github.com/huggingface/datasets/pull/487
676,143,029
MDExOlB1bGxSZXF1ZXN0NDY1NTA1NjQy
487
Fix elasticsearch result ids returning as strings
{ "avatar_url": "https://avatars.githubusercontent.com/u/3595526?v=4", "events_url": "https://api.github.com/users/sai-prasanna/events{/privacy}", "followers_url": "https://api.github.com/users/sai-prasanna/followers", "following_url": "https://api.github.com/users/sai-prasanna/following{/other_user}", "gists_url": "https://api.github.com/users/sai-prasanna/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/sai-prasanna", "id": 3595526, "login": "sai-prasanna", "node_id": "MDQ6VXNlcjM1OTU1MjY=", "organizations_url": "https://api.github.com/users/sai-prasanna/orgs", "received_events_url": "https://api.github.com/users/sai-prasanna/received_events", "repos_url": "https://api.github.com/users/sai-prasanna/repos", "site_admin": false, "starred_url": "https://api.github.com/users/sai-prasanna/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sai-prasanna/subscriptions", "type": "User", "url": "https://api.github.com/users/sai-prasanna" }
[]
closed
false
null
[]
null
1
"2020-08-10T13:37:11Z"
"2020-08-31T10:42:46Z"
"2020-08-31T10:42:46Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/487.diff", "html_url": "https://github.com/huggingface/datasets/pull/487", "merged_at": "2020-08-31T10:42:46Z", "patch_url": "https://github.com/huggingface/datasets/pull/487.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/487" }
I am using the latest elasticsearch binary and master of nlp. For me, elasticsearch searches failed because the resultant "id_" values returned for searches are strings, but our library assumes them to be integers.
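A tiny sketch of the symptom and the coercion; the response dict below is a hypothetical, trimmed Elasticsearch search result, not output from the library:

```python
# Elasticsearch returns document ids as strings, so they need to be cast back
# to the integer row indices that the dataset expects.
response = {
    "hits": {"hits": [{"_id": "12", "_score": 1.3}, {"_id": "7", "_score": 0.9}]}
}
scores = [hit["_score"] for hit in response["hits"]["hits"]]
ids = [int(hit["_id"]) for hit in response["hits"]["hits"]]
assert ids == [12, 7]
```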
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/487/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/487/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/486
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/486/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/486/comments
https://api.github.com/repos/huggingface/datasets/issues/486/events
https://github.com/huggingface/datasets/issues/486
675,649,034
MDU6SXNzdWU2NzU2NDkwMzQ=
486
Bookcorpus data contains pretokenized text
{ "avatar_url": "https://avatars.githubusercontent.com/u/99543?v=4", "events_url": "https://api.github.com/users/orsharir/events{/privacy}", "followers_url": "https://api.github.com/users/orsharir/followers", "following_url": "https://api.github.com/users/orsharir/following{/other_user}", "gists_url": "https://api.github.com/users/orsharir/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/orsharir", "id": 99543, "login": "orsharir", "node_id": "MDQ6VXNlcjk5NTQz", "organizations_url": "https://api.github.com/users/orsharir/orgs", "received_events_url": "https://api.github.com/users/orsharir/received_events", "repos_url": "https://api.github.com/users/orsharir/repos", "site_admin": false, "starred_url": "https://api.github.com/users/orsharir/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/orsharir/subscriptions", "type": "User", "url": "https://api.github.com/users/orsharir" }
[]
closed
false
null
[]
null
8
"2020-08-09T06:53:24Z"
"2022-10-04T17:44:33Z"
"2022-10-04T17:44:33Z"
CONTRIBUTOR
null
null
null
It seems that the bookcorpus data downloaded through the library was pretokenized with NLTK's Treebank tokenizer, which changes the text in ways that are incompatible with how, for instance, BERT's wordpiece tokenizer works. For example, "didn't" becomes "did" + "n't", and double quotes are changed to `` and '' for start and end quotes, respectively. On my own projects, I just run the data through NLTK's TreebankWordDetokenizer to reverse the tokenization (as best as possible). I think it would be beneficial to apply this transformation directly on your remote cached copy of the dataset. If you choose to do so, I would also suggest using my fork of NLTK that fixes several bugs in their detokenizer (I've opened a pull request, but they've yet to respond): https://github.com/nltk/nltk/pull/2575
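For reference, a minimal sketch of the detokenization step described above (requires `nltk`; the example sentence is made up and the exact output can vary slightly across NLTK versions):

```python
from nltk.tokenize.treebank import TreebankWordDetokenizer

detokenizer = TreebankWordDetokenizer()

# Treebank-style tokens as found in the corpus: split contractions and `` '' quotes.
tokens = "he said , `` i did n't do it . ''".split()
restored = detokenizer.detokenize(tokens)
print(restored)  # roughly: he said, "i didn't do it."
```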
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/486/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/486/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/485
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/485/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/485/comments
https://api.github.com/repos/huggingface/datasets/issues/485/events
https://github.com/huggingface/datasets/issues/485
675,595,393
MDU6SXNzdWU2NzU1OTUzOTM=
485
PAWS dataset first item is header
{ "avatar_url": "https://avatars.githubusercontent.com/u/13238952?v=4", "events_url": "https://api.github.com/users/jxmorris12/events{/privacy}", "followers_url": "https://api.github.com/users/jxmorris12/followers", "following_url": "https://api.github.com/users/jxmorris12/following{/other_user}", "gists_url": "https://api.github.com/users/jxmorris12/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/jxmorris12", "id": 13238952, "login": "jxmorris12", "node_id": "MDQ6VXNlcjEzMjM4OTUy", "organizations_url": "https://api.github.com/users/jxmorris12/orgs", "received_events_url": "https://api.github.com/users/jxmorris12/received_events", "repos_url": "https://api.github.com/users/jxmorris12/repos", "site_admin": false, "starred_url": "https://api.github.com/users/jxmorris12/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jxmorris12/subscriptions", "type": "User", "url": "https://api.github.com/users/jxmorris12" }
[]
closed
false
null
[]
null
0
"2020-08-08T22:05:25Z"
"2020-08-19T09:50:01Z"
"2020-08-19T09:50:01Z"
CONTRIBUTOR
null
null
null
``` import nlp dataset = nlp.load_dataset('xtreme', 'PAWS-X.en') dataset['test'][0] ``` prints the following ``` {'label': 'label', 'sentence1': 'sentence1', 'sentence2': 'sentence2'} ``` dataset['test'][0] should probably be the first item in the dataset, not just a dictionary mapping the column names to themselves. Probably just need to ignore the first row in the dataset by default or something like that.
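A minimal sketch of the kind of fix needed in the loader: skip the first (header) row when reading the TSV. The column order here is an assumption for illustration, not the actual PAWS-X loader code:

```python
import csv

def read_paws_tsv(path):
    """Yield PAWS-X examples, skipping the header row of the TSV file."""
    with open(path, encoding="utf-8") as f:
        reader = csv.reader(f, delimiter="\t")
        next(reader)  # drop the header: id, sentence1, sentence2, label (assumed order)
        for row in reader:
            yield {"sentence1": row[1], "sentence2": row[2], "label": row[3]}
```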
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/485/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/485/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/484
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/484/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/484/comments
https://api.github.com/repos/huggingface/datasets/issues/484/events
https://github.com/huggingface/datasets/pull/484
675,088,983
MDExOlB1bGxSZXF1ZXN0NDY0NjY1NTU4
484
update mirror for RT dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/13238952?v=4", "events_url": "https://api.github.com/users/jxmorris12/events{/privacy}", "followers_url": "https://api.github.com/users/jxmorris12/followers", "following_url": "https://api.github.com/users/jxmorris12/following{/other_user}", "gists_url": "https://api.github.com/users/jxmorris12/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/jxmorris12", "id": 13238952, "login": "jxmorris12", "node_id": "MDQ6VXNlcjEzMjM4OTUy", "organizations_url": "https://api.github.com/users/jxmorris12/orgs", "received_events_url": "https://api.github.com/users/jxmorris12/received_events", "repos_url": "https://api.github.com/users/jxmorris12/repos", "site_admin": false, "starred_url": "https://api.github.com/users/jxmorris12/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jxmorris12/subscriptions", "type": "User", "url": "https://api.github.com/users/jxmorris12" }
[]
closed
false
null
[]
null
4
"2020-08-07T15:25:45Z"
"2020-08-24T13:33:37Z"
"2020-08-24T13:33:37Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/484.diff", "html_url": "https://github.com/huggingface/datasets/pull/484", "merged_at": "2020-08-24T13:33:37Z", "patch_url": "https://github.com/huggingface/datasets/pull/484.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/484" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/484/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/484/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/483
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/483/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/483/comments
https://api.github.com/repos/huggingface/datasets/issues/483/events
https://github.com/huggingface/datasets/issues/483
675,080,694
MDU6SXNzdWU2NzUwODA2OTQ=
483
rotten tomatoes movie review dataset taken down
{ "avatar_url": "https://avatars.githubusercontent.com/u/13238952?v=4", "events_url": "https://api.github.com/users/jxmorris12/events{/privacy}", "followers_url": "https://api.github.com/users/jxmorris12/followers", "following_url": "https://api.github.com/users/jxmorris12/following{/other_user}", "gists_url": "https://api.github.com/users/jxmorris12/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/jxmorris12", "id": 13238952, "login": "jxmorris12", "node_id": "MDQ6VXNlcjEzMjM4OTUy", "organizations_url": "https://api.github.com/users/jxmorris12/orgs", "received_events_url": "https://api.github.com/users/jxmorris12/received_events", "repos_url": "https://api.github.com/users/jxmorris12/repos", "site_admin": false, "starred_url": "https://api.github.com/users/jxmorris12/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jxmorris12/subscriptions", "type": "User", "url": "https://api.github.com/users/jxmorris12" }
[]
closed
false
null
[]
null
3
"2020-08-07T15:12:01Z"
"2020-09-08T09:36:34Z"
"2020-09-08T09:36:33Z"
CONTRIBUTOR
null
null
null
In an interesting twist of events, the individual who created the movie review dataset seems to have left Cornell, and their webpage has been removed, along with the movie review dataset (http://www.cs.cornell.edu/people/pabo/movie-review-data/rt-polaritydata.tar.gz). It's not downloadable anymore.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/483/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/483/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/482
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/482/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/482/comments
https://api.github.com/repos/huggingface/datasets/issues/482/events
https://github.com/huggingface/datasets/issues/482
674,851,147
MDU6SXNzdWU2NzQ4NTExNDc=
482
Bugs : dataset.map() is frozen on ELI5
{ "avatar_url": "https://avatars.githubusercontent.com/u/56621342?v=4", "events_url": "https://api.github.com/users/ratthachat/events{/privacy}", "followers_url": "https://api.github.com/users/ratthachat/followers", "following_url": "https://api.github.com/users/ratthachat/following{/other_user}", "gists_url": "https://api.github.com/users/ratthachat/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/ratthachat", "id": 56621342, "login": "ratthachat", "node_id": "MDQ6VXNlcjU2NjIxMzQy", "organizations_url": "https://api.github.com/users/ratthachat/orgs", "received_events_url": "https://api.github.com/users/ratthachat/received_events", "repos_url": "https://api.github.com/users/ratthachat/repos", "site_admin": false, "starred_url": "https://api.github.com/users/ratthachat/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ratthachat/subscriptions", "type": "User", "url": "https://api.github.com/users/ratthachat" }
[]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" } ]
null
8
"2020-08-07T08:23:35Z"
"2023-04-06T09:39:59Z"
"2020-08-11T23:55:15Z"
NONE
null
null
null
Hi Huggingface Team! Thank you guys once again for this amazing repo. I have tried to prepare ELI5 to train with T5, based on [this wonderful notebook of Suraj Patil](https://github.com/patil-suraj/exploring-T5/blob/master/T5_on_TPU.ipynb). However, when I run `dataset.map()` on ELI5 to prepare `input_text, target_text`, `dataset.map` is **frozen** in the first few hundred examples. On the contrary, this works totally fine on SQUAD (80,000 examples). Both `nlp` versions 0.3.0 and 0.4.0 cause the frozen process. Trying various `pyarrow` versions (0.16.0 / 0.17.0 / 1.0.0) also gives the same frozen process. Reproducible code can be found on [this colab notebook](https://colab.research.google.com/drive/14wttOTv3ky74B_c0kv5WrbgQjCF2fYQk?usp=sharing), where I also show that the same mapping function works fine on SQUAD, so the problem is likely due to ELI5 somehow. ---------------------------------------- **More Info:** instead of `map`, if I run a `for` loop and apply the function myself, there's no error and it finishes within 10 seconds. However, an `nlp` dataset is immutable (I couldn't manually assign a new key-value to the `dataset` object). I also notice that SQUAD texts are quite clean while ELI5 texts contain many special characters; not sure if this is the cause?
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/482/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/482/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/481
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/481/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/481/comments
https://api.github.com/repos/huggingface/datasets/issues/481/events
https://github.com/huggingface/datasets/pull/481
674,567,389
MDExOlB1bGxSZXF1ZXN0NDY0MjM2MTA1
481
Apply utf-8 encoding to all datasets
{ "avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4", "events_url": "https://api.github.com/users/lewtun/events{/privacy}", "followers_url": "https://api.github.com/users/lewtun/followers", "following_url": "https://api.github.com/users/lewtun/following{/other_user}", "gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lewtun", "id": 26859204, "login": "lewtun", "node_id": "MDQ6VXNlcjI2ODU5MjA0", "organizations_url": "https://api.github.com/users/lewtun/orgs", "received_events_url": "https://api.github.com/users/lewtun/received_events", "repos_url": "https://api.github.com/users/lewtun/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lewtun/subscriptions", "type": "User", "url": "https://api.github.com/users/lewtun" }
[]
closed
false
null
[]
null
6
"2020-08-06T20:02:09Z"
"2020-08-20T08:16:08Z"
"2020-08-20T08:16:08Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/481.diff", "html_url": "https://github.com/huggingface/datasets/pull/481", "merged_at": "2020-08-20T08:16:08Z", "patch_url": "https://github.com/huggingface/datasets/pull/481.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/481" }
## Description This PR applies utf-8 encoding for all instances of `with open(...) as f` to all Python files in `datasets/`. As suggested by @thomwolf in #468 , we use regular expressions and the following function ```python def apply_encoding_on_file_open(filepath: str): """Apply UTF-8 encoding for all instances where a non-binary file is opened.""" with open(filepath, 'r', encoding='utf-8') as input_file: regexp = re.compile(r"(?!.*\b(?:encoding|rb|w|wb|w+|wb+|ab|ab+)\b)(?<=\s)(open)\((.*)\)") input_text = input_file.read() match = regexp.search(input_text) if match: output = regexp.sub(lambda m: m.group()[:-1]+', encoding="utf-8")', input_text) with open(filepath, 'w', encoding='utf-8') as output_file: output_file.write(output) ``` to perform the replacement. Note: 1. I excluded all _**binary files**_ from the search since it's possible some objects are opened for which the encoding doesn't make sense. Please correct me if I'm wrong and I'll tweak the regexp accordingly 2. There were two edge cases where the regexp failed (e.g. two `open` instances on a single line), but I decided to just fix these manually in the interest of time. 3. I only applied the replacement to files in `datasets/`. Let me know if this should be extended to other places like `metrics/` 4. I have implemented a unit test that should catch missing encodings in future CI runs Closes #468 and possibly #347
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/481/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/481/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/480
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/480/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/480/comments
https://api.github.com/repos/huggingface/datasets/issues/480/events
https://github.com/huggingface/datasets/pull/480
674,245,959
MDExOlB1bGxSZXF1ZXN0NDYzOTcwNjQ2
480
Column indexing hotfix
{ "avatar_url": "https://avatars.githubusercontent.com/u/26709476?v=4", "events_url": "https://api.github.com/users/TevenLeScao/events{/privacy}", "followers_url": "https://api.github.com/users/TevenLeScao/followers", "following_url": "https://api.github.com/users/TevenLeScao/following{/other_user}", "gists_url": "https://api.github.com/users/TevenLeScao/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/TevenLeScao", "id": 26709476, "login": "TevenLeScao", "node_id": "MDQ6VXNlcjI2NzA5NDc2", "organizations_url": "https://api.github.com/users/TevenLeScao/orgs", "received_events_url": "https://api.github.com/users/TevenLeScao/received_events", "repos_url": "https://api.github.com/users/TevenLeScao/repos", "site_admin": false, "starred_url": "https://api.github.com/users/TevenLeScao/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/TevenLeScao/subscriptions", "type": "User", "url": "https://api.github.com/users/TevenLeScao" }
[]
closed
false
null
[]
null
2
"2020-08-06T11:37:05Z"
"2023-09-24T09:49:33Z"
"2020-08-12T08:36:10Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/480.diff", "html_url": "https://github.com/huggingface/datasets/pull/480", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/480.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/480" }
As observed for example in #469, currently `__getitem__` does not convert the data to the dataset format when indexing by column. This is a hotfix that imitates the functional 0.3.0 code. In the future it'd probably be nice to have a test there.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/480/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/480/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/479
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/479/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/479/comments
https://api.github.com/repos/huggingface/datasets/issues/479/events
https://github.com/huggingface/datasets/pull/479
673,905,407
MDExOlB1bGxSZXF1ZXN0NDYzNjkxMjA0
479
add METEOR metric
{ "avatar_url": "https://avatars.githubusercontent.com/u/24683907?v=4", "events_url": "https://api.github.com/users/vegarab/events{/privacy}", "followers_url": "https://api.github.com/users/vegarab/followers", "following_url": "https://api.github.com/users/vegarab/following{/other_user}", "gists_url": "https://api.github.com/users/vegarab/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/vegarab", "id": 24683907, "login": "vegarab", "node_id": "MDQ6VXNlcjI0NjgzOTA3", "organizations_url": "https://api.github.com/users/vegarab/orgs", "received_events_url": "https://api.github.com/users/vegarab/received_events", "repos_url": "https://api.github.com/users/vegarab/repos", "site_admin": false, "starred_url": "https://api.github.com/users/vegarab/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/vegarab/subscriptions", "type": "User", "url": "https://api.github.com/users/vegarab" }
[]
closed
false
null
[]
null
5
"2020-08-05T23:13:00Z"
"2020-08-19T13:39:09Z"
"2020-08-19T13:39:09Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/479.diff", "html_url": "https://github.com/huggingface/datasets/pull/479", "merged_at": "2020-08-19T13:39:09Z", "patch_url": "https://github.com/huggingface/datasets/pull/479.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/479" }
Added the METEOR metric. Can be used like this: ```python import nlp meteor = nlp.load_metric('metrics/meteor') meteor.compute(["some string", "some string"], ["some string", "some similar string"]) # {'meteor': 0.6411637931034483} meteor.add("some string", "some string") meteor.add("some string", "some similar string") meteor.compute() # {'meteor': 0.6411637931034483} ``` Uses [NLTK's implementation](https://www.nltk.org/api/nltk.translate.html#module-nltk.translate.meteor_score), [(source)](https://github.com/nltk/nltk/blob/develop/nltk/translate/meteor_score.py)
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/479/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/479/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/478
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/478/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/478/comments
https://api.github.com/repos/huggingface/datasets/issues/478/events
https://github.com/huggingface/datasets/issues/478
673,178,317
MDU6SXNzdWU2NzMxNzgzMTc=
478
Export TFRecord to GCP bucket
{ "avatar_url": "https://avatars.githubusercontent.com/u/43774355?v=4", "events_url": "https://api.github.com/users/astariul/events{/privacy}", "followers_url": "https://api.github.com/users/astariul/followers", "following_url": "https://api.github.com/users/astariul/following{/other_user}", "gists_url": "https://api.github.com/users/astariul/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/astariul", "id": 43774355, "login": "astariul", "node_id": "MDQ6VXNlcjQzNzc0MzU1", "organizations_url": "https://api.github.com/users/astariul/orgs", "received_events_url": "https://api.github.com/users/astariul/received_events", "repos_url": "https://api.github.com/users/astariul/repos", "site_admin": false, "starred_url": "https://api.github.com/users/astariul/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/astariul/subscriptions", "type": "User", "url": "https://api.github.com/users/astariul" }
[]
closed
false
null
[]
null
1
"2020-08-05T01:08:32Z"
"2020-08-05T01:21:37Z"
"2020-08-05T01:21:36Z"
NONE
null
null
null
Previously, I was writing TFRecords manually to a GCP bucket with: `with tf.io.TFRecordWriter('gs://my_bucket/x.tfrecord')` Since `0.4.0` is out with the `export()` function, I tried it. But it seems TFRecords cannot be written directly to a GCP bucket. `dataset.export('local.tfrecord')` works fine, but `dataset.export('gs://my_bucket/x.tfrecord')` does not work. There is no error message, I just can't find the file on my bucket... --- Looking at the code, `nlp` is using `tf.data.experimental.TFRecordWriter`, while I was using `tf.io.TFRecordWriter`. **What's the difference between those 2? How can I write TFRecord files directly to a GCP bucket?** @jarednielsen @lhoestq
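For reference, a minimal sketch of the pattern reported above as working, i.e. writing a record straight to a `gs://` path with `tf.io.TFRecordWriter`; the bucket name and record contents are placeholders.
```python
import tensorflow as tf

# Placeholder bucket/path and a single dummy record.
with tf.io.TFRecordWriter("gs://my_bucket/x.tfrecord") as writer:
    example = tf.train.Example(
        features=tf.train.Features(
            feature={"x": tf.train.Feature(int64_list=tf.train.Int64List(value=[1, 2, 3]))}
        )
    )
    writer.write(example.SerializeToString())
```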
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/478/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/478/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/477
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/477/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/477/comments
https://api.github.com/repos/huggingface/datasets/issues/477/events
https://github.com/huggingface/datasets/issues/477
673,142,143
MDU6SXNzdWU2NzMxNDIxNDM=
477
Overview.ipynb throws exceptions with nlp 0.4.0
{ "avatar_url": "https://avatars.githubusercontent.com/u/23109219?v=4", "events_url": "https://api.github.com/users/mandy-li/events{/privacy}", "followers_url": "https://api.github.com/users/mandy-li/followers", "following_url": "https://api.github.com/users/mandy-li/following{/other_user}", "gists_url": "https://api.github.com/users/mandy-li/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mandy-li", "id": 23109219, "login": "mandy-li", "node_id": "MDQ6VXNlcjIzMTA5MjE5", "organizations_url": "https://api.github.com/users/mandy-li/orgs", "received_events_url": "https://api.github.com/users/mandy-li/received_events", "repos_url": "https://api.github.com/users/mandy-li/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mandy-li/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mandy-li/subscriptions", "type": "User", "url": "https://api.github.com/users/mandy-li" }
[]
closed
false
null
[]
null
3
"2020-08-04T23:18:15Z"
"2021-08-03T06:02:15Z"
"2021-08-03T06:02:15Z"
NONE
null
null
null
with nlp 0.4.0, the TensorFlow example in Overview.ipynb throws the following exceptions: --------------------------------------------------------------------------- AttributeError Traceback (most recent call last) <ipython-input-5-48907f2ad433> in <module> ----> 1 features = {x: train_tf_dataset[x].to_tensor(default_value=0, shape=[None, tokenizer.max_len]) for x in columns[:3]} 2 labels = {"output_1": train_tf_dataset["start_positions"].to_tensor(default_value=0, shape=[None, 1])} 3 labels["output_2"] = train_tf_dataset["end_positions"].to_tensor(default_value=0, shape=[None, 1]) 4 tfdataset = tf.data.Dataset.from_tensor_slices((features, labels)).batch(8) <ipython-input-5-48907f2ad433> in <dictcomp>(.0) ----> 1 features = {x: train_tf_dataset[x].to_tensor(default_value=0, shape=[None, tokenizer.max_len]) for x in columns[:3]} 2 labels = {"output_1": train_tf_dataset["start_positions"].to_tensor(default_value=0, shape=[None, 1])} 3 labels["output_2"] = train_tf_dataset["end_positions"].to_tensor(default_value=0, shape=[None, 1]) 4 tfdataset = tf.data.Dataset.from_tensor_slices((features, labels)).batch(8) AttributeError: 'numpy.ndarray' object has no attribute 'to_tensor'
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/477/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/477/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/476
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/476/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/476/comments
https://api.github.com/repos/huggingface/datasets/issues/476/events
https://github.com/huggingface/datasets/pull/476
672,991,854
MDExOlB1bGxSZXF1ZXN0NDYyOTMyMTgx
476
CheckList
{ "avatar_url": "https://avatars.githubusercontent.com/u/698010?v=4", "events_url": "https://api.github.com/users/marcotcr/events{/privacy}", "followers_url": "https://api.github.com/users/marcotcr/followers", "following_url": "https://api.github.com/users/marcotcr/following{/other_user}", "gists_url": "https://api.github.com/users/marcotcr/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/marcotcr", "id": 698010, "login": "marcotcr", "node_id": "MDQ6VXNlcjY5ODAxMA==", "organizations_url": "https://api.github.com/users/marcotcr/orgs", "received_events_url": "https://api.github.com/users/marcotcr/received_events", "repos_url": "https://api.github.com/users/marcotcr/repos", "site_admin": false, "starred_url": "https://api.github.com/users/marcotcr/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/marcotcr/subscriptions", "type": "User", "url": "https://api.github.com/users/marcotcr" }
[ { "color": "0e8a16", "default": false, "description": "Contribution to a dataset script", "id": 4564477500, "name": "dataset contribution", "node_id": "LA_kwDODunzps8AAAABEBBmPA", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20contribution" } ]
closed
false
null
[]
null
2
"2020-08-04T18:32:05Z"
"2022-10-03T09:43:37Z"
"2022-10-03T09:43:37Z"
NONE
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/476.diff", "html_url": "https://github.com/huggingface/datasets/pull/476", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/476.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/476" }
Sorry for the large pull request. - Added checklists as datasets. I can't run `test_load_real_dataset` (see #474), but I can load the datasets successfully as shown in the example notebook - Added a checklist wrapper
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 1, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/476/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/476/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/475
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/475/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/475/comments
https://api.github.com/repos/huggingface/datasets/issues/475/events
https://github.com/huggingface/datasets/pull/475
672,884,595
MDExOlB1bGxSZXF1ZXN0NDYyODQzMzQz
475
misc. bugs and quality of life
{ "avatar_url": "https://avatars.githubusercontent.com/u/9353833?v=4", "events_url": "https://api.github.com/users/joeddav/events{/privacy}", "followers_url": "https://api.github.com/users/joeddav/followers", "following_url": "https://api.github.com/users/joeddav/following{/other_user}", "gists_url": "https://api.github.com/users/joeddav/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/joeddav", "id": 9353833, "login": "joeddav", "node_id": "MDQ6VXNlcjkzNTM4MzM=", "organizations_url": "https://api.github.com/users/joeddav/orgs", "received_events_url": "https://api.github.com/users/joeddav/received_events", "repos_url": "https://api.github.com/users/joeddav/repos", "site_admin": false, "starred_url": "https://api.github.com/users/joeddav/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/joeddav/subscriptions", "type": "User", "url": "https://api.github.com/users/joeddav" }
[]
closed
false
null
[]
null
2
"2020-08-04T15:32:29Z"
"2020-08-17T21:14:08Z"
"2020-08-17T21:14:07Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/475.diff", "html_url": "https://github.com/huggingface/datasets/pull/475", "merged_at": "2020-08-17T21:14:07Z", "patch_url": "https://github.com/huggingface/datasets/pull/475.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/475" }
A few misc. bugs and QOL improvements that I've come across in using the library. Let me know if you don't like any of them and I can adjust/remove them. 1. Printing datasets without a description field throws an error when formatting the `single_line_description`. This fixes that, and also adds some formatting to the repr to make it slightly more readable. ``` >>> print(list_datasets()[0]) nlp.ObjectInfo( id='aeslc', description='A collection of email messages of employees in the Enron Corporation.There are two features: - email_body: email body text. - subject_line: email subject text.', files=[nlp.S3Object('aeslc.py'), nlp.S3Object('dataset_infos.json'), nlp.S3Object('dummy/1.0.0/dummy_data-zip-extracted/dummy_data/AESLC-master/enron_subject_line/dev/allen-p_inbox_29.subject'), nlp.S3Object('dummy/1.0.0/dummy_data-zip-extracted/dummy_data/AESLC-master/enron_subject_line/test/allen-p_inbox_24.subject'), nlp.S3Object('dummy/1.0.0/dummy_data-zip-extracted/dummy_data/AESLC-master/enron_subject_line/train/allen-p_inbox_20.subject'), nlp.S3Object('dummy/1.0.0/dummy_data.zip'), nlp.S3Object('urls_checksums/checksums.txt')] ) ``` 2. Add id-only option to `list_datasets` and `list_metrics` to allow the user to easily print out just the names of the datasets & metrics. I often found myself annoyed that this took so many strokes to do. ```python [dataset.id for dataset in list_datasets()] # before list_datasets(id_only=True) # after ``` 3. Fix null-seed randomization caching. When using `train_test_split` and `shuffle`, the computation was being cached even without a seed or generator being passed. The result was that calling `.shuffle` more than once on the same dataset didn't do anything without passing a distinct seed or generator. Likewise with `train_test_split`. 4. Indexing by iterables of bool. I added support for passing an iterable of type bool to `_getitem` as a numpy/pandas-like indexing method. Let me know if you think it's redundant with `filter` (I know it's not optimal memory-wise), but I think it's nice to have as a lightweight alternative to do simple things without having to create a copy of the entire dataset, e.g. ```python dataset[dataset['label'] == 0] # numpy-like bool indexing to look at instances with labels of 0 ``` 5. Add an `input_column` argument to `map` and `filter`, which allows you to filter/map on a particular column rather than passing the whole dict to the function. Also adds `fn_kwargs` to be passed to the function. I think these together make mapping much cleaner in many cases such as mono-column tokenization: ```python # before dataset = dataset.map(lambda batch: tokenizer(batch["text"]) # after dataset = dataset.map(tokenizer, input_column="text") dataset = dataset.map(tokenizer, input_column="text", fn_kwargs={"truncation": True, "padding": True}) ```
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/475/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/475/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/474
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/474/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/474/comments
https://api.github.com/repos/huggingface/datasets/issues/474/events
https://github.com/huggingface/datasets/issues/474
672,407,330
MDU6SXNzdWU2NzI0MDczMzA=
474
test_load_real_dataset when config has BUILDER_CONFIGS that matter
{ "avatar_url": "https://avatars.githubusercontent.com/u/698010?v=4", "events_url": "https://api.github.com/users/marcotcr/events{/privacy}", "followers_url": "https://api.github.com/users/marcotcr/followers", "following_url": "https://api.github.com/users/marcotcr/following{/other_user}", "gists_url": "https://api.github.com/users/marcotcr/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/marcotcr", "id": 698010, "login": "marcotcr", "node_id": "MDQ6VXNlcjY5ODAxMA==", "organizations_url": "https://api.github.com/users/marcotcr/orgs", "received_events_url": "https://api.github.com/users/marcotcr/received_events", "repos_url": "https://api.github.com/users/marcotcr/repos", "site_admin": false, "starred_url": "https://api.github.com/users/marcotcr/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/marcotcr/subscriptions", "type": "User", "url": "https://api.github.com/users/marcotcr" }
[]
closed
false
null
[]
null
2
"2020-08-03T23:46:36Z"
"2020-09-07T14:53:13Z"
"2020-09-07T14:53:13Z"
NONE
null
null
null
If a dataset has custom `BUILDER_CONFIGS` with non-keyword arguments (or keyword arguments without default values), the config is not loaded during the test and causes an error. I think the problem is that `test_load_real_dataset` calls `load_dataset` with `data_dir=temp_data_dir` ([here](https://github.com/huggingface/nlp/blob/master/tests/test_dataset_common.py#L200)). This causes [this line](https://github.com/huggingface/nlp/blob/master/src/nlp/builder.py#L201) to always be false because `config_kwargs` is not `None`. [This line](https://github.com/huggingface/nlp/blob/master/src/nlp/builder.py#L222) will be run instead, which doesn't use `BUILDER_CONFIGS`. As an example, you can try running the test for lince: ` RUN_SLOW=1 pytest tests/test_dataset_common.py::LocalDatasetTest::test_load_real_dataset_lince` which yields > E TypeError: __init__() missing 3 required positional arguments: 'colnames', 'classes', and 'label_column'
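A minimal sketch of the kind of config that triggers this, assuming the usual `nlp.BuilderConfig` subclassing pattern; the class and argument names are taken from the error message above and are illustrative only.
```python
import nlp


class LinceConfig(nlp.BuilderConfig):
    # Required positional arguments with no defaults: the test's generic
    # config_kwargs path cannot instantiate this without a named BUILDER_CONFIG.
    def __init__(self, colnames, classes, label_column, **kwargs):
        super(LinceConfig, self).__init__(**kwargs)
        self.colnames = colnames
        self.classes = classes
        self.label_column = label_column
```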
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/474/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/474/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/473
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/473/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/473/comments
https://api.github.com/repos/huggingface/datasets/issues/473/events
https://github.com/huggingface/datasets/pull/473
672,007,247
MDExOlB1bGxSZXF1ZXN0NDYyMTIwNzU4
473
add DoQA dataset (ACL 2020)
{ "avatar_url": "https://avatars.githubusercontent.com/u/38249783?v=4", "events_url": "https://api.github.com/users/mariamabarham/events{/privacy}", "followers_url": "https://api.github.com/users/mariamabarham/followers", "following_url": "https://api.github.com/users/mariamabarham/following{/other_user}", "gists_url": "https://api.github.com/users/mariamabarham/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariamabarham", "id": 38249783, "login": "mariamabarham", "node_id": "MDQ6VXNlcjM4MjQ5Nzgz", "organizations_url": "https://api.github.com/users/mariamabarham/orgs", "received_events_url": "https://api.github.com/users/mariamabarham/received_events", "repos_url": "https://api.github.com/users/mariamabarham/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariamabarham/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariamabarham/subscriptions", "type": "User", "url": "https://api.github.com/users/mariamabarham" }
[]
closed
false
null
[]
null
0
"2020-08-03T11:26:52Z"
"2020-09-10T17:19:11Z"
"2020-09-03T11:44:15Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/473.diff", "html_url": "https://github.com/huggingface/datasets/pull/473", "merged_at": "2020-09-03T11:44:14Z", "patch_url": "https://github.com/huggingface/datasets/pull/473.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/473" }
add DoQA dataset (ACL 2020) http://ixa.eus/node/12931
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/473/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/473/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/472
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/472/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/472/comments
https://api.github.com/repos/huggingface/datasets/issues/472/events
https://github.com/huggingface/datasets/pull/472
672,000,745
MDExOlB1bGxSZXF1ZXN0NDYyMTE1MjA4
472
add crd3 dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/38249783?v=4", "events_url": "https://api.github.com/users/mariamabarham/events{/privacy}", "followers_url": "https://api.github.com/users/mariamabarham/followers", "following_url": "https://api.github.com/users/mariamabarham/following{/other_user}", "gists_url": "https://api.github.com/users/mariamabarham/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariamabarham", "id": 38249783, "login": "mariamabarham", "node_id": "MDQ6VXNlcjM4MjQ5Nzgz", "organizations_url": "https://api.github.com/users/mariamabarham/orgs", "received_events_url": "https://api.github.com/users/mariamabarham/received_events", "repos_url": "https://api.github.com/users/mariamabarham/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariamabarham/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariamabarham/subscriptions", "type": "User", "url": "https://api.github.com/users/mariamabarham" }
[]
closed
false
null
[]
null
1
"2020-08-03T11:15:02Z"
"2020-08-03T11:22:10Z"
"2020-08-03T11:22:09Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/472.diff", "html_url": "https://github.com/huggingface/datasets/pull/472", "merged_at": "2020-08-03T11:22:09Z", "patch_url": "https://github.com/huggingface/datasets/pull/472.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/472" }
Opening a new PR for the CRD3 dataset (ACL 2020) to fix the CircleCI problems.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/472/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/472/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/471
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/471/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/471/comments
https://api.github.com/repos/huggingface/datasets/issues/471/events
https://github.com/huggingface/datasets/pull/471
671,996,423
MDExOlB1bGxSZXF1ZXN0NDYyMTExNTU1
471
add reuters21578 dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/38249783?v=4", "events_url": "https://api.github.com/users/mariamabarham/events{/privacy}", "followers_url": "https://api.github.com/users/mariamabarham/followers", "following_url": "https://api.github.com/users/mariamabarham/following{/other_user}", "gists_url": "https://api.github.com/users/mariamabarham/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariamabarham", "id": 38249783, "login": "mariamabarham", "node_id": "MDQ6VXNlcjM4MjQ5Nzgz", "organizations_url": "https://api.github.com/users/mariamabarham/orgs", "received_events_url": "https://api.github.com/users/mariamabarham/received_events", "repos_url": "https://api.github.com/users/mariamabarham/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariamabarham/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariamabarham/subscriptions", "type": "User", "url": "https://api.github.com/users/mariamabarham" }
[]
closed
false
null
[]
null
0
"2020-08-03T11:07:14Z"
"2022-08-04T08:39:11Z"
"2020-09-03T09:58:50Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/471.diff", "html_url": "https://github.com/huggingface/datasets/pull/471", "merged_at": "2020-09-03T09:58:50Z", "patch_url": "https://github.com/huggingface/datasets/pull/471.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/471" }
New PR to add the reuters21578 dataset and fix the CircleCI problems. Partially fixes: - #353 Subsequent PR after: - #449
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/471/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/471/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/470
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/470/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/470/comments
https://api.github.com/repos/huggingface/datasets/issues/470/events
https://github.com/huggingface/datasets/pull/470
671,952,276
MDExOlB1bGxSZXF1ZXN0NDYyMDc0MzQ0
470
Adding IWSLT 2017 dataset.
{ "avatar_url": "https://avatars.githubusercontent.com/u/204321?v=4", "events_url": "https://api.github.com/users/Narsil/events{/privacy}", "followers_url": "https://api.github.com/users/Narsil/followers", "following_url": "https://api.github.com/users/Narsil/following{/other_user}", "gists_url": "https://api.github.com/users/Narsil/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Narsil", "id": 204321, "login": "Narsil", "node_id": "MDQ6VXNlcjIwNDMyMQ==", "organizations_url": "https://api.github.com/users/Narsil/orgs", "received_events_url": "https://api.github.com/users/Narsil/received_events", "repos_url": "https://api.github.com/users/Narsil/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Narsil/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Narsil/subscriptions", "type": "User", "url": "https://api.github.com/users/Narsil" }
[]
closed
false
null
[]
null
6
"2020-08-03T09:52:39Z"
"2020-09-07T12:33:30Z"
"2020-09-07T12:33:30Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/470.diff", "html_url": "https://github.com/huggingface/datasets/pull/470", "merged_at": "2020-09-07T12:33:30Z", "patch_url": "https://github.com/huggingface/datasets/pull/470.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/470" }
Created a [IWSLT 2017](https://sites.google.com/site/iwsltevaluation2017/TED-tasks) dataset script for the *multilingual data*. ``` Bilingual data: {Arabic, German, French, Japanese, Korean, Chinese} <-> English Multilingual data: German, English, Italian, Dutch, Romanian. (Any pair) ``` I'm unsure how to handle bilingual vs multilingual. Given the `nlp` architecture, a Config option seems to be the way to go; however, it might be a bit confusing to have different language pairs with different options. Using just language pairs is not viable as English to German exists in both. Any opinion on how that should be done? EDIT: I decided to just omit de-en from multilingual as it's only a subset of the bilingual one. That way only language pairs exist. EDIT: Could be interesting for #438
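One possible shape for this is sketched below, assuming one `BuilderConfig` subclass instance per language pair; the class name, attributes and pair list are illustrative, not the merged implementation.
```python
import nlp


class IWSLT2017Config(nlp.BuilderConfig):
    def __init__(self, pair=("de", "en"), is_multilingual=False, **kwargs):
        super().__init__(**kwargs)
        self.pair = pair
        self.is_multilingual = is_multilingual


# One config per pair; the pairs below are a small illustrative subset.
BUILDER_CONFIGS = [
    IWSLT2017Config(
        name="iwslt2017-{}-{}".format(src, tgt),
        version=nlp.Version("1.0.0"),
        pair=(src, tgt),
        is_multilingual=True,
    )
    for src, tgt in [("en", "it"), ("en", "nl"), ("en", "ro"), ("it", "nl")]
]
```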
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 2, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 2, "url": "https://api.github.com/repos/huggingface/datasets/issues/470/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/470/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/469
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/469/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/469/comments
https://api.github.com/repos/huggingface/datasets/issues/469/events
https://github.com/huggingface/datasets/issues/469
671,876,963
MDU6SXNzdWU2NzE4NzY5NjM=
469
invalid data type 'str' at _convert_outputs in arrow_dataset.py
{ "avatar_url": "https://avatars.githubusercontent.com/u/30617486?v=4", "events_url": "https://api.github.com/users/Murgates/events{/privacy}", "followers_url": "https://api.github.com/users/Murgates/followers", "following_url": "https://api.github.com/users/Murgates/following{/other_user}", "gists_url": "https://api.github.com/users/Murgates/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Murgates", "id": 30617486, "login": "Murgates", "node_id": "MDQ6VXNlcjMwNjE3NDg2", "organizations_url": "https://api.github.com/users/Murgates/orgs", "received_events_url": "https://api.github.com/users/Murgates/received_events", "repos_url": "https://api.github.com/users/Murgates/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Murgates/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Murgates/subscriptions", "type": "User", "url": "https://api.github.com/users/Murgates" }
[]
closed
false
null
[]
null
9
"2020-08-03T07:48:29Z"
"2023-07-20T15:54:17Z"
"2023-07-20T15:54:17Z"
NONE
null
null
null
I'm trying to build a multi-label text classifier model using the Transformers lib. I'm using the nlp library to load the data set, and while calling the trainer.train() method it throws the following error: File "C:\***\arrow_dataset.py", line 343, in _convert_outputs v = command(v) TypeError: new(): invalid data type 'str' I'm using pyarrow 1.0.0, and I have a simple custom data set with text and an integer label. Ex: Data Text , Label #Column Header I'm facing an Network issue, 1 I forgot my password, 2 Error StackTrace: File "C:\**\transformers\trainer.py", line 492, in train for step, inputs in enumerate(epoch_iterator): File "C:\**\tqdm\std.py", line 1104, in __iter__ for obj in iterable: File "C:\**\torch\utils\data\dataloader.py", line 345, in __next__ data = self._next_data() File "C:\**\torch\utils\data\dataloader.py", line 385, in _next_data data = self._dataset_fetcher.fetch(index) # may raise StopIteration File "C:\**\torch\utils\data\_utils\fetch.py", line 44, in fetch data = [self.dataset[idx] for idx in possibly_batched_index] File "C:\**\torch\utils\data\_utils\fetch.py", line 44, in <listcomp> data = [self.dataset[idx] for idx in possibly_batched_index] File "C:\**\nlp\arrow_dataset.py", line 414, in __getitem__ output_all_columns=self._output_all_columns, File "C:\**\nlp\arrow_dataset.py", line 403, in _getitem outputs, format_type=format_type, format_columns=format_columns, output_all_columns=output_all_columns File "C:\**\nlp\arrow_dataset.py", line 343, in _convert_outputs v = command(v) TypeError: new(): invalid data type 'str'
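One workaround sketch (an assumption, not a confirmed fix): restrict the torch-formatted view to numeric columns so the raw string column never reaches `torch.tensor()`. The column names, file name and tokenizer below are placeholders based on the report above.
```python
from nlp import load_dataset
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
train_dataset = load_dataset("csv", data_files={"train": "train.csv"})["train"]
train_dataset = train_dataset.map(
    lambda batch: tokenizer(batch["Text"], padding="max_length", truncation=True),
    batched=True,
)
# Only numeric columns go into the torch view; the raw "Text" strings stay out of it.
train_dataset.set_format(type="torch", columns=["input_ids", "attention_mask", "Label"])
```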
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/469/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/469/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/468
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/468/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/468/comments
https://api.github.com/repos/huggingface/datasets/issues/468/events
https://github.com/huggingface/datasets/issues/468
671,622,441
MDU6SXNzdWU2NzE2MjI0NDE=
468
UnicodeDecodeError while loading PAN-X task of XTREME dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4", "events_url": "https://api.github.com/users/lewtun/events{/privacy}", "followers_url": "https://api.github.com/users/lewtun/followers", "following_url": "https://api.github.com/users/lewtun/following{/other_user}", "gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lewtun", "id": 26859204, "login": "lewtun", "node_id": "MDQ6VXNlcjI2ODU5MjA0", "organizations_url": "https://api.github.com/users/lewtun/orgs", "received_events_url": "https://api.github.com/users/lewtun/received_events", "repos_url": "https://api.github.com/users/lewtun/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lewtun/subscriptions", "type": "User", "url": "https://api.github.com/users/lewtun" }
[]
closed
false
null
[]
null
5
"2020-08-02T14:05:10Z"
"2020-08-20T08:16:08Z"
"2020-08-20T08:16:08Z"
MEMBER
null
null
null
Hi 🤗 team! ## Description of the problem I'm running into a `UnicodeDecodeError` while trying to load the PAN-X subset the XTREME dataset: ``` --------------------------------------------------------------------------- UnicodeDecodeError Traceback (most recent call last) <ipython-input-5-1d61f439b843> in <module> ----> 1 dataset = load_dataset("xtreme", "PAN-X.en", data_dir='./data') /usr/local/lib/python3.6/dist-packages/nlp/load.py in load_dataset(path, name, version, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, save_infos, **config_kwargs) 528 ignore_verifications = ignore_verifications or save_infos 529 # Download/copy dataset processing script --> 530 module_path, hash = prepare_module(path, download_config=download_config, dataset=True) 531 532 # Get dataset builder class from the processing script /usr/local/lib/python3.6/dist-packages/nlp/load.py in prepare_module(path, download_config, dataset, force_local_path, **download_kwargs) 265 266 # Download external imports if needed --> 267 imports = get_imports(local_path) 268 local_imports = [] 269 library_imports = [] /usr/local/lib/python3.6/dist-packages/nlp/load.py in get_imports(file_path) 156 lines = [] 157 with open(file_path, mode="r") as f: --> 158 lines.extend(f.readlines()) 159 160 logger.info("Checking %s for additional imports.", file_path) /usr/lib/python3.6/encodings/ascii.py in decode(self, input, final) 24 class IncrementalDecoder(codecs.IncrementalDecoder): 25 def decode(self, input, final=False): ---> 26 return codecs.ascii_decode(input, self.errors)[0] 27 28 class StreamWriter(Codec,codecs.StreamWriter): UnicodeDecodeError: 'ascii' codec can't decode byte 0xe2 in position 111: ordinal not in range(128) ``` ## Steps to reproduce Install from nlp's master branch ```python pip install git+https://github.com/huggingface/nlp.git ``` then run ```python from nlp import load_dataset # AmazonPhotos.zip is located in data/ dataset = load_dataset("xtreme", "PAN-X.en", data_dir='./data') ``` ## OS / platform details - `nlp` version: latest from master - Platform: Linux-4.15.0-72-generic-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.6.9 - PyTorch version (GPU?): 1.4.0 (True) - Tensorflow version (GPU?): 2.1.0 (True) - Using GPU in script?: True - Using distributed or parallel set-up in script?: False ## Proposed solution Either change [line 762](https://github.com/huggingface/nlp/blob/7ada00b1d62f94eee22a7df38c6b01e3f27194b7/datasets/xtreme/xtreme.py#L762) in `xtreme.py` to include UTF-8 encoding: ``` # old with open(filepath) as f # new with open(filepath, encoding='utf-8') as f ``` or raise a warning that suggests setting the locale explicitly, e.g. ```python import locale locale.setlocale(locale.LC_ALL, 'C.UTF-8') ``` I have a preference for the first solution. Let me know if you agree and I'll be happy to implement the simple fix!
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/468/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/468/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/467
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/467/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/467/comments
https://api.github.com/repos/huggingface/datasets/issues/467/events
https://github.com/huggingface/datasets/pull/467
671,580,010
MDExOlB1bGxSZXF1ZXN0NDYxNzgwMzUy
467
DOCS: Fix typo
{ "avatar_url": "https://avatars.githubusercontent.com/u/13381361?v=4", "events_url": "https://api.github.com/users/bharatr21/events{/privacy}", "followers_url": "https://api.github.com/users/bharatr21/followers", "following_url": "https://api.github.com/users/bharatr21/following{/other_user}", "gists_url": "https://api.github.com/users/bharatr21/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/bharatr21", "id": 13381361, "login": "bharatr21", "node_id": "MDQ6VXNlcjEzMzgxMzYx", "organizations_url": "https://api.github.com/users/bharatr21/orgs", "received_events_url": "https://api.github.com/users/bharatr21/received_events", "repos_url": "https://api.github.com/users/bharatr21/repos", "site_admin": false, "starred_url": "https://api.github.com/users/bharatr21/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/bharatr21/subscriptions", "type": "User", "url": "https://api.github.com/users/bharatr21" }
[]
closed
false
null
[]
null
1
"2020-08-02T08:59:37Z"
"2020-08-02T13:52:27Z"
"2020-08-02T09:18:54Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/467.diff", "html_url": "https://github.com/huggingface/datasets/pull/467", "merged_at": "2020-08-02T09:18:54Z", "patch_url": "https://github.com/huggingface/datasets/pull/467.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/467" }
Fix typo from dictionnary -> dictionary
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/467/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/467/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/466
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/466/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/466/comments
https://api.github.com/repos/huggingface/datasets/issues/466/events
https://github.com/huggingface/datasets/pull/466
670,766,891
MDExOlB1bGxSZXF1ZXN0NDYxMDEzOTM0
466
[METRICS] Various improvements on metrics
{ "avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4", "events_url": "https://api.github.com/users/thomwolf/events{/privacy}", "followers_url": "https://api.github.com/users/thomwolf/followers", "following_url": "https://api.github.com/users/thomwolf/following{/other_user}", "gists_url": "https://api.github.com/users/thomwolf/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/thomwolf", "id": 7353373, "login": "thomwolf", "node_id": "MDQ6VXNlcjczNTMzNzM=", "organizations_url": "https://api.github.com/users/thomwolf/orgs", "received_events_url": "https://api.github.com/users/thomwolf/received_events", "repos_url": "https://api.github.com/users/thomwolf/repos", "site_admin": false, "starred_url": "https://api.github.com/users/thomwolf/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/thomwolf/subscriptions", "type": "User", "url": "https://api.github.com/users/thomwolf" }
[]
closed
false
null
[]
null
2
"2020-08-01T11:03:45Z"
"2020-08-17T15:15:00Z"
"2020-08-17T15:14:59Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/466.diff", "html_url": "https://github.com/huggingface/datasets/pull/466", "merged_at": "2020-08-17T15:14:59Z", "patch_url": "https://github.com/huggingface/datasets/pull/466.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/466" }
- Disallow the use of positional arguments to avoid `predictions` vs `references` mistakes - Allow directly feeding numpy/pytorch/tensorflow/pandas objects to metrics
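A short usage sketch of the keyword-only call style this enables; the metric name and values are illustrative.
```python
import numpy as np
from nlp import load_metric

metric = load_metric("glue", "mrpc")
preds = np.array([0, 1, 1, 0])
labels = np.array([0, 1, 0, 0])
# Positional calls are disallowed; numpy arrays can be fed directly.
results = metric.compute(predictions=preds, references=labels)
```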
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/466/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/466/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/465
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/465/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/465/comments
https://api.github.com/repos/huggingface/datasets/issues/465/events
https://github.com/huggingface/datasets/pull/465
669,889,779
MDExOlB1bGxSZXF1ZXN0NDYwMjEwODYw
465
Keep features after transform
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
3
"2020-07-31T14:43:21Z"
"2020-07-31T18:27:33Z"
"2020-07-31T18:27:32Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/465.diff", "html_url": "https://github.com/huggingface/datasets/pull/465", "merged_at": "2020-07-31T18:27:32Z", "patch_url": "https://github.com/huggingface/datasets/pull/465.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/465" }
When applying a transform like `map`, some features were lost (and inferred features were used). This was the case for ClassLabel, Translation, etc. To fix that, I made some modifications to the `ArrowWriter`: - added the `update_features` parameter. When it's `True`, the features specified by the user (if any) can be updated with inferred features if their types don't match. The `map` transform sets `update_features=True` when writing to a cache file or buffer. Features won't change by default in `map`. - added the `with_metadata` parameter. If `True`, the `features` (after update) will be written inside the metadata of the schema in this format: ``` { "huggingface": {"features" : <serialized Features exactly like dataset_info.json>} } ``` Then, once a dataset is instantiated without info/features, these metadata are used to set the features of the dataset.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/465/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/465/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/464
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/464/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/464/comments
https://api.github.com/repos/huggingface/datasets/issues/464/events
https://github.com/huggingface/datasets/pull/464
669,767,381
MDExOlB1bGxSZXF1ZXN0NDYwMTAxNDYz
464
Add rename, remove and cast in-place operations
{ "avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4", "events_url": "https://api.github.com/users/thomwolf/events{/privacy}", "followers_url": "https://api.github.com/users/thomwolf/followers", "following_url": "https://api.github.com/users/thomwolf/following{/other_user}", "gists_url": "https://api.github.com/users/thomwolf/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/thomwolf", "id": 7353373, "login": "thomwolf", "node_id": "MDQ6VXNlcjczNTMzNzM=", "organizations_url": "https://api.github.com/users/thomwolf/orgs", "received_events_url": "https://api.github.com/users/thomwolf/received_events", "repos_url": "https://api.github.com/users/thomwolf/repos", "site_admin": false, "starred_url": "https://api.github.com/users/thomwolf/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/thomwolf/subscriptions", "type": "User", "url": "https://api.github.com/users/thomwolf" }
[]
closed
false
null
[]
null
0
"2020-07-31T12:30:21Z"
"2020-07-31T15:50:02Z"
"2020-07-31T15:50:00Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/464.diff", "html_url": "https://github.com/huggingface/datasets/pull/464", "merged_at": "2020-07-31T15:50:00Z", "patch_url": "https://github.com/huggingface/datasets/pull/464.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/464" }
Add a bunch of in-place operations leveraging the Arrow back-end to rename and remove columns and cast to new features without using the more expensive `map` method. These methods are added to `Dataset` as well as `DatasetDict`. Added tests for these new methods and added the methods to the doc. Naming follows the new pattern with a trailing underscore indicating in-place methods.
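A short usage sketch, assuming the trailing-underscore method names implied by the description above; the dataset and column names are illustrative.
```python
from nlp import load_dataset

dataset = load_dataset("glue", "mrpc", split="train")
dataset.rename_column_("sentence1", "text_a")  # in-place rename, no copy via map
dataset.remove_columns_(["idx"])               # in-place column removal
```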
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/464/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/464/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/463
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/463/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/463/comments
https://api.github.com/repos/huggingface/datasets/issues/463/events
https://github.com/huggingface/datasets/pull/463
669,735,455
MDExOlB1bGxSZXF1ZXN0NDYwMDcyNjQ1
463
Add dataset/mlsum
{ "avatar_url": "https://avatars.githubusercontent.com/u/36986299?v=4", "events_url": "https://api.github.com/users/RachelKer/events{/privacy}", "followers_url": "https://api.github.com/users/RachelKer/followers", "following_url": "https://api.github.com/users/RachelKer/following{/other_user}", "gists_url": "https://api.github.com/users/RachelKer/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/RachelKer", "id": 36986299, "login": "RachelKer", "node_id": "MDQ6VXNlcjM2OTg2Mjk5", "organizations_url": "https://api.github.com/users/RachelKer/orgs", "received_events_url": "https://api.github.com/users/RachelKer/received_events", "repos_url": "https://api.github.com/users/RachelKer/repos", "site_admin": false, "starred_url": "https://api.github.com/users/RachelKer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/RachelKer/subscriptions", "type": "User", "url": "https://api.github.com/users/RachelKer" }
[]
closed
false
null
[]
null
3
"2020-07-31T11:50:52Z"
"2020-08-24T14:54:42Z"
"2020-08-24T14:54:42Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/463.diff", "html_url": "https://github.com/huggingface/datasets/pull/463", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/463.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/463" }
New pull request that should correct the previous errors. The load_real_data test still fails because it is looking for a default dataset URL that does not exist; this does not happen when loading the dataset with load_dataset.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/463/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/463/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/462
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/462/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/462/comments
https://api.github.com/repos/huggingface/datasets/issues/462/events
https://github.com/huggingface/datasets/pull/462
669,715,547
MDExOlB1bGxSZXF1ZXN0NDYwMDU0NDgz
462
add DoQA (ACL 2020) dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/38249783?v=4", "events_url": "https://api.github.com/users/mariamabarham/events{/privacy}", "followers_url": "https://api.github.com/users/mariamabarham/followers", "following_url": "https://api.github.com/users/mariamabarham/following{/other_user}", "gists_url": "https://api.github.com/users/mariamabarham/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariamabarham", "id": 38249783, "login": "mariamabarham", "node_id": "MDQ6VXNlcjM4MjQ5Nzgz", "organizations_url": "https://api.github.com/users/mariamabarham/orgs", "received_events_url": "https://api.github.com/users/mariamabarham/received_events", "repos_url": "https://api.github.com/users/mariamabarham/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariamabarham/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariamabarham/subscriptions", "type": "User", "url": "https://api.github.com/users/mariamabarham" }
[]
closed
false
null
[]
null
0
"2020-07-31T11:25:56Z"
"2023-09-24T09:48:42Z"
"2020-08-03T11:28:27Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/462.diff", "html_url": "https://github.com/huggingface/datasets/pull/462", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/462.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/462" }
adds DoQA (ACL 2020) dataset
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/462/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/462/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/461
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/461/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/461/comments
https://api.github.com/repos/huggingface/datasets/issues/461/events
https://github.com/huggingface/datasets/pull/461
669,703,508
MDExOlB1bGxSZXF1ZXN0NDYwMDQzNDY5
461
Doqa
{ "avatar_url": "https://avatars.githubusercontent.com/u/38249783?v=4", "events_url": "https://api.github.com/users/mariamabarham/events{/privacy}", "followers_url": "https://api.github.com/users/mariamabarham/followers", "following_url": "https://api.github.com/users/mariamabarham/following{/other_user}", "gists_url": "https://api.github.com/users/mariamabarham/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariamabarham", "id": 38249783, "login": "mariamabarham", "node_id": "MDQ6VXNlcjM4MjQ5Nzgz", "organizations_url": "https://api.github.com/users/mariamabarham/orgs", "received_events_url": "https://api.github.com/users/mariamabarham/received_events", "repos_url": "https://api.github.com/users/mariamabarham/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariamabarham/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariamabarham/subscriptions", "type": "User", "url": "https://api.github.com/users/mariamabarham" }
[]
closed
false
null
[]
null
0
"2020-07-31T11:11:12Z"
"2023-09-24T09:48:40Z"
"2020-07-31T11:13:15Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/461.diff", "html_url": "https://github.com/huggingface/datasets/pull/461", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/461.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/461" }
add DoQA (ACL 2020) dataset
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/461/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/461/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/460
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/460/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/460/comments
https://api.github.com/repos/huggingface/datasets/issues/460/events
https://github.com/huggingface/datasets/pull/460
669,585,256
MDExOlB1bGxSZXF1ZXN0NDU5OTM2OTU2
460
Fix KeyboardInterrupt in map and bad indices in select
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
2
"2020-07-31T08:57:15Z"
"2020-07-31T11:32:19Z"
"2020-07-31T11:32:18Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/460.diff", "html_url": "https://github.com/huggingface/datasets/pull/460", "merged_at": "2020-07-31T11:32:18Z", "patch_url": "https://github.com/huggingface/datasets/pull/460.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/460" }
If you interrupted a map function while it was writing, the cached file was not discarded. Therefore the next time you called map, it was loading an incomplete arrow file. We had the same issue with select if there was a bad index at one point. To fix that I used temporary files that are renamed once everything is finished.
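A generic sketch of this write-to-a-temporary-file-then-rename pattern (not the library's actual code):
```python
import os
import tempfile


def write_atomically(final_path: str, payload: bytes) -> None:
    """Write to a temp file in the target directory and only rename it on success."""
    directory = os.path.dirname(final_path) or "."
    fd, tmp_path = tempfile.mkstemp(dir=directory)
    try:
        with os.fdopen(fd, "wb") as f:
            f.write(payload)
        os.replace(tmp_path, final_path)  # atomic on the same filesystem
    except BaseException:
        os.remove(tmp_path)
        raise
```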
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/460/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/460/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/459
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/459/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/459/comments
https://api.github.com/repos/huggingface/datasets/issues/459/events
https://github.com/huggingface/datasets/pull/459
669,545,437
MDExOlB1bGxSZXF1ZXN0NDU5OTAxMjEy
459
[Breaking] Update Dataset and DatasetDict API
{ "avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4", "events_url": "https://api.github.com/users/thomwolf/events{/privacy}", "followers_url": "https://api.github.com/users/thomwolf/followers", "following_url": "https://api.github.com/users/thomwolf/following{/other_user}", "gists_url": "https://api.github.com/users/thomwolf/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/thomwolf", "id": 7353373, "login": "thomwolf", "node_id": "MDQ6VXNlcjczNTMzNzM=", "organizations_url": "https://api.github.com/users/thomwolf/orgs", "received_events_url": "https://api.github.com/users/thomwolf/received_events", "repos_url": "https://api.github.com/users/thomwolf/repos", "site_admin": false, "starred_url": "https://api.github.com/users/thomwolf/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/thomwolf/subscriptions", "type": "User", "url": "https://api.github.com/users/thomwolf" }
[]
closed
false
null
[]
null
0
"2020-07-31T08:11:33Z"
"2020-08-26T08:28:36Z"
"2020-08-26T08:28:35Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/459.diff", "html_url": "https://github.com/huggingface/datasets/pull/459", "merged_at": "2020-08-26T08:28:35Z", "patch_url": "https://github.com/huggingface/datasets/pull/459.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/459" }
This PR contains a few breaking changes so it's probably good to keep it for the next (major) release: - rename the `flatten`, `drop` and `dictionary_encode_column` methods to `flatten_`, `drop_` and `dictionary_encode_column_` to indicate that these methods have in-place effects, as discussed in #166. From now on we should keep the convention of having a trailing underscore for methods which have an in-place effect. I also adopt the convention of not returning the (self) dataset for these methods. This is different from what PyTorch does for instance (`model.to()` is in-place but returns the self model), but I feel like it's a safer approach in terms of UX. - remove the `dataset.columns` property which returns a low-level Apache Arrow object and should not be used by users. Similarly, remove `dataset.nbytes` which we don't really want to expose in this bare-bone format. - add a few more properties and methods to `DatasetDict`
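A sketch of the trailing-underscore convention on a toy class (hypothetical, not the library code): in-place methods mutate the object and return `None`, unlike PyTorch's chainable `model.to()`:

```python
class Table:
    """Toy container illustrating the naming convention, not the library class."""

    def __init__(self, columns):
        self.columns = dict(columns)

    def drop_(self, column_name):
        # Trailing underscore: in-place, mutates self and returns None
        del self.columns[column_name]

    def drop(self, column_name):
        # No underscore: returns a new object, self is untouched
        return Table({k: v for k, v in self.columns.items() if k != column_name})

t = Table({"text": ["a", "b"], "label": [0, 1]})
t.drop_("label")       # t is modified, nothing is returned
t2 = t.drop("text")    # t unchanged, t2 is a new Table
```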
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/459/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/459/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/458
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/458/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/458/comments
https://api.github.com/repos/huggingface/datasets/issues/458/events
https://github.com/huggingface/datasets/pull/458
668,972,666
MDExOlB1bGxSZXF1ZXN0NDU5Mzk5ODg2
458
Install CoVal metric from github
{ "avatar_url": "https://avatars.githubusercontent.com/u/10469459?v=4", "events_url": "https://api.github.com/users/yjernite/events{/privacy}", "followers_url": "https://api.github.com/users/yjernite/followers", "following_url": "https://api.github.com/users/yjernite/following{/other_user}", "gists_url": "https://api.github.com/users/yjernite/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/yjernite", "id": 10469459, "login": "yjernite", "node_id": "MDQ6VXNlcjEwNDY5NDU5", "organizations_url": "https://api.github.com/users/yjernite/orgs", "received_events_url": "https://api.github.com/users/yjernite/received_events", "repos_url": "https://api.github.com/users/yjernite/repos", "site_admin": false, "starred_url": "https://api.github.com/users/yjernite/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yjernite/subscriptions", "type": "User", "url": "https://api.github.com/users/yjernite" }
[]
closed
false
null
[]
null
0
"2020-07-30T16:59:25Z"
"2020-07-31T13:56:33Z"
"2020-07-31T13:56:33Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/458.diff", "html_url": "https://github.com/huggingface/datasets/pull/458", "merged_at": "2020-07-31T13:56:33Z", "patch_url": "https://github.com/huggingface/datasets/pull/458.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/458" }
Changed the import statements in `coval.py` to direct the user to install the original package from github if it's not already installed (the warning will only display properly after merging [PR455](https://github.com/huggingface/nlp/pull/455)). Also changed the function call to use named rather than positional arguments.
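A minimal sketch of the import-guard pattern described above (module path and repository URL shown as examples, not necessarily the exact ones used in `coval.py`):

```python
try:
    from coval.conll import reader  # module path shown as an example
except ImportError as err:
    raise ImportError(
        "To use the CoVal metric you need the `coval` package, which is not on PyPI. "
        "Install it from the original repository, e.g. "
        "`pip install git+https://github.com/ns-moosavi/coval.git`"
    ) from err
```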
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/458/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/458/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/457
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/457/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/457/comments
https://api.github.com/repos/huggingface/datasets/issues/457/events
https://github.com/huggingface/datasets/pull/457
668,898,386
MDExOlB1bGxSZXF1ZXN0NDU5MzMyOTM1
457
add set_format to DatasetDict + tests
{ "avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4", "events_url": "https://api.github.com/users/thomwolf/events{/privacy}", "followers_url": "https://api.github.com/users/thomwolf/followers", "following_url": "https://api.github.com/users/thomwolf/following{/other_user}", "gists_url": "https://api.github.com/users/thomwolf/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/thomwolf", "id": 7353373, "login": "thomwolf", "node_id": "MDQ6VXNlcjczNTMzNzM=", "organizations_url": "https://api.github.com/users/thomwolf/orgs", "received_events_url": "https://api.github.com/users/thomwolf/received_events", "repos_url": "https://api.github.com/users/thomwolf/repos", "site_admin": false, "starred_url": "https://api.github.com/users/thomwolf/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/thomwolf/subscriptions", "type": "User", "url": "https://api.github.com/users/thomwolf" }
[]
closed
false
null
[]
null
0
"2020-07-30T15:53:20Z"
"2020-07-30T17:34:36Z"
"2020-07-30T17:34:34Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/457.diff", "html_url": "https://github.com/huggingface/datasets/pull/457", "merged_at": "2020-07-30T17:34:34Z", "patch_url": "https://github.com/huggingface/datasets/pull/457.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/457" }
Add `set_format`, `formated_as` and `reset_format` to `DatasetDict`. Add tests for these on `Dataset` and `DatasetDict`. Fix some bugs uncovered by the tests for `pandas` formatting.
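A usage sketch, assuming csv files with a "label" column (file names and column are assumptions); the format set on the `DatasetDict` applies to every split at once:

```python
import nlp

dsets = nlp.load_dataset("csv", data_files={"train": "train.csv", "test": "test.csv"})

# Applied once, the format holds for every split in the DatasetDict
dsets.set_format(type="numpy", columns=["label"])
print(type(dsets["train"][0]["label"]))

dsets.reset_format()  # back to plain python objects for all splits
```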
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/457/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/457/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/456
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/456/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/456/comments
https://api.github.com/repos/huggingface/datasets/issues/456/events
https://github.com/huggingface/datasets/pull/456
668,723,785
MDExOlB1bGxSZXF1ZXN0NDU5MTc1MTY0
456
add crd3(ACL 2020) dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/38249783?v=4", "events_url": "https://api.github.com/users/mariamabarham/events{/privacy}", "followers_url": "https://api.github.com/users/mariamabarham/followers", "following_url": "https://api.github.com/users/mariamabarham/following{/other_user}", "gists_url": "https://api.github.com/users/mariamabarham/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariamabarham", "id": 38249783, "login": "mariamabarham", "node_id": "MDQ6VXNlcjM4MjQ5Nzgz", "organizations_url": "https://api.github.com/users/mariamabarham/orgs", "received_events_url": "https://api.github.com/users/mariamabarham/received_events", "repos_url": "https://api.github.com/users/mariamabarham/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariamabarham/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariamabarham/subscriptions", "type": "User", "url": "https://api.github.com/users/mariamabarham" }
[]
closed
false
null
[]
null
0
"2020-07-30T13:28:35Z"
"2023-09-24T09:48:47Z"
"2020-08-03T11:28:52Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/456.diff", "html_url": "https://github.com/huggingface/datasets/pull/456", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/456.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/456" }
This PR adds the **Critical Role Dungeons and Dragons Dataset** published at ACL 2020
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/456/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/456/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/455
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/455/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/455/comments
https://api.github.com/repos/huggingface/datasets/issues/455/events
https://github.com/huggingface/datasets/pull/455
668,037,965
MDExOlB1bGxSZXF1ZXN0NDU4NTk4NTUw
455
Add bleurt
{ "avatar_url": "https://avatars.githubusercontent.com/u/10469459?v=4", "events_url": "https://api.github.com/users/yjernite/events{/privacy}", "followers_url": "https://api.github.com/users/yjernite/followers", "following_url": "https://api.github.com/users/yjernite/following{/other_user}", "gists_url": "https://api.github.com/users/yjernite/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/yjernite", "id": 10469459, "login": "yjernite", "node_id": "MDQ6VXNlcjEwNDY5NDU5", "organizations_url": "https://api.github.com/users/yjernite/orgs", "received_events_url": "https://api.github.com/users/yjernite/received_events", "repos_url": "https://api.github.com/users/yjernite/repos", "site_admin": false, "starred_url": "https://api.github.com/users/yjernite/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yjernite/subscriptions", "type": "User", "url": "https://api.github.com/users/yjernite" }
[]
closed
false
null
[]
null
4
"2020-07-29T18:08:32Z"
"2020-07-31T13:56:14Z"
"2020-07-31T13:56:14Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/455.diff", "html_url": "https://github.com/huggingface/datasets/pull/455", "merged_at": "2020-07-31T13:56:14Z", "patch_url": "https://github.com/huggingface/datasets/pull/455.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/455" }
This PR adds the BLEURT metric to the library. The BLEURT `Metric` downloads a TF checkpoint corresponding to its `config_name` at creation (in the `_info` function). Default is set to `bleurt-base-128`. Note that the default in the original package is `bleurt-tiny-128`, but they throw a warning and recommend using `bleurt-base-128` instead. I think it's safer to have our users have a functioning metric when they call the default behavior, we'll address discrepancies in the issues/discussions if it comes up. In addition to the BLEURT file, `load.py` was changed so we can ask users to pip install the required packages from git when they have a `setup.py` but are not on PyPI cc @ankparikh @tsellam
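A usage sketch under the defaults described above, assuming the `bleurt` package has already been installed from its GitHub repository as the PR requires (example sentences are made up):

```python
import nlp

# Creating the metric downloads the default bleurt-base-128 TF checkpoint
bleurt = nlp.load_metric("bleurt", "bleurt-base-128")

predictions = ["the cat sat on the mat"]
references = ["a cat was sitting on the mat"]
results = bleurt.compute(predictions=predictions, references=references)
print(results)  # per-example BLEURT scores
```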
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/455/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/455/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/454
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/454/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/454/comments
https://api.github.com/repos/huggingface/datasets/issues/454/events
https://github.com/huggingface/datasets/pull/454
668,011,577
MDExOlB1bGxSZXF1ZXN0NDU4NTc3MzA3
454
Create SECURITY.md
{ "avatar_url": "https://avatars.githubusercontent.com/u/56394989?v=4", "events_url": "https://api.github.com/users/ChenZehong13/events{/privacy}", "followers_url": "https://api.github.com/users/ChenZehong13/followers", "following_url": "https://api.github.com/users/ChenZehong13/following{/other_user}", "gists_url": "https://api.github.com/users/ChenZehong13/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/ChenZehong13", "id": 56394989, "login": "ChenZehong13", "node_id": "MDQ6VXNlcjU2Mzk0OTg5", "organizations_url": "https://api.github.com/users/ChenZehong13/orgs", "received_events_url": "https://api.github.com/users/ChenZehong13/received_events", "repos_url": "https://api.github.com/users/ChenZehong13/repos", "site_admin": false, "starred_url": "https://api.github.com/users/ChenZehong13/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ChenZehong13/subscriptions", "type": "User", "url": "https://api.github.com/users/ChenZehong13" }
[]
closed
false
null
[]
null
0
"2020-07-29T17:23:34Z"
"2020-07-29T21:45:52Z"
"2020-07-29T21:45:52Z"
NONE
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/454.diff", "html_url": "https://github.com/huggingface/datasets/pull/454", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/454.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/454" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/454/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/454/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/453
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/453/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/453/comments
https://api.github.com/repos/huggingface/datasets/issues/453/events
https://github.com/huggingface/datasets/pull/453
667,728,247
MDExOlB1bGxSZXF1ZXN0NDU4MzQwNzky
453
add builder tests
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
0
"2020-07-29T10:22:07Z"
"2020-07-29T11:14:06Z"
"2020-07-29T11:14:05Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/453.diff", "html_url": "https://github.com/huggingface/datasets/pull/453", "merged_at": "2020-07-29T11:14:05Z", "patch_url": "https://github.com/huggingface/datasets/pull/453.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/453" }
I added `as_dataset` and `download_and_prepare` to the tests
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/453/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/453/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/452
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/452/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/452/comments
https://api.github.com/repos/huggingface/datasets/issues/452/events
https://github.com/huggingface/datasets/pull/452
667,498,295
MDExOlB1bGxSZXF1ZXN0NDU4MTUzNjQy
452
Guardian authorship dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/25109412?v=4", "events_url": "https://api.github.com/users/malikaltakrori/events{/privacy}", "followers_url": "https://api.github.com/users/malikaltakrori/followers", "following_url": "https://api.github.com/users/malikaltakrori/following{/other_user}", "gists_url": "https://api.github.com/users/malikaltakrori/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/malikaltakrori", "id": 25109412, "login": "malikaltakrori", "node_id": "MDQ6VXNlcjI1MTA5NDEy", "organizations_url": "https://api.github.com/users/malikaltakrori/orgs", "received_events_url": "https://api.github.com/users/malikaltakrori/received_events", "repos_url": "https://api.github.com/users/malikaltakrori/repos", "site_admin": false, "starred_url": "https://api.github.com/users/malikaltakrori/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/malikaltakrori/subscriptions", "type": "User", "url": "https://api.github.com/users/malikaltakrori" }
[]
closed
false
null
[]
null
6
"2020-07-29T02:23:57Z"
"2020-08-20T15:09:57Z"
"2020-08-20T15:07:56Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/452.diff", "html_url": "https://github.com/huggingface/datasets/pull/452", "merged_at": "2020-08-20T15:07:55Z", "patch_url": "https://github.com/huggingface/datasets/pull/452.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/452" }
A new dataset: Guardian news articles for authorship attribution **Tests passed:** python nlp-cli dummy_data datasets/guardian_authorship --save_infos --all_configs RUN_SLOW=1 pytest tests/test_dataset_common.py::LocalDatasetTest::test_load_dataset_all_configs_guardian_authorship **Tests failed:** Real data: RUN_SLOW=1 pytest tests/test_dataset_common.py::LocalDatasetTest::test_load_real_dataset_guardian_authorship output: __init__() missing 3 required positional arguments: 'train_folder', 'valid_folder', and 'tes...' Remarks: This is the init function of my class. I am not sure why it passes in both my tests and with nlp-cli, but fails here. By the way, I ran this command with two other datasets and they failed: * _glue - OSError: Cannot find data file. *_newsgroup - FileNotFoundError: Local file datasets/newsgroup/dummy/18828_comp.graphics/3.0.0/dummy_data.zip doesn't exist Thank you for letting us contribute to such a huge and important library! EDIT: I was able to fix the dummy_data issue. This dataset has around 14 configurations. I was testing with only 2, but their versions were not in a sequence; they were V1.0.0 and V.12.0.0. It seems that the testing code generates tests for all the versions from 0 to MAX, and was testing for versions (and dummy_data.zip files) that do not exist. I fixed that by changing the versions to 1 and 2.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/452/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/452/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/451
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/451/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/451/comments
https://api.github.com/repos/huggingface/datasets/issues/451/events
https://github.com/huggingface/datasets/pull/451
667,210,468
MDExOlB1bGxSZXF1ZXN0NDU3OTIxNDMx
451
Fix csv/json/txt cache dir
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
4
"2020-07-28T16:30:51Z"
"2020-07-29T13:57:23Z"
"2020-07-29T13:57:22Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/451.diff", "html_url": "https://github.com/huggingface/datasets/pull/451", "merged_at": "2020-07-29T13:57:22Z", "patch_url": "https://github.com/huggingface/datasets/pull/451.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/451" }
The cache dir for csv/json/txt datasets was always the same. This is an issue because it should be different depending on the data files provided by the user. To fix that, I added a line that uses the hash of the data files provided by the user to define the cache dir. This should fix #444
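A minimal sketch of the idea, with hypothetical helper and directory names (the real change lives in the csv/json/text builder configuration):

```python
import hashlib
import os

def cache_dir_for(data_files, base="~/.cache/my_datasets/csv"):
    """Derive a per-dataset cache directory from the user-provided files,
    so different data files can never reuse the same cached arrow file."""
    m = hashlib.sha256()
    for path in sorted(data_files):
        m.update(os.path.abspath(path).encode("utf-8"))
    return os.path.join(os.path.expanduser(base), m.hexdigest()[:16])

print(cache_dir_for(["./a.csv"]))
print(cache_dir_for(["./b.csv"]))  # different hash, different directory
```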
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/451/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/451/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/450
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/450/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/450/comments
https://api.github.com/repos/huggingface/datasets/issues/450/events
https://github.com/huggingface/datasets/pull/450
667,074,120
MDExOlB1bGxSZXF1ZXN0NDU3ODA5ODA2
450
add sogou_news
{ "avatar_url": "https://avatars.githubusercontent.com/u/38249783?v=4", "events_url": "https://api.github.com/users/mariamabarham/events{/privacy}", "followers_url": "https://api.github.com/users/mariamabarham/followers", "following_url": "https://api.github.com/users/mariamabarham/following{/other_user}", "gists_url": "https://api.github.com/users/mariamabarham/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariamabarham", "id": 38249783, "login": "mariamabarham", "node_id": "MDQ6VXNlcjM4MjQ5Nzgz", "organizations_url": "https://api.github.com/users/mariamabarham/orgs", "received_events_url": "https://api.github.com/users/mariamabarham/received_events", "repos_url": "https://api.github.com/users/mariamabarham/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariamabarham/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariamabarham/subscriptions", "type": "User", "url": "https://api.github.com/users/mariamabarham" }
[]
closed
false
null
[]
null
0
"2020-07-28T13:29:10Z"
"2020-07-29T13:30:18Z"
"2020-07-29T13:30:17Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/450.diff", "html_url": "https://github.com/huggingface/datasets/pull/450", "merged_at": "2020-07-29T13:30:17Z", "patch_url": "https://github.com/huggingface/datasets/pull/450.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/450" }
This PR adds the sogou news dataset #353
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/450/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/450/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/449
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/449/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/449/comments
https://api.github.com/repos/huggingface/datasets/issues/449/events
https://github.com/huggingface/datasets/pull/449
666,898,923
MDExOlB1bGxSZXF1ZXN0NDU3NjY0NjYx
449
add reuters21578 dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/38249783?v=4", "events_url": "https://api.github.com/users/mariamabarham/events{/privacy}", "followers_url": "https://api.github.com/users/mariamabarham/followers", "following_url": "https://api.github.com/users/mariamabarham/following{/other_user}", "gists_url": "https://api.github.com/users/mariamabarham/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariamabarham", "id": 38249783, "login": "mariamabarham", "node_id": "MDQ6VXNlcjM4MjQ5Nzgz", "organizations_url": "https://api.github.com/users/mariamabarham/orgs", "received_events_url": "https://api.github.com/users/mariamabarham/received_events", "repos_url": "https://api.github.com/users/mariamabarham/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariamabarham/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariamabarham/subscriptions", "type": "User", "url": "https://api.github.com/users/mariamabarham" }
[]
closed
false
null
[]
null
3
"2020-07-28T08:58:12Z"
"2023-09-24T09:49:28Z"
"2020-08-03T11:10:31Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/449.diff", "html_url": "https://github.com/huggingface/datasets/pull/449", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/449.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/449" }
This PR adds the `Reuters_21578` dataset https://kdd.ics.uci.edu/databases/reuters21578/reuters21578.html #353 The dataset is a list of `.sgm` files, which are a bit different from XML files, so `xml.etree` couldn't be used to read them. I treat them as text files (to avoid using an external library) and read them line by line (maybe there is a better way to do it, happy to get your opinion on it). In the Readme file 3 ways to split the dataset are given: - The Modified Lewis ("ModLewis") Split: train, test and unused-set - The Modified Apte ("ModApte") Split : train, test and unused-set - The Modified Hayes ("ModHayes") Split: train and test Here I only consider the last one, as the readme file highlights that this split provides the ability to compare results with those of the 2 first splits.
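A rough sketch of the line-by-line reading approach (hypothetical helper, not the script's actual code; the `<REUTERS>` tags follow the Reuters-21578 file format):

```python
def iter_documents(sgm_path):
    """Yield raw <REUTERS>...</REUTERS> blocks from an .sgm file,
    reading line by line without an SGML/XML parser."""
    doc_lines, inside = [], False
    with open(sgm_path, encoding="utf-8", errors="ignore") as f:
        for line in f:
            if line.startswith("<REUTERS"):
                inside, doc_lines = True, [line]
            elif line.startswith("</REUTERS>"):
                doc_lines.append(line)
                inside = False
                yield "".join(doc_lines)
            elif inside:
                doc_lines.append(line)
```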
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/449/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/449/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/448
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/448/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/448/comments
https://api.github.com/repos/huggingface/datasets/issues/448/events
https://github.com/huggingface/datasets/pull/448
666,893,443
MDExOlB1bGxSZXF1ZXN0NDU3NjYwMDU2
448
add aws load metric test
{ "avatar_url": "https://avatars.githubusercontent.com/u/5303103?v=4", "events_url": "https://api.github.com/users/idoh/events{/privacy}", "followers_url": "https://api.github.com/users/idoh/followers", "following_url": "https://api.github.com/users/idoh/following{/other_user}", "gists_url": "https://api.github.com/users/idoh/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/idoh", "id": 5303103, "login": "idoh", "node_id": "MDQ6VXNlcjUzMDMxMDM=", "organizations_url": "https://api.github.com/users/idoh/orgs", "received_events_url": "https://api.github.com/users/idoh/received_events", "repos_url": "https://api.github.com/users/idoh/repos", "site_admin": false, "starred_url": "https://api.github.com/users/idoh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/idoh/subscriptions", "type": "User", "url": "https://api.github.com/users/idoh" }
[]
closed
false
null
[]
null
3
"2020-07-28T08:50:22Z"
"2020-07-28T15:02:27Z"
"2020-07-28T15:02:27Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/448.diff", "html_url": "https://github.com/huggingface/datasets/pull/448", "merged_at": "2020-07-28T15:02:26Z", "patch_url": "https://github.com/huggingface/datasets/pull/448.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/448" }
Following issue #445, I added a test that catches import errors for all metrics
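A sketch of what such a test can look like (the metric list here is an example subset; the real test enumerates the metrics through the library's own loading machinery):

```python
import nlp
import pytest

METRIC_NAMES = ["bleu", "rouge", "sacrebleu"]  # example subset, not the full list

@pytest.mark.parametrize("metric_name", METRIC_NAMES)
def test_metric_loads(metric_name):
    # load_metric executes the metric script, so a broken import path fails here
    metric = nlp.load_metric(metric_name)
    assert metric is not None
```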
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/448/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/448/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/447
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/447/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/447/comments
https://api.github.com/repos/huggingface/datasets/issues/447/events
https://github.com/huggingface/datasets/pull/447
666,842,115
MDExOlB1bGxSZXF1ZXN0NDU3NjE2NDA0
447
[BugFix] fix wrong import of DEFAULT_TOKENIZER
{ "avatar_url": "https://avatars.githubusercontent.com/u/5303103?v=4", "events_url": "https://api.github.com/users/idoh/events{/privacy}", "followers_url": "https://api.github.com/users/idoh/followers", "following_url": "https://api.github.com/users/idoh/following{/other_user}", "gists_url": "https://api.github.com/users/idoh/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/idoh", "id": 5303103, "login": "idoh", "node_id": "MDQ6VXNlcjUzMDMxMDM=", "organizations_url": "https://api.github.com/users/idoh/orgs", "received_events_url": "https://api.github.com/users/idoh/received_events", "repos_url": "https://api.github.com/users/idoh/repos", "site_admin": false, "starred_url": "https://api.github.com/users/idoh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/idoh/subscriptions", "type": "User", "url": "https://api.github.com/users/idoh" }
[]
closed
false
null
[]
null
0
"2020-07-28T07:41:10Z"
"2020-07-28T12:58:01Z"
"2020-07-28T12:52:05Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/447.diff", "html_url": "https://github.com/huggingface/datasets/pull/447", "merged_at": "2020-07-28T12:52:05Z", "patch_url": "https://github.com/huggingface/datasets/pull/447.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/447" }
Fixed the path to `DEFAULT_TOKENIZER` #445
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/447/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/447/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/446
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/446/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/446/comments
https://api.github.com/repos/huggingface/datasets/issues/446/events
https://github.com/huggingface/datasets/pull/446
666,837,351
MDExOlB1bGxSZXF1ZXN0NDU3NjEyNTg5
446
[BugFix] fix wrong import of DEFAULT_TOKENIZER
{ "avatar_url": "https://avatars.githubusercontent.com/u/5303103?v=4", "events_url": "https://api.github.com/users/idoh/events{/privacy}", "followers_url": "https://api.github.com/users/idoh/followers", "following_url": "https://api.github.com/users/idoh/following{/other_user}", "gists_url": "https://api.github.com/users/idoh/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/idoh", "id": 5303103, "login": "idoh", "node_id": "MDQ6VXNlcjUzMDMxMDM=", "organizations_url": "https://api.github.com/users/idoh/orgs", "received_events_url": "https://api.github.com/users/idoh/received_events", "repos_url": "https://api.github.com/users/idoh/repos", "site_admin": false, "starred_url": "https://api.github.com/users/idoh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/idoh/subscriptions", "type": "User", "url": "https://api.github.com/users/idoh" }
[]
closed
false
null
[]
null
0
"2020-07-28T07:32:47Z"
"2020-07-28T07:34:46Z"
"2020-07-28T07:33:59Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/446.diff", "html_url": "https://github.com/huggingface/datasets/pull/446", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/446.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/446" }
Fixed the path to `DEFAULT_TOKENIZER` #445
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/446/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/446/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/445
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/445/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/445/comments
https://api.github.com/repos/huggingface/datasets/issues/445/events
https://github.com/huggingface/datasets/issues/445
666,836,658
MDU6SXNzdWU2NjY4MzY2NTg=
445
DEFAULT_TOKENIZER import error in sacrebleu
{ "avatar_url": "https://avatars.githubusercontent.com/u/5303103?v=4", "events_url": "https://api.github.com/users/idoh/events{/privacy}", "followers_url": "https://api.github.com/users/idoh/followers", "following_url": "https://api.github.com/users/idoh/following{/other_user}", "gists_url": "https://api.github.com/users/idoh/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/idoh", "id": 5303103, "login": "idoh", "node_id": "MDQ6VXNlcjUzMDMxMDM=", "organizations_url": "https://api.github.com/users/idoh/orgs", "received_events_url": "https://api.github.com/users/idoh/received_events", "repos_url": "https://api.github.com/users/idoh/repos", "site_admin": false, "starred_url": "https://api.github.com/users/idoh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/idoh/subscriptions", "type": "User", "url": "https://api.github.com/users/idoh" }
[]
closed
false
null
[]
null
1
"2020-07-28T07:31:30Z"
"2020-07-28T12:58:56Z"
"2020-07-28T12:58:56Z"
CONTRIBUTOR
null
null
null
Latest version 0.3.0: when loading the metric "sacrebleu" there is an import error due to a wrong import path ![image](https://user-images.githubusercontent.com/5303103/88633063-2c5e5f00-d0bd-11ea-8ca8-4704dc975433.png)
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/445/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/445/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/444
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/444/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/444/comments
https://api.github.com/repos/huggingface/datasets/issues/444/events
https://github.com/huggingface/datasets/issues/444
666,280,842
MDU6SXNzdWU2NjYyODA4NDI=
444
Keeps loading old file even when I specify a new file in load_dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/10594453?v=4", "events_url": "https://api.github.com/users/joshhu/events{/privacy}", "followers_url": "https://api.github.com/users/joshhu/followers", "following_url": "https://api.github.com/users/joshhu/following{/other_user}", "gists_url": "https://api.github.com/users/joshhu/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/joshhu", "id": 10594453, "login": "joshhu", "node_id": "MDQ6VXNlcjEwNTk0NDUz", "organizations_url": "https://api.github.com/users/joshhu/orgs", "received_events_url": "https://api.github.com/users/joshhu/received_events", "repos_url": "https://api.github.com/users/joshhu/repos", "site_admin": false, "starred_url": "https://api.github.com/users/joshhu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/joshhu/subscriptions", "type": "User", "url": "https://api.github.com/users/joshhu" }
[ { "color": "2edb81", "default": false, "description": "A bug in a dataset script provided in the library", "id": 2067388877, "name": "dataset bug", "node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" } ]
null
2
"2020-07-27T13:08:06Z"
"2020-07-29T13:57:22Z"
"2020-07-29T13:57:22Z"
NONE
null
null
null
I loaded a file called 'a.csv' with ``` dataset = load_dataset('csv', data_file='./a.csv') ``` and after a while, I tried to load another csv called 'b.csv' ``` dataset = load_dataset('csv', data_file='./b.csv') ``` However, the new dataset still contains the old 'a.csv' data instead of the new csv file. Even worse, after I load a.csv, the load_dataset function keeps loading 'a.csv' afterwards. Is this a cache problem?
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/444/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/444/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/443
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/443/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/443/comments
https://api.github.com/repos/huggingface/datasets/issues/443/events
https://github.com/huggingface/datasets/issues/443
666,246,716
MDU6SXNzdWU2NjYyNDY3MTY=
443
Cannot unpickle saved .pt dataset with torch.save()/load()
{ "avatar_url": "https://avatars.githubusercontent.com/u/24683907?v=4", "events_url": "https://api.github.com/users/vegarab/events{/privacy}", "followers_url": "https://api.github.com/users/vegarab/followers", "following_url": "https://api.github.com/users/vegarab/following{/other_user}", "gists_url": "https://api.github.com/users/vegarab/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/vegarab", "id": 24683907, "login": "vegarab", "node_id": "MDQ6VXNlcjI0NjgzOTA3", "organizations_url": "https://api.github.com/users/vegarab/orgs", "received_events_url": "https://api.github.com/users/vegarab/received_events", "repos_url": "https://api.github.com/users/vegarab/repos", "site_admin": false, "starred_url": "https://api.github.com/users/vegarab/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/vegarab/subscriptions", "type": "User", "url": "https://api.github.com/users/vegarab" }
[]
closed
false
null
[]
null
1
"2020-07-27T12:13:37Z"
"2020-07-27T13:05:11Z"
"2020-07-27T13:05:11Z"
CONTRIBUTOR
null
null
null
Saving a formatted torch dataset to file using `torch.save()`. Loading the same file fails during unpickling: ```python >>> import torch >>> import nlp >>> squad = nlp.load_dataset("squad.py", split="train") >>> squad Dataset(features: {'source_text': Value(dtype='string', id=None), 'target_text': Value(dtype='string', id=None)}, num_rows: 87599) >>> squad = squad.map(create_features, batched=True) >>> squad.set_format(type="torch", columns=["source_ids", "target_ids", "attention_mask"]) >>> torch.save(squad, "squad.pt") >>> squad_pt = torch.load("squad.pt") Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/vegarab/.conda/envs/torch/lib/python3.7/site-packages/torch/serialization.py", line 593, in load return _legacy_load(opened_file, map_location, pickle_module, **pickle_load_args) File "/home/vegarab/.conda/envs/torch/lib/python3.7/site-packages/torch/serialization.py", line 773, in _legacy_load result = unpickler.load() File "/home/vegarab/.conda/envs/torch/lib/python3.7/site-packages/nlp/splits.py", line 493, in __setitem__ raise ValueError("Cannot add elem. Use .add() instead.") ValueError: Cannot add elem. Use .add() instead. ``` where `create_features` is a function that tokenizes the data using `batch_encode_plus` and returns a Dict with `input_ids`, `target_ids` and `attention_mask`. ```python def create_features(batch): source_text_encoding = tokenizer.batch_encode_plus( batch["source_text"], max_length=max_source_length, pad_to_max_length=True, truncation=True) target_text_encoding = tokenizer.batch_encode_plus( batch["target_text"], max_length=max_target_length, pad_to_max_length=True, truncation=True) features = { "source_ids": source_text_encoding["input_ids"], "target_ids": target_text_encoding["input_ids"], "attention_mask": source_text_encoding["attention_mask"] } return features ``` I found a similar issue in [issue 5267 in the huggingface/transformers repo](https://github.com/huggingface/transformers/issues/5267) which was solved by downgrading to `nlp==0.2.0`. That did not solve this problem, however.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/443/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/443/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/442
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/442/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/442/comments
https://api.github.com/repos/huggingface/datasets/issues/442/events
https://github.com/huggingface/datasets/issues/442
666,201,810
MDU6SXNzdWU2NjYyMDE4MTA=
442
[Suggestion] Glue Diagnostic Data with Labels
{ "avatar_url": "https://avatars.githubusercontent.com/u/3662782?v=4", "events_url": "https://api.github.com/users/ggbetz/events{/privacy}", "followers_url": "https://api.github.com/users/ggbetz/followers", "following_url": "https://api.github.com/users/ggbetz/following{/other_user}", "gists_url": "https://api.github.com/users/ggbetz/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/ggbetz", "id": 3662782, "login": "ggbetz", "node_id": "MDQ6VXNlcjM2NjI3ODI=", "organizations_url": "https://api.github.com/users/ggbetz/orgs", "received_events_url": "https://api.github.com/users/ggbetz/received_events", "repos_url": "https://api.github.com/users/ggbetz/repos", "site_admin": false, "starred_url": "https://api.github.com/users/ggbetz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ggbetz/subscriptions", "type": "User", "url": "https://api.github.com/users/ggbetz" }
[ { "color": "72f99f", "default": false, "description": "Discussions on the datasets", "id": 2067401494, "name": "Dataset discussion", "node_id": "MDU6TGFiZWwyMDY3NDAxNDk0", "url": "https://api.github.com/repos/huggingface/datasets/labels/Dataset%20discussion" } ]
open
false
null
[]
null
0
"2020-07-27T10:59:58Z"
"2020-08-24T15:13:20Z"
null
NONE
null
null
null
Hello! First of all, thanks for setting up this useful project! I've just realised you provide the [Glue Diagnostics Data](https://huggingface.co/nlp/viewer/?dataset=glue&config=ax) without labels, indicating in the `GlueConfig` that you've only a test set. Yet, the data with labels is available, too (see also [here](https://gluebenchmark.com/diagnostics#introduction)): https://www.dropbox.com/s/ju7d95ifb072q9f/diagnostic-full.tsv?dl=1 Have you considered incorporating it?
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/442/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/442/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/441
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/441/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/441/comments
https://api.github.com/repos/huggingface/datasets/issues/441/events
https://github.com/huggingface/datasets/pull/441
666,148,413
MDExOlB1bGxSZXF1ZXN0NDU3MDQyMjY3
441
Add features parameter in load dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
2
"2020-07-27T09:50:01Z"
"2020-07-30T12:51:17Z"
"2020-07-30T12:51:16Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/441.diff", "html_url": "https://github.com/huggingface/datasets/pull/441", "merged_at": "2020-07-30T12:51:16Z", "patch_url": "https://github.com/huggingface/datasets/pull/441.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/441" }
Added a `features` argument to `nlp.load_dataset`. If the specified features don't match the data type, it raises a `ValueError`. It's a draft PR because #440 needs to be merged first.
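A usage sketch of the new argument; the csv file name, column names and label values are assumptions:

```python
import nlp

features = nlp.Features(
    {
        "text": nlp.Value("string"),
        "label": nlp.ClassLabel(names=["negative", "positive"]),
    }
)

# If the csv columns cannot be read with these features, a ValueError is raised
dataset = nlp.load_dataset("csv", data_files="reviews.csv", features=features, split="train")
print(dataset.features)
```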
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/441/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/441/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/440
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/440/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/440/comments
https://api.github.com/repos/huggingface/datasets/issues/440/events
https://github.com/huggingface/datasets/pull/440
666,116,823
MDExOlB1bGxSZXF1ZXN0NDU3MDE2MjQy
440
Fix user specified features in map
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
0
"2020-07-27T09:04:26Z"
"2020-07-28T09:25:23Z"
"2020-07-28T09:25:22Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/440.diff", "html_url": "https://github.com/huggingface/datasets/pull/440", "merged_at": "2020-07-28T09:25:22Z", "patch_url": "https://github.com/huggingface/datasets/pull/440.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/440" }
`.map` didn't keep the user-specified features because of an issue in the writer. The writer used to overwrite the user-specified features with inferred features. I also added tests to make sure it doesn't happen again.
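A sketch of the behavior being tested, with assumed column names and feature spec (and assuming the `from_dict` constructor and the `features` argument of `map` are available):

```python
import nlp

dataset = nlp.Dataset.from_dict({"text": ["good movie", "bad movie"]})

new_features = nlp.Features(
    {
        "text": nlp.Value("string"),
        "label": nlp.ClassLabel(names=["neg", "pos"]),
    }
)

# The user-specified features should survive .map instead of being
# replaced by features inferred from the function's output.
labeled = dataset.map(
    lambda example: {"label": 1 if "good" in example["text"] else 0},
    features=new_features,
)
print(labeled.features["label"])  # ClassLabel, not a plain integer Value
```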
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/440/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/440/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/439
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/439/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/439/comments
https://api.github.com/repos/huggingface/datasets/issues/439/events
https://github.com/huggingface/datasets/issues/439
665,964,673
MDU6SXNzdWU2NjU5NjQ2NzM=
439
Issues: Adding a FAISS or Elastic Search index to a Dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/431890?v=4", "events_url": "https://api.github.com/users/nsankar/events{/privacy}", "followers_url": "https://api.github.com/users/nsankar/followers", "following_url": "https://api.github.com/users/nsankar/following{/other_user}", "gists_url": "https://api.github.com/users/nsankar/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/nsankar", "id": 431890, "login": "nsankar", "node_id": "MDQ6VXNlcjQzMTg5MA==", "organizations_url": "https://api.github.com/users/nsankar/orgs", "received_events_url": "https://api.github.com/users/nsankar/received_events", "repos_url": "https://api.github.com/users/nsankar/repos", "site_admin": false, "starred_url": "https://api.github.com/users/nsankar/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/nsankar/subscriptions", "type": "User", "url": "https://api.github.com/users/nsankar" }
[]
closed
false
null
[]
null
5
"2020-07-27T04:25:17Z"
"2020-10-28T01:46:24Z"
"2020-10-28T01:46:24Z"
NONE
null
null
null
It seems the DPRContextEncoder and DPRContextEncoderTokenizer cited [in this documentation](https://huggingface.co/nlp/faiss_and_ea.html) are not implemented? They did not work with the standard nlp installation. Also, I couldn't find or use them with the latest nlp install from github in Colab. Is there any dependency on the latest PyArrow 1.0.0? Are they yet to be made generally available?
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/439/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/439/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/438
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/438/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/438/comments
https://api.github.com/repos/huggingface/datasets/issues/438/events
https://github.com/huggingface/datasets/issues/438
665,865,490
MDU6SXNzdWU2NjU4NjU0OTA=
438
New Datasets: IWSLT15+, ITTB
{ "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/sshleifer", "id": 6045025, "login": "sshleifer", "node_id": "MDQ6VXNlcjYwNDUwMjU=", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "repos_url": "https://api.github.com/users/sshleifer/repos", "site_admin": false, "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "type": "User", "url": "https://api.github.com/users/sshleifer" }
[ { "color": "e99695", "default": false, "description": "Requesting to add a new dataset", "id": 2067376369, "name": "dataset request", "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request" } ]
open
false
null
[]
null
2
"2020-07-26T21:43:04Z"
"2020-08-24T15:12:15Z"
null
CONTRIBUTOR
null
null
null
**Links:** [iwslt](https://pytorchnlp.readthedocs.io/en/latest/_modules/torchnlp/datasets/iwslt.html) Don't know if that link is up to date. [ittb](http://www.cfilt.iitb.ac.in/iitb_parallel/) **Motivation**: replicate mbart finetuning results (table below) ![image](https://user-images.githubusercontent.com/6045025/88490093-0c1c8c00-cf67-11ea-960d-8dcaad2aa8eb.png) For future readers, we already have the following language pairs in the wmt namespaces: ``` wmt14: ['cs-en', 'de-en', 'fr-en', 'hi-en', 'ru-en'] wmt15: ['cs-en', 'de-en', 'fi-en', 'fr-en', 'ru-en'] wmt16: ['cs-en', 'de-en', 'fi-en', 'ro-en', 'ru-en', 'tr-en'] wmt17: ['cs-en', 'de-en', 'fi-en', 'lv-en', 'ru-en', 'tr-en', 'zh-en'] wmt18: ['cs-en', 'de-en', 'et-en', 'fi-en', 'kk-en', 'ru-en', 'tr-en', 'zh-en'] wmt19: ['cs-en', 'de-en', 'fi-en', 'gu-en', 'kk-en', 'lt-en', 'ru-en', 'zh-en', 'fr-de'] ```
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/438/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/438/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/437
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/437/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/437/comments
https://api.github.com/repos/huggingface/datasets/issues/437/events
https://github.com/huggingface/datasets/pull/437
665,597,176
MDExOlB1bGxSZXF1ZXN0NDU2NjIzNjc3
437
Fix XTREME PAN-X loading
{ "avatar_url": "https://avatars.githubusercontent.com/u/8264887?v=4", "events_url": "https://api.github.com/users/lvwerra/events{/privacy}", "followers_url": "https://api.github.com/users/lvwerra/followers", "following_url": "https://api.github.com/users/lvwerra/following{/other_user}", "gists_url": "https://api.github.com/users/lvwerra/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lvwerra", "id": 8264887, "login": "lvwerra", "node_id": "MDQ6VXNlcjgyNjQ4ODc=", "organizations_url": "https://api.github.com/users/lvwerra/orgs", "received_events_url": "https://api.github.com/users/lvwerra/received_events", "repos_url": "https://api.github.com/users/lvwerra/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lvwerra/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lvwerra/subscriptions", "type": "User", "url": "https://api.github.com/users/lvwerra" }
[]
closed
false
null
[]
null
4
"2020-07-25T14:44:57Z"
"2020-07-30T08:28:15Z"
"2020-07-30T08:28:15Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/437.diff", "html_url": "https://github.com/huggingface/datasets/pull/437", "merged_at": "2020-07-30T08:28:15Z", "patch_url": "https://github.com/huggingface/datasets/pull/437.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/437" }
Hi 🤗 In response to the discussion in #425 @lewtun and I made some fixes to the repo. In the original XTREME implementation the PAN-X dataset for named entity recognition loaded each word/tag pair as a single row and the sentence relation was lost. With the fix each row contains the list of all words in a single sentence and their NER tags. This is also in agreement with the [NER example](https://github.com/huggingface/transformers/tree/master/examples/token-classification) in the transformers repo. With the fix the output of the dataset should look as follows: ```python >>> dataset = load_dataset("xtreme", "PAN-X.en", data_dir='./data') >>> dataset['train'][0] {'words': ['R.H.', 'Saunders', '(', 'St.', 'Lawrence', 'River', ')', '(', '968', 'MW', ')'], 'ner_tags': ['B-ORG', 'I-ORG', 'O', 'B-ORG', 'I-ORG', 'I-ORG', 'O', 'O', 'O', 'O', 'O'], 'langs': ['en', 'en', 'en', 'en', 'en', 'en', 'en', 'en', 'en', 'en', 'en']} ```
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/437/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/437/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/436
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/436/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/436/comments
https://api.github.com/repos/huggingface/datasets/issues/436/events
https://github.com/huggingface/datasets/issues/436
665,582,167
MDU6SXNzdWU2NjU1ODIxNjc=
436
Google Colab - load_dataset - PyArrow exception
{ "avatar_url": "https://avatars.githubusercontent.com/u/431890?v=4", "events_url": "https://api.github.com/users/nsankar/events{/privacy}", "followers_url": "https://api.github.com/users/nsankar/followers", "following_url": "https://api.github.com/users/nsankar/following{/other_user}", "gists_url": "https://api.github.com/users/nsankar/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/nsankar", "id": 431890, "login": "nsankar", "node_id": "MDQ6VXNlcjQzMTg5MA==", "organizations_url": "https://api.github.com/users/nsankar/orgs", "received_events_url": "https://api.github.com/users/nsankar/received_events", "repos_url": "https://api.github.com/users/nsankar/repos", "site_admin": false, "starred_url": "https://api.github.com/users/nsankar/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/nsankar/subscriptions", "type": "User", "url": "https://api.github.com/users/nsankar" }
[]
closed
false
null
[]
null
9
"2020-07-25T13:05:20Z"
"2020-08-20T08:08:18Z"
"2020-08-20T08:08:18Z"
NONE
null
null
null
With the latest PyArrow 1.0.0 installed, I get the following exception. Restarting Colab gives the same issue: ImportWarning: To use `nlp`, the module `pyarrow>=0.16.0` is required, and the current version of `pyarrow` doesn't match this condition. If you are running this in a Google Colab, you should probably just restart the runtime to use the right version of `pyarrow`. The error goes away only when I install version 0.16.0, i.e. `!pip install pyarrow==0.16.0`
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 1, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/436/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/436/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/435
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/435/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/435/comments
https://api.github.com/repos/huggingface/datasets/issues/435/events
https://github.com/huggingface/datasets/issues/435
665,507,141
MDU6SXNzdWU2NjU1MDcxNDE=
435
ImportWarning for pyarrow 1.0.0
{ "avatar_url": "https://avatars.githubusercontent.com/u/18187806?v=4", "events_url": "https://api.github.com/users/HanGuo97/events{/privacy}", "followers_url": "https://api.github.com/users/HanGuo97/followers", "following_url": "https://api.github.com/users/HanGuo97/following{/other_user}", "gists_url": "https://api.github.com/users/HanGuo97/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/HanGuo97", "id": 18187806, "login": "HanGuo97", "node_id": "MDQ6VXNlcjE4MTg3ODA2", "organizations_url": "https://api.github.com/users/HanGuo97/orgs", "received_events_url": "https://api.github.com/users/HanGuo97/received_events", "repos_url": "https://api.github.com/users/HanGuo97/repos", "site_admin": false, "starred_url": "https://api.github.com/users/HanGuo97/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/HanGuo97/subscriptions", "type": "User", "url": "https://api.github.com/users/HanGuo97" }
[]
closed
false
null
[]
null
4
"2020-07-25T03:44:39Z"
"2020-09-08T17:57:15Z"
"2020-08-03T16:37:32Z"
NONE
null
null
null
The following PR raises an ImportWarning with `pyarrow==1.0.0`: https://github.com/huggingface/nlp/pull/265/files
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/435/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/435/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/434
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/434/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/434/comments
https://api.github.com/repos/huggingface/datasets/issues/434/events
https://github.com/huggingface/datasets/pull/434
665,477,638
MDExOlB1bGxSZXF1ZXN0NDU2NTM3Njgz
434
Fixed check for pyarrow
{ "avatar_url": "https://avatars.githubusercontent.com/u/58701810?v=4", "events_url": "https://api.github.com/users/nadahlberg/events{/privacy}", "followers_url": "https://api.github.com/users/nadahlberg/followers", "following_url": "https://api.github.com/users/nadahlberg/following{/other_user}", "gists_url": "https://api.github.com/users/nadahlberg/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/nadahlberg", "id": 58701810, "login": "nadahlberg", "node_id": "MDQ6VXNlcjU4NzAxODEw", "organizations_url": "https://api.github.com/users/nadahlberg/orgs", "received_events_url": "https://api.github.com/users/nadahlberg/received_events", "repos_url": "https://api.github.com/users/nadahlberg/repos", "site_admin": false, "starred_url": "https://api.github.com/users/nadahlberg/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/nadahlberg/subscriptions", "type": "User", "url": "https://api.github.com/users/nadahlberg" }
[]
closed
false
null
[]
null
1
"2020-07-25T00:16:53Z"
"2020-07-25T06:36:34Z"
"2020-07-25T06:36:34Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/434.diff", "html_url": "https://github.com/huggingface/datasets/pull/434", "merged_at": "2020-07-25T06:36:34Z", "patch_url": "https://github.com/huggingface/datasets/pull/434.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/434" }
Fix the check for pyarrow in `__init__.py`. Previously it would raise an error for pyarrow >= 1.0.0.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/434/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/434/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/433
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/433/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/433/comments
https://api.github.com/repos/huggingface/datasets/issues/433/events
https://github.com/huggingface/datasets/issues/433
665,311,025
MDU6SXNzdWU2NjUzMTEwMjU=
433
How to reuse functionality of a (generic) dataset?
{ "avatar_url": "https://avatars.githubusercontent.com/u/3375489?v=4", "events_url": "https://api.github.com/users/ArneBinder/events{/privacy}", "followers_url": "https://api.github.com/users/ArneBinder/followers", "following_url": "https://api.github.com/users/ArneBinder/following{/other_user}", "gists_url": "https://api.github.com/users/ArneBinder/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/ArneBinder", "id": 3375489, "login": "ArneBinder", "node_id": "MDQ6VXNlcjMzNzU0ODk=", "organizations_url": "https://api.github.com/users/ArneBinder/orgs", "received_events_url": "https://api.github.com/users/ArneBinder/received_events", "repos_url": "https://api.github.com/users/ArneBinder/repos", "site_admin": false, "starred_url": "https://api.github.com/users/ArneBinder/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ArneBinder/subscriptions", "type": "User", "url": "https://api.github.com/users/ArneBinder" }
[]
closed
false
null
[]
null
4
"2020-07-24T17:27:37Z"
"2022-10-04T17:59:34Z"
"2022-10-04T17:59:33Z"
NONE
null
null
null
I have written a generic dataset for corpora created with the Brat annotation tool ([specification](https://brat.nlplab.org/standoff.html), [dataset code](https://github.com/ArneBinder/nlp/blob/brat/datasets/brat/brat.py)). Now I wonder how to use that to create specific dataset instances. What's the recommended way to reuse formats and loading functionality for datasets with a common format? In my case, it took a bit of time to create the Brat dataset and I think others would appreciate not having to think about that again. Also, I assume there are other formats (e.g. CoNLL) that are widely used, so having this would really ease dataset onboarding and adoption of the library.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/433/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/433/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/432
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/432/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/432/comments
https://api.github.com/repos/huggingface/datasets/issues/432/events
https://github.com/huggingface/datasets/pull/432
665,234,340
MDExOlB1bGxSZXF1ZXN0NDU2MzQxNDk3
432
Fix handling of config files while loading datasets from multiple processes
{ "avatar_url": "https://avatars.githubusercontent.com/u/99543?v=4", "events_url": "https://api.github.com/users/orsharir/events{/privacy}", "followers_url": "https://api.github.com/users/orsharir/followers", "following_url": "https://api.github.com/users/orsharir/following{/other_user}", "gists_url": "https://api.github.com/users/orsharir/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/orsharir", "id": 99543, "login": "orsharir", "node_id": "MDQ6VXNlcjk5NTQz", "organizations_url": "https://api.github.com/users/orsharir/orgs", "received_events_url": "https://api.github.com/users/orsharir/received_events", "repos_url": "https://api.github.com/users/orsharir/repos", "site_admin": false, "starred_url": "https://api.github.com/users/orsharir/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/orsharir/subscriptions", "type": "User", "url": "https://api.github.com/users/orsharir" }
[]
closed
false
null
[]
null
4
"2020-07-24T15:10:57Z"
"2020-08-01T17:11:42Z"
"2020-07-30T08:25:28Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/432.diff", "html_url": "https://github.com/huggingface/datasets/pull/432", "merged_at": "2020-07-30T08:25:28Z", "patch_url": "https://github.com/huggingface/datasets/pull/432.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/432" }
When loading shards on several processes, each process upon loading the dataset will overwrite dataset_infos.json in <package path>/datasets/<dataset name>/<hash>/dataset_infos.json. It does so every time, even when the target file already exists and is identical. Because multiple processes rewrite the same file in parallel, it creates a race condition when a process tries to load the file, often resulting in a JSON decoding exception because the file is only partially written. This pull request partially addresses this by checking whether the files are already identical before copying the downloaded copy over to the cached destination. There's still a race condition, but now it's less likely to occur if some basic precautions are taken by the library user, e.g., download all datasets to cache before spawning multiple processes.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/432/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/432/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/431
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/431/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/431/comments
https://api.github.com/repos/huggingface/datasets/issues/431/events
https://github.com/huggingface/datasets/pull/431
665,044,416
MDExOlB1bGxSZXF1ZXN0NDU2MTgyNDE2
431
Specify split post processing + Add post processing resources downloading
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
4
"2020-07-24T09:29:19Z"
"2020-07-31T09:05:04Z"
"2020-07-31T09:05:03Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/431.diff", "html_url": "https://github.com/huggingface/datasets/pull/431", "merged_at": "2020-07-31T09:05:03Z", "patch_url": "https://github.com/huggingface/datasets/pull/431.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/431" }
Previously, if you tried to do ```python from nlp import load_dataset wiki = load_dataset("wiki_dpr", "psgs_w100_with_nq_embeddings", split="train[:100]", with_index=True) ``` then you'd get the error `Index size should match Dataset size...` This was because it was trying to use the full index (21M elements). To fix that, I made it so post processing resources can be named according to the split. I'm going to add tests on post processing too. Note that the CI will fail as I added a new argument in `_post_processing_resources`: the AWS version of wiki_dpr fails, and there's also an error saying that it is not synced (it'll be synced once it's merged): ``` =========================== short test summary info ============================ FAILED tests/test_dataset_common.py::AWSDatasetTest::test_load_dataset_wiki_dpr FAILED tests/test_hf_gcp.py::TestDatasetSynced::test_script_synced_with_s3_wiki_dpr ``` EDIT: I made a change to ignore the script hash when locating the arrow files on GCS, so I removed the sync test. It was there just because of the hash logic for files on GCS.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/431/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/431/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/430
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/430/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/430/comments
https://api.github.com/repos/huggingface/datasets/issues/430/events
https://github.com/huggingface/datasets/pull/430
664,583,837
MDExOlB1bGxSZXF1ZXN0NDU1ODAxOTI2
430
add DatasetDict
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
4
"2020-07-23T15:43:49Z"
"2020-08-04T01:01:53Z"
"2020-07-29T09:06:22Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/430.diff", "html_url": "https://github.com/huggingface/datasets/pull/430", "merged_at": "2020-07-29T09:06:22Z", "patch_url": "https://github.com/huggingface/datasets/pull/430.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/430" }
## Add DatasetDict ### Overview When you call `load_dataset` it can return a dictionary of datasets if there are several splits (train/test for example). If you wanted to apply dataset transforms you had to iterate over each split and apply the transform. Instead of returning a dict, it now returns a `nlp.DatasetDict` object which inherits from dict and contains the same data as before, except that now users can call dataset transforms directly from the output, and they'll be applied on each split. Before: ```python from nlp import load_dataset squad = load_dataset("squad") print(squad.keys()) # dict_keys(['train', 'validation']) squad = { split_name: dataset.map(my_func) for split_name, dataset in squad.items() } print(squad.keys()) # dict_keys(['train', 'validation']) ``` Now: ```python from nlp import load_dataset squad = load_dataset("squad") print(squad.keys()) # dict_keys(['train', 'validation']) squad = squad.map(my_func) print(squad.keys()) # dict_keys(['train', 'validation']) ``` ### Dataset transforms `nlp.DatasetDict` implements the following dataset transforms: - map - filter - sort - shuffle ### Arguments The arguments of the methods are the same except for split-specific arguments like `cache_file_name`. For such arguments, the expected input is a dictionary `{split_name: argument_value}` It concerns: - `cache_file_name` in map, filter, sort, shuffle - `seed` and `generator` in shuffle
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/430/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/430/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/429
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/429/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/429/comments
https://api.github.com/repos/huggingface/datasets/issues/429/events
https://github.com/huggingface/datasets/pull/429
664,412,137
MDExOlB1bGxSZXF1ZXN0NDU1NjU2MDk5
429
mlsum
{ "avatar_url": "https://avatars.githubusercontent.com/u/36986299?v=4", "events_url": "https://api.github.com/users/RachelKer/events{/privacy}", "followers_url": "https://api.github.com/users/RachelKer/followers", "following_url": "https://api.github.com/users/RachelKer/following{/other_user}", "gists_url": "https://api.github.com/users/RachelKer/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/RachelKer", "id": 36986299, "login": "RachelKer", "node_id": "MDQ6VXNlcjM2OTg2Mjk5", "organizations_url": "https://api.github.com/users/RachelKer/orgs", "received_events_url": "https://api.github.com/users/RachelKer/received_events", "repos_url": "https://api.github.com/users/RachelKer/repos", "site_admin": false, "starred_url": "https://api.github.com/users/RachelKer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/RachelKer/subscriptions", "type": "User", "url": "https://api.github.com/users/RachelKer" }
[]
closed
false
null
[]
null
6
"2020-07-23T11:52:39Z"
"2020-07-31T11:46:20Z"
"2020-07-31T11:46:20Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/429.diff", "html_url": "https://github.com/huggingface/datasets/pull/429", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/429.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/429" }
Hello, The tests for load_real_data fail: as there is no default language subset to download, it looks for a file that does not exist. This bug does not happen when using the load_dataset function, as it asks you to specify a language if you do not, so I submit this PR anyway. The dataset is available at: https://gitlab.lip6.fr/scialom/mlsum_data
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/429/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/429/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/428
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/428/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/428/comments
https://api.github.com/repos/huggingface/datasets/issues/428/events
https://github.com/huggingface/datasets/pull/428
664,367,086
MDExOlB1bGxSZXF1ZXN0NDU1NjE3Nzcy
428
fix concatenate_datasets
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
0
"2020-07-23T10:30:59Z"
"2020-07-23T10:35:00Z"
"2020-07-23T10:34:58Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/428.diff", "html_url": "https://github.com/huggingface/datasets/pull/428", "merged_at": "2020-07-23T10:34:58Z", "patch_url": "https://github.com/huggingface/datasets/pull/428.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/428" }
`concatenate_datasets` used to test that the different `nlp.Dataset.schema` match, but this attribute was removed in #423
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/428/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/428/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/427
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/427/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/427/comments
https://api.github.com/repos/huggingface/datasets/issues/427/events
https://github.com/huggingface/datasets/pull/427
664,341,623
MDExOlB1bGxSZXF1ZXN0NDU1NTk1Nzc3
427
Allow sequence features for beam + add processed Natural Questions
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
0
"2020-07-23T09:52:41Z"
"2020-07-23T13:09:30Z"
"2020-07-23T13:09:29Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/427.diff", "html_url": "https://github.com/huggingface/datasets/pull/427", "merged_at": "2020-07-23T13:09:29Z", "patch_url": "https://github.com/huggingface/datasets/pull/427.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/427" }
## Allow Sequence features for Beam Datasets + add Natural Questions ### The issue The steps of beam dataset processing are the following: - download the source files and send them to a remote storage (gcs) - process the files using a beam runner (dataflow) - save output in remote storage (gcs) - convert output to arrow in remote storage (gcs) However, it wasn't possible to process `natural_questions` because Apache Beam's processing outputs parquet files, and it's not yet possible to read parquet files with list features. ### The proposed solution To allow sequence features for beam, I added a workaround that serializes the values using `json.dumps`, so that we end up with strings instead of the original features. Then when the arrow file is created, the serialized objects are transformed back to normal with `json.loads`. Not sure if there's a better way to do it. ### Natural Questions I was able to process NQ with it, and so I added the json infos file in this PR too. The processed arrow files are also stored in gcs. It allows you to load NQ with ```python from nlp import load_dataset nq = load_dataset("natural_questions") # download the 90GB arrow files from gcs and return the dataset ``` ### Tests I added a test case to make sure it works as expected. Note that the CI will fail because I am updating `natural_questions.py`: it's not synced with the script on S3. It will be synced as soon as this PR is merged. ``` =========================== short test summary info ============================ FAILED tests/test_hf_gcp.py::TestDatasetOnHfGcp::test_script_synced_with_s3_natural_questions/default ```
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 1, "laugh": 0, "rocket": 3, "total_count": 4, "url": "https://api.github.com/repos/huggingface/datasets/issues/427/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/427/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/426
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/426/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/426/comments
https://api.github.com/repos/huggingface/datasets/issues/426/events
https://github.com/huggingface/datasets/issues/426
664,203,897
MDU6SXNzdWU2NjQyMDM4OTc=
426
[FEATURE REQUEST] Multiprocessing for dataset.map, dataset.filter
{ "avatar_url": "https://avatars.githubusercontent.com/u/2000204?v=4", "events_url": "https://api.github.com/users/timothyjlaurent/events{/privacy}", "followers_url": "https://api.github.com/users/timothyjlaurent/followers", "following_url": "https://api.github.com/users/timothyjlaurent/following{/other_user}", "gists_url": "https://api.github.com/users/timothyjlaurent/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/timothyjlaurent", "id": 2000204, "login": "timothyjlaurent", "node_id": "MDQ6VXNlcjIwMDAyMDQ=", "organizations_url": "https://api.github.com/users/timothyjlaurent/orgs", "received_events_url": "https://api.github.com/users/timothyjlaurent/received_events", "repos_url": "https://api.github.com/users/timothyjlaurent/repos", "site_admin": false, "starred_url": "https://api.github.com/users/timothyjlaurent/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/timothyjlaurent/subscriptions", "type": "User", "url": "https://api.github.com/users/timothyjlaurent" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
closed
false
null
[]
null
6
"2020-07-23T05:00:41Z"
"2021-03-12T09:34:12Z"
"2020-09-07T14:48:04Z"
NONE
null
null
null
It would be nice to be able to speed up `dataset.map` or `dataset.filter`. Perhaps this is as easy as sharding the dataset, sending each shard to a process/thread/dask pool, and using the new `nlp.concatenate_datasets()` function to join them all together?
{ "+1": 3, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 3, "url": "https://api.github.com/repos/huggingface/datasets/issues/426/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/426/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/425
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/425/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/425/comments
https://api.github.com/repos/huggingface/datasets/issues/425/events
https://github.com/huggingface/datasets/issues/425
664,029,848
MDU6SXNzdWU2NjQwMjk4NDg=
425
Correct data structure for PAN-X task in XTREME dataset?
{ "avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4", "events_url": "https://api.github.com/users/lewtun/events{/privacy}", "followers_url": "https://api.github.com/users/lewtun/followers", "following_url": "https://api.github.com/users/lewtun/following{/other_user}", "gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lewtun", "id": 26859204, "login": "lewtun", "node_id": "MDQ6VXNlcjI2ODU5MjA0", "organizations_url": "https://api.github.com/users/lewtun/orgs", "received_events_url": "https://api.github.com/users/lewtun/received_events", "repos_url": "https://api.github.com/users/lewtun/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lewtun/subscriptions", "type": "User", "url": "https://api.github.com/users/lewtun" }
[]
closed
false
null
[]
null
7
"2020-07-22T20:29:20Z"
"2020-08-02T13:30:34Z"
"2020-08-02T13:30:34Z"
MEMBER
null
null
null
Hi 🤗 team! ## Description of the problem Thanks to the fix from #416 I am now able to load the NER task in the XTREME dataset as follows: ```python from nlp import load_dataset # AmazonPhotos.zip is located in data/ dataset = load_dataset("xtreme", "PAN-X.en", data_dir='./data') dataset_train = dataset['train'] ``` However, I am not sure that `load_dataset()` is returning the correct data structure for NER. Currently, every row in `dataset_train` is of the form ```python {'word': str, 'ner_tag': str, 'lang': str} ``` but I think we actually want something like ```python {'words': List[str], 'ner_tags': List[str], 'langs': List[str]} ``` so that each row corresponds to a _sequence_ of words associated with each example. With the current data structure, I do not think it is possible to transform `dataset_train` into a form suitable for training because we do not know the boundaries between examples. Indeed, [this line](https://github.com/google-research/xtreme/blob/522434d1aece34131d997a97ce7e9242a51a688a/third_party/utils_tag.py#L58) in the XTREME repo processes the texts as lists of sentences, tags, and languages. ## Proposed solution Replace ```python with open(filepath) as f: data = csv.reader(f, delimiter="\t", quoting=csv.QUOTE_NONE) for id_, row in enumerate(data): if row: lang, word = row[0].split(":")[0], row[0].split(":")[1] tag = row[1] yield id_, {"word": word, "ner_tag": tag, "lang": lang} ``` from [these lines](https://github.com/huggingface/nlp/blob/ce7d3a1d630b78fe27188d1706f3ea980e8eec43/datasets/xtreme/xtreme.py#L881-L887) of the `_generate_examples()` function with something like ```python guid_index = 1 with open(filepath, encoding="utf-8") as f: words = [] ner_tags = [] langs = [] for line in f: if line.startswith("-DOCSTART-") or line == "" or line == "\n": if words: yield guid_index, {"words": words, "ner_tags": ner_tags, "langs": langs} guid_index += 1 words = [] ner_tags = [] langs = [] else: # pan-x data is tab separated splits = line.split("\t") # strip out en: prefix langs.append(splits[0][:2]) words.append(splits[0][3:]) if len(splits) > 1: ner_tags.append(splits[-1].replace("\n", "")) else: # examples have no label in test set ner_tags.append("O") ``` If you agree, @lvwerra or I would be happy to implement this and create a PR.
{ "+1": 2, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 2, "url": "https://api.github.com/repos/huggingface/datasets/issues/425/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/425/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/424
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/424/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/424/comments
https://api.github.com/repos/huggingface/datasets/issues/424/events
https://github.com/huggingface/datasets/pull/424
663,858,552
MDExOlB1bGxSZXF1ZXN0NDU1MTk4MTY0
424
Web of Science
{ "avatar_url": "https://avatars.githubusercontent.com/u/38249783?v=4", "events_url": "https://api.github.com/users/mariamabarham/events{/privacy}", "followers_url": "https://api.github.com/users/mariamabarham/followers", "following_url": "https://api.github.com/users/mariamabarham/following{/other_user}", "gists_url": "https://api.github.com/users/mariamabarham/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariamabarham", "id": 38249783, "login": "mariamabarham", "node_id": "MDQ6VXNlcjM4MjQ5Nzgz", "organizations_url": "https://api.github.com/users/mariamabarham/orgs", "received_events_url": "https://api.github.com/users/mariamabarham/received_events", "repos_url": "https://api.github.com/users/mariamabarham/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariamabarham/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariamabarham/subscriptions", "type": "User", "url": "https://api.github.com/users/mariamabarham" }
[]
closed
false
null
[]
null
0
"2020-07-22T15:38:31Z"
"2020-07-23T14:27:58Z"
"2020-07-23T14:27:56Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/424.diff", "html_url": "https://github.com/huggingface/datasets/pull/424", "merged_at": "2020-07-23T14:27:56Z", "patch_url": "https://github.com/huggingface/datasets/pull/424.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/424" }
This PR adds the Web of Science dataset #353
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/424/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/424/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/423
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/423/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/423/comments
https://api.github.com/repos/huggingface/datasets/issues/423/events
https://github.com/huggingface/datasets/pull/423
663,079,359
MDExOlB1bGxSZXF1ZXN0NDU0NTU4OTA0
423
Change features vs schema logic
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
2
"2020-07-21T14:52:47Z"
"2020-07-25T09:08:34Z"
"2020-07-23T10:15:17Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/423.diff", "html_url": "https://github.com/huggingface/datasets/pull/423", "merged_at": "2020-07-23T10:15:16Z", "patch_url": "https://github.com/huggingface/datasets/pull/423.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/423" }
## New logic for `nlp.Features` in datasets Previously, it was confusing to have `features` and pyarrow's `schema` in `nlp.Dataset`. However `features` is supposed to be the front-facing object to define the different fields of a dataset, while `schema` is only used to write arrow files. Changes: - Remove `schema` field in `nlp.Dataset` - Make `features` the source of truth to read/write examples - `features` can no longer be `None` in `nlp.Dataset` - Update `features` after each dataset transform such as `nlp.Dataset.map` Todo: change the tests to take these changes into account
{ "+1": 2, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 2, "url": "https://api.github.com/repos/huggingface/datasets/issues/423/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/423/timeline
null
null
true