url (string, 58-61 chars) | repository_url (string, 1 class) | labels_url (string, 72-75 chars) | comments_url (string, 67-70 chars) | events_url (string, 65-68 chars) | html_url (string, 46-51 chars) | id (int64, 599M-1.6B) | node_id (string, 18-32 chars) | number (int64, 1-5.57k) | title (string, 1-276 chars) | user (dict) | labels (list) | state (string, 2 classes) | locked (bool, 1 class) | assignee (dict) | assignees (list) | milestone (dict) | comments (int64, 0-54) | created_at (unknown) | updated_at (unknown) | closed_at (unknown) | author_association (string, 3 classes) | active_lock_reason (null) | body (string, 0-228k chars, nullable) | reactions (dict) | timeline_url (string, 67-70 chars) | performed_via_github_app (null) | state_reason (string, 3 classes) | draft (float64, 0-1, nullable) | pull_request (dict) | is_pull_request (bool, 2 classes) | handling_time (float64, 6-72.4M, nullable) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/datasets/issues/5566 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5566/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5566/comments | https://api.github.com/repos/huggingface/datasets/issues/5566/events | https://github.com/huggingface/datasets/issues/5566 | 1,595,916,674 | I_kwDODunzps5fH8GC | 5,566 | Directly reading parquet files in a s3 bucket from the load_dataset method | {
"avatar_url": "https://avatars.githubusercontent.com/u/16892570?v=4",
"events_url": "https://api.github.com/users/shamanez/events{/privacy}",
"followers_url": "https://api.github.com/users/shamanez/followers",
"following_url": "https://api.github.com/users/shamanez/following{/other_user}",
"gists_url": "https://api.github.com/users/shamanez/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/shamanez",
"id": 16892570,
"login": "shamanez",
"node_id": "MDQ6VXNlcjE2ODkyNTcw",
"organizations_url": "https://api.github.com/users/shamanez/orgs",
"received_events_url": "https://api.github.com/users/shamanez/received_events",
"repos_url": "https://api.github.com/users/shamanez/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/shamanez/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/shamanez/subscriptions",
"type": "User",
"url": "https://api.github.com/users/shamanez"
} | [
{
"color": "cfd3d7",
"default": true,
"description": "This issue or pull request already exists",
"id": 1935892865,
"name": "duplicate",
"node_id": "MDU6TGFiZWwxOTM1ODkyODY1",
"url": "https://api.github.com/repos/huggingface/datasets/labels/duplicate"
},
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | open | false | null | [] | null | 1 | "2023-02-22T22:13:40Z" | "2023-02-23T11:03:29Z" | null | NONE | null | ### Feature request
Right now, we have to download the parquet files to local storage first. Being able to read them directly from the bucket address would be beneficial.
### Motivation
In a production setup, this feature can help us a lot, since we would not need to move training data files between storage locations.
### Your contribution
I am willing to help if there's any way. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5566/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5566/timeline | null | null | null | null | false | null |
https://api.github.com/repos/huggingface/datasets/issues/5565 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5565/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5565/comments | https://api.github.com/repos/huggingface/datasets/issues/5565/events | https://github.com/huggingface/datasets/pull/5565 | 1,595,281,752 | PR_kwDODunzps5KhfTH | 5,565 | Add writer_batch_size for ArrowBasedBuilder | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | open | false | null | [] | null | 3 | "2023-02-22T15:09:30Z" | "2023-02-22T15:41:58Z" | null | MEMBER | null | This way we can control the size of the record_batches/row_groups of arrow/parquet files.
This can be useful for `datasets-server` to keep the row group size under control, which can affect random access performance for audio/image/video datasets | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5565/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5565/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/5565.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5565",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/5565.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5565"
} | true | null |
https://api.github.com/repos/huggingface/datasets/issues/5564 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5564/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5564/comments | https://api.github.com/repos/huggingface/datasets/issues/5564/events | https://github.com/huggingface/datasets/pull/5564 | 1,595,064,698 | PR_kwDODunzps5KgwzU | 5,564 | Set dev version | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | 3 | "2023-02-22T13:00:09Z" | "2023-02-22T13:09:26Z" | "2023-02-22T13:00:25Z" | MEMBER | null | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5564/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5564/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/5564.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5564",
"merged_at": "2023-02-22T13:00:25Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5564.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5564"
} | true | 16 |
https://api.github.com/repos/huggingface/datasets/issues/5563 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5563/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5563/comments | https://api.github.com/repos/huggingface/datasets/issues/5563/events | https://github.com/huggingface/datasets/pull/5563 | 1,595,049,025 | PR_kwDODunzps5KgtbL | 5,563 | Release: 2.10.0 | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | 4 | "2023-02-22T12:48:52Z" | "2023-02-22T13:05:55Z" | "2023-02-22T12:56:48Z" | MEMBER | null | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5563/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5563/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/5563.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5563",
"merged_at": "2023-02-22T12:56:48Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5563.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5563"
} | true | 476 |
https://api.github.com/repos/huggingface/datasets/issues/5562 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5562/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5562/comments | https://api.github.com/repos/huggingface/datasets/issues/5562/events | https://github.com/huggingface/datasets/pull/5562 | 1,594,625,539 | PR_kwDODunzps5KfTUT | 5,562 | Update csv.py | {
"avatar_url": "https://avatars.githubusercontent.com/u/54279069?v=4",
"events_url": "https://api.github.com/users/XDoubleU/events{/privacy}",
"followers_url": "https://api.github.com/users/XDoubleU/followers",
"following_url": "https://api.github.com/users/XDoubleU/following{/other_user}",
"gists_url": "https://api.github.com/users/XDoubleU/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/XDoubleU",
"id": 54279069,
"login": "XDoubleU",
"node_id": "MDQ6VXNlcjU0Mjc5MDY5",
"organizations_url": "https://api.github.com/users/XDoubleU/orgs",
"received_events_url": "https://api.github.com/users/XDoubleU/received_events",
"repos_url": "https://api.github.com/users/XDoubleU/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/XDoubleU/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/XDoubleU/subscriptions",
"type": "User",
"url": "https://api.github.com/users/XDoubleU"
} | [] | closed | false | null | [] | null | 4 | "2023-02-22T07:56:10Z" | "2023-02-23T11:07:49Z" | "2023-02-23T11:00:58Z" | CONTRIBUTOR | null | Removed mangle_dupe_cols=True from BuilderConfig.
It triggered the following deprecation warning:
/usr/local/lib/python3.8/dist-packages/datasets/download/streaming_download_manager.py:776: FutureWarning: the 'mangle_dupe_cols' keyword is deprecated and will be removed in a future version. Please take steps to stop the use of 'mangle_dupe_cols'
return pd.read_csv(xopen(filepath_or_buffer, "rb", use_auth_token=use_auth_token), **kwargs)
Further documentation of pandas: https://pandas.pydata.org/docs/whatsnew/v1.4.0.html#mangle-dupe-cols-in-read-csv-no-longer-renames-unique-columns-conflicting-with-target-names
At first sight it seems like this flag is resolved internally; it might need some more research. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5562/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5562/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/5562.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5562",
"merged_at": "2023-02-23T11:00:58Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5562.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5562"
} | true | 97,488 |
https://api.github.com/repos/huggingface/datasets/issues/5561 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5561/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5561/comments | https://api.github.com/repos/huggingface/datasets/issues/5561/events | https://github.com/huggingface/datasets/pull/5561 | 1,593,862,388 | PR_kwDODunzps5Kcxw_ | 5,561 | Add pre-commit config yaml file to enable automatic code formatting | {
"avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4",
"events_url": "https://api.github.com/users/polinaeterna/events{/privacy}",
"followers_url": "https://api.github.com/users/polinaeterna/followers",
"following_url": "https://api.github.com/users/polinaeterna/following{/other_user}",
"gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/polinaeterna",
"id": 16348744,
"login": "polinaeterna",
"node_id": "MDQ6VXNlcjE2MzQ4NzQ0",
"organizations_url": "https://api.github.com/users/polinaeterna/orgs",
"received_events_url": "https://api.github.com/users/polinaeterna/received_events",
"repos_url": "https://api.github.com/users/polinaeterna/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions",
"type": "User",
"url": "https://api.github.com/users/polinaeterna"
} | [] | open | false | null | [] | null | 2 | "2023-02-21T17:35:07Z" | "2023-02-22T20:41:11Z" | null | CONTRIBUTOR | null | @huggingface/datasets do you think it would be useful? Motivation - sometimes PRs are like 30% "fix: style" commits :)
If so - I need to double check the config but for me locally it works as expected. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5561/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5561/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/5561.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5561",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/5561.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5561"
} | true | null |
https://api.github.com/repos/huggingface/datasets/issues/5560 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5560/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5560/comments | https://api.github.com/repos/huggingface/datasets/issues/5560/events | https://github.com/huggingface/datasets/pull/5560 | 1,593,809,978 | PR_kwDODunzps5Kcml6 | 5,560 | Ensure last tqdm update in `map` | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko"
} | [] | closed | false | null | [] | null | 10 | "2023-02-21T16:56:17Z" | "2023-02-21T18:26:23Z" | "2023-02-21T18:19:09Z" | CONTRIBUTOR | null | This PR modifies `map` to:
* ensure the TQDM bar gets the last progress update
* when a map function fails, avoid throwing a chained exception in the single-proc mode | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5560/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5560/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/5560.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5560",
"merged_at": "2023-02-21T18:19:09Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5560.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5560"
} | true | 4,972 |
https://api.github.com/repos/huggingface/datasets/issues/5559 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5559/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5559/comments | https://api.github.com/repos/huggingface/datasets/issues/5559/events | https://github.com/huggingface/datasets/pull/5559 | 1,593,676,489 | PR_kwDODunzps5KcKSb | 5,559 | Fix map suffix_template | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | 4 | "2023-02-21T15:26:26Z" | "2023-02-21T17:21:37Z" | "2023-02-21T17:14:29Z" | MEMBER | null | #5455 introduced a small bug that led `map` to ignore the `suffix_template` argument and not add suffixes to cached files in multiprocessing.
I fixed this and also improved a few things:
- regarding logging: "Loading cached processed dataset" is now logged only once even in multiprocessing (it used to be logged `num_proc` times)
- regarding new_fingerprint: I made sure that the returned dataset satisfies `ds._fingerprint==new_fingerprint` if `new_fingerprint` is passed to `map` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 1,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5559/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5559/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/5559.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5559",
"merged_at": "2023-02-21T17:14:29Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5559.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5559"
} | true | 6,483 |
https://api.github.com/repos/huggingface/datasets/issues/5558 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5558/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5558/comments | https://api.github.com/repos/huggingface/datasets/issues/5558/events | https://github.com/huggingface/datasets/pull/5558 | 1,593,655,815 | PR_kwDODunzps5KcF5E | 5,558 | Remove instructions for `ffmpeg` system package installation on Colab | {
"avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4",
"events_url": "https://api.github.com/users/polinaeterna/events{/privacy}",
"followers_url": "https://api.github.com/users/polinaeterna/followers",
"following_url": "https://api.github.com/users/polinaeterna/following{/other_user}",
"gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/polinaeterna",
"id": 16348744,
"login": "polinaeterna",
"node_id": "MDQ6VXNlcjE2MzQ4NzQ0",
"organizations_url": "https://api.github.com/users/polinaeterna/orgs",
"received_events_url": "https://api.github.com/users/polinaeterna/received_events",
"repos_url": "https://api.github.com/users/polinaeterna/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions",
"type": "User",
"url": "https://api.github.com/users/polinaeterna"
} | [] | open | false | null | [] | null | 1 | "2023-02-21T15:13:36Z" | "2023-02-22T21:04:09Z" | null | CONTRIBUTOR | null | Colab now runs Ubuntu 20.04, which already has `ffmpeg` of the required version (>4). | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5558/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5558/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/5558.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5558",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/5558.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5558"
} | true | null |
https://api.github.com/repos/huggingface/datasets/issues/5557 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5557/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5557/comments | https://api.github.com/repos/huggingface/datasets/issues/5557/events | https://github.com/huggingface/datasets/pull/5557 | 1,593,545,324 | PR_kwDODunzps5Kbube | 5,557 | Add filter desc | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | 3 | "2023-02-21T14:04:42Z" | "2023-02-21T14:19:54Z" | "2023-02-21T14:12:39Z" | MEMBER | null | Otherwise it would show a `Map` progress bar, since it uses `map` under the hood | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5557/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5557/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/5557.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5557",
"merged_at": "2023-02-21T14:12:39Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5557.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5557"
} | true | 477 |
https://api.github.com/repos/huggingface/datasets/issues/5556 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5556/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5556/comments | https://api.github.com/repos/huggingface/datasets/issues/5556/events | https://github.com/huggingface/datasets/pull/5556 | 1,593,246,936 | PR_kwDODunzps5KauVL | 5,556 | Use default audio resampling type | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | 5 | "2023-02-21T10:45:50Z" | "2023-02-21T12:49:50Z" | "2023-02-21T12:42:52Z" | MEMBER | null | ...instead of relying on the optional librosa dependency `resampy`.
It was only used for `_decode_non_mp3_file_like` anyway and not for the other ones - removing it fixes consistency between decoding methods (except torchaudio decoding)
Therefore I think it is a better solution than adding `resampy` as a dependency in https://github.com/huggingface/datasets/pull/5554
cc @polinaeterna | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5556/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5556/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/5556.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5556",
"merged_at": "2023-02-21T12:42:52Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5556.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5556"
} | true | 7,022 |
https://api.github.com/repos/huggingface/datasets/issues/5555 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5555/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5555/comments | https://api.github.com/repos/huggingface/datasets/issues/5555/events | https://github.com/huggingface/datasets/issues/5555 | 1,592,469,938 | I_kwDODunzps5e6ymy | 5,555 | `.shuffle` throwing error `ValueError: Protocol not known: parent` | {
"avatar_url": "https://avatars.githubusercontent.com/u/10768588?v=4",
"events_url": "https://api.github.com/users/prabhakar267/events{/privacy}",
"followers_url": "https://api.github.com/users/prabhakar267/followers",
"following_url": "https://api.github.com/users/prabhakar267/following{/other_user}",
"gists_url": "https://api.github.com/users/prabhakar267/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/prabhakar267",
"id": 10768588,
"login": "prabhakar267",
"node_id": "MDQ6VXNlcjEwNzY4NTg4",
"organizations_url": "https://api.github.com/users/prabhakar267/orgs",
"received_events_url": "https://api.github.com/users/prabhakar267/received_events",
"repos_url": "https://api.github.com/users/prabhakar267/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/prabhakar267/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/prabhakar267/subscriptions",
"type": "User",
"url": "https://api.github.com/users/prabhakar267"
} | [] | open | false | null | [] | null | 1 | "2023-02-20T21:33:45Z" | "2023-02-21T13:16:02Z" | null | NONE | null | ### Describe the bug
```
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
Cell In [16], line 1
----> 1 train_dataset = train_dataset.shuffle()
File /opt/conda/envs/pytorch/lib/python3.9/site-packages/datasets/arrow_dataset.py:551, in transmit_format.<locals>.wrapper(*args, **kwargs)
544 self_format = {
545 "type": self._format_type,
546 "format_kwargs": self._format_kwargs,
547 "columns": self._format_columns,
548 "output_all_columns": self._output_all_columns,
549 }
550 # apply actual function
--> 551 out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
552 datasets: List["Dataset"] = list(out.values()) if isinstance(out, dict) else [out]
553 # re-apply format to the output
File /opt/conda/envs/pytorch/lib/python3.9/site-packages/datasets/fingerprint.py:480, in fingerprint_transform.<locals>._fingerprint.<locals>.wrapper(*args, **kwargs)
476 validate_fingerprint(kwargs[fingerprint_name])
478 # Call actual function
--> 480 out = func(self, *args, **kwargs)
482 # Update fingerprint of in-place transforms + update in-place history of transforms
484 if inplace: # update after calling func so that the fingerprint doesn't change if the function fails
File /opt/conda/envs/pytorch/lib/python3.9/site-packages/datasets/arrow_dataset.py:3616, in Dataset.shuffle(self, seed, generator, keep_in_memory, load_from_cache_file, indices_cache_file_name, writer_batch_size, new_fingerprint)
3610 return self._new_dataset_with_indices(
3611 fingerprint=new_fingerprint, indices_cache_file_name=indices_cache_file_name
3612 )
3614 permutation = generator.permutation(len(self))
-> 3616 return self.select(
3617 indices=permutation,
3618 keep_in_memory=keep_in_memory,
3619 indices_cache_file_name=indices_cache_file_name if not keep_in_memory else None,
3620 writer_batch_size=writer_batch_size,
3621 new_fingerprint=new_fingerprint,
3622 )
File /opt/conda/envs/pytorch/lib/python3.9/site-packages/datasets/arrow_dataset.py:551, in transmit_format.<locals>.wrapper(*args, **kwargs)
544 self_format = {
545 "type": self._format_type,
546 "format_kwargs": self._format_kwargs,
547 "columns": self._format_columns,
548 "output_all_columns": self._output_all_columns,
549 }
550 # apply actual function
--> 551 out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
552 datasets: List["Dataset"] = list(out.values()) if isinstance(out, dict) else [out]
553 # re-apply format to the output
File /opt/conda/envs/pytorch/lib/python3.9/site-packages/datasets/fingerprint.py:480, in fingerprint_transform.<locals>._fingerprint.<locals>.wrapper(*args, **kwargs)
476 validate_fingerprint(kwargs[fingerprint_name])
478 # Call actual function
--> 480 out = func(self, *args, **kwargs)
482 # Update fingerprint of in-place transforms + update in-place history of transforms
484 if inplace: # update after calling func so that the fingerprint doesn't change if the function fails
File /opt/conda/envs/pytorch/lib/python3.9/site-packages/datasets/arrow_dataset.py:3266, in Dataset.select(self, indices, keep_in_memory, indices_cache_file_name, writer_batch_size, new_fingerprint)
3263 return self._select_contiguous(start, length, new_fingerprint=new_fingerprint)
3265 # If not contiguous, we need to create a new indices mapping
-> 3266 return self._select_with_indices_mapping(
3267 indices,
3268 keep_in_memory=keep_in_memory,
3269 indices_cache_file_name=indices_cache_file_name,
3270 writer_batch_size=writer_batch_size,
3271 new_fingerprint=new_fingerprint,
3272 )
File /opt/conda/envs/pytorch/lib/python3.9/site-packages/datasets/arrow_dataset.py:551, in transmit_format.<locals>.wrapper(*args, **kwargs)
544 self_format = {
545 "type": self._format_type,
546 "format_kwargs": self._format_kwargs,
547 "columns": self._format_columns,
548 "output_all_columns": self._output_all_columns,
549 }
550 # apply actual function
--> 551 out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
552 datasets: List["Dataset"] = list(out.values()) if isinstance(out, dict) else [out]
553 # re-apply format to the output
File /opt/conda/envs/pytorch/lib/python3.9/site-packages/datasets/fingerprint.py:480, in fingerprint_transform.<locals>._fingerprint.<locals>.wrapper(*args, **kwargs)
476 validate_fingerprint(kwargs[fingerprint_name])
478 # Call actual function
--> 480 out = func(self, *args, **kwargs)
482 # Update fingerprint of in-place transforms + update in-place history of transforms
484 if inplace: # update after calling func so that the fingerprint doesn't change if the function fails
File /opt/conda/envs/pytorch/lib/python3.9/site-packages/datasets/arrow_dataset.py:3389, in Dataset._select_with_indices_mapping(self, indices, keep_in_memory, indices_cache_file_name, writer_batch_size, new_fingerprint)
3387 logger.info(f"Caching indices mapping at {indices_cache_file_name}")
3388 tmp_file = tempfile.NamedTemporaryFile("wb", dir=os.path.dirname(indices_cache_file_name), delete=False)
-> 3389 writer = ArrowWriter(
3390 path=tmp_file.name, writer_batch_size=writer_batch_size, fingerprint=new_fingerprint, unit="indices"
3391 )
3393 indices = indices if isinstance(indices, list) else list(indices)
3395 size = len(self)
File /opt/conda/envs/pytorch/lib/python3.9/site-packages/datasets/arrow_writer.py:315, in ArrowWriter.__init__(self, schema, features, path, stream, fingerprint, writer_batch_size, hash_salt, check_duplicates, disable_nullable, update_features, with_metadata, unit, embed_local_files, storage_options)
312 self._disable_nullable = disable_nullable
314 if stream is None:
--> 315 fs_token_paths = fsspec.get_fs_token_paths(path, storage_options=storage_options)
316 self._fs: fsspec.AbstractFileSystem = fs_token_paths[0]
317 self._path = (
318 fs_token_paths[2][0]
319 if not is_remote_filesystem(self._fs)
320 else self._fs.unstrip_protocol(fs_token_paths[2][0])
321 )
File /opt/conda/envs/pytorch/lib/python3.9/site-packages/fsspec/core.py:593, in get_fs_token_paths(urlpath, mode, num, name_function, storage_options, protocol, expand)
591 else:
592 urlpath = stringify_path(urlpath)
--> 593 chain = _un_chain(urlpath, storage_options or {})
594 if len(chain) > 1:
595 inkwargs = {}
File /opt/conda/envs/pytorch/lib/python3.9/site-packages/fsspec/core.py:330, in _un_chain(path, kwargs)
328 for bit in reversed(bits):
329 protocol = split_protocol(bit)[0] or "file"
--> 330 cls = get_filesystem_class(protocol)
331 extra_kwargs = cls._get_kwargs_from_urls(bit)
332 kws = kwargs.get(protocol, {})
File /opt/conda/envs/pytorch/lib/python3.9/site-packages/fsspec/registry.py:240, in get_filesystem_class(protocol)
238 if protocol not in registry:
239 if protocol not in known_implementations:
--> 240 raise ValueError("Protocol not known: %s" % protocol)
241 bit = known_implementations[protocol]
242 try:
ValueError: Protocol not known: parent
```
This is what the `train_dataset` object looks like
```
Dataset({
features: ['label', 'input_ids', 'attention_mask'],
num_rows: 364166
})
```
### Steps to reproduce the bug
The `train_dataset` object is created by concatenating two datasets.
Then `shuffle` is called, but it throws the mentioned error.
### Expected behavior
Should shuffle the dataset properly.
### Environment info
- `datasets` version: 2.6.1
- Platform: Linux-5.15.0-1022-aws-x86_64-with-glibc2.31
- Python version: 3.9.13
- PyArrow version: 10.0.0
- Pandas version: 1.4.4
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5555/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5555/timeline | null | null | null | null | false | null |
https://api.github.com/repos/huggingface/datasets/issues/5554 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5554/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5554/comments | https://api.github.com/repos/huggingface/datasets/issues/5554/events | https://github.com/huggingface/datasets/pull/5554 | 1,592,285,062 | PR_kwDODunzps5KXhZh | 5,554 | Add resampy dep | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | 5 | "2023-02-20T18:15:43Z" | "2023-02-21T12:46:10Z" | "2023-02-21T12:43:38Z" | MEMBER | null | In librosa 0.10 they removed the `resampy` dependency and made it optional.
However it is necessary for resampling. I added it to the "audio" extra dependencies. | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5554/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5554/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/5554.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5554",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/5554.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5554"
} | true | 66,475 |
https://api.github.com/repos/huggingface/datasets/issues/5553 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5553/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5553/comments | https://api.github.com/repos/huggingface/datasets/issues/5553/events | https://github.com/huggingface/datasets/pull/5553 | 1,592,236,998 | PR_kwDODunzps5KXXUq | 5,553 | improved message error row formatting | {
"avatar_url": "https://avatars.githubusercontent.com/u/26489385?v=4",
"events_url": "https://api.github.com/users/Plutone11011/events{/privacy}",
"followers_url": "https://api.github.com/users/Plutone11011/followers",
"following_url": "https://api.github.com/users/Plutone11011/following{/other_user}",
"gists_url": "https://api.github.com/users/Plutone11011/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Plutone11011",
"id": 26489385,
"login": "Plutone11011",
"node_id": "MDQ6VXNlcjI2NDg5Mzg1",
"organizations_url": "https://api.github.com/users/Plutone11011/orgs",
"received_events_url": "https://api.github.com/users/Plutone11011/received_events",
"repos_url": "https://api.github.com/users/Plutone11011/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Plutone11011/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Plutone11011/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Plutone11011"
} | [] | closed | false | null | [] | null | 2 | "2023-02-20T17:29:14Z" | "2023-02-21T13:08:25Z" | "2023-02-21T12:58:12Z" | CONTRIBUTOR | null | Solves #5539 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5553/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5553/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/5553.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5553",
"merged_at": "2023-02-21T12:58:12Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5553.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5553"
} | true | 70,138 |
https://api.github.com/repos/huggingface/datasets/issues/5552 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5552/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5552/comments | https://api.github.com/repos/huggingface/datasets/issues/5552/events | https://github.com/huggingface/datasets/pull/5552 | 1,592,186,703 | PR_kwDODunzps5KXMjA | 5,552 | Make tiktoken tokenizers hashable | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko"
} | [] | closed | false | null | [] | null | 4 | "2023-02-20T16:50:09Z" | "2023-02-21T13:20:42Z" | "2023-02-21T13:13:05Z" | CONTRIBUTOR | null | Fix for https://discord.com/channels/879548962464493619/1075729627546406912/1075729627546406912
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5552/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5552/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/5552.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5552",
"merged_at": "2023-02-21T13:13:05Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5552.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5552"
} | true | 73,376 |
https://api.github.com/repos/huggingface/datasets/issues/5551 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5551/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5551/comments | https://api.github.com/repos/huggingface/datasets/issues/5551/events | https://github.com/huggingface/datasets/pull/5551 | 1,592,140,836 | PR_kwDODunzps5KXCof | 5,551 | Suggest scikit-learn instead of sklearn | {
"avatar_url": "https://avatars.githubusercontent.com/u/74963545?v=4",
"events_url": "https://api.github.com/users/osbm/events{/privacy}",
"followers_url": "https://api.github.com/users/osbm/followers",
"following_url": "https://api.github.com/users/osbm/following{/other_user}",
"gists_url": "https://api.github.com/users/osbm/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/osbm",
"id": 74963545,
"login": "osbm",
"node_id": "MDQ6VXNlcjc0OTYzNTQ1",
"organizations_url": "https://api.github.com/users/osbm/orgs",
"received_events_url": "https://api.github.com/users/osbm/received_events",
"repos_url": "https://api.github.com/users/osbm/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/osbm/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/osbm/subscriptions",
"type": "User",
"url": "https://api.github.com/users/osbm"
} | [] | closed | false | null | [] | null | 4 | "2023-02-20T16:16:57Z" | "2023-02-21T13:27:57Z" | "2023-02-21T13:21:07Z" | CONTRIBUTOR | null | This is kind of an unimportant fix, but the suggested `pip install sklearn` does not work.
The current error message if sklearn is not installed:
```
ImportError: To be able to use [dataset name], you need to install the following dependency: sklearn.
Please install it using 'pip install sklearn' for instance.
```
| {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5551/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5551/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/5551.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5551",
"merged_at": "2023-02-21T13:21:07Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5551.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5551"
} | true | 75,850 |
https://api.github.com/repos/huggingface/datasets/issues/5550 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5550/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5550/comments | https://api.github.com/repos/huggingface/datasets/issues/5550/events | https://github.com/huggingface/datasets/pull/5550 | 1,591,409,475 | PR_kwDODunzps5KUl5i | 5,550 | Resolve four broken refs in the docs | {
"avatar_url": "https://avatars.githubusercontent.com/u/37621491?v=4",
"events_url": "https://api.github.com/users/tomaarsen/events{/privacy}",
"followers_url": "https://api.github.com/users/tomaarsen/followers",
"following_url": "https://api.github.com/users/tomaarsen/following{/other_user}",
"gists_url": "https://api.github.com/users/tomaarsen/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/tomaarsen",
"id": 37621491,
"login": "tomaarsen",
"node_id": "MDQ6VXNlcjM3NjIxNDkx",
"organizations_url": "https://api.github.com/users/tomaarsen/orgs",
"received_events_url": "https://api.github.com/users/tomaarsen/received_events",
"repos_url": "https://api.github.com/users/tomaarsen/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/tomaarsen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tomaarsen/subscriptions",
"type": "User",
"url": "https://api.github.com/users/tomaarsen"
} | [] | closed | false | null | [] | null | 3 | "2023-02-20T08:52:11Z" | "2023-02-20T15:16:13Z" | "2023-02-20T15:09:13Z" | CONTRIBUTOR | null | Hello!
## Pull Request overview
* Resolve 4 broken references in the docs
## The problems
Two broken references [here](https://huggingface.co/docs/datasets/package_reference/main_classes#datasets.Dataset.class_encode_column):

---
One broken reference [here](https://huggingface.co/docs/datasets/package_reference/main_classes#datasets.Dataset.unique):

---
One missing reference [here](https://huggingface.co/docs/datasets/v2.9.0/en/package_reference/main_classes#datasets.DatasetDict.class_encode_column):

- Tom Aarsen | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5550/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5550/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/5550.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5550",
"merged_at": "2023-02-20T15:09:13Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5550.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5550"
} | true | 22,622 |
https://api.github.com/repos/huggingface/datasets/issues/5549 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5549/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5549/comments | https://api.github.com/repos/huggingface/datasets/issues/5549/events | https://github.com/huggingface/datasets/pull/5549 | 1,590,836,848 | PR_kwDODunzps5KSsi3 | 5,549 | Apply ruff flake8-comprehension checks | {
"avatar_url": "https://avatars.githubusercontent.com/u/2053727?v=4",
"events_url": "https://api.github.com/users/Skylion007/events{/privacy}",
"followers_url": "https://api.github.com/users/Skylion007/followers",
"following_url": "https://api.github.com/users/Skylion007/following{/other_user}",
"gists_url": "https://api.github.com/users/Skylion007/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Skylion007",
"id": 2053727,
"login": "Skylion007",
"node_id": "MDQ6VXNlcjIwNTM3Mjc=",
"organizations_url": "https://api.github.com/users/Skylion007/orgs",
"received_events_url": "https://api.github.com/users/Skylion007/received_events",
"repos_url": "https://api.github.com/users/Skylion007/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Skylion007/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Skylion007/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Skylion007"
} | [] | open | false | null | [] | null | 1 | "2023-02-19T20:09:28Z" | "2023-02-22T16:45:10Z" | null | NONE | null | Fix #5548
Apply ruff's flake8-comprehension checks for better performance and more readable code. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5549/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5549/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/5549.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5549",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/5549.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5549"
} | true | null |
https://api.github.com/repos/huggingface/datasets/issues/5548 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5548/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5548/comments | https://api.github.com/repos/huggingface/datasets/issues/5548/events | https://github.com/huggingface/datasets/issues/5548 | 1,590,835,479 | I_kwDODunzps5e0jkX | 5,548 | Apply flake8-comprehensions to codebase | {
"avatar_url": "https://avatars.githubusercontent.com/u/2053727?v=4",
"events_url": "https://api.github.com/users/Skylion007/events{/privacy}",
"followers_url": "https://api.github.com/users/Skylion007/followers",
"following_url": "https://api.github.com/users/Skylion007/following{/other_user}",
"gists_url": "https://api.github.com/users/Skylion007/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Skylion007",
"id": 2053727,
"login": "Skylion007",
"node_id": "MDQ6VXNlcjIwNTM3Mjc=",
"organizations_url": "https://api.github.com/users/Skylion007/orgs",
"received_events_url": "https://api.github.com/users/Skylion007/received_events",
"repos_url": "https://api.github.com/users/Skylion007/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Skylion007/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Skylion007/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Skylion007"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | open | false | null | [] | null | 0 | "2023-02-19T20:05:38Z" | "2023-02-19T20:05:38Z" | null | NONE | null | ### Feature request
Apply ruff's flake8-comprehensions checks to the codebase.
### Motivation
This should strictly improve the performance / readability of the codebase by removing unnecessary iteration, function calls, etc. This should generate better Python bytecode which should strictly improve performance.
I already applied these fixes to PyTorch and SymPy with little issue and have opened PRs to diffusers and transformers to do this as well.
### Your contribution
Making a PR. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5548/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5548/timeline | null | null | null | null | false | null |
https://api.github.com/repos/huggingface/datasets/issues/5547 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5547/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5547/comments | https://api.github.com/repos/huggingface/datasets/issues/5547/events | https://github.com/huggingface/datasets/pull/5547 | 1,590,468,200 | PR_kwDODunzps5KRmcf | 5,547 | Add JAX device selection when formatting | {
"avatar_url": "https://avatars.githubusercontent.com/u/36760800?v=4",
"events_url": "https://api.github.com/users/alvarobartt/events{/privacy}",
"followers_url": "https://api.github.com/users/alvarobartt/followers",
"following_url": "https://api.github.com/users/alvarobartt/following{/other_user}",
"gists_url": "https://api.github.com/users/alvarobartt/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/alvarobartt",
"id": 36760800,
"login": "alvarobartt",
"node_id": "MDQ6VXNlcjM2NzYwODAw",
"organizations_url": "https://api.github.com/users/alvarobartt/orgs",
"received_events_url": "https://api.github.com/users/alvarobartt/received_events",
"repos_url": "https://api.github.com/users/alvarobartt/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/alvarobartt/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alvarobartt/subscriptions",
"type": "User",
"url": "https://api.github.com/users/alvarobartt"
} | [] | closed | false | null | [] | null | 9 | "2023-02-18T20:57:40Z" | "2023-02-21T16:10:55Z" | "2023-02-21T16:04:03Z" | CONTRIBUTOR | null | ## What's in this PR?
After exploring the JAX integration in 🤗`datasets` for a while, I found out that, even though JAX prioritizes the TPU and GPU as the default device when available, `JaxFormatter` doesn't let you specify the device where you want to place the `jax.Array`s in case you don't want to rely on JAX's default array placement.
So, I've included the `device` param in `JaxFormatter`, but there are some things to take into consideration:
* A formatted `Dataset` is copied with `copy.deepcopy`, which means that if one adds the param `device` in `JaxFormatter` as a `jaxlib.xla_extension.Device`, it "fails" because that object cannot be serialized (instead of serializing the param, it adds a random hash). That's the reason why I added a function `_map_devices_to_str` to basically create a mapping of strings to `jaxlib.xla_extension.Device`s so that `self.device` is a string and not a `jaxlib.xla_extension.Device`.
* To create a `jax.Array` in a device you need to either create it in the default device and then move it to the desired device with `jax.device_put` or directly create it in the device you want with `jax.default_device()` context manager.
* JAX will create an array by default in `jax.devices()[0]`
More information on JAX device management is available at https://jax.readthedocs.io/en/latest/faq.html#controlling-data-and-computation-placement-on-devices
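As a minimal sketch of the two placement strategies mentioned above (plain JAX, independent of this PR's API):
```python
import jax
import jax.numpy as jnp

cpu = jax.devices("cpu")[0]  # pick an explicit target device

# 1) Create on the default device, then move it with jax.device_put
x = jax.device_put(jnp.asarray([1, 2, 3]), cpu)

# 2) Create directly on the target device with the jax.default_device context manager
with jax.default_device(cpu):
    y = jnp.asarray([1, 2, 3])

print(x, y)  # both arrays now live on the chosen CPU device
```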
## What's missing in this PR?
I've tested it both locally on CPU (Mac M2 and Mac M1, as there is no GPU support for Mac yet) and on GPU and TPU in Google Colab; let me know if you want me to provide the notebook for the latter.
But I did not implement any integration test as I wanted to get your feedback first. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5547/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5547/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/5547.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5547",
"merged_at": "2023-02-21T16:04:03Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5547.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5547"
} | true | 241,583 |
https://api.github.com/repos/huggingface/datasets/issues/5546 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5546/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5546/comments | https://api.github.com/repos/huggingface/datasets/issues/5546/events | https://github.com/huggingface/datasets/issues/5546 | 1,590,346,349 | I_kwDODunzps5eysJt | 5,546 | Downloaded datasets do not cache at $HF_HOME | {
"avatar_url": "https://avatars.githubusercontent.com/u/79091831?v=4",
"events_url": "https://api.github.com/users/ErfanMoosaviMonazzah/events{/privacy}",
"followers_url": "https://api.github.com/users/ErfanMoosaviMonazzah/followers",
"following_url": "https://api.github.com/users/ErfanMoosaviMonazzah/following{/other_user}",
"gists_url": "https://api.github.com/users/ErfanMoosaviMonazzah/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ErfanMoosaviMonazzah",
"id": 79091831,
"login": "ErfanMoosaviMonazzah",
"node_id": "MDQ6VXNlcjc5MDkxODMx",
"organizations_url": "https://api.github.com/users/ErfanMoosaviMonazzah/orgs",
"received_events_url": "https://api.github.com/users/ErfanMoosaviMonazzah/received_events",
"repos_url": "https://api.github.com/users/ErfanMoosaviMonazzah/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ErfanMoosaviMonazzah/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ErfanMoosaviMonazzah/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ErfanMoosaviMonazzah"
} | [] | open | false | null | [] | null | 1 | "2023-02-18T13:30:35Z" | "2023-02-21T13:18:04Z" | null | NONE | null | ### Describe the bug
In the Hugging Face course (https://huggingface.co/course/chapter3/2?fw=pt) it says that if we set HF_HOME, downloaded datasets will be cached at the specified location, but they are not. Models downloaded from checkpoint names are downloaded and cached at HF_HOME, but this is not the case for datasets: they are still cached at ~/.cache/huggingface/datasets.
### Steps to reproduce the bug
Run the following code
```
from datasets import load_dataset
raw_datasets = load_dataset("glue", "mrpc")
raw_datasets
```
It downloads and stores the dataset at ~/.cache/huggingface/datasets.
### Expected behavior
The dataset should be cached at HF_HOME.
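As a hedged aside, a typical way to redirect the cache is to set the environment variables before `datasets` is imported; treating `HF_DATASETS_CACHE` as a dataset-specific override is an assumption worth verifying against the docs:
```python
import os

# Must be set before importing datasets, otherwise the default cache path is already resolved.
os.environ["HF_HOME"] = "/mnt/hf_home"
os.environ["HF_DATASETS_CACHE"] = "/mnt/hf_home/datasets"

from datasets import load_dataset

raw_datasets = load_dataset("glue", "mrpc")
print(raw_datasets.cache_files)  # shows where the Arrow files were actually written
```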
### Environment info
python 3.10.6
Kubuntu 22.04
HF_HOME located on a separate partition | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5546/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5546/timeline | null | null | null | null | false | null |
https://api.github.com/repos/huggingface/datasets/issues/5545 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5545/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5545/comments | https://api.github.com/repos/huggingface/datasets/issues/5545/events | https://github.com/huggingface/datasets/pull/5545 | 1,590,315,972 | PR_kwDODunzps5KRKct | 5,545 | Added return methods for URL-references to the pushed dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/25269220?v=4",
"events_url": "https://api.github.com/users/davidberenstein1957/events{/privacy}",
"followers_url": "https://api.github.com/users/davidberenstein1957/followers",
"following_url": "https://api.github.com/users/davidberenstein1957/following{/other_user}",
"gists_url": "https://api.github.com/users/davidberenstein1957/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/davidberenstein1957",
"id": 25269220,
"login": "davidberenstein1957",
"node_id": "MDQ6VXNlcjI1MjY5MjIw",
"organizations_url": "https://api.github.com/users/davidberenstein1957/orgs",
"received_events_url": "https://api.github.com/users/davidberenstein1957/received_events",
"repos_url": "https://api.github.com/users/davidberenstein1957/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/davidberenstein1957/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/davidberenstein1957/subscriptions",
"type": "User",
"url": "https://api.github.com/users/davidberenstein1957"
} | [] | open | false | null | [] | null | 4 | "2023-02-18T11:26:25Z" | "2023-02-21T14:17:28Z" | null | NONE | null | Hi,
I was missing the ability to easily open the pushed dataset and it seemed like a quick fix.
Maybe we also want to log this info somewhere, but let me know if I need to add that too.
Cheers,
David | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5545/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5545/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/5545.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5545",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/5545.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5545"
} | true | null |
https://api.github.com/repos/huggingface/datasets/issues/5543 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5543/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5543/comments | https://api.github.com/repos/huggingface/datasets/issues/5543/events | https://github.com/huggingface/datasets/issues/5543 | 1,588,951,379 | I_kwDODunzps5etXlT | 5,543 | the pile datasets url seems to change back | {
"avatar_url": "https://avatars.githubusercontent.com/u/5126316?v=4",
"events_url": "https://api.github.com/users/wjfwzzc/events{/privacy}",
"followers_url": "https://api.github.com/users/wjfwzzc/followers",
"following_url": "https://api.github.com/users/wjfwzzc/following{/other_user}",
"gists_url": "https://api.github.com/users/wjfwzzc/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/wjfwzzc",
"id": 5126316,
"login": "wjfwzzc",
"node_id": "MDQ6VXNlcjUxMjYzMTY=",
"organizations_url": "https://api.github.com/users/wjfwzzc/orgs",
"received_events_url": "https://api.github.com/users/wjfwzzc/received_events",
"repos_url": "https://api.github.com/users/wjfwzzc/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/wjfwzzc/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/wjfwzzc/subscriptions",
"type": "User",
"url": "https://api.github.com/users/wjfwzzc"
} | [] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
}
] | null | 2 | "2023-02-17T08:40:11Z" | "2023-02-21T06:37:00Z" | "2023-02-20T08:41:33Z" | NONE | null | ### Describe the bug
In #3627, the host URL of the Pile dataset was changed to `https://mystic.the-eye.eu`. Now the new URL is broken, but `https://the-eye.eu` seems to work again.
### Steps to reproduce the bug
```python3
from datasets import load_dataset
dataset = load_dataset("bookcorpusopen")
```
shows
```python3
ConnectionError: Couldn't reach https://mystic.the-eye.eu/public/AI/pile_preliminary_components/books1.tar.gz (ProxyError(MaxRetryError("HTTPSConnectionPool(host='mystic.the-eye.eu', port=443): Max retries exceeded with url: /public/AI/pile_pr
eliminary_components/books1.tar.gz (Caused by ProxyError('Cannot connect to proxy.', OSError('Tunnel connection failed: 504 Gateway Timeout')))")))
```
### Expected behavior
Downloading as normal.
### Environment info
- `datasets` version: 2.9.0
- Platform: Linux-5.4.143.bsk.7-amd64-x86_64-with-glibc2.31
- Python version: 3.9.2
- PyArrow version: 6.0.1
- Pandas version: 1.5.3 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5543/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5543/timeline | null | completed | null | null | false | 259,282 |
https://api.github.com/repos/huggingface/datasets/issues/5542 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5542/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5542/comments | https://api.github.com/repos/huggingface/datasets/issues/5542/events | https://github.com/huggingface/datasets/pull/5542 | 1,588,633,724 | PR_kwDODunzps5KLjMl | 5,542 | Avoid saving sparse ChunkedArrays in pyarrow tables | {
"avatar_url": "https://avatars.githubusercontent.com/u/6591505?v=4",
"events_url": "https://api.github.com/users/marioga/events{/privacy}",
"followers_url": "https://api.github.com/users/marioga/followers",
"following_url": "https://api.github.com/users/marioga/following{/other_user}",
"gists_url": "https://api.github.com/users/marioga/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/marioga",
"id": 6591505,
"login": "marioga",
"node_id": "MDQ6VXNlcjY1OTE1MDU=",
"organizations_url": "https://api.github.com/users/marioga/orgs",
"received_events_url": "https://api.github.com/users/marioga/received_events",
"repos_url": "https://api.github.com/users/marioga/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/marioga/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/marioga/subscriptions",
"type": "User",
"url": "https://api.github.com/users/marioga"
} | [] | closed | false | null | [] | null | 2 | "2023-02-17T01:52:38Z" | "2023-02-17T19:20:49Z" | "2023-02-17T11:12:32Z" | CONTRIBUTOR | null | Fixes https://github.com/huggingface/datasets/issues/5541 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 1,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5542/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5542/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/5542.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5542",
"merged_at": "2023-02-17T11:12:32Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5542.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5542"
} | true | 33,594 |
https://api.github.com/repos/huggingface/datasets/issues/5541 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5541/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5541/comments | https://api.github.com/repos/huggingface/datasets/issues/5541/events | https://github.com/huggingface/datasets/issues/5541 | 1,588,633,555 | I_kwDODunzps5esJ_T | 5,541 | Flattening indices in selected datasets is extremely inefficient | {
"avatar_url": "https://avatars.githubusercontent.com/u/6591505?v=4",
"events_url": "https://api.github.com/users/marioga/events{/privacy}",
"followers_url": "https://api.github.com/users/marioga/followers",
"following_url": "https://api.github.com/users/marioga/following{/other_user}",
"gists_url": "https://api.github.com/users/marioga/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/marioga",
"id": 6591505,
"login": "marioga",
"node_id": "MDQ6VXNlcjY1OTE1MDU=",
"organizations_url": "https://api.github.com/users/marioga/orgs",
"received_events_url": "https://api.github.com/users/marioga/received_events",
"repos_url": "https://api.github.com/users/marioga/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/marioga/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/marioga/subscriptions",
"type": "User",
"url": "https://api.github.com/users/marioga"
} | [] | closed | false | null | [] | null | 3 | "2023-02-17T01:52:24Z" | "2023-02-22T13:15:20Z" | "2023-02-17T11:12:33Z" | CONTRIBUTOR | null | ### Describe the bug
If we perform a `select` (or `shuffle`, `train_test_split`, etc.) operation on a dataset, we end up with a dataset with an `indices_table`. Currently, flattening such a dataset consumes a lot of memory, and the resulting flat dataset contains ChunkedArrays with as many chunks as there are rows. This is extremely inefficient and slows down operations on the flat dataset; e.g., saving/loading the dataset to disk becomes really slow.
Perhaps more importantly, loading the dataset back from disk basically loads the whole table into RAM, as it cannot take advantage of memory mapping.
### Steps to reproduce the bug
The following script reproduces the issue:
```python
import gc
import os
import psutil
import tempfile
import time
from datasets import Dataset
DATASET_SIZE = 5000000
def profile(func):
def wrapper(*args, **kwargs):
mem_before = psutil.Process(os.getpid()).memory_info().rss / (1024 * 1024)
start = time.time()
# Run function here
out = func(*args, **kwargs)
end = time.time()
mem_after = psutil.Process(os.getpid()).memory_info().rss / (1024 * 1024)
print(f"{func.__name__} -- RAM memory used: {mem_after - mem_before} MB -- Total time: {end - start:.6f} s")
return out
return wrapper
def main():
ds = Dataset.from_list([{'col': i} for i in range(DATASET_SIZE)])
print(f"Num chunks for original ds: {ds.data['col'].num_chunks}")
with tempfile.TemporaryDirectory() as tmpdir:
path1 = os.path.join(tmpdir, 'ds1')
print("Original ds save/load")
profile(ds.save_to_disk)(path1)
ds_loaded = profile(Dataset.load_from_disk)(path1)
print(f"Num chunks for original ds after reloading: {ds_loaded.data['col'].num_chunks}")
print("")
ds_select = ds.select(reversed(range(len(ds))))
print(f"Num chunks for selected ds: {ds_select.data['col'].num_chunks}")
del ds
del ds_loaded
gc.collect()
# This would happen anyway when we call save_to_disk
ds_select = profile(ds_select.flatten_indices)()
print(f"Num chunks for selected ds after flattening: {ds_select.data['col'].num_chunks}")
print("")
path2 = os.path.join(tmpdir, 'ds2')
print("Selected ds save/load")
profile(ds_select.save_to_disk)(path2)
del ds_select
gc.collect()
ds_select_loaded = profile(Dataset.load_from_disk)(path2)
print(f"Num chunks for selected ds after reloading: {ds_select_loaded.data['col'].num_chunks}")
if __name__ == '__main__':
main()
```
Sample result:
```
Num chunks for original ds: 1
Original ds save/load
save_to_disk -- RAM memory used: 0.515625 MB -- Total time: 0.253888 s
load_from_disk -- RAM memory used: 42.765625 MB -- Total time: 0.015176 s
Num chunks for original ds after reloading: 5000
Num chunks for selected ds: 1
flatten_indices -- RAM memory used: 4852.609375 MB -- Total time: 46.116774 s
Num chunks for selected ds after flattening: 5000000
Selected ds save/load
save_to_disk -- RAM memory used: 1326.65625 MB -- Total time: 42.309825 s
load_from_disk -- RAM memory used: 2085.953125 MB -- Total time: 11.659137 s
Num chunks for selected ds after reloading: 5000000
```
### Expected behavior
Saving/loading the dataset should be much faster and consume almost no extra memory thanks to pyarrow memory mapping.
### Environment info
- `datasets` version: 2.9.1.dev0
- Platform: macOS-13.1-arm64-arm-64bit
- Python version: 3.10.8
- PyArrow version: 11.0.0
- Pandas version: 1.5.3 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5541/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5541/timeline | null | completed | null | null | false | 33,609 |
https://api.github.com/repos/huggingface/datasets/issues/5540 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5540/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5540/comments | https://api.github.com/repos/huggingface/datasets/issues/5540/events | https://github.com/huggingface/datasets/pull/5540 | 1,588,438,344 | PR_kwDODunzps5KK5qz | 5,540 | Tutorial for creating a dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4",
"events_url": "https://api.github.com/users/stevhliu/events{/privacy}",
"followers_url": "https://api.github.com/users/stevhliu/followers",
"following_url": "https://api.github.com/users/stevhliu/following{/other_user}",
"gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/stevhliu",
"id": 59462357,
"login": "stevhliu",
"node_id": "MDQ6VXNlcjU5NDYyMzU3",
"organizations_url": "https://api.github.com/users/stevhliu/orgs",
"received_events_url": "https://api.github.com/users/stevhliu/received_events",
"repos_url": "https://api.github.com/users/stevhliu/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions",
"type": "User",
"url": "https://api.github.com/users/stevhliu"
} | [] | closed | false | null | [] | null | 2 | "2023-02-16T22:09:35Z" | "2023-02-17T18:50:46Z" | "2023-02-17T18:41:28Z" | MEMBER | null | A tutorial for creating datasets based on the folder-based builders and `from_dict` and `from_generator` methods. I've also mentioned loading scripts as a next step, but I think we should keep the tutorial focused on the low-code methods. Let me know what you think! 🙂 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5540/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5540/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/5540.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5540",
"merged_at": "2023-02-17T18:41:28Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5540.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5540"
} | true | 73,913 |
https://api.github.com/repos/huggingface/datasets/issues/5539 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5539/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5539/comments | https://api.github.com/repos/huggingface/datasets/issues/5539/events | https://github.com/huggingface/datasets/issues/5539 | 1,587,970,083 | I_kwDODunzps5epoAj | 5,539 | IndexError: invalid index of a 0-dim tensor. Use `tensor.item()` in Python or `tensor.item<T>()` in C++ to convert a 0-dim tensor to a number | {
"avatar_url": "https://avatars.githubusercontent.com/u/41912135?v=4",
"events_url": "https://api.github.com/users/aalbersk/events{/privacy}",
"followers_url": "https://api.github.com/users/aalbersk/followers",
"following_url": "https://api.github.com/users/aalbersk/following{/other_user}",
"gists_url": "https://api.github.com/users/aalbersk/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/aalbersk",
"id": 41912135,
"login": "aalbersk",
"node_id": "MDQ6VXNlcjQxOTEyMTM1",
"organizations_url": "https://api.github.com/users/aalbersk/orgs",
"received_events_url": "https://api.github.com/users/aalbersk/received_events",
"repos_url": "https://api.github.com/users/aalbersk/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/aalbersk/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/aalbersk/subscriptions",
"type": "User",
"url": "https://api.github.com/users/aalbersk"
} | [
{
"color": "7057ff",
"default": true,
"description": "Good for newcomers",
"id": 1935892877,
"name": "good first issue",
"node_id": "MDU6TGFiZWwxOTM1ODkyODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/good%20first%20issue"
}
] | closed | false | null | [] | null | 4 | "2023-02-16T16:08:51Z" | "2023-02-22T10:30:30Z" | "2023-02-21T13:03:57Z" | NONE | null | ### Describe the bug
When a dataset contains a 0-dim tensor, formatting.py raises the following error and fails.
```bash
Traceback (most recent call last):
File "<path>/lib/python3.8/site-packages/datasets/formatting/formatting.py", line 501, in format_row
return _unnest(formatted_batch)
File "<path>/lib/python3.8/site-packages/datasets/formatting/formatting.py", line 137, in _unnest
return {key: array[0] for key, array in py_dict.items()}
File "<path>/lib/python3.8/site-packages/datasets/formatting/formatting.py", line 137, in <dictcomp>
return {key: array[0] for key, array in py_dict.items()}
IndexError: invalid index of a 0-dim tensor. Use `tensor.item()` in Python or `tensor.item<T>()` in C++ to convert a 0-dim tensor to a number
```
### Steps to reproduce the bug
Load any dataset and add a transform method that adds a 0-dim tensor, or create/find a dataset containing a 0-dim tensor. E.g.
```python
from datasets import load_dataset
import torch
dataset = load_dataset("lambdalabs/pokemon-blip-captions", split='train')
def t(batch):
return {"test": torch.tensor(1)}
dataset.set_transform(t)
d_0 = dataset[0]
```
### Expected behavior
The extractor should correctly get a row from the dataset, even if it contains a 0-dim tensor.
### Environment info
`datasets==2.8.0`, but it looks like it is also applicable to main branch version (as of 16th February) | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5539/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5539/timeline | null | completed | null | null | false | 420,906 |
https://api.github.com/repos/huggingface/datasets/issues/5538 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5538/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5538/comments | https://api.github.com/repos/huggingface/datasets/issues/5538/events | https://github.com/huggingface/datasets/issues/5538 | 1,587,732,596 | I_kwDODunzps5eouB0 | 5,538 | load_dataset in seaborn is not working for me. getting this error. | {
"avatar_url": "https://avatars.githubusercontent.com/u/125575109?v=4",
"events_url": "https://api.github.com/users/reemaranibarik/events{/privacy}",
"followers_url": "https://api.github.com/users/reemaranibarik/followers",
"following_url": "https://api.github.com/users/reemaranibarik/following{/other_user}",
"gists_url": "https://api.github.com/users/reemaranibarik/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/reemaranibarik",
"id": 125575109,
"login": "reemaranibarik",
"node_id": "U_kgDOB3wfxQ",
"organizations_url": "https://api.github.com/users/reemaranibarik/orgs",
"received_events_url": "https://api.github.com/users/reemaranibarik/received_events",
"repos_url": "https://api.github.com/users/reemaranibarik/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/reemaranibarik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/reemaranibarik/subscriptions",
"type": "User",
"url": "https://api.github.com/users/reemaranibarik"
} | [] | closed | false | null | [] | null | 1 | "2023-02-16T14:01:58Z" | "2023-02-16T14:44:36Z" | "2023-02-16T14:44:36Z" | NONE | null | 
```
TimeoutError Traceback (most recent call last)
~\anaconda3\lib\urllib\request.py in do_open(self, http_class, req, **http_conn_args)
1345 try:
-> 1346 h.request(req.get_method(), req.selector, req.data, headers,
1347 encode_chunked=req.has_header('Transfer-encoding'))
~\anaconda3\lib\http\client.py in request(self, method, url, body, headers, encode_chunked)
1278 """Send a complete request to the server."""
-> 1279 self._send_request(method, url, body, headers, encode_chunked)
1280
~\anaconda3\lib\http\client.py in _send_request(self, method, url, body, headers, encode_chunked)
1324 body = _encode(body, 'body')
-> 1325 self.endheaders(body, encode_chunked=encode_chunked)
1326
~\anaconda3\lib\http\client.py in endheaders(self, message_body, encode_chunked)
1273 raise CannotSendHeader()
-> 1274 self._send_output(message_body, encode_chunked=encode_chunked)
1275
~\anaconda3\lib\http\client.py in _send_output(self, message_body, encode_chunked)
1033 del self._buffer[:]
-> 1034 self.send(msg)
1035
~\anaconda3\lib\http\client.py in send(self, data)
973 if self.auto_open:
--> 974 self.connect()
975 else:
~\anaconda3\lib\http\client.py in connect(self)
1440
-> 1441 super().connect()
1442
~\anaconda3\lib\http\client.py in connect(self)
944 """Connect to the host and port specified in __init__."""
--> 945 self.sock = self._create_connection(
946 (self.host,self.port), self.timeout, self.source_address)
~\anaconda3\lib\socket.py in create_connection(address, timeout, source_address)
843 try:
--> 844 raise err
845 finally:
~\anaconda3\lib\socket.py in create_connection(address, timeout, source_address)
831 sock.bind(source_address)
--> 832 sock.connect(sa)
833 # Break explicitly a reference cycle
TimeoutError: [WinError 10060] A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond
During handling of the above exception, another exception occurred:
URLError Traceback (most recent call last)
~\AppData\Local\Temp/ipykernel_12220/2927704185.py in <module>
1 import seaborn as sn
----> 2 iris = sn.load_dataset('iris')
~\anaconda3\lib\site-packages\seaborn\utils.py in load_dataset(name, cache, data_home, **kws)
594 if name not in get_dataset_names():
595 raise ValueError(f"'{name}' is not one of the example datasets.")
--> 596 urlretrieve(url, cache_path)
597 full_path = cache_path
598 else:
~\anaconda3\lib\urllib\request.py in urlretrieve(url, filename, reporthook, data)
237 url_type, path = _splittype(url)
238
--> 239 with contextlib.closing(urlopen(url, data)) as fp:
240 headers = fp.info()
241
~\anaconda3\lib\urllib\request.py in urlopen(url, data, timeout, cafile, capath, cadefault, context)
212 else:
213 opener = _opener
--> 214 return opener.open(url, data, timeout)
215
216 def install_opener(opener):
~\anaconda3\lib\urllib\request.py in open(self, fullurl, data, timeout)
515
516 sys.audit('urllib.Request', req.full_url, req.data, req.headers, req.get_method())
--> 517 response = self._open(req, data)
518
519 # post-process response
~\anaconda3\lib\urllib\request.py in _open(self, req, data)
532
533 protocol = req.type
--> 534 result = self._call_chain(self.handle_open, protocol, protocol +
535 '_open', req)
536 if result:
~\anaconda3\lib\urllib\request.py in _call_chain(self, chain, kind, meth_name, *args)
492 for handler in handlers:
493 func = getattr(handler, meth_name)
--> 494 result = func(*args)
495 if result is not None:
496 return result
~\anaconda3\lib\urllib\request.py in https_open(self, req)
1387
1388 def https_open(self, req):
-> 1389 return self.do_open(http.client.HTTPSConnection, req,
1390 context=self._context, check_hostname=self._check_hostname)
1391
~\anaconda3\lib\urllib\request.py in do_open(self, http_class, req, **http_conn_args)
1347 encode_chunked=req.has_header('Transfer-encoding'))
1348 except OSError as err: # timeout error
-> 1349 raise URLError(err)
1350 r = h.getresponse()
1351 except:
URLError: <urlopen error [WinError 10060] A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond>
```
 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5538/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5538/timeline | null | completed | null | null | false | 2,558 |
https://api.github.com/repos/huggingface/datasets/issues/5537 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5537/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5537/comments | https://api.github.com/repos/huggingface/datasets/issues/5537/events | https://github.com/huggingface/datasets/issues/5537 | 1,587,567,464 | I_kwDODunzps5eoFto | 5,537 | Increase speed of data files resolution | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
},
{
"color": "BDE59C",
"default": false,
"description": "Issues a bit more difficult than \"Good First\" issues",
"id": 3761482852,
"name": "good second issue",
"node_id": "LA_kwDODunzps7gM6xk",
"url": "https://api.github.com/repos/huggingface/datasets/labels/good%20second%20issue"
}
] | open | false | null | [] | null | 0 | "2023-02-16T12:11:45Z" | "2023-02-16T12:11:45Z" | null | MEMBER | null | Certain datasets like `bigcode/the-stack-dedup` have so many files that loading them takes forever right from the data files resolution step.
`datasets` uses file patterns to check the structure of the repository, but it takes too much time to iterate over all the data files again and again.
This comes from `resolve_patterns_in_dataset_repository`, which calls `_resolve_single_pattern_in_dataset_repository`, which iterates over all the files at
```python
glob_iter = [PurePath(filepath) for filepath in fs.glob(PurePath(pattern).as_posix()) if fs.isfile(filepath)]
```
but calling `glob` on such a dataset is too expensive. Indeed it calls `ls()` in `hffilesystem.py` too many times.
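For scale, a purely illustrative sketch of the alternative discussed below (matching patterns against a single cached listing instead of re-globbing the remote filesystem for every pattern) could look like this:
```python
import fnmatch
from pathlib import PurePath
from typing import List

def resolve_pattern(cached_files: List[str], pattern: str) -> List[PurePath]:
    # `cached_files` would come from a single listing of the repo (e.g. the filesystem's dir cache).
    return [PurePath(f) for f in cached_files if fnmatch.fnmatch(f, pattern)]

files = ["data/train-00000.parquet", "data/train-00001.parquet", "README.md"]
print(resolve_pattern(files, "data/*.parquet"))
```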
Maybe `glob` can be optimized further in `hffilesystem.py`, or the data files resolution can be implemented directly in the filesystem by checking its `dir_cache`? | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5537/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5537/timeline | null | null | null | null | false | null |
https://api.github.com/repos/huggingface/datasets/issues/5536 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5536/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5536/comments | https://api.github.com/repos/huggingface/datasets/issues/5536/events | https://github.com/huggingface/datasets/issues/5536 | 1,586,930,643 | I_kwDODunzps5elqPT | 5,536 | Failure to hash function when using .map() | {
"avatar_url": "https://avatars.githubusercontent.com/u/6916056?v=4",
"events_url": "https://api.github.com/users/venzen/events{/privacy}",
"followers_url": "https://api.github.com/users/venzen/followers",
"following_url": "https://api.github.com/users/venzen/following{/other_user}",
"gists_url": "https://api.github.com/users/venzen/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/venzen",
"id": 6916056,
"login": "venzen",
"node_id": "MDQ6VXNlcjY5MTYwNTY=",
"organizations_url": "https://api.github.com/users/venzen/orgs",
"received_events_url": "https://api.github.com/users/venzen/received_events",
"repos_url": "https://api.github.com/users/venzen/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/venzen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/venzen/subscriptions",
"type": "User",
"url": "https://api.github.com/users/venzen"
} | [] | closed | false | null | [] | null | 3 | "2023-02-16T03:12:07Z" | "2023-02-22T13:11:14Z" | "2023-02-16T14:56:41Z" | NONE | null | ### Describe the bug
_Parameter 'function'=<function process at 0x7f1ec4388af0> of the transform datasets.arrow_dataset.Dataset.\_map_single couldn't be hashed properly, a random hash was used instead. Make sure your transforms and parameters are serializable with pickle or dill for the dataset fingerprinting and caching to work. If you reuse this transform, the caching mechanism will consider it to be different from the previous calls and recompute everything. This warning is only showed once. Subsequent hashing failures won't be showed._
This issue with `.map()` happens for me consistently, as also described in closed issue #4506.
Dataset indices can be individually serialized using dill and pickle without any errors. I'm using tiktoken to encode in the function passed to `map()`. Similarly, indices can be individually encoded without error.
### Steps to reproduce the bug
```py
from datasets import load_dataset
import tiktoken
dataset = load_dataset("stas/openwebtext-10k")
enc = tiktoken.get_encoding("gpt2")
def process(example):
    ids = enc.encode(example['text'])
    ids.append(enc.eot_token)
    out = {'ids': ids, 'len': len(ids)}
    return out

tokenized = dataset.map(
    process,
    remove_columns=['text'],
    desc="tokenizing the OWT splits",
)
```
### Expected behavior
Should encode simple text objects.
### Environment info
Python versions tried: both 3.8 and 3.10.10
`PYTHONUTF8=1` as env variable
Datasets tried:
- stas/openwebtext-10k
- rotten_tomatoes
- local text file
OS: Ubuntu Linux 20.04
Package versions:
- torch 1.13.1
- dill 0.3.4 (if using 0.3.6 - same issue)
- datasets 2.9.0
- tiktoken 0.2.0
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5536/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5536/timeline | null | completed | null | null | false | 42,274 |
https://api.github.com/repos/huggingface/datasets/issues/5535 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5535/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5535/comments | https://api.github.com/repos/huggingface/datasets/issues/5535/events | https://github.com/huggingface/datasets/pull/5535 | 1,586,520,369 | PR_kwDODunzps5KEb5L | 5,535 | Add JAX-formatting documentation | {
"avatar_url": "https://avatars.githubusercontent.com/u/36760800?v=4",
"events_url": "https://api.github.com/users/alvarobartt/events{/privacy}",
"followers_url": "https://api.github.com/users/alvarobartt/followers",
"following_url": "https://api.github.com/users/alvarobartt/following{/other_user}",
"gists_url": "https://api.github.com/users/alvarobartt/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/alvarobartt",
"id": 36760800,
"login": "alvarobartt",
"node_id": "MDQ6VXNlcjM2NzYwODAw",
"organizations_url": "https://api.github.com/users/alvarobartt/orgs",
"received_events_url": "https://api.github.com/users/alvarobartt/received_events",
"repos_url": "https://api.github.com/users/alvarobartt/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/alvarobartt/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alvarobartt/subscriptions",
"type": "User",
"url": "https://api.github.com/users/alvarobartt"
} | [] | closed | false | null | [] | null | 9 | "2023-02-15T20:35:11Z" | "2023-02-20T10:39:42Z" | "2023-02-20T10:32:39Z" | CONTRIBUTOR | null | ## What's in this PR?
As a follow-up to #5522, I've created this entry in the documentation to explain how to use `.with_format("jax")` and why it is useful.
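For context, a minimal illustration of the formatting being documented (assuming `jax` is installed alongside `datasets`):
```python
from datasets import Dataset

ds = Dataset.from_dict({"x": [[1, 2], [3, 4]]}).with_format("jax")
print(type(ds[0]["x"]))  # a jax.Array instead of a plain Python list
```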
@lhoestq Feel free to drop any feedback and/or suggestions, as more useful features can probably be included there! | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 1,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5535/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5535/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/5535.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5535",
"merged_at": "2023-02-20T10:32:39Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5535.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5535"
} | true | 395,848 |
https://api.github.com/repos/huggingface/datasets/issues/5534 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5534/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5534/comments | https://api.github.com/repos/huggingface/datasets/issues/5534/events | https://github.com/huggingface/datasets/issues/5534 | 1,586,177,862 | I_kwDODunzps5eiydG | 5,534 | map() breaks at certain dataset size when using Array3D | {
"avatar_url": "https://avatars.githubusercontent.com/u/3375489?v=4",
"events_url": "https://api.github.com/users/ArneBinder/events{/privacy}",
"followers_url": "https://api.github.com/users/ArneBinder/followers",
"following_url": "https://api.github.com/users/ArneBinder/following{/other_user}",
"gists_url": "https://api.github.com/users/ArneBinder/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ArneBinder",
"id": 3375489,
"login": "ArneBinder",
"node_id": "MDQ6VXNlcjMzNzU0ODk=",
"organizations_url": "https://api.github.com/users/ArneBinder/orgs",
"received_events_url": "https://api.github.com/users/ArneBinder/received_events",
"repos_url": "https://api.github.com/users/ArneBinder/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ArneBinder/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArneBinder/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ArneBinder"
} | [] | open | false | null | [] | null | 0 | "2023-02-15T16:34:25Z" | "2023-02-15T17:12:02Z" | null | NONE | null | ### Describe the bug
`map()` breaks when using an `Array3D` feature and mapping over it. I created a very simple dummy dataset (see below). When filtering it down to 95 elements I can apply `map`, but it breaks when filtering it down to just 96 entries, with the following exception:
```
Traceback (most recent call last):
File "/home/arbi01/miniconda3/envs/tmp9/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 3255, in _map_single
writer.finalize() # close_stream=bool(buf_writer is None)) # We only close if we are writing in a file
File "/home/arbi01/miniconda3/envs/tmp9/lib/python3.9/site-packages/datasets/arrow_writer.py", line 581, in finalize
self.write_examples_on_file()
File "/home/arbi01/miniconda3/envs/tmp9/lib/python3.9/site-packages/datasets/arrow_writer.py", line 440, in write_examples_on_file
batch_examples[col] = array_concat(arrays)
File "/home/arbi01/miniconda3/envs/tmp9/lib/python3.9/site-packages/datasets/table.py", line 1931, in array_concat
return _concat_arrays(arrays)
File "/home/arbi01/miniconda3/envs/tmp9/lib/python3.9/site-packages/datasets/table.py", line 1901, in _concat_arrays
return array_type.wrap_array(_concat_arrays([array.storage for array in arrays]))
File "/home/arbi01/miniconda3/envs/tmp9/lib/python3.9/site-packages/datasets/table.py", line 1922, in _concat_arrays
_concat_arrays([array.values for array in arrays]),
File "/home/arbi01/miniconda3/envs/tmp9/lib/python3.9/site-packages/datasets/table.py", line 1922, in _concat_arrays
_concat_arrays([array.values for array in arrays]),
File "/home/arbi01/miniconda3/envs/tmp9/lib/python3.9/site-packages/datasets/table.py", line 1920, in _concat_arrays
return pa.ListArray.from_arrays(
File "pyarrow/array.pxi", line 1997, in pyarrow.lib.ListArray.from_arrays
File "pyarrow/array.pxi", line 1527, in pyarrow.lib.Array.validate
File "pyarrow/error.pxi", line 100, in pyarrow.lib.check_status
pyarrow.lib.ArrowInvalid: Negative offsets in list array
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/arbi01/miniconda3/envs/tmp9/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 2815, in map
return self._map_single(
File "/home/arbi01/miniconda3/envs/tmp9/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 546, in wrapper
out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
File "/home/arbi01/miniconda3/envs/tmp9/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 513, in wrapper
out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
File "/home/arbi01/miniconda3/envs/tmp9/lib/python3.9/site-packages/datasets/fingerprint.py", line 480, in wrapper
out = func(self, *args, **kwargs)
File "/home/arbi01/miniconda3/envs/tmp9/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 3259, in _map_single
writer.finalize()
File "/home/arbi01/miniconda3/envs/tmp9/lib/python3.9/site-packages/datasets/arrow_writer.py", line 581, in finalize
self.write_examples_on_file()
File "/home/arbi01/miniconda3/envs/tmp9/lib/python3.9/site-packages/datasets/arrow_writer.py", line 440, in write_examples_on_file
batch_examples[col] = array_concat(arrays)
File "/home/arbi01/miniconda3/envs/tmp9/lib/python3.9/site-packages/datasets/table.py", line 1931, in array_concat
return _concat_arrays(arrays)
File "/home/arbi01/miniconda3/envs/tmp9/lib/python3.9/site-packages/datasets/table.py", line 1901, in _concat_arrays
return array_type.wrap_array(_concat_arrays([array.storage for array in arrays]))
File "/home/arbi01/miniconda3/envs/tmp9/lib/python3.9/site-packages/datasets/table.py", line 1922, in _concat_arrays
_concat_arrays([array.values for array in arrays]),
File "/home/arbi01/miniconda3/envs/tmp9/lib/python3.9/site-packages/datasets/table.py", line 1922, in _concat_arrays
_concat_arrays([array.values for array in arrays]),
File "/home/arbi01/miniconda3/envs/tmp9/lib/python3.9/site-packages/datasets/table.py", line 1920, in _concat_arrays
return pa.ListArray.from_arrays(
File "pyarrow/array.pxi", line 1997, in pyarrow.lib.ListArray.from_arrays
File "pyarrow/array.pxi", line 1527, in pyarrow.lib.Array.validate
File "pyarrow/error.pxi", line 100, in pyarrow.lib.check_status
pyarrow.lib.ArrowInvalid: Negative offsets in list array
```
### Steps to reproduce the bug
1. Put the following dataset loading script into `debug/debug.py`:
```python
import datasets
import numpy as np
class DEBUG(datasets.GeneratorBasedBuilder):
"""DEBUG dataset."""
def _info(self):
return datasets.DatasetInfo(
features=datasets.Features(
{
"id": datasets.Value("uint8"),
"img_data": datasets.Array3D(shape=(3, 224, 224), dtype="uint8"),
},
),
supervised_keys=None,
)
def _split_generators(self, dl_manager):
return [datasets.SplitGenerator(name=datasets.Split.TRAIN)]
def _generate_examples(self):
for i in range(149):
image_np = np.zeros(shape=(3, 224, 224), dtype=np.int8).tolist()
yield f"id_{i}", {"id": i, "img_data": image_np}
```
2. Try the following code:
```python
import datasets
def add_dummy_col(ex):
ex["dummy"] = "test"
return ex
ds = datasets.load_dataset(path="debug", split="train")
# works
ds_filtered_works = ds.filter(lambda example: example["id"] < 95)
print(f"filtered result size: {len(ds_filtered_works)}")
# output:
# filtered result size: 95
ds_mapped_works = ds_filtered_works.map(add_dummy_col)
# fails
ds_filtered_error = ds.filter(lambda example: example["id"] < 96)
print(f"filtered result size: {len(ds_filtered_error)}")
# output:
# filtered result size: 96
ds_mapped_error = ds_filtered_error.map(add_dummy_col)
```
### Expected behavior
The example code does not fail.
### Environment info
Python 3.9.16 (main, Jan 11 2023, 16:05:54); [GCC 11.2.0] :: Anaconda, Inc. on linux
datasets 2.9.0 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5534/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5534/timeline | null | null | null | null | false | null |
https://api.github.com/repos/huggingface/datasets/issues/5533 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5533/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5533/comments | https://api.github.com/repos/huggingface/datasets/issues/5533/events | https://github.com/huggingface/datasets/pull/5533 | 1,585,885,871 | PR_kwDODunzps5KCR5I | 5,533 | Add reduce function | {
"avatar_url": "https://avatars.githubusercontent.com/u/38854604?v=4",
"events_url": "https://api.github.com/users/AJDERS/events{/privacy}",
"followers_url": "https://api.github.com/users/AJDERS/followers",
"following_url": "https://api.github.com/users/AJDERS/following{/other_user}",
"gists_url": "https://api.github.com/users/AJDERS/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/AJDERS",
"id": 38854604,
"login": "AJDERS",
"node_id": "MDQ6VXNlcjM4ODU0NjA0",
"organizations_url": "https://api.github.com/users/AJDERS/orgs",
"received_events_url": "https://api.github.com/users/AJDERS/received_events",
"repos_url": "https://api.github.com/users/AJDERS/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/AJDERS/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/AJDERS/subscriptions",
"type": "User",
"url": "https://api.github.com/users/AJDERS"
} | [] | open | false | null | [] | null | 15 | "2023-02-15T13:44:01Z" | "2023-02-22T19:05:00Z" | null | NONE | null | This PR closes #5496 .
I tried to imitate the `reduce` method from `functools`, i.e. the function input must be a binary operation. I assume that the input type has an empty element, i.e. `input_type()` is defined, as the accumulant is instantiated as this object; I'm not sure whether this is a reasonable assumption.
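For reference, this is the `functools.reduce` behaviour being imitated (plain Python, not the API added in this PR):
```python
from functools import reduce

lengths = [3, 1, 4, 1, 5]
# Binary operation folding the accumulant with each value; int() == 0 is the empty starting element.
total = reduce(lambda accumulant, value: accumulant + value, lengths, int())
print(total)  # 14
```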
If `batched=True`, the reduction of each shard is _not_ returned, but rather the reduction of the entire dataset. I was unsure whether this is an intuitive API, or whether it would make more sense to return the reduction of each shard. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5533/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5533/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/5533.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5533",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/5533.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5533"
} | true | null |
https://api.github.com/repos/huggingface/datasets/issues/5532 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5532/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5532/comments | https://api.github.com/repos/huggingface/datasets/issues/5532/events | https://github.com/huggingface/datasets/issues/5532 | 1,584,505,128 | I_kwDODunzps5ecaEo | 5,532 | train_test_split in arrow_dataset does not ensure to keep single classes in test set | {
"avatar_url": "https://avatars.githubusercontent.com/u/37191008?v=4",
"events_url": "https://api.github.com/users/Ulipenitz/events{/privacy}",
"followers_url": "https://api.github.com/users/Ulipenitz/followers",
"following_url": "https://api.github.com/users/Ulipenitz/following{/other_user}",
"gists_url": "https://api.github.com/users/Ulipenitz/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Ulipenitz",
"id": 37191008,
"login": "Ulipenitz",
"node_id": "MDQ6VXNlcjM3MTkxMDA4",
"organizations_url": "https://api.github.com/users/Ulipenitz/orgs",
"received_events_url": "https://api.github.com/users/Ulipenitz/received_events",
"repos_url": "https://api.github.com/users/Ulipenitz/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Ulipenitz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Ulipenitz/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Ulipenitz"
} | [] | closed | false | null | [] | null | 1 | "2023-02-14T16:52:29Z" | "2023-02-15T16:09:19Z" | "2023-02-15T16:09:19Z" | NONE | null | ### Describe the bug
When I have a dataset with very few (e.g. 1) examples per class and I call the train_test_split function on it, sometimes the single class will end up in the test set and thus will never be considered for training.
### Steps to reproduce the bug
```
import numpy as np
from datasets import Dataset
data = [
{'label': 0, 'text': "example1"},
{'label': 1, 'text': "example2"},
{'label': 1, 'text': "example3"},
{'label': 1, 'text': "example4"},
{'label': 0, 'text': "example5"},
{'label': 1, 'text': "example6"},
{'label': 2, 'text': "example7"},
{'label': 2, 'text': "example8"}
]
for _ in range(10):
data_set = Dataset.from_list(data)
data_set = data_set.train_test_split(test_size=0.5)
data_set["train"]
unique_labels_train = np.unique(data_set["train"][:]["label"])
unique_labels_test = np.unique(data_set["test"][:]["label"])
assert len(unique_labels_train) >= len(unique_labels_test)
```
### Expected behavior
I expect to have every available class at least once in my training set.
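As a hedged aside, recent `datasets` versions expose a `stratify_by_column` argument on `train_test_split` (it requires the column to be a `ClassLabel` feature and at least two examples per class); whether it covers the single-example case above is untested here:
```python
from datasets import ClassLabel, Dataset

data = [{"label": i % 3, "text": f"example{i}"} for i in range(9)]  # three examples per class
data_set = Dataset.from_list(data).cast_column("label", ClassLabel(num_classes=3))
splits = data_set.train_test_split(test_size=0.5, stratify_by_column="label")
print(sorted(set(splits["train"]["label"])))  # every class appears in the training split
```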
### Environment info
- `datasets` version: 2.9.0
- Platform: Linux-5.15.65+-x86_64-with-debian-bullseye-sid
- Python version: 3.7.12
- PyArrow version: 11.0.0
- Pandas version: 1.3.5
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5532/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5532/timeline | null | completed | null | null | false | 83,810 |
https://api.github.com/repos/huggingface/datasets/issues/5531 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5531/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5531/comments | https://api.github.com/repos/huggingface/datasets/issues/5531/events | https://github.com/huggingface/datasets/issues/5531 | 1,584,387,276 | I_kwDODunzps5eb9TM | 5,531 | Invalid Arrow data from JSONL | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | open | false | null | [] | null | 0 | "2023-02-14T15:39:49Z" | "2023-02-14T15:46:09Z" | null | MEMBER | null | This code fails:
```python
from datasets import Dataset
ds = Dataset.from_json(path_to_file)
ds.data.validate()
```
raises
```python
ArrowInvalid: Column 2: In chunk 1: Invalid: Struct child array #3 invalid: Invalid: Length spanned by list offsets (4064) larger than values array (length 4063)
```
This causes many issues for @TevenLeScao:
- `map` fails because it fails to rewrite invalid arrow arrays
```python
~/Desktop/hf/datasets/src/datasets/arrow_writer.py in write_examples_on_file(self)
438 if all(isinstance(row[0][col], (pa.Array, pa.ChunkedArray)) for row in self.current_examples):
439 arrays = [row[0][col] for row in self.current_examples]
--> 440 batch_examples[col] = array_concat(arrays)
441 else:
442 batch_examples[col] = [
~/Desktop/hf/datasets/src/datasets/table.py in array_concat(arrays)
1885
1886 if not _is_extension_type(array_type):
-> 1887 return pa.concat_arrays(arrays)
1888
1889 def _offsets_concat(offsets):
~/.virtualenvs/hf-datasets/lib/python3.7/site-packages/pyarrow/array.pxi in pyarrow.lib.concat_arrays()
~/.virtualenvs/hf-datasets/lib/python3.7/site-packages/pyarrow/error.pxi in pyarrow.lib.pyarrow_internal_check_status()
~/.virtualenvs/hf-datasets/lib/python3.7/site-packages/pyarrow/error.pxi in pyarrow.lib.check_status()
ArrowIndexError: array slice would exceed array length
```
- `to_dict()` **segfaults** ⚠️
```python
/Users/runner/work/crossbow/crossbow/arrow/cpp/src/arrow/array/data.cc:99: Check failed: (off) <= (length) Slice offset greater
than array length
```
To reproduce: unzip the archive and run the above code using `sanity_oscar_en.jsonl`
[sanity_oscar_en.jsonl.zip](https://github.com/huggingface/datasets/files/10734124/sanity_oscar_en.jsonl.zip)
PS: reading using pandas and converting to Arrow works though (note that the dataset lives in RAM in this case):
```python
ds = Dataset.from_pandas(pd.read_json(path_to_file, lines=True))
ds.data.validate()
```
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 1,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5531/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5531/timeline | null | null | null | null | false | null |
https://api.github.com/repos/huggingface/datasets/issues/5530 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5530/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5530/comments | https://api.github.com/repos/huggingface/datasets/issues/5530/events | https://github.com/huggingface/datasets/pull/5530 | 1,582,938,241 | PR_kwDODunzps5J4W_4 | 5,530 | Add missing license in `NumpyFormatter` | {
"avatar_url": "https://avatars.githubusercontent.com/u/36760800?v=4",
"events_url": "https://api.github.com/users/alvarobartt/events{/privacy}",
"followers_url": "https://api.github.com/users/alvarobartt/followers",
"following_url": "https://api.github.com/users/alvarobartt/following{/other_user}",
"gists_url": "https://api.github.com/users/alvarobartt/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/alvarobartt",
"id": 36760800,
"login": "alvarobartt",
"node_id": "MDQ6VXNlcjM2NzYwODAw",
"organizations_url": "https://api.github.com/users/alvarobartt/orgs",
"received_events_url": "https://api.github.com/users/alvarobartt/received_events",
"repos_url": "https://api.github.com/users/alvarobartt/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/alvarobartt/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alvarobartt/subscriptions",
"type": "User",
"url": "https://api.github.com/users/alvarobartt"
} | [] | closed | false | null | [] | null | 2 | "2023-02-13T19:33:23Z" | "2023-02-14T14:40:41Z" | "2023-02-14T12:23:58Z" | CONTRIBUTOR | null | ## What's in this PR?
As discussed with @lhoestq in https://github.com/huggingface/datasets/pull/5522, the license for `NumpyFormatter` at `datasets/formatting/np_formatter.py` was missing, but present on the rest of the `formatting/*.py` files. So this PR is basically to include it there. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5530/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5530/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/5530.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5530",
"merged_at": "2023-02-14T12:23:58Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5530.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5530"
} | true | 60,635 |
https://api.github.com/repos/huggingface/datasets/issues/5529 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5529/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5529/comments | https://api.github.com/repos/huggingface/datasets/issues/5529/events | https://github.com/huggingface/datasets/pull/5529 | 1,582,501,233 | PR_kwDODunzps5J26Fq | 5,529 | Fix `datasets.load_from_disk`, `DatasetDict.load_from_disk` and `Dataset.load_from_disk` | {
"avatar_url": "https://avatars.githubusercontent.com/u/36760800?v=4",
"events_url": "https://api.github.com/users/alvarobartt/events{/privacy}",
"followers_url": "https://api.github.com/users/alvarobartt/followers",
"following_url": "https://api.github.com/users/alvarobartt/following{/other_user}",
"gists_url": "https://api.github.com/users/alvarobartt/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/alvarobartt",
"id": 36760800,
"login": "alvarobartt",
"node_id": "MDQ6VXNlcjM2NzYwODAw",
"organizations_url": "https://api.github.com/users/alvarobartt/orgs",
"received_events_url": "https://api.github.com/users/alvarobartt/received_events",
"repos_url": "https://api.github.com/users/alvarobartt/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/alvarobartt/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alvarobartt/subscriptions",
"type": "User",
"url": "https://api.github.com/users/alvarobartt"
} | [] | open | false | null | [] | null | 10 | "2023-02-13T14:54:55Z" | "2023-02-23T11:14:39Z" | null | CONTRIBUTOR | null | ## What's in this PR?
After playing around a little bit with 🤗`datasets` in Google Cloud Storage (GCS), I found out some things that should be fixed IMO in the code:
* `datasets.load_from_disk` does not check whether `state.json` is there too when trying to load a `Dataset`; only `dataset_info.json` is checked
* `DatasetDict.load_from_disk` does not check whether `state.json` is there too when redirecting the user to load it with `datasets.load_from_disk`; only `dataset_info.json` is checked, which is misleading, as it won't be loadable that way either
* `Dataset.load_from_disk` is missing the `extract_path_from_uri` call before checking in the `fs` whether `dataset_info.json` and `dataset_dict.json` exist, which when using `gcsfs` leads to 400 error code (not blocking) due to `gcsfs.retry.HttpError: Invalid bucket name: 'gs:', 400`
* And, finally, the exception messages are a little bit misleading / incomplete IMO so I've tried to include all the relevant information in the messages to avoid issues when interpreting the exceptions | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5529/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5529/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/5529.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5529",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/5529.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5529"
} | true | null |
https://api.github.com/repos/huggingface/datasets/issues/5528 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5528/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5528/comments | https://api.github.com/repos/huggingface/datasets/issues/5528/events | https://github.com/huggingface/datasets/pull/5528 | 1,582,195,085 | PR_kwDODunzps5J13wC | 5,528 | Push to hub in a pull request | {
"avatar_url": "https://avatars.githubusercontent.com/u/38854604?v=4",
"events_url": "https://api.github.com/users/AJDERS/events{/privacy}",
"followers_url": "https://api.github.com/users/AJDERS/followers",
"following_url": "https://api.github.com/users/AJDERS/following{/other_user}",
"gists_url": "https://api.github.com/users/AJDERS/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/AJDERS",
"id": 38854604,
"login": "AJDERS",
"node_id": "MDQ6VXNlcjM4ODU0NjA0",
"organizations_url": "https://api.github.com/users/AJDERS/orgs",
"received_events_url": "https://api.github.com/users/AJDERS/received_events",
"repos_url": "https://api.github.com/users/AJDERS/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/AJDERS/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/AJDERS/subscriptions",
"type": "User",
"url": "https://api.github.com/users/AJDERS"
} | [] | open | false | null | [] | null | 9 | "2023-02-13T11:43:47Z" | "2023-02-21T21:13:28Z" | null | NONE | null | Fixes #5492.
Introduce a new kwarg `create_pr` in `push_to_hub`, which is passed to `HfApi.upload_file`. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5528/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5528/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/5528.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5528",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/5528.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5528"
} | true | null |
https://api.github.com/repos/huggingface/datasets/issues/5527 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5527/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5527/comments | https://api.github.com/repos/huggingface/datasets/issues/5527/events | https://github.com/huggingface/datasets/pull/5527 | 1,581,228,531 | PR_kwDODunzps5JysSM | 5,527 | Fix benchmarks CI - pin protobuf | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | 5 | "2023-02-12T11:51:25Z" | "2023-02-13T10:29:03Z" | "2023-02-13T09:24:16Z" | MEMBER | null | fix https://github.com/huggingface/datasets/actions/runs/4156059127/jobs/7189576331 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5527/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5527/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/5527.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5527",
"merged_at": "2023-02-13T09:24:16Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5527.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5527"
} | true | 77,571 |
https://api.github.com/repos/huggingface/datasets/issues/5526 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5526/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5526/comments | https://api.github.com/repos/huggingface/datasets/issues/5526/events | https://github.com/huggingface/datasets/pull/5526 | 1,580,488,133 | PR_kwDODunzps5JwVol | 5,526 | Allow loading/saving of FAISS index using fsspec | {
"avatar_url": "https://avatars.githubusercontent.com/u/8976546?v=4",
"events_url": "https://api.github.com/users/Dref360/events{/privacy}",
"followers_url": "https://api.github.com/users/Dref360/followers",
"following_url": "https://api.github.com/users/Dref360/following{/other_user}",
"gists_url": "https://api.github.com/users/Dref360/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Dref360",
"id": 8976546,
"login": "Dref360",
"node_id": "MDQ6VXNlcjg5NzY1NDY=",
"organizations_url": "https://api.github.com/users/Dref360/orgs",
"received_events_url": "https://api.github.com/users/Dref360/received_events",
"repos_url": "https://api.github.com/users/Dref360/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Dref360/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Dref360/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Dref360"
} | [] | open | false | null | [] | null | 3 | "2023-02-10T23:37:14Z" | "2023-02-22T16:37:18Z" | null | CONTRIBUTOR | null | Fixes #5428
Allow loading/saving of FAISS index using fsspec:
1. Simply use `BufferedIOWriter`/`BufferedIOReader` to read/write indices on an fsspec stream.
2. Needed `mockfs` in the test, so I took it out of the `TestCase`. Let me know if that makes sense.
I can work on the documentation once the code changes are approved.
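To make the intended usage concrete, here is a minimal sketch of the underlying pattern: saving and loading a FAISS index through an fsspec filesystem by wrapping the stream in FAISS's buffered IO classes. The method names/signatures that `datasets` ends up exposing in this PR may differ; treat this purely as an illustration.
```python
import faiss
import fsspec
import numpy as np

# Build a small index over random vectors (toy setup, not taken from the PR)
index = faiss.IndexFlatL2(8)
index.add(np.random.rand(100, 8).astype(np.float32))

fs = fsspec.filesystem("file")  # could equally be "s3", "gcs", ...

# Save: wrap the fsspec stream's write() in a buffered FAISS writer
with fs.open("/tmp/my_index.faiss", "wb") as f:
    writer = faiss.BufferedIOWriter(faiss.PyCallbackIOWriter(f.write))
    faiss.write_index(index, writer)
    del writer  # flush the buffer before the stream closes

# Load: same idea with a buffered reader over the stream's read()
with fs.open("/tmp/my_index.faiss", "rb") as f:
    reader = faiss.BufferedIOReader(faiss.PyCallbackIOReader(f.read))
    index = faiss.read_index(reader)
```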
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5526/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5526/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/5526.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5526",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/5526.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5526"
} | true | null |
https://api.github.com/repos/huggingface/datasets/issues/5525 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5525/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5525/comments | https://api.github.com/repos/huggingface/datasets/issues/5525/events | https://github.com/huggingface/datasets/issues/5525 | 1,580,342,729 | I_kwDODunzps5eMh3J | 5,525 | TypeError: Couldn't cast array of type string to null | {
"avatar_url": "https://avatars.githubusercontent.com/u/74564958?v=4",
"events_url": "https://api.github.com/users/TJ-Solergibert/events{/privacy}",
"followers_url": "https://api.github.com/users/TJ-Solergibert/followers",
"following_url": "https://api.github.com/users/TJ-Solergibert/following{/other_user}",
"gists_url": "https://api.github.com/users/TJ-Solergibert/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/TJ-Solergibert",
"id": 74564958,
"login": "TJ-Solergibert",
"node_id": "MDQ6VXNlcjc0NTY0OTU4",
"organizations_url": "https://api.github.com/users/TJ-Solergibert/orgs",
"received_events_url": "https://api.github.com/users/TJ-Solergibert/received_events",
"repos_url": "https://api.github.com/users/TJ-Solergibert/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/TJ-Solergibert/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/TJ-Solergibert/subscriptions",
"type": "User",
"url": "https://api.github.com/users/TJ-Solergibert"
} | [] | closed | false | null | [] | null | 6 | "2023-02-10T21:12:36Z" | "2023-02-14T17:41:08Z" | "2023-02-14T09:35:49Z" | NONE | null | ### Describe the bug
Processing a dataset I alredy uploaded to the Hub (https://huggingface.co/datasets/tj-solergibert/Europarl-ST) I found that for some splits and some languages (test split, source_lang = "nl") after applying a map function I get the mentioned error.
I alredy tried reseting the shorter strings (reset_cortas function). It only happends with NL, PL, RO and PT. It does not make sense since when processing the other languages I also use the corpus of those that fail and it does not cause any errors.
I suspect that the error may be in this direction:
We use cast_array_to_feature to support casting to custom types like Audio and Image # Also, when trying type "string", we don't want to convert integers or floats to "string". # We only do it if trying_type is False - since this is what the user asks for.
### Steps to reproduce the bug
Here I link a colab notebook to reproduce the error:
https://colab.research.google.com/drive/1JCrS7FlGfu_kFqChMrwKZ_bpabnIMqbP?authuser=1#scrollTo=FBAvlhMxIzpA
### Expected behavior
Data processing does not fail. A correct example can be seen here: https://huggingface.co/datasets/tj-solergibert/Europarl-ST-processed-mt-en
### Environment info
- `datasets` version: 2.9.0
- Platform: Linux-5.10.147+-x86_64-with-glibc2.29
- Python version: 3.8.10
- PyArrow version: 9.0.0
- Pandas version: 1.3.5 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5525/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5525/timeline | null | completed | null | null | false | 303,793 |
https://api.github.com/repos/huggingface/datasets/issues/5524 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5524/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5524/comments | https://api.github.com/repos/huggingface/datasets/issues/5524/events | https://github.com/huggingface/datasets/pull/5524 | 1,580,219,454 | PR_kwDODunzps5JvbMw | 5,524 | [INVALID PR] | {
"avatar_url": "https://avatars.githubusercontent.com/u/36760800?v=4",
"events_url": "https://api.github.com/users/alvarobartt/events{/privacy}",
"followers_url": "https://api.github.com/users/alvarobartt/followers",
"following_url": "https://api.github.com/users/alvarobartt/following{/other_user}",
"gists_url": "https://api.github.com/users/alvarobartt/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/alvarobartt",
"id": 36760800,
"login": "alvarobartt",
"node_id": "MDQ6VXNlcjM2NzYwODAw",
"organizations_url": "https://api.github.com/users/alvarobartt/orgs",
"received_events_url": "https://api.github.com/users/alvarobartt/received_events",
"repos_url": "https://api.github.com/users/alvarobartt/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/alvarobartt/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alvarobartt/subscriptions",
"type": "User",
"url": "https://api.github.com/users/alvarobartt"
} | [] | closed | false | null | [] | null | 1 | "2023-02-10T19:35:50Z" | "2023-02-10T19:51:45Z" | "2023-02-10T19:49:12Z" | CONTRIBUTOR | null | Hi to whoever is reading this! 🤗
## What's in this PR?
~~Basically, I've removed the 🤗`datasets` installation via `python -m pip install ".[quality]"` in the `check_code_quality` job in `.github/workflows/ci.yaml`, as we don't need to install the whole package to run the CI, unless that's done on purpose e.g. to check that the Python package installation succeeds before running the tests over the OS matrix?~~
~~So I just wanted to check whether the time was reduced doing this (which I assume it will), plus whether this is something that can be improved, or just discarded in case you're also using that step to make sure that the package can be installed.~~
## What's missing?
~~I was just wondering whether you consider replacing `isort` and `flake8` with `ruff` (if possible), since it's way faster, more information at [`ruff`](https://github.com/charliermarsh/ruff). Before creating this PR the average time of the `check_code_quality` job was around 40s.~~
## Edit
Sorry for the inconvenience this may have caused, didn't realise that the config is defined in `setup.cfg` and `pyproject.toml`, so running those without installing the Python package leads to failure, my bad 😞
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5524/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5524/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/5524.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5524",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/5524.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5524"
} | true | 802 |
https://api.github.com/repos/huggingface/datasets/issues/5523 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5523/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5523/comments | https://api.github.com/repos/huggingface/datasets/issues/5523/events | https://github.com/huggingface/datasets/issues/5523 | 1,580,193,015 | I_kwDODunzps5eL9T3 | 5,523 | Checking that split name is correct happens only after the data is downloaded | {
"avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4",
"events_url": "https://api.github.com/users/polinaeterna/events{/privacy}",
"followers_url": "https://api.github.com/users/polinaeterna/followers",
"following_url": "https://api.github.com/users/polinaeterna/following{/other_user}",
"gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/polinaeterna",
"id": 16348744,
"login": "polinaeterna",
"node_id": "MDQ6VXNlcjE2MzQ4NzQ0",
"organizations_url": "https://api.github.com/users/polinaeterna/orgs",
"received_events_url": "https://api.github.com/users/polinaeterna/received_events",
"repos_url": "https://api.github.com/users/polinaeterna/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions",
"type": "User",
"url": "https://api.github.com/users/polinaeterna"
} | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | open | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4",
"events_url": "https://api.github.com/users/polinaeterna/events{/privacy}",
"followers_url": "https://api.github.com/users/polinaeterna/followers",
"following_url": "https://api.github.com/users/polinaeterna/following{/other_user}",
"gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/polinaeterna",
"id": 16348744,
"login": "polinaeterna",
"node_id": "MDQ6VXNlcjE2MzQ4NzQ0",
"organizations_url": "https://api.github.com/users/polinaeterna/orgs",
"received_events_url": "https://api.github.com/users/polinaeterna/received_events",
"repos_url": "https://api.github.com/users/polinaeterna/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions",
"type": "User",
"url": "https://api.github.com/users/polinaeterna"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4",
"events_url": "https://api.github.com/users/polinaeterna/events{/privacy}",
"followers_url": "https://api.github.com/users/polinaeterna/followers",
"following_url": "https://api.github.com/users/polinaeterna/following{/other_user}",
"gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/polinaeterna",
"id": 16348744,
"login": "polinaeterna",
"node_id": "MDQ6VXNlcjE2MzQ4NzQ0",
"organizations_url": "https://api.github.com/users/polinaeterna/orgs",
"received_events_url": "https://api.github.com/users/polinaeterna/received_events",
"repos_url": "https://api.github.com/users/polinaeterna/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions",
"type": "User",
"url": "https://api.github.com/users/polinaeterna"
}
] | null | 0 | "2023-02-10T19:13:03Z" | "2023-02-10T19:14:50Z" | null | CONTRIBUTOR | null | ### Describe the bug
Verification of split names (i.e. indexing data by split) happens only after downloading the data. So when the split name is incorrect, users learn about it only after the data is fully downloaded; for large datasets that can take a lot of time.
### Steps to reproduce the bug
Load any dataset with random split name, for example:
```python
from datasets import load_dataset
load_dataset("mozilla-foundation/common_voice_11_0", "en", split="blabla")
```
and the download will start smoothly, even though there is no split named "blabla".
### Expected behavior
Raise an error when the split name is incorrect.
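One possible direction, sketched below, is to check the requested split against the builder metadata before any files are downloaded. This assumes the split names are already available in `builder.info.splits` at that point, which is an assumption for illustration only:
```python
from datasets import load_dataset_builder

builder = load_dataset_builder("mozilla-foundation/common_voice_11_0", "en")
requested_split = "blabla"
available_splits = set(builder.info.splits or {})
if available_splits and requested_split not in available_splits:
    raise ValueError(
        f"Unknown split '{requested_split}'. Available splits: {sorted(available_splits)}"
    )
```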
### Environment info
`datasets==2.9.1.dev0` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5523/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5523/timeline | null | null | null | null | false | null |
https://api.github.com/repos/huggingface/datasets/issues/5522 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5522/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5522/comments | https://api.github.com/repos/huggingface/datasets/issues/5522/events | https://github.com/huggingface/datasets/pull/5522 | 1,580,183,124 | PR_kwDODunzps5JvTVp | 5,522 | Minor changes in JAX-formatting docstrings & type-hints | {
"avatar_url": "https://avatars.githubusercontent.com/u/36760800?v=4",
"events_url": "https://api.github.com/users/alvarobartt/events{/privacy}",
"followers_url": "https://api.github.com/users/alvarobartt/followers",
"following_url": "https://api.github.com/users/alvarobartt/following{/other_user}",
"gists_url": "https://api.github.com/users/alvarobartt/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/alvarobartt",
"id": 36760800,
"login": "alvarobartt",
"node_id": "MDQ6VXNlcjM2NzYwODAw",
"organizations_url": "https://api.github.com/users/alvarobartt/orgs",
"received_events_url": "https://api.github.com/users/alvarobartt/received_events",
"repos_url": "https://api.github.com/users/alvarobartt/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/alvarobartt/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alvarobartt/subscriptions",
"type": "User",
"url": "https://api.github.com/users/alvarobartt"
} | [] | closed | false | null | [] | null | 16 | "2023-02-10T19:05:00Z" | "2023-02-15T14:48:27Z" | "2023-02-15T13:19:06Z" | CONTRIBUTOR | null | Hi to whoever is reading this! 🤗
## What's in this PR?
I was exploring the code regarding the `JaxFormatter` implemented in 🤗`datasets`, and found some things that IMO could be changed. Those mainly concern the docstrings and the type-hints, based on `jax`'s 0.4.1 release where `jax.Array` was introduced as the default type for JAX arrays (instead of `jnp.DeviceArray`, `jnp.SharedDeviceArray`, and `jnp.GlobalDeviceArray`). Note that `isinstance(..., jax.Array)` also works with lower versions such as `0.3.25`.
More information about the latter at [`jax` v0.4.1 - Release Notes](https://github.com/google/jax/releases/tag/jax-v0.4.1) and [jax.Array migration - JAX documentation](https://jax.readthedocs.io/en/latest/jax_array_migration.html).
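As a quick illustration of the type-hint point (a sketch based on the claim above, not code from this PR):
```python
import jax
import jax.numpy as jnp

x = jnp.ones((2, 2))
# `jax.Array` is the unified array type from 0.4.1 on; per the note above,
# this isinstance check is also usable on somewhat older versions like 0.3.25.
assert isinstance(x, jax.Array)
```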
## What's missing?
* Do you want me to write an entry in the documentation on how to use 🤗`datasets` with JAX as https://huggingface.co/docs/datasets/use_with_pytorch with PyTorch?
* Do we need to actually include `pyarrow` under the `TYPE_CHECKING` when needed? I just did it for JAX, but if we are OK with that, I can do that with the rest of the formatters, just LMK.
* Should the License header be included in `datasets.formatting.np_formatter`? If so, do I include the one from 2020 e.g. https://github.com/huggingface/datasets/blob/b065547654efa0ec633cf373ac1512884c68b2e1/src/datasets/formatting/tf_formatter.py#L1-L13
* Is there any reason why `jnp.array` is being used instead of `jnp.asarray`? There's no difference between both, just that `jnp.asarray` has `copy=False` as default, even though `numpy` to `jax.numpy` conversion is not zero-copy, but just asking :) | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5522/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5522/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/5522.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5522",
"merged_at": "2023-02-15T13:19:06Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5522.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5522"
} | true | 411,246 |
https://api.github.com/repos/huggingface/datasets/issues/5521 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5521/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5521/comments | https://api.github.com/repos/huggingface/datasets/issues/5521/events | https://github.com/huggingface/datasets/pull/5521 | 1,578,418,289 | PR_kwDODunzps5JpWnp | 5,521 | Fix bug when casting empty array to class labels | {
"avatar_url": "https://avatars.githubusercontent.com/u/6591505?v=4",
"events_url": "https://api.github.com/users/marioga/events{/privacy}",
"followers_url": "https://api.github.com/users/marioga/followers",
"following_url": "https://api.github.com/users/marioga/following{/other_user}",
"gists_url": "https://api.github.com/users/marioga/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/marioga",
"id": 6591505,
"login": "marioga",
"node_id": "MDQ6VXNlcjY1OTE1MDU=",
"organizations_url": "https://api.github.com/users/marioga/orgs",
"received_events_url": "https://api.github.com/users/marioga/received_events",
"repos_url": "https://api.github.com/users/marioga/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/marioga/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/marioga/subscriptions",
"type": "User",
"url": "https://api.github.com/users/marioga"
} | [] | closed | false | null | [] | null | 1 | "2023-02-09T18:47:59Z" | "2023-02-13T20:40:48Z" | "2023-02-12T11:17:17Z" | CONTRIBUTOR | null | Fix https://github.com/huggingface/datasets/issues/5520. | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5521/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5521/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/5521.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5521",
"merged_at": "2023-02-12T11:17:17Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5521.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5521"
} | true | 232,158 |
https://api.github.com/repos/huggingface/datasets/issues/5520 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5520/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5520/comments | https://api.github.com/repos/huggingface/datasets/issues/5520/events | https://github.com/huggingface/datasets/issues/5520 | 1,578,417,074 | I_kwDODunzps5eFLuy | 5,520 | ClassLabel.cast_storage raises TypeError when called on an empty IntegerArray | {
"avatar_url": "https://avatars.githubusercontent.com/u/6591505?v=4",
"events_url": "https://api.github.com/users/marioga/events{/privacy}",
"followers_url": "https://api.github.com/users/marioga/followers",
"following_url": "https://api.github.com/users/marioga/following{/other_user}",
"gists_url": "https://api.github.com/users/marioga/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/marioga",
"id": 6591505,
"login": "marioga",
"node_id": "MDQ6VXNlcjY1OTE1MDU=",
"organizations_url": "https://api.github.com/users/marioga/orgs",
"received_events_url": "https://api.github.com/users/marioga/received_events",
"repos_url": "https://api.github.com/users/marioga/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/marioga/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/marioga/subscriptions",
"type": "User",
"url": "https://api.github.com/users/marioga"
} | [] | closed | false | null | [] | null | 0 | "2023-02-09T18:46:52Z" | "2023-02-12T11:17:18Z" | "2023-02-12T11:17:18Z" | CONTRIBUTOR | null | ### Describe the bug
`ClassLabel.cast_storage` raises `TypeError` when called on an empty `IntegerArray`.
### Steps to reproduce the bug
Minimal steps:
```python
import pyarrow as pa
from datasets import ClassLabel
ClassLabel(names=['foo', 'bar']).cast_storage(pa.array([], pa.int64()))
```
In practice, this bug arises in situations like the one below:
```python
from datasets import ClassLabel, Dataset, Features, Sequence
dataset = Dataset.from_dict({'labels': [[], []]}, features=Features({'labels': Sequence(ClassLabel(names=['foo', 'bar']))}))
# this raises TypeError
dataset.map(batched=True, batch_size=1)
```
### Expected behavior
`ClassLabel.cast_storage` should return an empty Int64Array.
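A minimal sketch of the kind of guard that would produce this behaviour (illustrative only, not necessarily the patch that gets merged):
```python
import pyarrow as pa

def cast_label_storage(storage: pa.Array) -> pa.Array:
    # Hypothetical stand-in for the relevant branch of ClassLabel.cast_storage:
    # an empty input should simply become an empty int64 array instead of
    # falling through to the label handling that currently raises TypeError.
    if len(storage) == 0:
        return pa.array([], pa.int64())
    # Non-empty integer storage: simplified here; the real method also maps label names.
    return storage.cast(pa.int64())
```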
### Environment info
- `datasets` version: 2.9.1.dev0
- Platform: Linux-4.15.0-1032-aws-x86_64-with-glibc2.27
- Python version: 3.10.6
- PyArrow version: 11.0.0
- Pandas version: 1.5.3 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5520/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5520/timeline | null | completed | null | null | false | 232,226 |
https://api.github.com/repos/huggingface/datasets/issues/5519 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5519/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5519/comments | https://api.github.com/repos/huggingface/datasets/issues/5519/events | https://github.com/huggingface/datasets/pull/5519 | 1,578,341,785 | PR_kwDODunzps5JpGPl | 5,519 | Format code with `ruff` | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko"
} | [] | closed | false | null | [] | null | 5 | "2023-02-09T17:50:21Z" | "2023-02-14T16:28:27Z" | "2023-02-14T16:18:38Z" | CONTRIBUTOR | null | Use `ruff` for formatting instead of `isort` and `black` to be consistent with [`transformers`](https://github.com/huggingface/transformers/pull/21480) and [`hfh`](https://github.com/huggingface/huggingface_hub/pull/1323).
TODO:
- [x] ~Merge the community contributors' PR to avoid having to run `make style` on their PR branches~ (we have some new PRs, but fixing those shouldn't be too big of a problem) | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5519/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5519/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/5519.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5519",
"merged_at": "2023-02-14T16:18:38Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5519.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5519"
} | true | 426,497 |
https://api.github.com/repos/huggingface/datasets/issues/5518 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5518/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5518/comments | https://api.github.com/repos/huggingface/datasets/issues/5518/events | https://github.com/huggingface/datasets/pull/5518 | 1,578,203,962 | PR_kwDODunzps5Joom3 | 5,518 | Remove py.typed | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko"
} | [] | closed | false | null | [] | null | 3 | "2023-02-09T16:22:29Z" | "2023-02-13T13:55:49Z" | "2023-02-13T13:48:40Z" | CONTRIBUTOR | null | Fix https://github.com/huggingface/datasets/issues/3841 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5518/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5518/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/5518.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5518",
"merged_at": "2023-02-13T13:48:40Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5518.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5518"
} | true | 336,371 |
https://api.github.com/repos/huggingface/datasets/issues/5517 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5517/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5517/comments | https://api.github.com/repos/huggingface/datasets/issues/5517/events | https://github.com/huggingface/datasets/issues/5517 | 1,577,976,608 | I_kwDODunzps5eDgMg | 5,517 | `with_format("numpy")` silently downcasts float64 to float32 features | {
"avatar_url": "https://avatars.githubusercontent.com/u/1250234?v=4",
"events_url": "https://api.github.com/users/ernestum/events{/privacy}",
"followers_url": "https://api.github.com/users/ernestum/followers",
"following_url": "https://api.github.com/users/ernestum/following{/other_user}",
"gists_url": "https://api.github.com/users/ernestum/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ernestum",
"id": 1250234,
"login": "ernestum",
"node_id": "MDQ6VXNlcjEyNTAyMzQ=",
"organizations_url": "https://api.github.com/users/ernestum/orgs",
"received_events_url": "https://api.github.com/users/ernestum/received_events",
"repos_url": "https://api.github.com/users/ernestum/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ernestum/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ernestum/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ernestum"
} | [] | open | false | null | [] | {
"closed_at": null,
"closed_issues": 0,
"created_at": "2023-02-13T16:22:42Z",
"creator": {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko"
},
"description": "Next major release",
"due_on": null,
"html_url": "https://github.com/huggingface/datasets/milestone/10",
"id": 9038583,
"labels_url": "https://api.github.com/repos/huggingface/datasets/milestones/10/labels",
"node_id": "MI_kwDODunzps4Aier3",
"number": 10,
"open_issues": 1,
"state": "open",
"title": "3.0",
"updated_at": "2023-02-13T16:23:25Z",
"url": "https://api.github.com/repos/huggingface/datasets/milestones/10"
} | 10 | "2023-02-09T14:18:00Z" | "2023-02-14T15:38:54Z" | null | NONE | null | ### Describe the bug
When I create a dataset with a `float64` feature and then apply numpy formatting, the returned numpy arrays are silently downcast to `float32`.
### Steps to reproduce the bug
```python
import datasets
dataset = datasets.Dataset.from_dict({'a': [1.0, 2.0, 3.0]}).with_format("numpy")
print("feature dtype:", dataset.features['a'].dtype)
print("array dtype:", dataset['a'].dtype)
```
output:
```
feature dtype: float64
array dtype: float32
```
### Expected behavior
```
feature dtype: float64
array dtype: float64
```
### Environment info
- `datasets` version: 2.8.0
- Platform: Linux-5.4.0-135-generic-x86_64-with-glibc2.29
- Python version: 3.8.10
- PyArrow version: 10.0.1
- Pandas version: 1.4.4
### Suggested Fix
Changing [the `_tensorize` function of the numpy formatter](https://github.com/huggingface/datasets/blob/b065547654efa0ec633cf373ac1512884c68b2e1/src/datasets/formatting/np_formatter.py#L32) to
```python
def _tensorize(self, value):
if isinstance(value, (str, bytes, type(None))):
return value
elif isinstance(value, (np.character, np.ndarray)) and np.issubdtype(value.dtype, np.character):
return value
elif isinstance(value, np.number):
return value
return np.asarray(value, **self.np_array_kwargs)
```
fixes this particular issue for me. Not sure if this would break other tests. This should also avoid unnecessary copying of the array.
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5517/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5517/timeline | null | null | null | null | false | null |
https://api.github.com/repos/huggingface/datasets/issues/5516 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5516/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5516/comments | https://api.github.com/repos/huggingface/datasets/issues/5516/events | https://github.com/huggingface/datasets/pull/5516 | 1,577,661,640 | PR_kwDODunzps5JmzPQ | 5,516 | Reload features from Parquet metadata | {
"avatar_url": "https://avatars.githubusercontent.com/u/6368040?v=4",
"events_url": "https://api.github.com/users/MFreidank/events{/privacy}",
"followers_url": "https://api.github.com/users/MFreidank/followers",
"following_url": "https://api.github.com/users/MFreidank/following{/other_user}",
"gists_url": "https://api.github.com/users/MFreidank/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/MFreidank",
"id": 6368040,
"login": "MFreidank",
"node_id": "MDQ6VXNlcjYzNjgwNDA=",
"organizations_url": "https://api.github.com/users/MFreidank/orgs",
"received_events_url": "https://api.github.com/users/MFreidank/received_events",
"repos_url": "https://api.github.com/users/MFreidank/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/MFreidank/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MFreidank/subscriptions",
"type": "User",
"url": "https://api.github.com/users/MFreidank"
} | [] | closed | false | null | [] | null | 4 | "2023-02-09T10:52:15Z" | "2023-02-12T16:00:00Z" | "2023-02-12T15:57:01Z" | CONTRIBUTOR | null | Resolves #5482.
Attaches feature metadata to parquet files serialised using `Dataset.to_parquet`.
This allows retrieving data with "rich" feature types (e.g., `datasets.features.image.Image` or `datasets.features.audio.Audio`) from parquet files without cumbersome casting (for an example, see #5482).
@lhoestq It seems that it is sufficient to attach metadata to the schema prior to serialising; the features are then automatically loaded back with the correct types afterwards.
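For context, the core of the idea can be sketched like this. The "huggingface" metadata key and the accessors used here are assumptions for illustration; the PR may wire this up differently:
```python
import json
import pyarrow.parquet as pq
from datasets import Dataset

ds = Dataset.from_dict({"a": [1, 2, 3]})  # stand-in for the real dataset
table = ds.data.table  # underlying pyarrow.Table
hf_metadata = {b"huggingface": json.dumps({"info": {"features": ds.features.to_dict()}}).encode("utf-8")}
table = table.replace_schema_metadata({**(table.schema.metadata or {}), **hf_metadata})
pq.write_table(table, "ds.parquet")
# On load, the features can be reconstructed from this schema metadata instead of being re-inferred.
```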
I used the following script to test the implementation:
```python
from pathlib import Path
import datasets
dataset_name = "Maysee/tiny-imagenet"
ds = datasets.load_dataset(dataset_name, split=datasets.Split.TRAIN)
output_directory_path = Path(__file__).parent.joinpath("example_test_outputs", dataset_name.replace("/", "_"))
output_directory_path.mkdir(exist_ok=True, parents=True)
output_filepath = output_directory_path.joinpath("ds.parquet")
ds.to_parquet(str(output_filepath))
reloaded_ds = datasets.load_dataset(str(output_directory_path), split=datasets.Split.TRAIN)
assert ds.features == reloaded_ds.features
```
Prior to the change in this PR this script raises an `AssertionError` and the `Image` features lose their type after serialisation. After the change in this PR, the assertion does not raise an error and manual inspection of the features shows type `Image` for the respective columns of `reloaded_ds `.
Some open questions:
* How/where can I best add new unit tests for this implementation?
* What dataset would I best use in the tests? I chose `Maysee/tiny-imagenet` mainly because it is small and contains an `Image` feature that can be used to test, but I'd be happy for suggestions on a suitable data source to use.
* Currently I'm calling `datasets.arrow_writer.ArrowWriter._build_metadata` as I need the same logic. However, I'm not happy with the coupling between `datasets.io.parquet` and `datasets.arrow_writer` it leaves me with. I suggest factoring this common logic out into a helper function and reusing it from both of these. Do you agree, and if yes, could you please guide me on where I would best place this function?
Many thanks in advance and kind regards,
MFreidank
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5516/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5516/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/5516.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5516",
"merged_at": "2023-02-12T15:57:01Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5516.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5516"
} | true | 277,486 |
https://api.github.com/repos/huggingface/datasets/issues/5515 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5515/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5515/comments | https://api.github.com/repos/huggingface/datasets/issues/5515/events | https://github.com/huggingface/datasets/pull/5515 | 1,577,590,611 | PR_kwDODunzps5Jmj5X | 5,515 | Unify `load_from_cache_file` type and logic | {
"avatar_url": "https://avatars.githubusercontent.com/u/22773355?v=4",
"events_url": "https://api.github.com/users/HallerPatrick/events{/privacy}",
"followers_url": "https://api.github.com/users/HallerPatrick/followers",
"following_url": "https://api.github.com/users/HallerPatrick/following{/other_user}",
"gists_url": "https://api.github.com/users/HallerPatrick/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/HallerPatrick",
"id": 22773355,
"login": "HallerPatrick",
"node_id": "MDQ6VXNlcjIyNzczMzU1",
"organizations_url": "https://api.github.com/users/HallerPatrick/orgs",
"received_events_url": "https://api.github.com/users/HallerPatrick/received_events",
"repos_url": "https://api.github.com/users/HallerPatrick/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/HallerPatrick/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/HallerPatrick/subscriptions",
"type": "User",
"url": "https://api.github.com/users/HallerPatrick"
} | [] | closed | false | null | [] | null | 4 | "2023-02-09T10:04:46Z" | "2023-02-14T15:38:13Z" | "2023-02-14T14:26:42Z" | CONTRIBUTOR | null | * Updating type annotations for #`load_from_cache_file`
* Added logic for cache checking if needed
* Updated documentation following the wording of `Dataset.map` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5515/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5515/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/5515.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5515",
"merged_at": "2023-02-14T14:26:42Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5515.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5515"
} | true | 447,716 |
https://api.github.com/repos/huggingface/datasets/issues/5514 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5514/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5514/comments | https://api.github.com/repos/huggingface/datasets/issues/5514/events | https://github.com/huggingface/datasets/issues/5514 | 1,576,453,837 | I_kwDODunzps5d9sbN | 5,514 | Improve inconsistency of `Dataset.map` interface for `load_from_cache_file` | {
"avatar_url": "https://avatars.githubusercontent.com/u/22773355?v=4",
"events_url": "https://api.github.com/users/HallerPatrick/events{/privacy}",
"followers_url": "https://api.github.com/users/HallerPatrick/followers",
"following_url": "https://api.github.com/users/HallerPatrick/following{/other_user}",
"gists_url": "https://api.github.com/users/HallerPatrick/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/HallerPatrick",
"id": 22773355,
"login": "HallerPatrick",
"node_id": "MDQ6VXNlcjIyNzczMzU1",
"organizations_url": "https://api.github.com/users/HallerPatrick/orgs",
"received_events_url": "https://api.github.com/users/HallerPatrick/received_events",
"repos_url": "https://api.github.com/users/HallerPatrick/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/HallerPatrick/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/HallerPatrick/subscriptions",
"type": "User",
"url": "https://api.github.com/users/HallerPatrick"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | closed | false | null | [] | null | 4 | "2023-02-08T16:40:44Z" | "2023-02-14T14:26:44Z" | "2023-02-14T14:26:44Z" | CONTRIBUTOR | null | ### Feature request
1. Replace the `load_from_cache_file` default value to `True`.
2. Remove or alter checks from `is_caching_enabled` logic.
### Motivation
I stumbled over an inconsistency in the `Dataset.map` interface. The documentation (and source) states for the parameter `load_from_cache_file`:
```
load_from_cache_file (`bool`, defaults to `True` if caching is enabled):
If a cache file storing the current computation from `function`
can be identified, use it instead of recomputing.
```
1. `load_from_cache_file` default value is `None`, while being annotated as `bool`
2. It is inconsistent with other method signatures like `filter`, that have the default value `True`
3. The logic is inconsistent, as the `map` method checks if caching is enabled through `is_caching_enabled`. This logic is not used for other similar methods.
### Your contribution
I am not fully aware of the logic behind caching checks. If this is just an inconsistency that grew historically, I would suggest removing the `is_caching_enabled` logic as the "default" logic. Maybe someone can give insights into whether environment variables have a higher priority than local variables or vice versa.
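For reference, my understanding of the current resolution order boils down to something like this (a sketch of the behaviour as I read it, not a proposed patch):
```python
from datasets import is_caching_enabled

def resolve_load_from_cache_file(load_from_cache_file=None):
    # An explicit True/False from the caller wins; `None` falls back to the global
    # caching switch (toggled via datasets.enable_caching()/disable_caching()).
    if load_from_cache_file is None:
        return is_caching_enabled()
    return load_from_cache_file
```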
If this is clarified, I could adjust the source according to the "Feature request" section of this issue. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5514/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5514/timeline | null | completed | null | null | false | 510,360 |
https://api.github.com/repos/huggingface/datasets/issues/5513 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5513/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5513/comments | https://api.github.com/repos/huggingface/datasets/issues/5513/events | https://github.com/huggingface/datasets/issues/5513 | 1,576,300,803 | I_kwDODunzps5d9HED | 5,513 | Some functions use a param named `type` shouldn't that be avoided since it's a Python reserved name? | {
"avatar_url": "https://avatars.githubusercontent.com/u/36760800?v=4",
"events_url": "https://api.github.com/users/alvarobartt/events{/privacy}",
"followers_url": "https://api.github.com/users/alvarobartt/followers",
"following_url": "https://api.github.com/users/alvarobartt/following{/other_user}",
"gists_url": "https://api.github.com/users/alvarobartt/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/alvarobartt",
"id": 36760800,
"login": "alvarobartt",
"node_id": "MDQ6VXNlcjM2NzYwODAw",
"organizations_url": "https://api.github.com/users/alvarobartt/orgs",
"received_events_url": "https://api.github.com/users/alvarobartt/received_events",
"repos_url": "https://api.github.com/users/alvarobartt/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/alvarobartt/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alvarobartt/subscriptions",
"type": "User",
"url": "https://api.github.com/users/alvarobartt"
} | [] | open | false | null | [] | null | 2 | "2023-02-08T15:13:46Z" | "2023-02-08T16:01:07Z" | null | CONTRIBUTOR | null | Hi @mariosasko, @lhoestq, or whoever reads this! :)
After going through `ArrowDataset.set_format`, I found out that the `type` param is actually named `type`, which is a Python reserved name as you may already know. Shouldn't that be renamed to `format_type` before the 3.0.0 release?
Just wanted to get your input, and if applicable, tackle this issue myself! Thanks 🤗 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5513/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5513/timeline | null | null | null | null | false | null |
https://api.github.com/repos/huggingface/datasets/issues/5512 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5512/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5512/comments | https://api.github.com/repos/huggingface/datasets/issues/5512/events | https://github.com/huggingface/datasets/pull/5512 | 1,576,142,432 | PR_kwDODunzps5JhtQy | 5,512 | Speed up batched PyTorch DataLoader | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | 9 | "2023-02-08T13:38:59Z" | "2023-02-19T18:35:09Z" | "2023-02-19T18:27:29Z" | MEMBER | null | I implemented `__getitems__` to speed up batched data loading in PyTorch
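For context, a minimal sketch of the mechanism with a toy dataset (not the actual implementation in this PR): newer PyTorch versions' default fetcher calls `__getitems__` with a list of indices when a map-style dataset defines it, so one batched lookup can replace many single-row lookups.
```python
import torch

class ListBackedDataset(torch.utils.data.Dataset):
    def __init__(self, data):
        self.data = data

    def __len__(self):
        return len(self.data)

    def __getitem__(self, idx):
        return self.data[idx]

    def __getitems__(self, indices):
        # one batched lookup instead of len(indices) separate __getitem__ calls
        return [self.data[i] for i in indices]
```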
close https://github.com/huggingface/datasets/issues/5505 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5512/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5512/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/5512.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5512",
"merged_at": "2023-02-19T18:27:29Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5512.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5512"
} | true | 967,710 |
https://api.github.com/repos/huggingface/datasets/issues/5511 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5511/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5511/comments | https://api.github.com/repos/huggingface/datasets/issues/5511/events | https://github.com/huggingface/datasets/issues/5511 | 1,575,851,768 | I_kwDODunzps5d7Zb4 | 5,511 | Creating a dummy dataset from a bigger one | {
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/patrickvonplaten",
"id": 23423619,
"login": "patrickvonplaten",
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"type": "User",
"url": "https://api.github.com/users/patrickvonplaten"
} | [] | closed | false | null | [] | null | 2 | "2023-02-08T10:18:41Z" | "2023-02-08T10:35:48Z" | "2023-02-08T10:35:48Z" | MEMBER | null | ### Describe the bug
I often want to create a dummy dataset from a bigger dataset for fast iteration when training. However, I'm having a hard time doing this, especially when trying to upload the dataset to the Hub.
### Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset("lambdalabs/pokemon-blip-captions")
dataset["train"] = dataset["train"].select(range(20))
dataset.push_to_hub("patrickvonplaten/dummy_image_data")
```
gives:
```
~/python_bin/datasets/arrow_dataset.py in _push_parquet_shards_to_hub(self, repo_id, split, private, token, branch, max_shard_size, embed_external_files)
4003 base_wait_time=2.0,
4004 max_retries=5,
-> 4005 max_wait_time=20.0,
4006 )
4007 return repo_id, split, uploaded_size, dataset_nbytes
~/python_bin/datasets/utils/file_utils.py in _retry(func, func_args, func_kwargs, exceptions, status_codes, max_retries, base_wait_time, max_wait_time)
328 while True:
329 try:
--> 330 return func(*func_args, **func_kwargs)
331 except exceptions as err:
332 if retry >= max_retries or (status_codes and err.response.status_code not in status_codes):
~/hf/lib/python3.7/site-packages/huggingface_hub/utils/_validators.py in _inner_fn(*args, **kwargs)
122 )
123
--> 124 return fn(*args, **kwargs)
125
126 return _inner_fn # type: ignore
TypeError: upload_file() got an unexpected keyword argument 'identical_ok'
In [2]:
```
### Expected behavior
I would have expected this to work. For me, it's the most intuitive way of creating a dummy dataset.
### Environment info
```
- `datasets` version: 2.1.1.dev0
- Platform: Linux-4.19.0-22-cloud-amd64-x86_64-with-debian-10.13
- Python version: 3.7.3
- PyArrow version: 11.0.0
- Pandas version: 1.3.5
``` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5511/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5511/timeline | null | completed | null | null | false | 1,027 |
https://api.github.com/repos/huggingface/datasets/issues/5510 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5510/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5510/comments | https://api.github.com/repos/huggingface/datasets/issues/5510/events | https://github.com/huggingface/datasets/pull/5510 | 1,575,191,549 | PR_kwDODunzps5JehbR | 5,510 | Milvus integration for search | {
"avatar_url": "https://avatars.githubusercontent.com/u/81822489?v=4",
"events_url": "https://api.github.com/users/filip-halt/events{/privacy}",
"followers_url": "https://api.github.com/users/filip-halt/followers",
"following_url": "https://api.github.com/users/filip-halt/following{/other_user}",
"gists_url": "https://api.github.com/users/filip-halt/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/filip-halt",
"id": 81822489,
"login": "filip-halt",
"node_id": "MDQ6VXNlcjgxODIyNDg5",
"organizations_url": "https://api.github.com/users/filip-halt/orgs",
"received_events_url": "https://api.github.com/users/filip-halt/received_events",
"repos_url": "https://api.github.com/users/filip-halt/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/filip-halt/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/filip-halt/subscriptions",
"type": "User",
"url": "https://api.github.com/users/filip-halt"
} | [] | open | false | null | [] | null | 5 | "2023-02-07T23:30:26Z" | "2023-02-20T23:57:54Z" | null | NONE | null | Signed-off-by: Filip Haltmayer <[email protected]> | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5510/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5510/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/5510.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5510",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/5510.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5510"
} | true | null |
https://api.github.com/repos/huggingface/datasets/issues/5509 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5509/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5509/comments | https://api.github.com/repos/huggingface/datasets/issues/5509/events | https://github.com/huggingface/datasets/pull/5509 | 1,574,177,320 | PR_kwDODunzps5JbH-u | 5,509 | Add a static `__all__` to `__init__.py` for typecheckers | {
"avatar_url": "https://avatars.githubusercontent.com/u/14248012?v=4",
"events_url": "https://api.github.com/users/LoicGrobol/events{/privacy}",
"followers_url": "https://api.github.com/users/LoicGrobol/followers",
"following_url": "https://api.github.com/users/LoicGrobol/following{/other_user}",
"gists_url": "https://api.github.com/users/LoicGrobol/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/LoicGrobol",
"id": 14248012,
"login": "LoicGrobol",
"node_id": "MDQ6VXNlcjE0MjQ4MDEy",
"organizations_url": "https://api.github.com/users/LoicGrobol/orgs",
"received_events_url": "https://api.github.com/users/LoicGrobol/received_events",
"repos_url": "https://api.github.com/users/LoicGrobol/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/LoicGrobol/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LoicGrobol/subscriptions",
"type": "User",
"url": "https://api.github.com/users/LoicGrobol"
} | [] | open | false | null | [] | null | 2 | "2023-02-07T11:42:40Z" | "2023-02-08T17:48:24Z" | null | NONE | null | This adds a static `__all__` field to `__init__.py`, allowing typecheckers to know which symbols are accessible from `datasets` at runtime. In particular [Pyright](https://github.com/microsoft/pylance-release/issues/2328#issuecomment-1029381258) seems to rely on this. At this point I have added all (modulo oversight) the symbols mentioned in the Reference part of [the docs](https://huggingface.co/docs/datasets), but that could be adjusted. As a side effect, only these symbols will be imported by `from datasets import *`, which may or may not be a good thing (and if it isn't, that's easy to fix).
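For illustration, the field is just an explicit export list near the top of `__init__.py` (heavily abbreviated here; the actual list in this PR covers the symbols from the reference docs):
```python
__all__ = [
    "Dataset",
    "DatasetDict",
    "IterableDataset",
    "Features",
    "load_dataset",
    "load_from_disk",
    "concatenate_datasets",
    # ... the remaining documented public symbols
]
```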
Another option would be to add a pyi stub, but I think `__all__` should be the most pythonic solution.
This should fix #3841. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5509/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5509/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/5509.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5509",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/5509.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5509"
} | true | null |
https://api.github.com/repos/huggingface/datasets/issues/5508 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5508/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5508/comments | https://api.github.com/repos/huggingface/datasets/issues/5508/events | https://github.com/huggingface/datasets/issues/5508 | 1,573,290,359 | I_kwDODunzps5dxoF3 | 5,508 | Saving a dataset after setting format to torch doesn't work, but only if filtering | {
"avatar_url": "https://avatars.githubusercontent.com/u/13984157?v=4",
"events_url": "https://api.github.com/users/joebhakim/events{/privacy}",
"followers_url": "https://api.github.com/users/joebhakim/followers",
"following_url": "https://api.github.com/users/joebhakim/following{/other_user}",
"gists_url": "https://api.github.com/users/joebhakim/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/joebhakim",
"id": 13984157,
"login": "joebhakim",
"node_id": "MDQ6VXNlcjEzOTg0MTU3",
"organizations_url": "https://api.github.com/users/joebhakim/orgs",
"received_events_url": "https://api.github.com/users/joebhakim/received_events",
"repos_url": "https://api.github.com/users/joebhakim/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/joebhakim/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/joebhakim/subscriptions",
"type": "User",
"url": "https://api.github.com/users/joebhakim"
} | [] | closed | false | null | [] | null | 2 | "2023-02-06T21:08:58Z" | "2023-02-09T14:55:26Z" | "2023-02-09T14:55:26Z" | NONE | null | ### Describe the bug
Saving a dataset after setting format to torch doesn't work, but only if filtering
### Steps to reproduce the bug
```
a = Dataset.from_dict({"b": [1, 2]})
a.set_format('torch')
a.save_to_disk("test_save") # saves successfully
a.filter(None).save_to_disk("test_save_filter") # does not
>> [...] TypeError: Provided `function` which is applied to all elements of table returns a `dict` of types [<class 'torch.Tensor'>]. When using `batched=True`, make sure provided `function` returns a `dict` of types like `(<class 'list'>, <class 'numpy.ndarray'>)`.
# note: skipping the format change to torch lets this work.
```
### Expected behavior
Saving to work
### Environment info
- `datasets` version: 2.4.0
- Platform: Linux-6.1.9-arch1-1-x86_64-with-glibc2.36
- Python version: 3.10.9
- PyArrow version: 9.0.0
- Pandas version: 1.4.4 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5508/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5508/timeline | null | completed | null | null | false | 236,788 |
https://api.github.com/repos/huggingface/datasets/issues/5507 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5507/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5507/comments | https://api.github.com/repos/huggingface/datasets/issues/5507/events | https://github.com/huggingface/datasets/issues/5507 | 1,572,667,036 | I_kwDODunzps5dvP6c | 5,507 | Optimise behaviour in respect to indices mapping | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | open | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko"
}
] | null | 0 | "2023-02-06T14:25:55Z" | "2023-02-06T14:25:55Z" | null | CONTRIBUTOR | null | _Originally [posted](https://huggingface.slack.com/archives/C02V51Q3800/p1675443873878489?thread_ts=1675418893.373479&cid=C02V51Q3800) on Slack_
Considering all this, perhaps for Datasets 3.0, we can do the following:
* have `contiguous=True` by default in `.shard` (requested in the survey and makes more sense for us since it doesn't create an indices mapping)
* allow calling `save_to_disk` on "unflattened" datasets
* remove "hidden" expensive calls in `save_to_disk`, `unique`, `concatenate_datasets`, etc. For instance, instead of silently calling `flatten_indices` where it's needed, it's probably better to be explicit (considering how expensive these ops can be) and raise an error instead | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 1,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5507/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5507/timeline | null | null | null | null | false | null |
https://api.github.com/repos/huggingface/datasets/issues/5506 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5506/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5506/comments | https://api.github.com/repos/huggingface/datasets/issues/5506/events | https://github.com/huggingface/datasets/issues/5506 | 1,571,838,641 | I_kwDODunzps5dsFqx | 5,506 | IterableDataset and Dataset return different batch sizes when using Trainer with multiple GPUs | {
"avatar_url": "https://avatars.githubusercontent.com/u/38166299?v=4",
"events_url": "https://api.github.com/users/kheyer/events{/privacy}",
"followers_url": "https://api.github.com/users/kheyer/followers",
"following_url": "https://api.github.com/users/kheyer/following{/other_user}",
"gists_url": "https://api.github.com/users/kheyer/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/kheyer",
"id": 38166299,
"login": "kheyer",
"node_id": "MDQ6VXNlcjM4MTY2Mjk5",
"organizations_url": "https://api.github.com/users/kheyer/orgs",
"received_events_url": "https://api.github.com/users/kheyer/received_events",
"repos_url": "https://api.github.com/users/kheyer/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/kheyer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kheyer/subscriptions",
"type": "User",
"url": "https://api.github.com/users/kheyer"
} | [] | closed | false | null | [] | null | 4 | "2023-02-06T03:26:03Z" | "2023-02-08T18:30:08Z" | "2023-02-08T18:30:07Z" | NONE | null | ### Describe the bug
I am training a Roberta model using 2 GPUs and the `Trainer` API with a batch size of 256.
Initially I used a standard `Dataset`, but had issues with slow data loading. After reading [this issue](https://github.com/huggingface/datasets/issues/2252), I swapped to loading my dataset as contiguous shards and passing those to an `IterableDataset`. I observed an unexpected drop in GPU memory utilization, and found the batch size returned from the model had been cut in half.
When using `Trainer` with 2 GPUs and a batch size of 256, `Dataset` returns a batch of size 512 (256 per GPU), while `IterableDataset` returns a batch size of 256 (256 total). My guess is `IterableDataset` isn't accounting for multiple cards.
### Steps to reproduce the bug
```python
import datasets
from datasets import IterableDataset
from transformers import RobertaConfig
from transformers import RobertaTokenizerFast
from transformers import RobertaForMaskedLM
from transformers import DataCollatorForLanguageModeling
from transformers import Trainer, TrainingArguments
use_iterable_dataset = True
def gen_from_shards(shards):
for shard in shards:
for example in shard:
yield example
dataset = datasets.load_from_disk('my_dataset.hf')
if use_iterable_dataset:
n_shards = 100
shards = [dataset.shard(num_shards=n_shards, index=i) for i in range(n_shards)]
dataset = IterableDataset.from_generator(gen_from_shards, gen_kwargs={"shards": shards})
tokenizer = RobertaTokenizerFast.from_pretrained("./my_tokenizer", max_len=160, use_fast=True)
config = RobertaConfig(
vocab_size=8248,
max_position_embeddings=256,
num_attention_heads=8,
num_hidden_layers=6,
type_vocab_size=1)
model = RobertaForMaskedLM(config=config)
data_collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=True, mlm_probability=0.15)
training_args = TrainingArguments(
per_device_train_batch_size=256
# other args removed for brevity
)
trainer = Trainer(
model=model,
args=training_args,
data_collator=data_collator,
train_dataset=dataset,
)
trainer.train()
```
### Expected behavior
Expected `Dataset` and `IterableDataset` to have the same batch size behavior. If the current behavior is intentional, the batch size printout at the start of training should be updated. Currently, both dataset classes result in `Trainer` printing the same total batch size, even though the batch size sent to the GPUs is different.
### Environment info
datasets 2.7.1
transformers 4.25.1 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5506/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5506/timeline | null | completed | null | null | false | 227,044 |
https://api.github.com/repos/huggingface/datasets/issues/5505 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5505/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5505/comments | https://api.github.com/repos/huggingface/datasets/issues/5505/events | https://github.com/huggingface/datasets/issues/5505 | 1,571,720,814 | I_kwDODunzps5dro5u | 5,505 | PyTorch BatchSampler still loads from Dataset one-by-one | {
"avatar_url": "https://avatars.githubusercontent.com/u/4443482?v=4",
"events_url": "https://api.github.com/users/davidgilbertson/events{/privacy}",
"followers_url": "https://api.github.com/users/davidgilbertson/followers",
"following_url": "https://api.github.com/users/davidgilbertson/following{/other_user}",
"gists_url": "https://api.github.com/users/davidgilbertson/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/davidgilbertson",
"id": 4443482,
"login": "davidgilbertson",
"node_id": "MDQ6VXNlcjQ0NDM0ODI=",
"organizations_url": "https://api.github.com/users/davidgilbertson/orgs",
"received_events_url": "https://api.github.com/users/davidgilbertson/received_events",
"repos_url": "https://api.github.com/users/davidgilbertson/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/davidgilbertson/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/davidgilbertson/subscriptions",
"type": "User",
"url": "https://api.github.com/users/davidgilbertson"
} | [] | closed | false | null | [] | null | 2 | "2023-02-06T01:14:55Z" | "2023-02-19T18:27:30Z" | "2023-02-19T18:27:30Z" | NONE | null | ### Describe the bug
In [the docs here](https://huggingface.co/docs/datasets/use_with_pytorch#use-a-batchsampler), it mentions the issue of the Dataset being read one-by-one, then states that using a BatchSampler resolves the issue.
I'm not sure if this is a mistake in the docs or the code, but it seems that the only way for a Dataset to be passed a list of indexes by PyTorch (instead of one index at a time) is to define a `__getitems__` method (note the plural) on the Dataset object, and since the HF Dataset doesn't have this, PyTorch executes [this line of code](https://github.com/pytorch/pytorch/blob/master/torch/utils/data/_utils/fetch.py#L58), reverting to fetching one-by-one.
### Steps to reproduce the bug
You can put a breakpoint in `Dataset.__getitem__()` or just print the args from there and see that it's called multiple times for a single `next(iter(dataloader))`, even when using the code from the docs:
```py
from torch.utils.data.sampler import BatchSampler, RandomSampler
batch_sampler = BatchSampler(RandomSampler(ds), batch_size=32, drop_last=False)
dataloader = DataLoader(ds, batch_sampler=batch_sampler)
```
### Expected behavior
The expected behaviour would be for it to fetch batches from the dataset, rather than one-by-one.
To demonstrate that there is room for improvement: once I have a HF dataset `ds`, if I just add this line:
```py
ds.__getitems__ = ds.__getitem__
```
...then the time taken to loop over the dataset improves considerably (for wikitext-103, from one minute to 13 seconds with batch size 32). Probably not a big deal in the grand scheme of things, but seems like an easy win.
### Environment info
- `datasets` version: 2.9.0
- Platform: Linux-5.10.102.1-microsoft-standard-WSL2-x86_64-with-glibc2.31
- Python version: 3.10.8
- PyArrow version: 10.0.1
- Pandas version: 1.5.3 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5505/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5505/timeline | null | completed | null | null | false | 1,185,155 |
https://api.github.com/repos/huggingface/datasets/issues/5504 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5504/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5504/comments | https://api.github.com/repos/huggingface/datasets/issues/5504/events | https://github.com/huggingface/datasets/pull/5504 | 1,570,621,242 | PR_kwDODunzps5JPoWy | 5,504 | don't zero copy timestamps | {
"avatar_url": "https://avatars.githubusercontent.com/u/2512762?v=4",
"events_url": "https://api.github.com/users/dwyatte/events{/privacy}",
"followers_url": "https://api.github.com/users/dwyatte/followers",
"following_url": "https://api.github.com/users/dwyatte/following{/other_user}",
"gists_url": "https://api.github.com/users/dwyatte/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/dwyatte",
"id": 2512762,
"login": "dwyatte",
"node_id": "MDQ6VXNlcjI1MTI3NjI=",
"organizations_url": "https://api.github.com/users/dwyatte/orgs",
"received_events_url": "https://api.github.com/users/dwyatte/received_events",
"repos_url": "https://api.github.com/users/dwyatte/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/dwyatte/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dwyatte/subscriptions",
"type": "User",
"url": "https://api.github.com/users/dwyatte"
} | [] | closed | false | null | [] | null | 3 | "2023-02-03T23:39:04Z" | "2023-02-08T17:28:50Z" | "2023-02-08T14:33:17Z" | CONTRIBUTOR | null | Fixes https://github.com/huggingface/datasets/issues/5495
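For reference, a hypothetical shape for such a regression test (function name and assertions are assumptions, not the test added here):
```python
import pandas as pd
from datasets import Dataset

def test_tz_aware_timestamps_can_be_formatted_as_numpy():
    # tz-aware timestamps cannot be zero-copied to numpy, which is what triggered the bug
    df = pd.DataFrame({"dt": pd.to_datetime(["2023-01-01 00:00:00+00:00"] * 2)})
    ds = Dataset.from_pandas(df)
    batch = ds.with_format("numpy")[:]
    assert len(batch["dt"]) == 2
```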
I'm not sure whether we prefer a test here or if timestamps are known to be unsupported (like booleans). The current test at least covers the bug | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5504/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5504/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/5504.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5504",
"merged_at": "2023-02-08T14:33:17Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5504.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5504"
} | true | 399,253 |
https://api.github.com/repos/huggingface/datasets/issues/5502 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5502/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5502/comments | https://api.github.com/repos/huggingface/datasets/issues/5502/events | https://github.com/huggingface/datasets/pull/5502 | 1,570,091,225 | PR_kwDODunzps5JN0aX | 5,502 | Added functionality: sort datasets by multiple keys | {
"avatar_url": "https://avatars.githubusercontent.com/u/7805682?v=4",
"events_url": "https://api.github.com/users/MichlF/events{/privacy}",
"followers_url": "https://api.github.com/users/MichlF/followers",
"following_url": "https://api.github.com/users/MichlF/following{/other_user}",
"gists_url": "https://api.github.com/users/MichlF/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/MichlF",
"id": 7805682,
"login": "MichlF",
"node_id": "MDQ6VXNlcjc4MDU2ODI=",
"organizations_url": "https://api.github.com/users/MichlF/orgs",
"received_events_url": "https://api.github.com/users/MichlF/received_events",
"repos_url": "https://api.github.com/users/MichlF/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/MichlF/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MichlF/subscriptions",
"type": "User",
"url": "https://api.github.com/users/MichlF"
} | [] | closed | false | null | [] | null | 5 | "2023-02-03T16:17:00Z" | "2023-02-21T14:46:49Z" | "2023-02-21T14:39:23Z" | CONTRIBUTOR | null | Added functionality implementation: sort datasets by multiple keys/columns as discussed in https://github.com/huggingface/datasets/issues/5425. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5502/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5502/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/5502.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5502",
"merged_at": "2023-02-21T14:39:23Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5502.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5502"
} | true | 1,549,343 |
https://api.github.com/repos/huggingface/datasets/issues/5501 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5501/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5501/comments | https://api.github.com/repos/huggingface/datasets/issues/5501/events | https://github.com/huggingface/datasets/pull/5501 | 1,569,644,159 | PR_kwDODunzps5JMTn8 | 5,501 | Increase chunk size for speeding up file downloads | {
"avatar_url": "https://avatars.githubusercontent.com/u/204321?v=4",
"events_url": "https://api.github.com/users/Narsil/events{/privacy}",
"followers_url": "https://api.github.com/users/Narsil/followers",
"following_url": "https://api.github.com/users/Narsil/following{/other_user}",
"gists_url": "https://api.github.com/users/Narsil/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Narsil",
"id": 204321,
"login": "Narsil",
"node_id": "MDQ6VXNlcjIwNDMyMQ==",
"organizations_url": "https://api.github.com/users/Narsil/orgs",
"received_events_url": "https://api.github.com/users/Narsil/received_events",
"repos_url": "https://api.github.com/users/Narsil/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Narsil/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Narsil/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Narsil"
} | [] | open | false | null | [] | null | 4 | "2023-02-03T10:50:10Z" | "2023-02-09T11:04:11Z" | null | CONTRIBUTOR | null | Original fix: https://github.com/huggingface/huggingface_hub/pull/1267
Not sure this function is actually still called though.
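For reference, a rough sketch of what the change amounts to (not the exact diff): stream the HTTP response with a much larger chunk size so far fewer iterations and progress-bar updates are needed per file.
```python
import requests

def http_get_sketch(url: str, dest_path: str, chunk_size: int = 10 * 1024 * 1024) -> None:
    # 10 MiB chunks instead of ~1 KiB chunks
    with requests.get(url, stream=True) as response, open(dest_path, "wb") as f:
        for chunk in response.iter_content(chunk_size=chunk_size):
            if chunk:
                f.write(chunk)
```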
I haven't done benchmarks on this. Is there a dataset where files are hosted on the Hub through CloudFront so we can have the same setup as in `hf_hub`? | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5501/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5501/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/5501.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5501",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/5501.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5501"
} | true | null |
https://api.github.com/repos/huggingface/datasets/issues/5500 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5500/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5500/comments | https://api.github.com/repos/huggingface/datasets/issues/5500/events | https://github.com/huggingface/datasets/issues/5500 | 1,569,257,240 | I_kwDODunzps5diPcY | 5,500 | WMT19 custom download checksum error | {
"avatar_url": "https://avatars.githubusercontent.com/u/38466901?v=4",
"events_url": "https://api.github.com/users/Hannibal046/events{/privacy}",
"followers_url": "https://api.github.com/users/Hannibal046/followers",
"following_url": "https://api.github.com/users/Hannibal046/following{/other_user}",
"gists_url": "https://api.github.com/users/Hannibal046/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Hannibal046",
"id": 38466901,
"login": "Hannibal046",
"node_id": "MDQ6VXNlcjM4NDY2OTAx",
"organizations_url": "https://api.github.com/users/Hannibal046/orgs",
"received_events_url": "https://api.github.com/users/Hannibal046/received_events",
"repos_url": "https://api.github.com/users/Hannibal046/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Hannibal046/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Hannibal046/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Hannibal046"
} | [] | closed | false | null | [] | null | 1 | "2023-02-03T05:45:37Z" | "2023-02-03T05:52:56Z" | "2023-02-03T05:52:56Z" | NONE | null | ### Describe the bug
I use the following scripts to download data from WMT19:
```python
import datasets
from datasets import inspect_dataset, load_dataset_builder
from wmt19.wmt_utils import _TRAIN_SUBSETS,_DEV_SUBSETS
## this is a must due to: https://discuss.huggingface.co/t/load-dataset-hangs-with-local-files/28034/3
if __name__ == '__main__':
dev_subsets,train_subsets = [],[]
for subset in _TRAIN_SUBSETS:
if subset.target=='en' and 'de' in subset.sources:
train_subsets.append(subset.name)
for subset in _DEV_SUBSETS:
if subset.target=='en' and 'de' in subset.sources:
dev_subsets.append(subset.name)
inspect_dataset("wmt19", "./wmt19")
builder = load_dataset_builder(
"./wmt19/wmt_utils.py",
language_pair=("de", "en"),
subsets={
datasets.Split.TRAIN: train_subsets,
datasets.Split.VALIDATION: dev_subsets,
},
)
builder.download_and_prepare()
ds = builder.as_dataset()
ds.to_json("../data/wmt19/ende/data.json")
```
And I got the following error:
```
Traceback (most recent call last):
File "draft.py", line 26, in <module>
builder.download_and_prepare()
File "/Users/hannibal046/anaconda3/lib/python3.8/site-packages/datasets/builder.py", line 605, in download_and_prepare
self._download_and_prepare(
File "/Users/hannibal046/anaconda3/lib/python3.8/site-packages/datasets/builder.py", line 1104, in _download_and_prepare
super()._download_and_prepare(dl_manager, verify_infos, check_duplicate_keys=verify_infos)
File "/Users/hannibal046/anaconda3/lib/python3.8/site-packages/datasets/builder.py", line 676, in _download_and_prepare
verify_checksums(
File "/Users/hannibal046/anaconda3/lib/python3.8/site-packages/datasets/utils/info_utils.py", line 35, in verify_checksums
raise UnexpectedDownloadedFile(str(set(recorded_checksums) - set(expected_checksums)))
datasets.utils.info_utils.UnexpectedDownloadedFile: {'https://s3.amazonaws.com/web-language-models/paracrawl/release1/paracrawl-release1.en-de.zipporah0-dedup-clean.tgz', 'https://huggingface.co/datasets/wmt/wmt13/resolve/main-zip/training-parallel-europarl-v7.zip', 'https://huggingface.co/datasets/wmt/wmt18/resolve/main-zip/translation-task/rapid2016.zip', 'https://huggingface.co/datasets/wmt/wmt18/resolve/main-zip/translation-task/training-parallel-nc-v13.zip', 'https://huggingface.co/datasets/wmt/wmt17/resolve/main-zip/translation-task/training-parallel-nc-v12.zip', 'https://huggingface.co/datasets/wmt/wmt14/resolve/main-zip/training-parallel-nc-v9.zip', 'https://huggingface.co/datasets/wmt/wmt15/resolve/main-zip/training-parallel-nc-v10.zip', 'https://huggingface.co/datasets/wmt/wmt16/resolve/main-zip/translation-task/training-parallel-nc-v11.zip'}
```
### Steps to reproduce the bug
see above
### Expected behavior
download data successfully
### Environment info
datasets==2.1.0
python==3.8
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5500/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5500/timeline | null | completed | null | null | false | 439 |
https://api.github.com/repos/huggingface/datasets/issues/5499 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5499/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5499/comments | https://api.github.com/repos/huggingface/datasets/issues/5499/events | https://github.com/huggingface/datasets/issues/5499 | 1,568,937,026 | I_kwDODunzps5dhBRC | 5,499 | `load_dataset` has ~4 seconds of overhead for cached data | {
"avatar_url": "https://avatars.githubusercontent.com/u/4443482?v=4",
"events_url": "https://api.github.com/users/davidgilbertson/events{/privacy}",
"followers_url": "https://api.github.com/users/davidgilbertson/followers",
"following_url": "https://api.github.com/users/davidgilbertson/following{/other_user}",
"gists_url": "https://api.github.com/users/davidgilbertson/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/davidgilbertson",
"id": 4443482,
"login": "davidgilbertson",
"node_id": "MDQ6VXNlcjQ0NDM0ODI=",
"organizations_url": "https://api.github.com/users/davidgilbertson/orgs",
"received_events_url": "https://api.github.com/users/davidgilbertson/received_events",
"repos_url": "https://api.github.com/users/davidgilbertson/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/davidgilbertson/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/davidgilbertson/subscriptions",
"type": "User",
"url": "https://api.github.com/users/davidgilbertson"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | open | false | null | [] | null | 2 | "2023-02-02T23:34:50Z" | "2023-02-07T19:35:11Z" | null | NONE | null | ### Feature request
When loading a dataset that has been cached locally, the `load_dataset` function takes a lot longer than it should take to fetch the dataset from disk (or memory).
This is particularly noticeable for smaller datasets. For example, with wikitext-2, comparing `load_dataset` (once cached) and `load_from_disk`, the `load_dataset` method takes 40 times longer.
⏱ 4.84s ⮜ load_dataset
⏱ 119ms ⮜ load_from_disk
### Motivation
I assume this is doing something like checking for a newer version.
If so, that's an age-old problem: do you make the user wait _every single time they load from cache_, or do you do something like load from cache always, _then_ check for a newer version and alert if they have stale data? The decision usually revolves around what percentage of the time the data will have been updated, and how dangerous old data is.
For most datasets it's extremely unlikely that there will be a newer version on any given run, so 99% of the time this is just wasted time.
Maybe you don't want to make that decision for all users, but at least having the _option_ to not wait for checks would be an improvement.
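For what it's worth, a partial workaround today is forcing offline mode so the cached copy is used without any network round-trips (this assumes the dataset is already fully cached):
```python
import os

# Must be set before `datasets` is imported.
os.environ["HF_DATASETS_OFFLINE"] = "1"

from datasets import load_dataset

ds = load_dataset("wikitext", "wikitext-2-raw-v1")  # resolved from the local cache only
```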
### Your contribution
. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5499/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5499/timeline | null | null | null | null | false | null |
https://api.github.com/repos/huggingface/datasets/issues/5498 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5498/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5498/comments | https://api.github.com/repos/huggingface/datasets/issues/5498/events | https://github.com/huggingface/datasets/issues/5498 | 1,568,190,529 | I_kwDODunzps5deLBB | 5,498 | TypeError: 'bool' object is not iterable when filtering a datasets.arrow_dataset.Dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/91255010?v=4",
"events_url": "https://api.github.com/users/vmuel/events{/privacy}",
"followers_url": "https://api.github.com/users/vmuel/followers",
"following_url": "https://api.github.com/users/vmuel/following{/other_user}",
"gists_url": "https://api.github.com/users/vmuel/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/vmuel",
"id": 91255010,
"login": "vmuel",
"node_id": "MDQ6VXNlcjkxMjU1MDEw",
"organizations_url": "https://api.github.com/users/vmuel/orgs",
"received_events_url": "https://api.github.com/users/vmuel/received_events",
"repos_url": "https://api.github.com/users/vmuel/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/vmuel/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vmuel/subscriptions",
"type": "User",
"url": "https://api.github.com/users/vmuel"
} | [] | closed | false | null | [] | null | 2 | "2023-02-02T14:46:49Z" | "2023-02-04T17:19:37Z" | "2023-02-04T17:19:36Z" | NONE | null | ### Describe the bug
Hi,
Thanks for the amazing work on the library!
**Describe the bug**
I think I might have noticed a small bug in the filter method.
Having loaded a dataset using `load_dataset`, when I try to filter out empty entries with `batched=True`, I get a TypeError.
### Steps to reproduce the bug
```
train_dataset = train_dataset.filter(
function=lambda example: example["image"] is not None,
batched=True,
batch_size=10)
```
Error message:
```
File .../lib/python3.9/site-packages/datasets/fingerprint.py:480, in fingerprint_transform.<locals>._fingerprint.<locals>.wrapper(*args, **kwargs)
476 validate_fingerprint(kwargs[fingerprint_name])
478 # Call actual function
--> 480 out = func(self, *args, **kwargs)
...
-> 5666 indices_array = [i for i, to_keep in zip(indices, mask) if to_keep]
5667 if indices_mapping is not None:
5668 indices_array = pa.array(indices_array, type=pa.uint64())
TypeError: 'bool' object is not iterable
```
**Removing `batched=True` bypasses the issue.**
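Possibly relevant: with `batched=True`, the function receives a batch of examples and is expected to return one boolean per example, so an equivalent batched call would look like this (a sketch based on the snippet above):
```python
train_dataset = train_dataset.filter(
    function=lambda batch: [image is not None for image in batch["image"]],
    batched=True,
    batch_size=10,
)
```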
### Expected behavior
According to the doc, "[batch_size corresponds to the] number of examples per batch provided to function if batched = True", so we shouldn't need to remove the batched=True arg?
source: https://huggingface.co/docs/datasets/v2.9.0/en/package_reference/main_classes#datasets.Dataset.filter
### Environment info
- `datasets` version: 2.9.0
- Platform: Linux-5.4.0-122-generic-x86_64-with-glibc2.31
- Python version: 3.9.10
- PyArrow version: 10.0.1
- Pandas version: 1.5.3 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5498/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5498/timeline | null | completed | null | null | false | 181,967 |
https://api.github.com/repos/huggingface/datasets/issues/5497 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5497/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5497/comments | https://api.github.com/repos/huggingface/datasets/issues/5497/events | https://github.com/huggingface/datasets/pull/5497 | 1,567,601,264 | PR_kwDODunzps5JFhvc | 5,497 | Improved error message for gated/private repos | {
"avatar_url": "https://avatars.githubusercontent.com/u/7246357?v=4",
"events_url": "https://api.github.com/users/osanseviero/events{/privacy}",
"followers_url": "https://api.github.com/users/osanseviero/followers",
"following_url": "https://api.github.com/users/osanseviero/following{/other_user}",
"gists_url": "https://api.github.com/users/osanseviero/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/osanseviero",
"id": 7246357,
"login": "osanseviero",
"node_id": "MDQ6VXNlcjcyNDYzNTc=",
"organizations_url": "https://api.github.com/users/osanseviero/orgs",
"received_events_url": "https://api.github.com/users/osanseviero/received_events",
"repos_url": "https://api.github.com/users/osanseviero/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/osanseviero/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/osanseviero/subscriptions",
"type": "User",
"url": "https://api.github.com/users/osanseviero"
} | [] | closed | false | null | [] | null | 3 | "2023-02-02T08:56:15Z" | "2023-02-02T11:26:08Z" | "2023-02-02T11:17:15Z" | MEMBER | null | Using `use_auth_token=True` is not needed anymore. If a user is logged in, the token will be automatically retrieved. Also include a mention for gated repos.
See https://github.com/huggingface/huggingface_hub/pull/1064 | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5497/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5497/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/5497.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5497",
"merged_at": "2023-02-02T11:17:14Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5497.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5497"
} | true | 8,460 |
https://api.github.com/repos/huggingface/datasets/issues/5496 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5496/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5496/comments | https://api.github.com/repos/huggingface/datasets/issues/5496/events | https://github.com/huggingface/datasets/issues/5496 | 1,567,301,765 | I_kwDODunzps5dayCF | 5,496 | Add a `reduce` method | {
"avatar_url": "https://avatars.githubusercontent.com/u/59542043?v=4",
"events_url": "https://api.github.com/users/zhangir-azerbayev/events{/privacy}",
"followers_url": "https://api.github.com/users/zhangir-azerbayev/followers",
"following_url": "https://api.github.com/users/zhangir-azerbayev/following{/other_user}",
"gists_url": "https://api.github.com/users/zhangir-azerbayev/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/zhangir-azerbayev",
"id": 59542043,
"login": "zhangir-azerbayev",
"node_id": "MDQ6VXNlcjU5NTQyMDQz",
"organizations_url": "https://api.github.com/users/zhangir-azerbayev/orgs",
"received_events_url": "https://api.github.com/users/zhangir-azerbayev/received_events",
"repos_url": "https://api.github.com/users/zhangir-azerbayev/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/zhangir-azerbayev/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zhangir-azerbayev/subscriptions",
"type": "User",
"url": "https://api.github.com/users/zhangir-azerbayev"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | open | false | null | [] | null | 2 | "2023-02-02T04:30:22Z" | "2023-02-13T15:02:54Z" | null | NONE | null | ### Feature request
Right now the `Dataset` class implements `map()` and `filter()`, but leaves out the third functional idiom popular among Python users: `reduce`.
### Motivation
A `reduce` method is often useful when calculating dataset statistics, for example, the occurrence of a particular n-gram or the average line length of a code dataset.
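To illustrate, here is the kind of computation this would make a one-liner; the `Dataset.reduce` call is hypothetical (the signature is an assumption), while the `functools.reduce` fallback works today:
```python
from functools import reduce
from datasets import load_dataset

ds = load_dataset("wikitext", "wikitext-2-raw-v1", split="train")

# Works today: fold over the examples with functools.reduce
total_chars = reduce(lambda acc, example: acc + len(example["text"]), ds, 0)

# Hypothetical API being requested (signature is an assumption):
# total_chars = ds.reduce(lambda acc, example: acc + len(example["text"]), initializer=0)
```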
### Your contribution
I haven't contributed to `datasets` before, but I don't expect this will be too difficult, since the implementation will closely follow that of `map` and `filter`. I could have a crack over the weekend. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5496/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5496/timeline | null | null | null | null | false | null |
https://api.github.com/repos/huggingface/datasets/issues/5495 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5495/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5495/comments | https://api.github.com/repos/huggingface/datasets/issues/5495/events | https://github.com/huggingface/datasets/issues/5495 | 1,566,803,452 | I_kwDODunzps5dY4X8 | 5,495 | to_tf_dataset fails with datetime UTC columns even if not included in columns argument | {
"avatar_url": "https://avatars.githubusercontent.com/u/2512762?v=4",
"events_url": "https://api.github.com/users/dwyatte/events{/privacy}",
"followers_url": "https://api.github.com/users/dwyatte/followers",
"following_url": "https://api.github.com/users/dwyatte/following{/other_user}",
"gists_url": "https://api.github.com/users/dwyatte/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/dwyatte",
"id": 2512762,
"login": "dwyatte",
"node_id": "MDQ6VXNlcjI1MTI3NjI=",
"organizations_url": "https://api.github.com/users/dwyatte/orgs",
"received_events_url": "https://api.github.com/users/dwyatte/received_events",
"repos_url": "https://api.github.com/users/dwyatte/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/dwyatte/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dwyatte/subscriptions",
"type": "User",
"url": "https://api.github.com/users/dwyatte"
} | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
},
{
"color": "7057ff",
"default": true,
"description": "Good for newcomers",
"id": 1935892877,
"name": "good first issue",
"node_id": "MDU6TGFiZWwxOTM1ODkyODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/good%20first%20issue"
}
] | closed | false | null | [] | null | 2 | "2023-02-01T20:47:33Z" | "2023-02-08T14:33:19Z" | "2023-02-08T14:33:19Z" | CONTRIBUTOR | null | ### Describe the bug
There appears to be some eager behavior in `to_tf_dataset` that runs against every column in a dataset even if they aren't included in the columns argument. This is problematic with datetime UTC columns due to them not working with zero copy. If I don't have UTC information in my datetime column, then everything works as expected.
### Steps to reproduce the bug
```python
import numpy as np
import pandas as pd
from datasets import Dataset
df = pd.DataFrame(np.random.rand(2, 1), columns=["x"])
# df["dt"] = pd.to_datetime(["2023-01-01", "2023-01-01"]) # works fine
df["dt"] = pd.to_datetime(["2023-01-01 00:00:00.00000+00:00", "2023-01-01 00:00:00.00000+00:00"])
df.to_parquet("test.pq")
ds = Dataset.from_parquet("test.pq")
tf_ds = ds.to_tf_dataset(columns=["x"], batch_size=2, shuffle=True)
```
```
ArrowInvalid Traceback (most recent call last)
Cell In[1], line 12
8 df.to_parquet("test.pq")
11 ds = Dataset.from_parquet("test.pq")
---> 12 tf_ds = ds.to_tf_dataset(columns=["r"], batch_size=2, shuffle=True)
File ~/venv/lib/python3.8/site-packages/datasets/arrow_dataset.py:411, in TensorflowDatasetMixin.to_tf_dataset(self, batch_size, columns, shuffle, collate_fn, drop_remainder, collate_fn_args, label_cols, prefetch, num_workers)
407 dataset = self
409 # TODO(Matt, QL): deprecate the retention of label_ids and label
--> 411 output_signature, columns_to_np_types = dataset._get_output_signature(
412 dataset,
413 collate_fn=collate_fn,
414 collate_fn_args=collate_fn_args,
415 cols_to_retain=cols_to_retain,
416 batch_size=batch_size if drop_remainder else None,
417 )
419 if "labels" in output_signature:
420 if ("label_ids" in columns or "label" in columns) and "labels" not in columns:
File ~/venv/lib/python3.8/site-packages/datasets/arrow_dataset.py:254, in TensorflowDatasetMixin._get_output_signature(dataset, collate_fn, collate_fn_args, cols_to_retain, batch_size, num_test_batches)
252 for _ in range(num_test_batches):
253 indices = sample(range(len(dataset)), test_batch_size)
--> 254 test_batch = dataset[indices]
255 if cols_to_retain is not None:
256 test_batch = {key: value for key, value in test_batch.items() if key in cols_to_retain}
File ~/venv/lib/python3.8/site-packages/datasets/arrow_dataset.py:2590, in Dataset.__getitem__(self, key)
2588 def __getitem__(self, key): # noqa: F811
2589 """Can be used to index columns (by string names) or rows (by integer index or iterable of indices or bools)."""
-> 2590 return self._getitem(
2591 key,
2592 )
File ~/venv/lib/python3.8/site-packages/datasets/arrow_dataset.py:2575, in Dataset._getitem(self, key, **kwargs)
2573 formatter = get_formatter(format_type, features=self.features, **format_kwargs)
2574 pa_subtable = query_table(self._data, key, indices=self._indices if self._indices is not None else None)
-> 2575 formatted_output = format_table(
2576 pa_subtable, key, formatter=formatter, format_columns=format_columns, output_all_columns=output_all_columns
2577 )
2578 return formatted_output
File ~/venv/lib/python3.8/site-packages/datasets/formatting/formatting.py:634, in format_table(table, key, formatter, format_columns, output_all_columns)
632 python_formatter = PythonFormatter(features=None)
633 if format_columns is None:
--> 634 return formatter(pa_table, query_type=query_type)
635 elif query_type == "column":
636 if key in format_columns:
File ~/venv/lib/python3.8/site-packages/datasets/formatting/formatting.py:410, in Formatter.__call__(self, pa_table, query_type)
408 return self.format_column(pa_table)
409 elif query_type == "batch":
--> 410 return self.format_batch(pa_table)
File ~/venv/lib/python3.8/site-packages/datasets/formatting/np_formatter.py:78, in NumpyFormatter.format_batch(self, pa_table)
77 def format_batch(self, pa_table: pa.Table) -> Mapping:
---> 78 batch = self.numpy_arrow_extractor().extract_batch(pa_table)
79 batch = self.python_features_decoder.decode_batch(batch)
80 batch = self.recursive_tensorize(batch)
File ~/venv/lib/python3.8/site-packages/datasets/formatting/formatting.py:164, in NumpyArrowExtractor.extract_batch(self, pa_table)
163 def extract_batch(self, pa_table: pa.Table) -> dict:
--> 164 return {col: self._arrow_array_to_numpy(pa_table[col]) for col in pa_table.column_names}
File ~/venv/lib/python3.8/site-packages/datasets/formatting/formatting.py:164, in <dictcomp>(.0)
163 def extract_batch(self, pa_table: pa.Table) -> dict:
--> 164 return {col: self._arrow_array_to_numpy(pa_table[col]) for col in pa_table.column_names}
File ~/venv/lib/python3.8/site-packages/datasets/formatting/formatting.py:185, in NumpyArrowExtractor._arrow_array_to_numpy(self, pa_array)
181 else:
182 zero_copy_only = _is_zero_copy_only(pa_array.type) and all(
183 not _is_array_with_nulls(chunk) for chunk in pa_array.chunks
184 )
--> 185 array: List = [
186 row for chunk in pa_array.chunks for row in chunk.to_numpy(zero_copy_only=zero_copy_only)
187 ]
188 else:
189 if isinstance(pa_array.type, _ArrayXDExtensionType):
190 # don't call to_pylist() to preserve dtype of the fixed-size array
File ~/venv/lib/python3.8/site-packages/datasets/formatting/formatting.py:186, in <listcomp>(.0)
181 else:
182 zero_copy_only = _is_zero_copy_only(pa_array.type) and all(
183 not _is_array_with_nulls(chunk) for chunk in pa_array.chunks
184 )
185 array: List = [
--> 186 row for chunk in pa_array.chunks for row in chunk.to_numpy(zero_copy_only=zero_copy_only)
187 ]
188 else:
189 if isinstance(pa_array.type, _ArrayXDExtensionType):
190 # don't call to_pylist() to preserve dtype of the fixed-size array
File ~/venv/lib/python3.8/site-packages/pyarrow/array.pxi:1475, in pyarrow.lib.Array.to_numpy()
File ~/venv/lib/python3.8/site-packages/pyarrow/error.pxi:100, in pyarrow.lib.check_status()
ArrowInvalid: Needed to copy 1 chunks with 0 nulls, but zero_copy_only was True
```
### Expected behavior
I think there are two potential issues/fixes:
1. Proper handling of datetime UTC columns (perhaps there is something incorrect with zero copy handling here)
2. Not eagerly running against every column in a dataset when the columns argument of `to_tf_dataset` specifies a subset of columns (although I'm not sure if this is unavoidable)
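One possible workaround in the meantime, sketched below, is to drop (or cast) the datetime column before calling `to_tf_dataset`, assuming it is not needed as a model input; the column names in this snippet are placeholders rather than the real dataset's columns:
```python
import pandas as pd
from datasets import Dataset

# toy stand-in for the real data: a datetime[UTC] column next to model inputs
df = pd.DataFrame(
    {
        "created_at": pd.to_datetime(["2023-01-01", "2023-01-02"], utc=True),
        "feature": [0.1, 0.2],
        "label": [0, 1],
    }
)
ds = Dataset.from_pandas(df)

# drop the datetime column so the numpy formatter never tries to convert it
ds = ds.remove_columns(["created_at"])
tf_ds = ds.to_tf_dataset(columns=["feature"], label_cols=["label"], batch_size=2)
```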
### Environment info
- `datasets` version: 2.9.0
- Platform: macOS-13.2-x86_64-i386-64bit
- Python version: 3.8.12
- PyArrow version: 11.0.0
- Pandas version: 1.5.3
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5495/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5495/timeline | null | completed | null | null | false | 582,346 |
https://api.github.com/repos/huggingface/datasets/issues/5494 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5494/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5494/comments | https://api.github.com/repos/huggingface/datasets/issues/5494/events | https://github.com/huggingface/datasets/issues/5494 | 1,566,655,348 | I_kwDODunzps5dYUN0 | 5,494 | Update audio installation doc page | {
"avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4",
"events_url": "https://api.github.com/users/polinaeterna/events{/privacy}",
"followers_url": "https://api.github.com/users/polinaeterna/followers",
"following_url": "https://api.github.com/users/polinaeterna/following{/other_user}",
"gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/polinaeterna",
"id": 16348744,
"login": "polinaeterna",
"node_id": "MDQ6VXNlcjE2MzQ4NzQ0",
"organizations_url": "https://api.github.com/users/polinaeterna/orgs",
"received_events_url": "https://api.github.com/users/polinaeterna/received_events",
"repos_url": "https://api.github.com/users/polinaeterna/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions",
"type": "User",
"url": "https://api.github.com/users/polinaeterna"
} | [
{
"color": "0075ca",
"default": true,
"description": "Improvements or additions to documentation",
"id": 1935892861,
"name": "documentation",
"node_id": "MDU6TGFiZWwxOTM1ODkyODYx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/documentation"
}
] | open | false | null | [] | null | 3 | "2023-02-01T19:07:50Z" | "2023-02-02T13:11:58Z" | null | CONTRIBUTOR | null | Our [installation documentation page](https://huggingface.co/docs/datasets/installation#audio) says that one can use Datasets for mp3 only with `torchaudio<0.12`. `torchaudio>0.12` is actually supported too, but it requires a specific version of ffmpeg that is not easily installed on all Linux versions; there is a custom Ubuntu repo for it, and we have instructions in the code: https://github.com/huggingface/datasets/blob/main/src/datasets/features/audio.py#L327
So we should update the doc page. But first investigate [this issue](5488). | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 1,
"heart": 1,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 3,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5494/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5494/timeline | null | null | null | null | false | null |
https://api.github.com/repos/huggingface/datasets/issues/5493 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5493/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5493/comments | https://api.github.com/repos/huggingface/datasets/issues/5493/events | https://github.com/huggingface/datasets/pull/5493 | 1,566,637,806 | PR_kwDODunzps5JCSAZ | 5,493 | Remove unused `load_from_cache_file` arg from `Dataset.shard()` docstring | {
"avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4",
"events_url": "https://api.github.com/users/polinaeterna/events{/privacy}",
"followers_url": "https://api.github.com/users/polinaeterna/followers",
"following_url": "https://api.github.com/users/polinaeterna/following{/other_user}",
"gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/polinaeterna",
"id": 16348744,
"login": "polinaeterna",
"node_id": "MDQ6VXNlcjE2MzQ4NzQ0",
"organizations_url": "https://api.github.com/users/polinaeterna/orgs",
"received_events_url": "https://api.github.com/users/polinaeterna/received_events",
"repos_url": "https://api.github.com/users/polinaeterna/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions",
"type": "User",
"url": "https://api.github.com/users/polinaeterna"
} | [] | closed | false | null | [] | null | 3 | "2023-02-01T18:57:48Z" | "2023-02-08T15:10:46Z" | "2023-02-08T15:03:50Z" | CONTRIBUTOR | null | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5493/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5493/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/5493.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5493",
"merged_at": "2023-02-08T15:03:50Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5493.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5493"
} | true | 590,762 |
https://api.github.com/repos/huggingface/datasets/issues/5492 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5492/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5492/comments | https://api.github.com/repos/huggingface/datasets/issues/5492/events | https://github.com/huggingface/datasets/issues/5492 | 1,566,604,216 | I_kwDODunzps5dYHu4 | 5,492 | Push_to_hub in a pull request | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
},
{
"color": "7057ff",
"default": true,
"description": "Good for newcomers",
"id": 1935892877,
"name": "good first issue",
"node_id": "MDU6TGFiZWwxOTM1ODkyODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/good%20first%20issue"
}
] | open | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/32437151?v=4",
"events_url": "https://api.github.com/users/nateraw/events{/privacy}",
"followers_url": "https://api.github.com/users/nateraw/followers",
"following_url": "https://api.github.com/users/nateraw/following{/other_user}",
"gists_url": "https://api.github.com/users/nateraw/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/nateraw",
"id": 32437151,
"login": "nateraw",
"node_id": "MDQ6VXNlcjMyNDM3MTUx",
"organizations_url": "https://api.github.com/users/nateraw/orgs",
"received_events_url": "https://api.github.com/users/nateraw/received_events",
"repos_url": "https://api.github.com/users/nateraw/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/nateraw/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nateraw/subscriptions",
"type": "User",
"url": "https://api.github.com/users/nateraw"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/32437151?v=4",
"events_url": "https://api.github.com/users/nateraw/events{/privacy}",
"followers_url": "https://api.github.com/users/nateraw/followers",
"following_url": "https://api.github.com/users/nateraw/following{/other_user}",
"gists_url": "https://api.github.com/users/nateraw/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/nateraw",
"id": 32437151,
"login": "nateraw",
"node_id": "MDQ6VXNlcjMyNDM3MTUx",
"organizations_url": "https://api.github.com/users/nateraw/orgs",
"received_events_url": "https://api.github.com/users/nateraw/received_events",
"repos_url": "https://api.github.com/users/nateraw/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/nateraw/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nateraw/subscriptions",
"type": "User",
"url": "https://api.github.com/users/nateraw"
},
{
"avatar_url": "https://avatars.githubusercontent.com/u/38854604?v=4",
"events_url": "https://api.github.com/users/AJDERS/events{/privacy}",
"followers_url": "https://api.github.com/users/AJDERS/followers",
"following_url": "https://api.github.com/users/AJDERS/following{/other_user}",
"gists_url": "https://api.github.com/users/AJDERS/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/AJDERS",
"id": 38854604,
"login": "AJDERS",
"node_id": "MDQ6VXNlcjM4ODU0NjA0",
"organizations_url": "https://api.github.com/users/AJDERS/orgs",
"received_events_url": "https://api.github.com/users/AJDERS/received_events",
"repos_url": "https://api.github.com/users/AJDERS/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/AJDERS/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/AJDERS/subscriptions",
"type": "User",
"url": "https://api.github.com/users/AJDERS"
}
] | null | 2 | "2023-02-01T18:32:14Z" | "2023-02-14T22:16:40Z" | null | MEMBER | null | Right now `ds.push_to_hub()` can push a dataset on `main` or on a new branch with `branch=`, but there is no way to open a pull request. Even passing `branch=refs/pr/x` doesn't seem to work: it tries to create a branch with that name
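In the meantime, a possible workaround (a sketch only; the repo id, file path, and toy dataset below are placeholders) is to export the data to parquet yourself and upload it with `huggingface_hub`, which already supports `create_pr=True` on `upload_file`:
```python
import io
from datasets import Dataset
from huggingface_hub import HfApi

ds = Dataset.from_dict({"text": ["hello"] * 2})  # stand-in for the real dataset

# write the dataset to an in-memory parquet buffer
buf = io.BytesIO()
ds.to_parquet(buf)

# upload the shard and open a PR on the Hub instead of committing to main
HfApi().upload_file(
    path_or_fileobj=buf.getvalue(),
    path_in_repo="data/train-00000-of-00001.parquet",
    repo_id="username/my_dataset",  # placeholder repo id
    repo_type="dataset",
    create_pr=True,
)
```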
cc @nateraw
It should be possible to tweak the use of `huggingface_hub` in `push_to_hub` to make it open a PR or push to an existing PR | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5492/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5492/timeline | null | null | null | null | false | null |
https://api.github.com/repos/huggingface/datasets/issues/5491 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5491/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5491/comments | https://api.github.com/repos/huggingface/datasets/issues/5491/events | https://github.com/huggingface/datasets/pull/5491 | 1,566,235,012 | PR_kwDODunzps5JA9OD | 5,491 | [MINOR] Typo | {
"avatar_url": "https://avatars.githubusercontent.com/u/3664563?v=4",
"events_url": "https://api.github.com/users/cakiki/events{/privacy}",
"followers_url": "https://api.github.com/users/cakiki/followers",
"following_url": "https://api.github.com/users/cakiki/following{/other_user}",
"gists_url": "https://api.github.com/users/cakiki/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/cakiki",
"id": 3664563,
"login": "cakiki",
"node_id": "MDQ6VXNlcjM2NjQ1NjM=",
"organizations_url": "https://api.github.com/users/cakiki/orgs",
"received_events_url": "https://api.github.com/users/cakiki/received_events",
"repos_url": "https://api.github.com/users/cakiki/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/cakiki/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cakiki/subscriptions",
"type": "User",
"url": "https://api.github.com/users/cakiki"
} | [] | closed | false | null | [] | null | 2 | "2023-02-01T14:39:39Z" | "2023-02-02T07:42:28Z" | "2023-02-02T07:35:14Z" | CONTRIBUTOR | null | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5491/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5491/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/5491.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5491",
"merged_at": "2023-02-02T07:35:14Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5491.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5491"
} | true | 60,935 |
https://api.github.com/repos/huggingface/datasets/issues/5490 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5490/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5490/comments | https://api.github.com/repos/huggingface/datasets/issues/5490/events | https://github.com/huggingface/datasets/pull/5490 | 1,565,842,327 | PR_kwDODunzps5I_nz- | 5,490 | Do not add index column by default when exporting to CSV | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [] | closed | false | null | [] | null | 2 | "2023-02-01T10:20:55Z" | "2023-02-09T09:29:08Z" | "2023-02-09T09:22:23Z" | MEMBER | null | As pointed out by @merveenoyan, the default behavior of `Dataset.to_csv` adds the index as an additional, unnamed column.
This PR changes the default behavior, so that now the index column is not written.
To add the index column, now you need to pass `index=True` and also `index_label=<name of the index column>` to name that column.
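For illustration, a minimal usage sketch (the column and file names are arbitrary):
```python
from datasets import Dataset

ds = Dataset.from_dict({"text": ["foo", "bar"]})

# new default: the index column is not written
ds.to_csv("no_index.csv")

# opt back in: write the index and give that column a name
ds.to_csv("with_index.csv", index=True, index_label="idx")
```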
CC: @merveenoyan | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 1,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5490/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5490/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/5490.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5490",
"merged_at": "2023-02-09T09:22:23Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5490.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5490"
} | true | 687,688 |
https://api.github.com/repos/huggingface/datasets/issues/5489 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5489/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5489/comments | https://api.github.com/repos/huggingface/datasets/issues/5489/events | https://github.com/huggingface/datasets/pull/5489 | 1,565,761,705 | PR_kwDODunzps5I_WPH | 5,489 | Pin dill lower version | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [] | closed | false | null | [] | null | 2 | "2023-02-01T09:33:42Z" | "2023-02-02T07:48:09Z" | "2023-02-02T07:40:43Z" | MEMBER | null | Pin `dill` lower version compatible with `datasets`.
Related to:
- #5487
- #288
Note that the required `dill._dill` module was introduced in dill-2.8.0, however we have heuristically tested that datasets can only be installed with dill>=3.0.0 (otherwise pip hangs indefinitely while preparing metadata for multiprocess-0.70.7)
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5489/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5489/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/5489.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5489",
"merged_at": "2023-02-02T07:40:43Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5489.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5489"
} | true | 79,621 |
https://api.github.com/repos/huggingface/datasets/issues/5488 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5488/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5488/comments | https://api.github.com/repos/huggingface/datasets/issues/5488/events | https://github.com/huggingface/datasets/issues/5488 | 1,565,025,262 | I_kwDODunzps5dSGPu | 5,488 | Error loading MP3 files from CommonVoice | {
"avatar_url": "https://avatars.githubusercontent.com/u/110259722?v=4",
"events_url": "https://api.github.com/users/kradonneoh/events{/privacy}",
"followers_url": "https://api.github.com/users/kradonneoh/followers",
"following_url": "https://api.github.com/users/kradonneoh/following{/other_user}",
"gists_url": "https://api.github.com/users/kradonneoh/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/kradonneoh",
"id": 110259722,
"login": "kradonneoh",
"node_id": "U_kgDOBpJuCg",
"organizations_url": "https://api.github.com/users/kradonneoh/orgs",
"received_events_url": "https://api.github.com/users/kradonneoh/received_events",
"repos_url": "https://api.github.com/users/kradonneoh/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/kradonneoh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kradonneoh/subscriptions",
"type": "User",
"url": "https://api.github.com/users/kradonneoh"
} | [] | open | false | null | [] | null | 3 | "2023-01-31T21:25:33Z" | "2023-02-01T15:28:56Z" | null | NONE | null | ### Describe the bug
When loading a CommonVoice dataset with `datasets==2.9.0` and `torchaudio>=0.12.0`, I get an error reading the audio arrays:
```python
---------------------------------------------------------------------------
LibsndfileError Traceback (most recent call last)
~/.local/lib/python3.8/site-packages/datasets/features/audio.py in _decode_mp3(self, path_or_file)
310 try: # try torchaudio anyway because sometimes it works (depending on the os and os packages installed)
--> 311 array, sampling_rate = self._decode_mp3_torchaudio(path_or_file)
312 except RuntimeError:
~/.local/lib/python3.8/site-packages/datasets/features/audio.py in _decode_mp3_torchaudio(self, path_or_file)
351
--> 352 array, sampling_rate = torchaudio.load(path_or_file, format="mp3")
353 if self.sampling_rate and self.sampling_rate != sampling_rate:
~/.local/lib/python3.8/site-packages/torchaudio/backend/soundfile_backend.py in load(filepath, frame_offset, num_frames, normalize, channels_first, format)
204 """
--> 205 with soundfile.SoundFile(filepath, "r") as file_:
206 if file_.format != "WAV" or normalize:
~/.local/lib/python3.8/site-packages/soundfile.py in __init__(self, file, mode, samplerate, channels, subtype, endian, format, closefd)
654 format, subtype, endian)
--> 655 self._file = self._open(file, mode_int, closefd)
656 if set(mode).issuperset('r+') and self.seekable():
~/.local/lib/python3.8/site-packages/soundfile.py in _open(self, file, mode_int, closefd)
1212 err = _snd.sf_error(file_ptr)
-> 1213 raise LibsndfileError(err, prefix="Error opening {0!r}: ".format(self.name))
1214 if mode_int == _snd.SFM_WRITE:
LibsndfileError: Error opening <_io.BytesIO object at 0x7fa539462090>: File contains data in an unknown format.
```
I assume this is because there's some issue with the mp3 decoding process. I've verified that I have `ffmpeg>=4` (on a Linux distro), which appears to be the fallback backend for `torchaudio` (at least according to #4889).
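For reference, a quick way to check which decoding backends this `torchaudio` build exposes (a diagnostic sketch; the output depends on the environment):
```python
import torchaudio

print(torchaudio.__version__)
# the traceback above went through the "soundfile" backend, which cannot decode mp3
print(torchaudio.list_audio_backends())
```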
### Steps to reproduce the bug
```python
dataset = load_dataset("mozilla-foundation/common_voice_11_0", "be", split="train")
dataset[0]
```
### Expected behavior
Similar behavior to `torchaudio<0.12.0`, which doesn't result in a `LibsndfileError`
### Environment info
- `datasets` version: 2.9.0
- Platform: Linux-5.15.0-52-generic-x86_64-with-glibc2.29
- Python version: 3.8.10
- PyArrow version: 10.0.1
- Pandas version: 1.5.1 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5488/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5488/timeline | null | null | null | null | false | null |
https://api.github.com/repos/huggingface/datasets/issues/5487 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5487/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5487/comments | https://api.github.com/repos/huggingface/datasets/issues/5487/events | https://github.com/huggingface/datasets/issues/5487 | 1,564,480,121 | I_kwDODunzps5dQBJ5 | 5,487 | Incorrect filepath for dill module | {
"avatar_url": "https://avatars.githubusercontent.com/u/35349273?v=4",
"events_url": "https://api.github.com/users/avivbrokman/events{/privacy}",
"followers_url": "https://api.github.com/users/avivbrokman/followers",
"following_url": "https://api.github.com/users/avivbrokman/following{/other_user}",
"gists_url": "https://api.github.com/users/avivbrokman/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/avivbrokman",
"id": 35349273,
"login": "avivbrokman",
"node_id": "MDQ6VXNlcjM1MzQ5Mjcz",
"organizations_url": "https://api.github.com/users/avivbrokman/orgs",
"received_events_url": "https://api.github.com/users/avivbrokman/received_events",
"repos_url": "https://api.github.com/users/avivbrokman/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/avivbrokman/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/avivbrokman/subscriptions",
"type": "User",
"url": "https://api.github.com/users/avivbrokman"
} | [] | open | false | null | [] | null | 5 | "2023-01-31T15:01:08Z" | "2023-02-02T07:07:55Z" | null | NONE | null | ### Describe the bug
I installed the `datasets` package and when I try to `import` it, I get the following error:
```
Traceback (most recent call last):
File "/var/folders/jt/zw5g74ln6tqfdzsl8tx378j00000gn/T/ipykernel_3805/3458380017.py", line 1, in <module>
import datasets
File "/Users/avivbrokman/opt/anaconda3/lib/python3.9/site-packages/datasets/__init__.py", line 43, in <module>
from .arrow_dataset import Dataset
File "/Users/avivbrokman/opt/anaconda3/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 66, in <module>
from .arrow_writer import ArrowWriter, OptimizedTypedSequence
File "/Users/avivbrokman/opt/anaconda3/lib/python3.9/site-packages/datasets/arrow_writer.py", line 27, in <module>
from .features import Features, Image, Value
File "/Users/avivbrokman/opt/anaconda3/lib/python3.9/site-packages/datasets/features/__init__.py", line 17, in <module>
from .audio import Audio
File "/Users/avivbrokman/opt/anaconda3/lib/python3.9/site-packages/datasets/features/audio.py", line 12, in <module>
from ..download.streaming_download_manager import xopen
File "/Users/avivbrokman/opt/anaconda3/lib/python3.9/site-packages/datasets/download/__init__.py", line 9, in <module>
from .download_manager import DownloadManager, DownloadMode
File "/Users/avivbrokman/opt/anaconda3/lib/python3.9/site-packages/datasets/download/download_manager.py", line 36, in <module>
from ..utils.py_utils import NestedDataStructure, map_nested, size_str
File "/Users/avivbrokman/opt/anaconda3/lib/python3.9/site-packages/datasets/utils/py_utils.py", line 602, in <module>
class Pickler(dill.Pickler):
File "/Users/avivbrokman/opt/anaconda3/lib/python3.9/site-packages/datasets/utils/py_utils.py", line 605, in Pickler
dispatch = dill._dill.MetaCatchingDict(dill.Pickler.dispatch.copy())
AttributeError: module 'dill' has no attribute '_dill'
```
Looking at the GitHub source code for dill, it appears that `datasets` has a bug or is not compatible with the latest `dill`. Specifically, rather than `dill._dill.XXXX` it should be `dill.dill._dill.XXXX`. But given the popularity of `datasets`, I'm surprised to be the first person to hit this issue, which makes me wonder if I'm misdiagnosing it.
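For reference, a quick way to check which `dill` is actually picked up and whether it still exposes `_dill` (a diagnostic sketch):
```python
import dill

print(dill.__version__)
# datasets expects this attribute to exist; False here matches the AttributeError above
print(hasattr(dill, "_dill"))
```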
### Steps to reproduce the bug
Install `dill` and `datasets` packages and then `import datasets`
### Expected behavior
I expect `datasets` to import.
### Environment info
- `datasets` version: 2.9.0
- Platform: macOS-10.16-x86_64-i386-64bit
- Python version: 3.9.13
- PyArrow version: 11.0.0
- Pandas version: 1.4.4 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5487/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5487/timeline | null | null | null | null | false | null |
https://api.github.com/repos/huggingface/datasets/issues/5486 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5486/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5486/comments | https://api.github.com/repos/huggingface/datasets/issues/5486/events | https://github.com/huggingface/datasets/issues/5486 | 1,564,059,749 | I_kwDODunzps5dOahl | 5,486 | Adding `sep` to TextConfig | {
"avatar_url": "https://avatars.githubusercontent.com/u/29576434?v=4",
"events_url": "https://api.github.com/users/omar-araboghli/events{/privacy}",
"followers_url": "https://api.github.com/users/omar-araboghli/followers",
"following_url": "https://api.github.com/users/omar-araboghli/following{/other_user}",
"gists_url": "https://api.github.com/users/omar-araboghli/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/omar-araboghli",
"id": 29576434,
"login": "omar-araboghli",
"node_id": "MDQ6VXNlcjI5NTc2NDM0",
"organizations_url": "https://api.github.com/users/omar-araboghli/orgs",
"received_events_url": "https://api.github.com/users/omar-araboghli/received_events",
"repos_url": "https://api.github.com/users/omar-araboghli/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/omar-araboghli/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/omar-araboghli/subscriptions",
"type": "User",
"url": "https://api.github.com/users/omar-araboghli"
} | [] | open | false | null | [] | null | 2 | "2023-01-31T10:39:53Z" | "2023-01-31T14:50:18Z" | null | NONE | null | I have a local `.txt` file that follows the `CONLL2003` format, which I need to load using the `text` loading script. However, with `sample_by='line'`, one can only split the dataset into lines without splitting each line into columns. Would it be reasonable to add a `sep` argument in combination with `sample_by='paragraph'` to parse a paragraph into an array for each column? If so, I am happy to contribute!
## Environment
* `python 3.8.10`
* `datasets 2.9.0`
## Snippet of `train.txt`
```txt
Distribution NN O O
and NN O O
dynamics NN O O
of NN O O
electron NN O B-RP
complexes NN O I-RP
in NN O O
cyanobacterial NN O B-R
membranes NN O I-R
The NN O O
occurrence NN O O
of NN O O
prostaglandin NN O B-R
F2α NN O I-R
in NN O O
Pharbitis NN O B-R
seedlings NN O I-R
grown NN O O
under NN O O
short NN O B-P
days NN O I-P
or NN O I-P
days NN O I-P
```
## Current Behaviour
```python
# defining 4 features ['tokens', 'pos_tags', 'chunk_tags', 'ner_tags'] here would fail with `ValueError: Length of names (4) does not match length of arrays (1)`
dataset = datasets.load_dataset(path='text', features=features, data_files={'train': 'train.txt'}, sample_by='line')
dataset['train']['tokens'][0]
>>> 'Distribution\tNN\tO\tO'
```
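In the meantime, a workaround with the existing API might look like the sketch below; it assumes tab-separated columns, as in the snippet above:
```python
import datasets

dataset = datasets.load_dataset(
    "text", data_files={"train": "train.txt"}, sample_by="paragraph"
)

def split_columns(example):
    # split each paragraph into rows, then each row into its tab-separated columns
    rows = [line.split("\t") for line in example["text"].splitlines() if line.strip()]
    return {
        "tokens": [r[0] for r in rows],
        "pos_tags": [r[1] for r in rows],
        "chunk_tags": [r[2] for r in rows],
        "ner_tags": [r[3] for r in rows],
    }

dataset = dataset.map(split_columns, remove_columns=["text"])
```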
## Expected Behaviour / Suggestion
```python
# suppose we defined 4 features ['tokens', 'pos_tags', 'chunk_tags', 'ner_tags']
dataset = datasets.load_dataset(path='text', features=features, data_files={'train': 'train.txt'}, sample_by='paragraph', sep='\t')
dataset['train']['tokens'][0]
>>> ['Distribution', 'and', 'dynamics', ... ]
dataset['train']['ner_tags'][0]
>>> ['O', 'O', 'O', ... ]
``` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5486/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5486/timeline | null | null | null | null | false | null |
https://api.github.com/repos/huggingface/datasets/issues/5485 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5485/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5485/comments | https://api.github.com/repos/huggingface/datasets/issues/5485/events | https://github.com/huggingface/datasets/pull/5485 | 1,563,002,829 | PR_kwDODunzps5I2ER2 | 5,485 | Add section in tutorial for IterableDataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4",
"events_url": "https://api.github.com/users/stevhliu/events{/privacy}",
"followers_url": "https://api.github.com/users/stevhliu/followers",
"following_url": "https://api.github.com/users/stevhliu/following{/other_user}",
"gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/stevhliu",
"id": 59462357,
"login": "stevhliu",
"node_id": "MDQ6VXNlcjU5NDYyMzU3",
"organizations_url": "https://api.github.com/users/stevhliu/orgs",
"received_events_url": "https://api.github.com/users/stevhliu/received_events",
"repos_url": "https://api.github.com/users/stevhliu/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions",
"type": "User",
"url": "https://api.github.com/users/stevhliu"
} | [] | closed | false | null | [] | null | 2 | "2023-01-30T18:43:04Z" | "2023-02-01T18:15:38Z" | "2023-02-01T18:08:46Z" | MEMBER | null | Introduces an `IterableDataset` and how to access it in the tutorial section. It also adds a brief next step section at the end to provide a path for users who want more explanation and a path for users who want something more practical and learn how to preprocess these dataset types. It'll complement the awesome new doc introduced in:
- #5410 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5485/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5485/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/5485.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5485",
"merged_at": "2023-02-01T18:08:46Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5485.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5485"
} | true | 170,742 |
https://api.github.com/repos/huggingface/datasets/issues/5484 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5484/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5484/comments | https://api.github.com/repos/huggingface/datasets/issues/5484/events | https://github.com/huggingface/datasets/pull/5484 | 1,562,877,070 | PR_kwDODunzps5I1oaq | 5,484 | Update docs for `nyu_depth_v2` dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/36858976?v=4",
"events_url": "https://api.github.com/users/awsaf49/events{/privacy}",
"followers_url": "https://api.github.com/users/awsaf49/followers",
"following_url": "https://api.github.com/users/awsaf49/following{/other_user}",
"gists_url": "https://api.github.com/users/awsaf49/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/awsaf49",
"id": 36858976,
"login": "awsaf49",
"node_id": "MDQ6VXNlcjM2ODU4OTc2",
"organizations_url": "https://api.github.com/users/awsaf49/orgs",
"received_events_url": "https://api.github.com/users/awsaf49/received_events",
"repos_url": "https://api.github.com/users/awsaf49/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/awsaf49/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/awsaf49/subscriptions",
"type": "User",
"url": "https://api.github.com/users/awsaf49"
} | [] | closed | false | null | [] | null | 6 | "2023-01-30T17:37:08Z" | "2023-02-05T14:22:10Z" | "2023-02-05T14:15:04Z" | CONTRIBUTOR | null | This PR will fix the issue mentioned in #5461.
cc: @sayakpaul @lhoestq
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5484/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5484/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/5484.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5484",
"merged_at": "2023-02-05T14:15:04Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5484.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5484"
} | true | 506,276 |
https://api.github.com/repos/huggingface/datasets/issues/5483 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5483/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5483/comments | https://api.github.com/repos/huggingface/datasets/issues/5483/events | https://github.com/huggingface/datasets/issues/5483 | 1,560,894,690 | I_kwDODunzps5dCVzi | 5,483 | Unable to upload dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/57996478?v=4",
"events_url": "https://api.github.com/users/yuvalkirstain/events{/privacy}",
"followers_url": "https://api.github.com/users/yuvalkirstain/followers",
"following_url": "https://api.github.com/users/yuvalkirstain/following{/other_user}",
"gists_url": "https://api.github.com/users/yuvalkirstain/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/yuvalkirstain",
"id": 57996478,
"login": "yuvalkirstain",
"node_id": "MDQ6VXNlcjU3OTk2NDc4",
"organizations_url": "https://api.github.com/users/yuvalkirstain/orgs",
"received_events_url": "https://api.github.com/users/yuvalkirstain/received_events",
"repos_url": "https://api.github.com/users/yuvalkirstain/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/yuvalkirstain/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yuvalkirstain/subscriptions",
"type": "User",
"url": "https://api.github.com/users/yuvalkirstain"
} | [] | closed | false | null | [] | null | 1 | "2023-01-28T15:18:26Z" | "2023-01-29T08:09:49Z" | "2023-01-29T08:09:49Z" | NONE | null | ### Describe the bug
Uploading a simple dataset ends with an exception
### Steps to reproduce the bug
I created a new conda env with Python 3.10, pip-installed `datasets`, and ran:
```python
>>> from datasets import load_dataset, load_from_disk, Dataset
>>> d = Dataset.from_dict({"text": ["hello"] * 2})
>>> d.push_to_hub("ttt111")
/home/olab/kirstain/anaconda3/envs/datasets/lib/python3.10/site-packages/huggingface_hub/utils/_hf_folder.py:92: UserWarning: A token has been found in `/a/home/cc/students/cs/kirstain/.huggingface/token`. This is the old path where tokens were stored. The new location is `/home/olab/kirstain/.cache/huggingface/token` which is configurable using `HF_HOME` environment variable. Your token has been copied to this new location. You can now safely delete the old token file manually or use `huggingface-cli logout`.
warnings.warn(
Creating parquet from Arrow format: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 279.94ba/s]
Upload 1 LFS files: 0%| | 0/1 [00:02<?, ?it/s]
Pushing dataset shards to the dataset hub: 0%| | 0/1 [00:04<?, ?it/s]
Traceback (most recent call last):
File "/home/olab/kirstain/anaconda3/envs/datasets/lib/python3.10/site-packages/huggingface_hub/utils/_errors.py", line 264, in hf_raise_for_status
response.raise_for_status()
File "/home/olab/kirstain/anaconda3/envs/datasets/lib/python3.10/site-packages/requests/models.py", line 1021, in raise_for_status
raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 403 Client Error: Forbidden for url: https://s3.us-east-1.amazonaws.com/lfs.huggingface.co/repos/cf/0c/cf0c5ab8a3f729e5f57a8b79a36ecea64a31126f13218591c27ed9a1c7bd9b41/ece885a4bb6bbc8c1bb51b45542b805283d74590f72cd4c45d3ba76628570386?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Content-Sha256=UNSIGNED-PAYLOAD&X-Amz-Credential=AKIA4N7VTDGO27GPWFUO%2F20230128%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20230128T151640Z&X-Amz-Expires=900&X-Amz-Signature=89e78e9a9d70add7ed93d453334f4f93c6f29d889d46750a1f2da04af73978db&X-Amz-SignedHeaders=host&x-amz-storage-class=INTELLIGENT_TIERING&x-id=PutObject
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/home/olab/kirstain/anaconda3/envs/datasets/lib/python3.10/site-packages/huggingface_hub/_commit_api.py", line 334, in _inner_upload_lfs_object
return _upload_lfs_object(
File "/home/olab/kirstain/anaconda3/envs/datasets/lib/python3.10/site-packages/huggingface_hub/_commit_api.py", line 391, in _upload_lfs_object
lfs_upload(
File "/home/olab/kirstain/anaconda3/envs/datasets/lib/python3.10/site-packages/huggingface_hub/lfs.py", line 273, in lfs_upload
_upload_single_part(
File "/home/olab/kirstain/anaconda3/envs/datasets/lib/python3.10/site-packages/huggingface_hub/lfs.py", line 305, in _upload_single_part
hf_raise_for_status(upload_res)
File "/home/olab/kirstain/anaconda3/envs/datasets/lib/python3.10/site-packages/huggingface_hub/utils/_errors.py", line 318, in hf_raise_for_status
raise HfHubHTTPError(str(e), response=response) from e
huggingface_hub.utils._errors.HfHubHTTPError: 403 Client Error: Forbidden for url: https://s3.us-east-1.amazonaws.com/lfs.huggingface.co/repos/cf/0c/cf0c5ab8a3f729e5f57a8b79a36ecea64a31126f13218591c27ed9a1c7bd9b41/ece885a4bb6bbc8c1bb51b45542b805283d74590f72cd4c45d3ba76628570386?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Content-Sha256=UNSIGNED-PAYLOAD&X-Amz-Credential=AKIA4N7VTDGO27GPWFUO%2F20230128%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20230128T151640Z&X-Amz-Expires=900&X-Amz-Signature=89e78e9a9d70add7ed93d453334f4f93c6f29d889d46750a1f2da04af73978db&X-Amz-SignedHeaders=host&x-amz-storage-class=INTELLIGENT_TIERING&x-id=PutObject
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/olab/kirstain/anaconda3/envs/datasets/lib/python3.10/site-packages/datasets/arrow_dataset.py", line 4909, in push_to_hub
repo_id, split, uploaded_size, dataset_nbytes, repo_files, deleted_size = self._push_parquet_shards_to_hub(
File "/home/olab/kirstain/anaconda3/envs/datasets/lib/python3.10/site-packages/datasets/arrow_dataset.py", line 4804, in _push_parquet_shards_to_hub
_retry(
File "/home/olab/kirstain/anaconda3/envs/datasets/lib/python3.10/site-packages/datasets/utils/file_utils.py", line 281, in _retry
return func(*func_args, **func_kwargs)
File "/home/olab/kirstain/anaconda3/envs/datasets/lib/python3.10/site-packages/huggingface_hub/utils/_validators.py", line 124, in _inner_fn
return fn(*args, **kwargs)
File "/home/olab/kirstain/anaconda3/envs/datasets/lib/python3.10/site-packages/huggingface_hub/hf_api.py", line 2537, in upload_file
commit_info = self.create_commit(
File "/home/olab/kirstain/anaconda3/envs/datasets/lib/python3.10/site-packages/huggingface_hub/utils/_validators.py", line 124, in _inner_fn
return fn(*args, **kwargs)
File "/home/olab/kirstain/anaconda3/envs/datasets/lib/python3.10/site-packages/huggingface_hub/hf_api.py", line 2346, in create_commit
upload_lfs_files(
File "/home/olab/kirstain/anaconda3/envs/datasets/lib/python3.10/site-packages/huggingface_hub/utils/_validators.py", line 124, in _inner_fn
return fn(*args, **kwargs)
File "/home/olab/kirstain/anaconda3/envs/datasets/lib/python3.10/site-packages/huggingface_hub/_commit_api.py", line 346, in upload_lfs_files
thread_map(
File "/home/olab/kirstain/anaconda3/envs/datasets/lib/python3.10/site-packages/tqdm/contrib/concurrent.py", line 94, in thread_map
return _executor_map(ThreadPoolExecutor, fn, *iterables, **tqdm_kwargs)
File "/home/olab/kirstain/anaconda3/envs/datasets/lib/python3.10/site-packages/tqdm/contrib/concurrent.py", line 76, in _executor_map
return list(tqdm_class(ex.map(fn, *iterables, **map_args), **kwargs))
File "/home/olab/kirstain/anaconda3/envs/datasets/lib/python3.10/site-packages/tqdm/std.py", line 1195, in __iter__
for obj in iterable:
File "/home/olab/kirstain/anaconda3/envs/datasets/lib/python3.10/concurrent/futures/_base.py", line 621, in result_iterator
yield _result_or_cancel(fs.pop())
File "/home/olab/kirstain/anaconda3/envs/datasets/lib/python3.10/concurrent/futures/_base.py", line 319, in _result_or_cancel
return fut.result(timeout)
File "/home/olab/kirstain/anaconda3/envs/datasets/lib/python3.10/concurrent/futures/_base.py", line 458, in result
return self.__get_result()
File "/home/olab/kirstain/anaconda3/envs/datasets/lib/python3.10/concurrent/futures/_base.py", line 403, in __get_result
raise self._exception
File "/home/olab/kirstain/anaconda3/envs/datasets/lib/python3.10/concurrent/futures/thread.py", line 58, in run
result = self.fn(*self.args, **self.kwargs)
File "/home/olab/kirstain/anaconda3/envs/datasets/lib/python3.10/site-packages/huggingface_hub/_commit_api.py", line 338, in _inner_upload_lfs_object
raise RuntimeError(
RuntimeError: Error while uploading 'data/train-00000-of-00001-6df93048e66df326.parquet' to the Hub.
```
### Expected behavior
The dataset should be uploaded without any exceptions
### Environment info
- `datasets` version: 2.9.0
- Platform: Linux-4.15.0-65-generic-x86_64-with-glibc2.27
- Python version: 3.10.9
- PyArrow version: 11.0.0
- Pandas version: 1.5.3
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5483/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5483/timeline | null | completed | null | null | false | 60,683 |
https://api.github.com/repos/huggingface/datasets/issues/5482 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5482/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5482/comments | https://api.github.com/repos/huggingface/datasets/issues/5482/events | https://github.com/huggingface/datasets/issues/5482 | 1,560,853,137 | I_kwDODunzps5dCLqR | 5,482 | Reload features from Parquet metadata | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
},
{
"color": "BDE59C",
"default": false,
"description": "Issues a bit more difficult than \"Good First\" issues",
"id": 3761482852,
"name": "good second issue",
"node_id": "LA_kwDODunzps7gM6xk",
"url": "https://api.github.com/repos/huggingface/datasets/labels/good%20second%20issue"
}
] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/6368040?v=4",
"events_url": "https://api.github.com/users/MFreidank/events{/privacy}",
"followers_url": "https://api.github.com/users/MFreidank/followers",
"following_url": "https://api.github.com/users/MFreidank/following{/other_user}",
"gists_url": "https://api.github.com/users/MFreidank/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/MFreidank",
"id": 6368040,
"login": "MFreidank",
"node_id": "MDQ6VXNlcjYzNjgwNDA=",
"organizations_url": "https://api.github.com/users/MFreidank/orgs",
"received_events_url": "https://api.github.com/users/MFreidank/received_events",
"repos_url": "https://api.github.com/users/MFreidank/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/MFreidank/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MFreidank/subscriptions",
"type": "User",
"url": "https://api.github.com/users/MFreidank"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/6368040?v=4",
"events_url": "https://api.github.com/users/MFreidank/events{/privacy}",
"followers_url": "https://api.github.com/users/MFreidank/followers",
"following_url": "https://api.github.com/users/MFreidank/following{/other_user}",
"gists_url": "https://api.github.com/users/MFreidank/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/MFreidank",
"id": 6368040,
"login": "MFreidank",
"node_id": "MDQ6VXNlcjYzNjgwNDA=",
"organizations_url": "https://api.github.com/users/MFreidank/orgs",
"received_events_url": "https://api.github.com/users/MFreidank/received_events",
"repos_url": "https://api.github.com/users/MFreidank/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/MFreidank/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MFreidank/subscriptions",
"type": "User",
"url": "https://api.github.com/users/MFreidank"
}
] | null | 3 | "2023-01-28T13:12:31Z" | "2023-02-12T15:57:02Z" | "2023-02-12T15:57:02Z" | MEMBER | null | The idea would be to allow this :
```python
ds.to_parquet("my_dataset/ds.parquet")
reloaded = load_dataset("my_dataset")
assert ds.features == reloaded.features
```
And it should also work with Image and Audio types (right now they're reloaded as a dict type)
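A rough sketch of how the feature types could round-trip through the parquet key-value metadata (the `huggingface` key and the hand-written payload below are illustrative assumptions, not the final implementation):
```python
import json
import pyarrow.parquet as pq
from datasets import Dataset, Features

ds = Dataset.from_dict({"text": ["hello"] * 2})
ds.to_parquet("ds.parquet")

# hand-written stand-in for the serialized feature types
features_payload = {"text": {"dtype": "string", "_type": "Value"}}

# writing side: stash the features in the parquet schema metadata
table = pq.read_table("ds.parquet")
metadata = dict(table.schema.metadata or {})
metadata[b"huggingface"] = json.dumps({"info": {"features": features_payload}}).encode("utf-8")
pq.write_table(table.replace_schema_metadata(metadata), "ds_with_features.parquet")

# reading side: recover the features from the metadata
stored = json.loads(pq.read_table("ds_with_features.parquet").schema.metadata[b"huggingface"])
features = Features.from_dict(stored["info"]["features"])
assert features == ds.features
```
The parquet builder in `load_dataset` could then pick this metadata up to rebuild Image/Audio features instead of plain dicts.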
This can be implemented by storing and reading the feature types in the parquet metadata, as we do for arrow files. | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5482/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5482/timeline | null | completed | null | null | false | 1,305,871 |
https://api.github.com/repos/huggingface/datasets/issues/5481 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5481/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5481/comments | https://api.github.com/repos/huggingface/datasets/issues/5481/events | https://github.com/huggingface/datasets/issues/5481 | 1,560,468,195 | I_kwDODunzps5dAtrj | 5,481 | Load a cached dataset as iterable | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
},
{
"color": "BDE59C",
"default": false,
"description": "Issues a bit more difficult than \"Good First\" issues",
"id": 3761482852,
"name": "good second issue",
"node_id": "LA_kwDODunzps7gM6xk",
"url": "https://api.github.com/repos/huggingface/datasets/labels/good%20second%20issue"
}
] | open | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/56002455?v=4",
"events_url": "https://api.github.com/users/hamid-vakilzadeh/events{/privacy}",
"followers_url": "https://api.github.com/users/hamid-vakilzadeh/followers",
"following_url": "https://api.github.com/users/hamid-vakilzadeh/following{/other_user}",
"gists_url": "https://api.github.com/users/hamid-vakilzadeh/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/hamid-vakilzadeh",
"id": 56002455,
"login": "hamid-vakilzadeh",
"node_id": "MDQ6VXNlcjU2MDAyNDU1",
"organizations_url": "https://api.github.com/users/hamid-vakilzadeh/orgs",
"received_events_url": "https://api.github.com/users/hamid-vakilzadeh/received_events",
"repos_url": "https://api.github.com/users/hamid-vakilzadeh/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/hamid-vakilzadeh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hamid-vakilzadeh/subscriptions",
"type": "User",
"url": "https://api.github.com/users/hamid-vakilzadeh"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/56002455?v=4",
"events_url": "https://api.github.com/users/hamid-vakilzadeh/events{/privacy}",
"followers_url": "https://api.github.com/users/hamid-vakilzadeh/followers",
"following_url": "https://api.github.com/users/hamid-vakilzadeh/following{/other_user}",
"gists_url": "https://api.github.com/users/hamid-vakilzadeh/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/hamid-vakilzadeh",
"id": 56002455,
"login": "hamid-vakilzadeh",
"node_id": "MDQ6VXNlcjU2MDAyNDU1",
"organizations_url": "https://api.github.com/users/hamid-vakilzadeh/orgs",
"received_events_url": "https://api.github.com/users/hamid-vakilzadeh/received_events",
"repos_url": "https://api.github.com/users/hamid-vakilzadeh/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/hamid-vakilzadeh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hamid-vakilzadeh/subscriptions",
"type": "User",
"url": "https://api.github.com/users/hamid-vakilzadeh"
}
] | null | 12 | "2023-01-27T21:43:51Z" | "2023-02-07T15:58:15Z" | null | MEMBER | null | The idea would be to allow something like
```python
ds = load_dataset("c4", "en", as_iterable=True)
```
to be used to train models. It would load an `IterableDataset` from the cached Arrow files.
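Until something like that lands, a possible stopgap (a sketch that simply streams the cached examples through a generator) could be:
```python
from datasets import load_dataset, IterableDataset

ds = load_dataset("c4", "en", split="train")  # regular map-style Dataset, cached as Arrow

def gen():
    yield from ds  # iterates over the memory-mapped Arrow cache

iterable_ds = IterableDataset.from_generator(gen)
```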
Cc @stas00
Edit : from the discussions we may load from cache when streaming=True | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 3,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 3,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5481/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5481/timeline | null | null | null | null | false | null |
https://api.github.com/repos/huggingface/datasets/issues/5480 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5480/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5480/comments | https://api.github.com/repos/huggingface/datasets/issues/5480/events | https://github.com/huggingface/datasets/pull/5480 | 1,560,364,866 | PR_kwDODunzps5ItY2y | 5,480 | Select columns of Dataset or DatasetDict | {
"avatar_url": "https://avatars.githubusercontent.com/u/9336514?v=4",
"events_url": "https://api.github.com/users/daskol/events{/privacy}",
"followers_url": "https://api.github.com/users/daskol/followers",
"following_url": "https://api.github.com/users/daskol/following{/other_user}",
"gists_url": "https://api.github.com/users/daskol/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/daskol",
"id": 9336514,
"login": "daskol",
"node_id": "MDQ6VXNlcjkzMzY1MTQ=",
"organizations_url": "https://api.github.com/users/daskol/orgs",
"received_events_url": "https://api.github.com/users/daskol/received_events",
"repos_url": "https://api.github.com/users/daskol/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/daskol/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/daskol/subscriptions",
"type": "User",
"url": "https://api.github.com/users/daskol"
} | [] | closed | false | null | [] | null | 2 | "2023-01-27T20:06:16Z" | "2023-02-13T11:10:13Z" | "2023-02-13T09:59:35Z" | CONTRIBUTOR | null | Close #5474 and #5468. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5480/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5480/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/5480.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5480",
"merged_at": "2023-02-13T09:59:35Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5480.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5480"
} | true | 1,432,399 |
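A short usage sketch for the column-selection operation added by the PR above (#5480). The method name `select_columns` is not spelled out in the row itself, so it is assumed here based on the linked issues:
```python
from datasets import Dataset

ds = Dataset.from_dict({
    "int": [0, 1, 2],
    "char": ["a", "b", "c"],
    "none": [None] * 3,
})

# Keep only the listed columns; everything else is dropped.
subset = ds.select_columns(["int", "char"])

print(ds.column_names)      # ['int', 'char', 'none']
print(subset.column_names)  # ['int', 'char']
```
The same call is expected to work on a `DatasetDict`, applying the selection to every split.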
https://api.github.com/repos/huggingface/datasets/issues/5479 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5479/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5479/comments | https://api.github.com/repos/huggingface/datasets/issues/5479/events | https://github.com/huggingface/datasets/issues/5479 | 1,560,357,590 | I_kwDODunzps5dASrW | 5,479 | audiofolder works on local env, but creates empty dataset in a remote one, what dependencies could I be missing/outdated | {
"avatar_url": "https://avatars.githubusercontent.com/u/107211437?v=4",
"events_url": "https://api.github.com/users/jcho19/events{/privacy}",
"followers_url": "https://api.github.com/users/jcho19/followers",
"following_url": "https://api.github.com/users/jcho19/following{/other_user}",
"gists_url": "https://api.github.com/users/jcho19/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/jcho19",
"id": 107211437,
"login": "jcho19",
"node_id": "U_kgDOBmPqrQ",
"organizations_url": "https://api.github.com/users/jcho19/orgs",
"received_events_url": "https://api.github.com/users/jcho19/received_events",
"repos_url": "https://api.github.com/users/jcho19/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/jcho19/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jcho19/subscriptions",
"type": "User",
"url": "https://api.github.com/users/jcho19"
} | [] | closed | false | null | [] | null | 0 | "2023-01-27T20:01:22Z" | "2023-01-29T05:23:14Z" | "2023-01-29T05:23:14Z" | NONE | null | ### Describe the bug
I'm using a custom audio dataset (400+ audio files) in the correct format for audiofolder. Although loading the dataset with audiofolder works in one local setup, it doesn't in a remote one (it just creates an empty dataset). I have both ffmpeg and libsndfile installed on both computers; what could be missing or need updating on the one that doesn't work? On the remote env, libsndfile is 1.0.28 and ffmpeg is 4.2.1.
```python
from datasets import load_dataset

ds = load_dataset("audiofolder", data_dir="...")
```
Here is the output (should be generating 400+ rows):
```
Downloading and preparing dataset audiofolder/default to ...
Downloading data files: 0%| | 0/2 [00:00<?, ?it/s]
Downloading data files: 0it [00:00, ?it/s]
Extracting data files: 0it [00:00, ?it/s]
Generating train split: 0 examples [00:00, ? examples/s]
Dataset audiofolder downloaded and prepared to ... Subsequent calls will reuse this data.
0%| | 0/1 [00:00<?, ?it/s]
DatasetDict({
    train: Dataset({
        features: ['audio', 'transcription'],
        num_rows: 1
    })
})
```
Here is my pip environment in the one that doesn't work (uses torch 1.11.a0 from shared env):
Package Version
------------------- -------------------
aiofiles 22.1.0
aiohttp 3.8.3
aiosignal 1.3.1
altair 4.2.1
anyio 3.6.2
appdirs 1.4.4
argcomplete 2.0.0
argon2-cffi 20.1.0
astunparse 1.6.3
async-timeout 4.0.2
attrs 21.2.0
audioread 3.0.0
backcall 0.2.0
bleach 4.0.0
certifi 2021.10.8
cffi 1.14.6
charset-normalizer 2.0.12
click 8.1.3
contourpy 1.0.7
cycler 0.11.0
datasets 2.9.0
debugpy 1.4.1
decorator 5.0.9
defusedxml 0.7.1
dill 0.3.6
distlib 0.3.4
entrypoints 0.3
evaluate 0.4.0
expecttest 0.1.3
fastapi 0.89.1
ffmpy 0.3.0
filelock 3.6.0
fonttools 4.38.0
frozenlist 1.3.3
fsspec 2023.1.0
future 0.18.2
gradio 3.16.2
h11 0.14.0
httpcore 0.16.3
httpx 0.23.3
huggingface-hub 0.12.0
idna 3.3
ipykernel 6.2.0
ipython 7.26.0
ipython-genutils 0.2.0
ipywidgets 7.6.3
jedi 0.18.0
Jinja2 3.0.1
jiwer 2.5.1
joblib 1.2.0
jsonschema 3.2.0
jupyter 1.0.0
jupyter-client 6.1.12
jupyter-console 6.4.0
jupyter-core 4.7.1
jupyterlab-pygments 0.1.2
jupyterlab-widgets 1.0.0
kiwisolver 1.4.4
Levenshtein 0.20.2
librosa 0.9.2
linkify-it-py 1.0.3
llvmlite 0.39.1
markdown-it-py 2.1.0
MarkupSafe 2.0.1
matplotlib 3.6.3
matplotlib-inline 0.1.2
mdit-py-plugins 0.3.3
mdurl 0.1.2
mistune 0.8.4
multidict 6.0.4
multiprocess 0.70.14
nbclient 0.5.4
nbconvert 6.1.0
nbformat 5.1.3
nest-asyncio 1.5.1
notebook 6.4.3
numba 0.56.4
numpy 1.20.3
orjson 3.8.5
packaging 21.0
pandas 1.5.3
pandocfilters 1.4.3
parso 0.8.2
pexpect 4.8.0
pickleshare 0.7.5
Pillow 9.4.0
pip 22.3.1
pipx 1.1.0
platformdirs 2.5.2
pooch 1.6.0
prometheus-client 0.11.0
prompt-toolkit 3.0.19
psutil 5.9.0
ptyprocess 0.7.0
pyarrow 10.0.1
pycparser 2.20
pycryptodome 3.16.0
pydantic 1.10.4
pydub 0.25.1
Pygments 2.10.0
pyparsing 2.4.7
pyrsistent 0.18.0
python-dateutil 2.8.2
python-multipart 0.0.5
pytz 2022.7.1
PyYAML 6.0
pyzmq 22.2.1
qtconsole 5.1.1
QtPy 1.10.0
rapidfuzz 2.13.7
regex 2022.10.31
requests 2.27.1
resampy 0.4.2
responses 0.18.0
rfc3986 1.5.0
scikit-learn 1.2.1
scipy 1.6.3
Send2Trash 1.8.0
setuptools 65.5.1
shiboken6 6.3.1
shiboken6-generator 6.3.1
six 1.16.0
sniffio 1.3.0
soundfile 0.11.0
starlette 0.22.0
terminado 0.11.0
testpath 0.5.0
threadpoolctl 3.1.0
tokenizers 0.13.2
toolz 0.12.0
torch 1.11.0a0+gitunknown
tornado 6.1
tqdm 4.64.1
traitlets 5.0.5
transformers 4.27.0.dev0
types-dataclasses 0.6.4
typing_extensions 4.1.1
uc-micro-py 1.0.1
urllib3 1.26.9
userpath 1.8.0
uvicorn 0.20.0
virtualenv 20.14.1
wcwidth 0.2.5
webencodings 0.5.1
websockets 10.4
wheel 0.37.1
widgetsnbextension 3.5.1
xxhash 3.2.0
yarl 1.8.2
### Steps to reproduce the bug
Create a pip environment with the packages listed above (make sure ffmpeg and libsndfile are installed with the same versions listed above).
Create a custom audio dataset and load it in with load_dataset("audiofolder", ...)
### Expected behavior
load_dataset should create a dataset with 400+ rows.
### Environment info
- `datasets` version: 2.9.0
- Platform: Linux-3.10.0-1160.80.1.el7.x86_64-x86_64-with-glibc2.17
- Python version: 3.9.0
- PyArrow version: 10.0.1
- Pandas version: 1.5.3 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5479/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5479/timeline | null | completed | null | null | false | 120,112 |
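For reference, a minimal working layout for the `audiofolder` builder discussed in #5479 above. The folder and file names are illustrative; the key assumptions are that the metadata file is named `metadata.csv`, that it has a `file_name` column pointing at the audio files, and that the paths are relative to the metadata file:
```python
# Illustrative layout (not taken from the issue):
# my_dataset/
# ├── metadata.csv          # columns: file_name,transcription
# └── data/
#     ├── clip_0001.mp3
#     └── clip_0002.mp3
from datasets import load_dataset

ds = load_dataset("audiofolder", data_dir="my_dataset")
print(ds["train"].num_rows)  # should match the number of data rows in metadata.csv
```
If the split ends up with a single row, one thing worth checking is whether the `file_name` paths in `metadata.csv` actually resolve to the audio files on disk.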
https://api.github.com/repos/huggingface/datasets/issues/5478 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5478/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5478/comments | https://api.github.com/repos/huggingface/datasets/issues/5478/events | https://github.com/huggingface/datasets/pull/5478 | 1,560,357,583 | PR_kwDODunzps5ItXQG | 5,478 | Tip for recomputing metadata | {
"avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4",
"events_url": "https://api.github.com/users/stevhliu/events{/privacy}",
"followers_url": "https://api.github.com/users/stevhliu/followers",
"following_url": "https://api.github.com/users/stevhliu/following{/other_user}",
"gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/stevhliu",
"id": 59462357,
"login": "stevhliu",
"node_id": "MDQ6VXNlcjU5NDYyMzU3",
"organizations_url": "https://api.github.com/users/stevhliu/orgs",
"received_events_url": "https://api.github.com/users/stevhliu/received_events",
"repos_url": "https://api.github.com/users/stevhliu/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions",
"type": "User",
"url": "https://api.github.com/users/stevhliu"
} | [] | closed | false | null | [] | null | 2 | "2023-01-27T20:01:22Z" | "2023-01-30T19:22:21Z" | "2023-01-30T19:15:26Z" | MEMBER | null | From this [feedback](https://discuss.huggingface.co/t/nonmatchingsplitssizeserror/30033) on the forum, thought I'd include a tip for recomputing the metadata numbers if it is your own dataset. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 1,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5478/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5478/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/5478.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5478",
"merged_at": "2023-01-30T19:15:26Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5478.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5478"
} | true | 256,444 |
https://api.github.com/repos/huggingface/datasets/issues/5477 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5477/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5477/comments | https://api.github.com/repos/huggingface/datasets/issues/5477/events | https://github.com/huggingface/datasets/issues/5477 | 1,559,909,892 | I_kwDODunzps5c-lYE | 5,477 | Unpin sqlalchemy once issue is fixed | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [] | open | false | null | [] | null | 0 | "2023-01-27T15:01:55Z" | "2023-01-27T15:01:55Z" | null | MEMBER | null | Once the source issue is fixed:
- pandas-dev/pandas#51015
we should revert the pin introduced in:
- #5476 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5477/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5477/timeline | null | null | null | null | false | null |
https://api.github.com/repos/huggingface/datasets/issues/5476 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5476/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5476/comments | https://api.github.com/repos/huggingface/datasets/issues/5476/events | https://github.com/huggingface/datasets/pull/5476 | 1,559,594,684 | PR_kwDODunzps5IqwC_ | 5,476 | Pin sqlalchemy | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | 3 | "2023-01-27T11:26:38Z" | "2023-01-27T12:06:51Z" | "2023-01-27T11:57:48Z" | MEMBER | null | since sqlalchemy update to 2.0.0 the CI started to fail: https://github.com/huggingface/datasets/actions/runs/4023742457/jobs/6914976514
the error comes from pandas: https://github.com/pandas-dev/pandas/issues/51015 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5476/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5476/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/5476.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5476",
"merged_at": "2023-01-27T11:57:48Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5476.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5476"
} | true | 1,870 |
https://api.github.com/repos/huggingface/datasets/issues/5475 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5475/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5475/comments | https://api.github.com/repos/huggingface/datasets/issues/5475/events | https://github.com/huggingface/datasets/issues/5475 | 1,559,030,149 | I_kwDODunzps5c7OmF | 5,475 | Dataset scan time is much slower than using native arrow | {
"avatar_url": "https://avatars.githubusercontent.com/u/121845112?v=4",
"events_url": "https://api.github.com/users/jonny-cyberhaven/events{/privacy}",
"followers_url": "https://api.github.com/users/jonny-cyberhaven/followers",
"following_url": "https://api.github.com/users/jonny-cyberhaven/following{/other_user}",
"gists_url": "https://api.github.com/users/jonny-cyberhaven/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/jonny-cyberhaven",
"id": 121845112,
"login": "jonny-cyberhaven",
"node_id": "U_kgDOB0M1eA",
"organizations_url": "https://api.github.com/users/jonny-cyberhaven/orgs",
"received_events_url": "https://api.github.com/users/jonny-cyberhaven/received_events",
"repos_url": "https://api.github.com/users/jonny-cyberhaven/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/jonny-cyberhaven/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jonny-cyberhaven/subscriptions",
"type": "User",
"url": "https://api.github.com/users/jonny-cyberhaven"
} | [] | closed | false | null | [] | null | 3 | "2023-01-27T01:32:25Z" | "2023-01-30T16:17:11Z" | "2023-01-30T16:17:11Z" | CONTRIBUTOR | null | ### Describe the bug
I'm basically running the same scanning experiment from the tutorials https://huggingface.co/course/chapter5/4?fw=pt except now I'm comparing to a native pyarrow version.
I'm finding that the native pyarrow approach is much faster (2 orders of magnitude). Is there something I'm missing that explains this phenomenon?
### Steps to reproduce the bug
https://colab.research.google.com/drive/11EtHDaGAf1DKCpvYnAPJUW-LFfAcDzHY?usp=sharing
### Expected behavior
I expect scan times to be on par with using pyarrow directly.
### Environment info
standard colab environment | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5475/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5475/timeline | null | completed | null | null | false | 312,286 |
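To make the comparison in #5475 above reproducible without the Colab link, here is a small, self-contained benchmark sketch. The dataset size and batch size are arbitrary, `Dataset.iter` is assumed to be available (it is in recent `datasets` releases), and the printed timings will vary by machine:
```python
import time

import pyarrow as pa
from datasets import Dataset

n = 1_000_000
ds = Dataset.from_dict({"x": list(range(n))})
table = pa.table({"x": list(range(n))})

# Scan through the datasets API in batches.
start = time.time()
rows = 0
for batch in ds.iter(batch_size=1_000):
    rows += len(batch["x"])
print(f"datasets scan: {time.time() - start:.3f}s for {rows} rows")

# Scan the equivalent Arrow table directly with pyarrow.
start = time.time()
rows = 0
for record_batch in table.to_batches(max_chunksize=1_000):
    rows += record_batch.num_rows
print(f"pyarrow scan: {time.time() - start:.3f}s for {rows} rows")
```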
https://api.github.com/repos/huggingface/datasets/issues/5474 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5474/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5474/comments | https://api.github.com/repos/huggingface/datasets/issues/5474/events | https://github.com/huggingface/datasets/issues/5474 | 1,558,827,155 | I_kwDODunzps5c6dCT | 5,474 | Column project operation on `datasets.Dataset` | {
"avatar_url": "https://avatars.githubusercontent.com/u/9336514?v=4",
"events_url": "https://api.github.com/users/daskol/events{/privacy}",
"followers_url": "https://api.github.com/users/daskol/followers",
"following_url": "https://api.github.com/users/daskol/following{/other_user}",
"gists_url": "https://api.github.com/users/daskol/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/daskol",
"id": 9336514,
"login": "daskol",
"node_id": "MDQ6VXNlcjkzMzY1MTQ=",
"organizations_url": "https://api.github.com/users/daskol/orgs",
"received_events_url": "https://api.github.com/users/daskol/received_events",
"repos_url": "https://api.github.com/users/daskol/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/daskol/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/daskol/subscriptions",
"type": "User",
"url": "https://api.github.com/users/daskol"
} | [
{
"color": "cfd3d7",
"default": true,
"description": "This issue or pull request already exists",
"id": 1935892865,
"name": "duplicate",
"node_id": "MDU6TGFiZWwxOTM1ODkyODY1",
"url": "https://api.github.com/repos/huggingface/datasets/labels/duplicate"
},
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | closed | false | null | [] | null | 1 | "2023-01-26T21:47:53Z" | "2023-02-13T09:59:37Z" | "2023-02-13T09:59:37Z" | CONTRIBUTOR | null | ### Feature request
There is no operation to select a subset of columns of the original dataset. The expected API follows.
```python
from datasets import Dataset

a = Dataset.from_dict({
    'int': [0, 1, 2],
    'char': ['a', 'b', 'c'],
    'none': [None] * 3,
})
b = a.project('int', 'char') # usually, .select()
print(a.column_names) # stdout: ['int', 'char', 'none']
print(b.column_names) # stdout: ['int', 'char']
```
The `project` method could easily accept not only column names (as `str`) but also, for example, a univariate function applied to the corresponding column. Keyword arguments could also be used in order to rename columns in advance (see `pandas`, `pyspark`, `pyarrow`, and SQL).
### Motivation
Projection is a typical operation in every data processing library, and it is a basic building block of well-known data manipulation languages like SQL. Without this operation, the `datasets.Dataset` interface is not complete.
### Your contribution
Not sure. Some of my PRs are still open and some do not have any discussions. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5474/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5474/timeline | null | completed | null | null | false | 1,512,704 |
https://api.github.com/repos/huggingface/datasets/issues/5473 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5473/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5473/comments | https://api.github.com/repos/huggingface/datasets/issues/5473/events | https://github.com/huggingface/datasets/pull/5473 | 1,558,668,197 | PR_kwDODunzps5Inm9h | 5,473 | Set dev version | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | 3 | "2023-01-26T19:34:44Z" | "2023-01-26T19:47:34Z" | "2023-01-26T19:38:30Z" | MEMBER | null | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5473/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5473/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/5473.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5473",
"merged_at": "2023-01-26T19:38:30Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5473.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5473"
} | true | 226 |
https://api.github.com/repos/huggingface/datasets/issues/5472 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5472/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5472/comments | https://api.github.com/repos/huggingface/datasets/issues/5472/events | https://github.com/huggingface/datasets/pull/5472 | 1,558,662,251 | PR_kwDODunzps5Inlp8 | 5,472 | Release: 2.9.0 | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | 4 | "2023-01-26T19:29:42Z" | "2023-01-26T19:40:44Z" | "2023-01-26T19:33:00Z" | MEMBER | null | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5472/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5472/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/5472.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5472",
"merged_at": "2023-01-26T19:33:00Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5472.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5472"
} | true | 198 |
https://api.github.com/repos/huggingface/datasets/issues/5471 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5471/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5471/comments | https://api.github.com/repos/huggingface/datasets/issues/5471/events | https://github.com/huggingface/datasets/pull/5471 | 1,558,557,545 | PR_kwDODunzps5InPA7 | 5,471 | Add num_test_batches option | {
"avatar_url": "https://avatars.githubusercontent.com/u/22614925?v=4",
"events_url": "https://api.github.com/users/amyeroberts/events{/privacy}",
"followers_url": "https://api.github.com/users/amyeroberts/followers",
"following_url": "https://api.github.com/users/amyeroberts/following{/other_user}",
"gists_url": "https://api.github.com/users/amyeroberts/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/amyeroberts",
"id": 22614925,
"login": "amyeroberts",
"node_id": "MDQ6VXNlcjIyNjE0OTI1",
"organizations_url": "https://api.github.com/users/amyeroberts/orgs",
"received_events_url": "https://api.github.com/users/amyeroberts/received_events",
"repos_url": "https://api.github.com/users/amyeroberts/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/amyeroberts/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/amyeroberts/subscriptions",
"type": "User",
"url": "https://api.github.com/users/amyeroberts"
} | [] | closed | false | null | [] | null | 4 | "2023-01-26T18:09:40Z" | "2023-01-27T18:16:45Z" | "2023-01-27T18:08:36Z" | CONTRIBUTOR | null | `to_tf_dataset` calls can be very costly because of the number of test batches drawn during `_get_output_signature`. The test batches are draw in order to estimate the shapes when creating the tensorflow dataset. This is necessary when the shapes can be irregular, but not in cases when the tensor shapes are the same across all samples. This PR adds an option to change the number of batches drawn, so the user can speed this conversion up.
Running the following, and modifying `num_test_batches`
```
import time
from datasets import load_dataset
from transformers import DefaultDataCollator
data_collator = DefaultDataCollator()
dataset = load_dataset("beans")
dataset = dataset["train"].with_format("np")
start = time.time()
dataset = dataset.to_tf_dataset(
columns=["image"],
label_cols=["label"],
batch_size=8,
collate_fn=data_collator,
num_test_batches=NUM_TEST_BATCHES,
)
end = time.time()
print(end - start)
```
NUM_TEST_BATCHES=200: 0.8197s
NUM_TEST_BATCHES=50: 0.3070s
NUM_TEST_BATCHES=2: 0.1417s
NUM_TEST_BATCHES=1: 0.1352s | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5471/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5471/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/5471.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5471",
"merged_at": "2023-01-27T18:08:36Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5471.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5471"
} | true | 86,336 |
https://api.github.com/repos/huggingface/datasets/issues/5470 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5470/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5470/comments | https://api.github.com/repos/huggingface/datasets/issues/5470/events | https://github.com/huggingface/datasets/pull/5470 | 1,558,542,611 | PR_kwDODunzps5InLw9 | 5,470 | Update dataset card creation | {
"avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4",
"events_url": "https://api.github.com/users/stevhliu/events{/privacy}",
"followers_url": "https://api.github.com/users/stevhliu/followers",
"following_url": "https://api.github.com/users/stevhliu/following{/other_user}",
"gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/stevhliu",
"id": 59462357,
"login": "stevhliu",
"node_id": "MDQ6VXNlcjU5NDYyMzU3",
"organizations_url": "https://api.github.com/users/stevhliu/orgs",
"received_events_url": "https://api.github.com/users/stevhliu/received_events",
"repos_url": "https://api.github.com/users/stevhliu/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions",
"type": "User",
"url": "https://api.github.com/users/stevhliu"
} | [] | closed | false | null | [] | null | 4 | "2023-01-26T17:57:51Z" | "2023-01-27T16:27:00Z" | "2023-01-27T16:20:10Z" | MEMBER | null | Encourages users to create a dataset card on the Hub directly with the new metadata ui + import dataset card template instead of telling users to manually create and upload one. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5470/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5470/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/5470.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5470",
"merged_at": "2023-01-27T16:20:10Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5470.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5470"
} | true | 80,539 |
https://api.github.com/repos/huggingface/datasets/issues/5469 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5469/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5469/comments | https://api.github.com/repos/huggingface/datasets/issues/5469/events | https://github.com/huggingface/datasets/pull/5469 | 1,558,346,906 | PR_kwDODunzps5Imhk2 | 5,469 | Remove deprecated `shard_size` arg from `.push_to_hub()` | {
"avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4",
"events_url": "https://api.github.com/users/polinaeterna/events{/privacy}",
"followers_url": "https://api.github.com/users/polinaeterna/followers",
"following_url": "https://api.github.com/users/polinaeterna/following{/other_user}",
"gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/polinaeterna",
"id": 16348744,
"login": "polinaeterna",
"node_id": "MDQ6VXNlcjE2MzQ4NzQ0",
"organizations_url": "https://api.github.com/users/polinaeterna/orgs",
"received_events_url": "https://api.github.com/users/polinaeterna/received_events",
"repos_url": "https://api.github.com/users/polinaeterna/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions",
"type": "User",
"url": "https://api.github.com/users/polinaeterna"
} | [] | closed | false | null | [] | null | 2 | "2023-01-26T15:40:56Z" | "2023-01-26T17:37:51Z" | "2023-01-26T17:30:59Z" | CONTRIBUTOR | null | The docstrings say that it was supposed to be deprecated since version 2.4.0, can we remove it? | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5469/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5469/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/5469.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5469",
"merged_at": "2023-01-26T17:30:59Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5469.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5469"
} | true | 6,603 |
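Following up on #5469 above, the surviving argument for controlling shard size in `push_to_hub` is `max_shard_size`; a hedged sketch of the call (the repository id and shard size are placeholders, and the call requires prior authentication, e.g. `huggingface-cli login`):
```python
from datasets import Dataset

ds = Dataset.from_dict({"text": ["a", "b", "c"]})

# `shard_size` was deprecated in 2.4.0 and removed by this PR; pass max_shard_size instead.
ds.push_to_hub("username/my-dataset", max_shard_size="500MB")
```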
https://api.github.com/repos/huggingface/datasets/issues/5468 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5468/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5468/comments | https://api.github.com/repos/huggingface/datasets/issues/5468/events | https://github.com/huggingface/datasets/issues/5468 | 1,558,066,625 | I_kwDODunzps5c3jXB | 5,468 | Allow opposite of remove_columns on Dataset and DatasetDict | {
"avatar_url": "https://avatars.githubusercontent.com/u/346853?v=4",
"events_url": "https://api.github.com/users/hollance/events{/privacy}",
"followers_url": "https://api.github.com/users/hollance/followers",
"following_url": "https://api.github.com/users/hollance/following{/other_user}",
"gists_url": "https://api.github.com/users/hollance/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/hollance",
"id": 346853,
"login": "hollance",
"node_id": "MDQ6VXNlcjM0Njg1Mw==",
"organizations_url": "https://api.github.com/users/hollance/orgs",
"received_events_url": "https://api.github.com/users/hollance/received_events",
"repos_url": "https://api.github.com/users/hollance/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/hollance/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hollance/subscriptions",
"type": "User",
"url": "https://api.github.com/users/hollance"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
},
{
"color": "7057ff",
"default": true,
"description": "Good for newcomers",
"id": 1935892877,
"name": "good first issue",
"node_id": "MDU6TGFiZWwxOTM1ODkyODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/good%20first%20issue"
}
] | closed | false | null | [] | null | 9 | "2023-01-26T12:28:09Z" | "2023-02-13T09:59:38Z" | "2023-02-13T09:59:38Z" | NONE | null | ### Feature request
In this blog post https://huggingface.co/blog/audio-datasets, I noticed the following code:
```python
COLUMNS_TO_KEEP = ["text", "audio"]
all_columns = gigaspeech["train"].column_names
columns_to_remove = set(all_columns) - set(COLUMNS_TO_KEEP)
gigaspeech = gigaspeech.remove_columns(columns_to_remove)
```
This kind of thing happens a lot when you don't need to keep all columns from the dataset. It would be more convenient (and less error prone) if you could just write:
```python
gigaspeech = gigaspeech.keep_columns(["text", "audio"])
```
Internally, `keep_columns` could still call `remove_columns`, but it expresses more clearly what the user's intent is.
### Motivation
Less code to write for the user of the dataset.
### Your contribution
- | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5468/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5468/timeline | null | completed | null | null | false | 1,546,289 |
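The workaround in #5468 above can be packaged as a small helper today, without assuming any new API; `keep_columns` below is our own name for the function, not a method that exists on `Dataset`:
```python
from datasets import Dataset


def keep_columns(ds: Dataset, columns_to_keep) -> Dataset:
    """Return a copy of `ds` containing only `columns_to_keep`; all other columns are dropped."""
    to_remove = [name for name in ds.column_names if name not in set(columns_to_keep)]
    return ds.remove_columns(to_remove)


ds = Dataset.from_dict({"text": ["x"], "audio_path": ["a.mp3"], "extra": [0]})
print(keep_columns(ds, ["text", "audio_path"]).column_names)  # ['text', 'audio_path']
```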
https://api.github.com/repos/huggingface/datasets/issues/5467 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5467/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5467/comments | https://api.github.com/repos/huggingface/datasets/issues/5467/events | https://github.com/huggingface/datasets/pull/5467 | 1,557,898,273 | PR_kwDODunzps5IlAlk | 5,467 | Fix conda command in readme | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | 4 | "2023-01-26T10:03:01Z" | "2023-01-26T18:32:16Z" | "2023-01-26T18:29:37Z" | MEMBER | null | The [conda forge channel](https://anaconda.org/conda-forge/datasets) is lagging behind (as of right now, only 2.7.1 is available), we should recommend using the [Hugging face channel](https://anaconda.org/HuggingFace/datasets) that we are maintaining
```
conda install -c huggingface datasets
``` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5467/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5467/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/5467.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5467",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/5467.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5467"
} | true | 30,396 |
https://api.github.com/repos/huggingface/datasets/issues/5466 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5466/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5466/comments | https://api.github.com/repos/huggingface/datasets/issues/5466/events | https://github.com/huggingface/datasets/pull/5466 | 1,557,584,845 | PR_kwDODunzps5Ij-z1 | 5,466 | remove pathlib.Path with URIs | {
"avatar_url": "https://avatars.githubusercontent.com/u/121845112?v=4",
"events_url": "https://api.github.com/users/jonny-cyberhaven/events{/privacy}",
"followers_url": "https://api.github.com/users/jonny-cyberhaven/followers",
"following_url": "https://api.github.com/users/jonny-cyberhaven/following{/other_user}",
"gists_url": "https://api.github.com/users/jonny-cyberhaven/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/jonny-cyberhaven",
"id": 121845112,
"login": "jonny-cyberhaven",
"node_id": "U_kgDOB0M1eA",
"organizations_url": "https://api.github.com/users/jonny-cyberhaven/orgs",
"received_events_url": "https://api.github.com/users/jonny-cyberhaven/received_events",
"repos_url": "https://api.github.com/users/jonny-cyberhaven/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/jonny-cyberhaven/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jonny-cyberhaven/subscriptions",
"type": "User",
"url": "https://api.github.com/users/jonny-cyberhaven"
} | [] | closed | false | null | [] | null | 5 | "2023-01-26T03:25:45Z" | "2023-01-26T17:08:57Z" | "2023-01-26T16:59:11Z" | CONTRIBUTOR | null | Pathlib will convert "//" to "/" which causes retry errors when downloading from cloud storage | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5466/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5466/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/5466.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5466",
"merged_at": "2023-01-26T16:59:11Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5466.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5466"
} | true | 48,806 |
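A tiny reproduction of the behavior motivating #5466 above, using only the standard library; the URI is an arbitrary example:
```python
from pathlib import PurePosixPath

uri = "s3://my-bucket/data/train.parquet"
print(str(PurePosixPath(uri)))  # s3:/my-bucket/data/train.parquet  <- the "//" is collapsed to "/"
```
This is why the PR keeps remote URIs as plain strings instead of routing them through `pathlib.Path`.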
https://api.github.com/repos/huggingface/datasets/issues/5465 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5465/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5465/comments | https://api.github.com/repos/huggingface/datasets/issues/5465/events | https://github.com/huggingface/datasets/issues/5465 | 1,557,510,618 | I_kwDODunzps5c1bna | 5,465 | audiofolder creates empty dataset even though the dataset passed in follows the correct structure | {
"avatar_url": "https://avatars.githubusercontent.com/u/107211437?v=4",
"events_url": "https://api.github.com/users/jcho19/events{/privacy}",
"followers_url": "https://api.github.com/users/jcho19/followers",
"following_url": "https://api.github.com/users/jcho19/following{/other_user}",
"gists_url": "https://api.github.com/users/jcho19/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/jcho19",
"id": 107211437,
"login": "jcho19",
"node_id": "U_kgDOBmPqrQ",
"organizations_url": "https://api.github.com/users/jcho19/orgs",
"received_events_url": "https://api.github.com/users/jcho19/received_events",
"repos_url": "https://api.github.com/users/jcho19/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/jcho19/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jcho19/subscriptions",
"type": "User",
"url": "https://api.github.com/users/jcho19"
} | [] | closed | false | null | [] | null | 0 | "2023-01-26T01:45:45Z" | "2023-01-26T08:48:45Z" | "2023-01-26T08:48:45Z" | NONE | null | ### Describe the bug
The structure of my dataset folder called "my_dataset" is: a `data` subfolder and a `metadata.csv` file.
The `data` folder contains all of the mp3 files, and `metadata.csv` contains file locations like 'data/...mp3' and transcriptions. There are 400+ mp3 files and corresponding transcriptions for my dataset.
When I run the following:
```python
ds = load_dataset("audiofolder", data_dir="my_dataset")
```
I get:
```
Using custom data configuration default-...
Downloading and preparing dataset audiofolder/default to /...
Downloading data files: 0%| | 0/2 [00:00<?, ?it/s]
Downloading data files: 0it [00:00, ?it/s]
Extracting data files: 0it [00:00, ?it/s]
Generating train split: 0 examples [00:00, ? examples/s]
Dataset audiofolder downloaded and prepared to /.... Subsequent calls will reuse this data.
0%| | 0/1 [00:00<?, ?it/s]
DatasetDict({
    train: Dataset({
        features: ['audio', 'transcription'],
        num_rows: 1
    })
})
```
### Steps to reproduce the bug
Create a dataset folder called 'my_dataset' with a subfolder called 'data' that has mp3 files. Also, create metadata.csv that has file locations like 'data/...mp3' and their corresponding transcription.
Run:
ds = load_dataset("audiofolder", data_dir="my_dataset")
### Expected behavior
It should generate a dataset with numerous rows.
### Environment info
Run on Jupyter notebook | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5465/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5465/timeline | null | completed | null | null | false | 25,380 |