Schema of the records below (column name, value type, and observed range or number of classes):

column                      type            range / classes
url                         stringlengths   58 – 61
repository_url              stringclasses   1 value
labels_url                  stringlengths   72 – 75
comments_url                stringlengths   67 – 70
events_url                  stringlengths   65 – 68
html_url                    stringlengths   46 – 51
id                          int64           599M – 1.6B
node_id                     stringlengths   18 – 32
number                      int64           1 – 5.57k
title                       stringlengths   1 – 276
user                        dict
labels                      list
state                       stringclasses   2 values
locked                      bool            1 class
assignee                    dict
assignees                   list
milestone                   dict
comments                    int64           0 – 54
created_at                  unknown
updated_at                  unknown
closed_at                   unknown
author_association          stringclasses   3 values
active_lock_reason          null
body                        stringlengths   0 – 228k
reactions                   dict
timeline_url                stringlengths   67 – 70
performed_via_github_app    null
state_reason                stringclasses   3 values
draft                       float64         0 – 1
pull_request                dict
is_pull_request             bool            2 classes
handling_time               float64         6 – 72.4M
https://api.github.com/repos/huggingface/datasets/issues/5566
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5566/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5566/comments
https://api.github.com/repos/huggingface/datasets/issues/5566/events
https://github.com/huggingface/datasets/issues/5566
1,595,916,674
I_kwDODunzps5fH8GC
5,566
Directly reading parquet files in a s3 bucket from the load_dataset method
{ "avatar_url": "https://avatars.githubusercontent.com/u/16892570?v=4", "events_url": "https://api.github.com/users/shamanez/events{/privacy}", "followers_url": "https://api.github.com/users/shamanez/followers", "following_url": "https://api.github.com/users/shamanez/following{/other_user}", "gists_url": "https://api.github.com/users/shamanez/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/shamanez", "id": 16892570, "login": "shamanez", "node_id": "MDQ6VXNlcjE2ODkyNTcw", "organizations_url": "https://api.github.com/users/shamanez/orgs", "received_events_url": "https://api.github.com/users/shamanez/received_events", "repos_url": "https://api.github.com/users/shamanez/repos", "site_admin": false, "starred_url": "https://api.github.com/users/shamanez/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/shamanez/subscriptions", "type": "User", "url": "https://api.github.com/users/shamanez" }
[ { "color": "cfd3d7", "default": true, "description": "This issue or pull request already exists", "id": 1935892865, "name": "duplicate", "node_id": "MDU6TGFiZWwxOTM1ODkyODY1", "url": "https://api.github.com/repos/huggingface/datasets/labels/duplicate" }, { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
open
false
null
[]
null
1
"2023-02-22T22:13:40"
"2023-02-23T11:03:29"
null
NONE
null
### Feature request Right now, we have to get the parquet file into local storage first. So having the ability to read it directly from the bucket address would be beneficial. ### Motivation In a production setup, this feature can help us a lot, so we do not need to move training data files between storage locations. ### Your contribution I am willing to help if there's any way.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5566/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5566/timeline
null
null
null
null
false
null
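The feature request above (issue 5566) asks for `load_dataset` to read parquet files straight from an S3 bucket. Below is a minimal sketch of the manual route such a feature would replace, assuming an fsspec-compatible filesystem via `s3fs`; the bucket path is hypothetical and this is not the requested `load_dataset` integration.

```python
import s3fs
import pyarrow.parquet as pq
from datasets import Dataset

# Credentials are picked up from the usual AWS environment/config files.
fs = s3fs.S3FileSystem()

# "my-bucket/path/train.parquet" is a hypothetical object key.
table = pq.read_table("my-bucket/path/train.parquet", filesystem=fs)

# Wrap the Arrow table in a datasets.Dataset without writing it to local disk first.
ds = Dataset(table)
print(ds)
```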
https://api.github.com/repos/huggingface/datasets/issues/5565
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5565/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5565/comments
https://api.github.com/repos/huggingface/datasets/issues/5565/events
https://github.com/huggingface/datasets/pull/5565
1,595,281,752
PR_kwDODunzps5KhfTH
5,565
Add writer_batch_size for ArrowBasedBuilder
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
open
false
null
[]
null
3
"2023-02-22T15:09:30"
"2023-02-22T15:41:58"
null
MEMBER
null
This way we can control the size of the record batches / row groups of Arrow/Parquet files. This can be useful for `datasets-server` to keep control of the row group size, which can affect random access performance for audio/image/video datasets.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5565/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5565/timeline
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/5565.diff", "html_url": "https://github.com/huggingface/datasets/pull/5565", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/5565.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5565" }
true
null
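PR 5565 above is about controlling the size of the record batches / row groups an `ArrowBasedBuilder` writes. The sketch below illustrates the underlying Parquet knob this maps onto using plain `pyarrow`, not the builder argument the PR adds, so the parameter names are pyarrow's rather than the PR's.

```python
import pyarrow as pa
import pyarrow.parquet as pq

table = pa.table({"idx": list(range(10_000)), "text": ["row"] * 10_000})

# Smaller row groups -> finer-grained random access, at the cost of more metadata.
pq.write_table(table, "example.parquet", row_group_size=1_000)

pf = pq.ParquetFile("example.parquet")
print(pf.num_row_groups)            # 10
first_group = pf.read_row_group(0)  # only the first 1,000 rows are read from disk
print(first_group.num_rows)
```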
https://api.github.com/repos/huggingface/datasets/issues/5564
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5564/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5564/comments
https://api.github.com/repos/huggingface/datasets/issues/5564/events
https://github.com/huggingface/datasets/pull/5564
1,595,064,698
PR_kwDODunzps5KgwzU
5,564
Set dev version
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
3
"2023-02-22T13:00:09"
"2023-02-22T13:09:26"
"2023-02-22T13:00:25"
MEMBER
null
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5564/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5564/timeline
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/5564.diff", "html_url": "https://github.com/huggingface/datasets/pull/5564", "merged_at": "2023-02-22T13:00:25Z", "patch_url": "https://github.com/huggingface/datasets/pull/5564.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5564" }
true
16
https://api.github.com/repos/huggingface/datasets/issues/5563
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5563/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5563/comments
https://api.github.com/repos/huggingface/datasets/issues/5563/events
https://github.com/huggingface/datasets/pull/5563
1,595,049,025
PR_kwDODunzps5KgtbL
5,563
Release: 2.10.0
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
4
"2023-02-22T12:48:52"
"2023-02-22T13:05:55"
"2023-02-22T12:56:48"
MEMBER
null
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5563/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5563/timeline
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/5563.diff", "html_url": "https://github.com/huggingface/datasets/pull/5563", "merged_at": "2023-02-22T12:56:48Z", "patch_url": "https://github.com/huggingface/datasets/pull/5563.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5563" }
true
476
https://api.github.com/repos/huggingface/datasets/issues/5562
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5562/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5562/comments
https://api.github.com/repos/huggingface/datasets/issues/5562/events
https://github.com/huggingface/datasets/pull/5562
1,594,625,539
PR_kwDODunzps5KfTUT
5,562
Update csv.py
{ "avatar_url": "https://avatars.githubusercontent.com/u/54279069?v=4", "events_url": "https://api.github.com/users/XDoubleU/events{/privacy}", "followers_url": "https://api.github.com/users/XDoubleU/followers", "following_url": "https://api.github.com/users/XDoubleU/following{/other_user}", "gists_url": "https://api.github.com/users/XDoubleU/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/XDoubleU", "id": 54279069, "login": "XDoubleU", "node_id": "MDQ6VXNlcjU0Mjc5MDY5", "organizations_url": "https://api.github.com/users/XDoubleU/orgs", "received_events_url": "https://api.github.com/users/XDoubleU/received_events", "repos_url": "https://api.github.com/users/XDoubleU/repos", "site_admin": false, "starred_url": "https://api.github.com/users/XDoubleU/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/XDoubleU/subscriptions", "type": "User", "url": "https://api.github.com/users/XDoubleU" }
[]
closed
false
null
[]
null
4
"2023-02-22T07:56:10"
"2023-02-23T11:07:49"
"2023-02-23T11:00:58"
CONTRIBUTOR
null
Removed mangle_dupe_cols=True from BuilderConfig. It triggered the following deprecation warning: /usr/local/lib/python3.8/dist-packages/datasets/download/streaming_download_manager.py:776: FutureWarning: the 'mangle_dupe_cols' keyword is deprecated and will be removed in a future version. Please take steps to stop the use of 'mangle_dupe_cols' return pd.read_csv(xopen(filepath_or_buffer, "rb", use_auth_token=use_auth_token), **kwargs) Further pandas documentation: https://pandas.pydata.org/docs/whatsnew/v1.4.0.html#mangle-dupe-cols-in-read-csv-no-longer-renames-unique-columns-conflicting-with-target-names At first sight it seems like this flag is resolved internally; it might need some more research.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5562/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5562/timeline
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/5562.diff", "html_url": "https://github.com/huggingface/datasets/pull/5562", "merged_at": "2023-02-23T11:00:58Z", "patch_url": "https://github.com/huggingface/datasets/pull/5562.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5562" }
true
97,488
https://api.github.com/repos/huggingface/datasets/issues/5561
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5561/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5561/comments
https://api.github.com/repos/huggingface/datasets/issues/5561/events
https://github.com/huggingface/datasets/pull/5561
1,593,862,388
PR_kwDODunzps5Kcxw_
5,561
Add pre-commit config yaml file to enable automatic code formatting
{ "avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4", "events_url": "https://api.github.com/users/polinaeterna/events{/privacy}", "followers_url": "https://api.github.com/users/polinaeterna/followers", "following_url": "https://api.github.com/users/polinaeterna/following{/other_user}", "gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/polinaeterna", "id": 16348744, "login": "polinaeterna", "node_id": "MDQ6VXNlcjE2MzQ4NzQ0", "organizations_url": "https://api.github.com/users/polinaeterna/orgs", "received_events_url": "https://api.github.com/users/polinaeterna/received_events", "repos_url": "https://api.github.com/users/polinaeterna/repos", "site_admin": false, "starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions", "type": "User", "url": "https://api.github.com/users/polinaeterna" }
[]
open
false
null
[]
null
2
"2023-02-21T17:35:07"
"2023-02-22T20:41:11"
null
CONTRIBUTOR
null
@huggingface/datasets do you think it would be useful? Motivation: sometimes PRs are like 30% "fix: style" commits :) If so, I need to double-check the config, but for me locally it works as expected.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5561/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5561/timeline
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/5561.diff", "html_url": "https://github.com/huggingface/datasets/pull/5561", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/5561.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5561" }
true
null
https://api.github.com/repos/huggingface/datasets/issues/5560
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5560/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5560/comments
https://api.github.com/repos/huggingface/datasets/issues/5560/events
https://github.com/huggingface/datasets/pull/5560
1,593,809,978
PR_kwDODunzps5Kcml6
5,560
Ensure last tqdm update in `map`
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko" }
[]
closed
false
null
[]
null
10
"2023-02-21T16:56:17"
"2023-02-21T18:26:23"
"2023-02-21T18:19:09"
CONTRIBUTOR
null
This PR modifies `map` to: * ensure the TQDM bar gets the last progress update * when a map function fails, avoid throwing a chained exception in the single-proc mode
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5560/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5560/timeline
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/5560.diff", "html_url": "https://github.com/huggingface/datasets/pull/5560", "merged_at": "2023-02-21T18:19:09Z", "patch_url": "https://github.com/huggingface/datasets/pull/5560.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5560" }
true
4,972
https://api.github.com/repos/huggingface/datasets/issues/5559
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5559/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5559/comments
https://api.github.com/repos/huggingface/datasets/issues/5559/events
https://github.com/huggingface/datasets/pull/5559
1,593,676,489
PR_kwDODunzps5KcKSb
5,559
Fix map suffix_template
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
4
"2023-02-21T15:26:26"
"2023-02-21T17:21:37"
"2023-02-21T17:14:29"
MEMBER
null
#5455 introduced a small bug that led `map` to ignore the `suffix_template` argument and not add suffixes to cached files in multiprocessing. I fixed this and also improved a few things: - regarding logging: "Loading cached processed dataset" is now logged only once even in multiprocessing (it used to be logged `num_proc` times) - regarding new_fingerprint: I made sure that the returned dataset satisfies `ds._fingerprint==new_fingerprint` if `new_fingerprint` is passed to `map`
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 1, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/5559/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5559/timeline
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/5559.diff", "html_url": "https://github.com/huggingface/datasets/pull/5559", "merged_at": "2023-02-21T17:14:29Z", "patch_url": "https://github.com/huggingface/datasets/pull/5559.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5559" }
true
6,483
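PR 5559 above fixes `map` ignoring the `suffix_template` argument in multiprocessing. The sketch below shows how that argument is meant to be used, assuming the default template documented for `Dataset.map`; the cache file name is illustrative.

```python
from datasets import Dataset

ds = Dataset.from_dict({"text": ["a", "b", "c", "d"]})

def upper(example):
    return {"text": example["text"].upper()}

# With num_proc > 1, each worker writes its own cache file; suffix_template
# controls how the per-rank suffix is appended to cache_file_name.
processed = ds.map(
    upper,
    num_proc=2,
    cache_file_name="upper_cache.arrow",              # illustrative path
    suffix_template="_{rank:05d}_of_{num_proc:05d}",  # the documented default
)
```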
https://api.github.com/repos/huggingface/datasets/issues/5558
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5558/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5558/comments
https://api.github.com/repos/huggingface/datasets/issues/5558/events
https://github.com/huggingface/datasets/pull/5558
1,593,655,815
PR_kwDODunzps5KcF5E
5,558
Remove instructions for `ffmpeg` system package installation on Colab
{ "avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4", "events_url": "https://api.github.com/users/polinaeterna/events{/privacy}", "followers_url": "https://api.github.com/users/polinaeterna/followers", "following_url": "https://api.github.com/users/polinaeterna/following{/other_user}", "gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/polinaeterna", "id": 16348744, "login": "polinaeterna", "node_id": "MDQ6VXNlcjE2MzQ4NzQ0", "organizations_url": "https://api.github.com/users/polinaeterna/orgs", "received_events_url": "https://api.github.com/users/polinaeterna/received_events", "repos_url": "https://api.github.com/users/polinaeterna/repos", "site_admin": false, "starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions", "type": "User", "url": "https://api.github.com/users/polinaeterna" }
[]
open
false
null
[]
null
1
"2023-02-21T15:13:36"
"2023-02-22T21:04:09"
null
CONTRIBUTOR
null
Colab now has Ubuntu 20.04, which already ships `ffmpeg` of the required (>4) version.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5558/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5558/timeline
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/5558.diff", "html_url": "https://github.com/huggingface/datasets/pull/5558", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/5558.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5558" }
true
null
https://api.github.com/repos/huggingface/datasets/issues/5557
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5557/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5557/comments
https://api.github.com/repos/huggingface/datasets/issues/5557/events
https://github.com/huggingface/datasets/pull/5557
1,593,545,324
PR_kwDODunzps5Kbube
5,557
Add filter desc
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
3
"2023-02-21T14:04:42"
"2023-02-21T14:19:54"
"2023-02-21T14:12:39"
MEMBER
null
Otherwise it would show a `Map` progress bar, since it uses `map` under the hood
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5557/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5557/timeline
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/5557.diff", "html_url": "https://github.com/huggingface/datasets/pull/5557", "merged_at": "2023-02-21T14:12:39Z", "patch_url": "https://github.com/huggingface/datasets/pull/5557.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5557" }
true
477
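PR 5557 above gives `filter` its own progress-bar description instead of the `Map` one it inherits from `map`. A sketch of the user-facing side, assuming `filter` exposes a `desc` keyword the way `map` does:

```python
from datasets import Dataset

ds = Dataset.from_dict({"score": [0.1, 0.7, 0.4, 0.9]})

# desc sets the text shown next to the progress bar while filtering.
kept = ds.filter(lambda ex: ex["score"] > 0.5, desc="Dropping low-score rows")
print(len(kept))  # 2
```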
https://api.github.com/repos/huggingface/datasets/issues/5556
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5556/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5556/comments
https://api.github.com/repos/huggingface/datasets/issues/5556/events
https://github.com/huggingface/datasets/pull/5556
1,593,246,936
PR_kwDODunzps5KauVL
5,556
Use default audio resampling type
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
5
"2023-02-21T10:45:50"
"2023-02-21T12:49:50"
"2023-02-21T12:42:52"
MEMBER
null
...instead of relying on the optional librosa dependency `resampy`. It was only used for `_decode_non_mp3_file_like` anyway and not for the other ones. Removing it fixes consistency between decoding methods (except torchaudio decoding). Therefore I think it is a better solution than adding `resampy` as a dependency in https://github.com/huggingface/datasets/pull/5554 cc @polinaeterna
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5556/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5556/timeline
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/5556.diff", "html_url": "https://github.com/huggingface/datasets/pull/5556", "merged_at": "2023-02-21T12:42:52Z", "patch_url": "https://github.com/huggingface/datasets/pull/5556.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5556" }
true
7,022
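PR 5556 above changes which resampling backend `Audio` decoding relies on. The resampling itself is triggered from user code by casting the audio column to a different sampling rate; a minimal sketch with a hypothetical dataset name:

```python
from datasets import load_dataset, Audio

# "user/some-audio-dataset" is a placeholder; any dataset with an Audio column works.
ds = load_dataset("user/some-audio-dataset", split="train")

# Decoded audio will be resampled to 16 kHz on access.
ds = ds.cast_column("audio", Audio(sampling_rate=16_000))
sample = ds[0]["audio"]  # {"array": ..., "sampling_rate": 16000, "path": ...}
```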
https://api.github.com/repos/huggingface/datasets/issues/5555
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5555/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5555/comments
https://api.github.com/repos/huggingface/datasets/issues/5555/events
https://github.com/huggingface/datasets/issues/5555
1,592,469,938
I_kwDODunzps5e6ymy
5,555
`.shuffle` throwing error `ValueError: Protocol not known: parent`
{ "avatar_url": "https://avatars.githubusercontent.com/u/10768588?v=4", "events_url": "https://api.github.com/users/prabhakar267/events{/privacy}", "followers_url": "https://api.github.com/users/prabhakar267/followers", "following_url": "https://api.github.com/users/prabhakar267/following{/other_user}", "gists_url": "https://api.github.com/users/prabhakar267/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/prabhakar267", "id": 10768588, "login": "prabhakar267", "node_id": "MDQ6VXNlcjEwNzY4NTg4", "organizations_url": "https://api.github.com/users/prabhakar267/orgs", "received_events_url": "https://api.github.com/users/prabhakar267/received_events", "repos_url": "https://api.github.com/users/prabhakar267/repos", "site_admin": false, "starred_url": "https://api.github.com/users/prabhakar267/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/prabhakar267/subscriptions", "type": "User", "url": "https://api.github.com/users/prabhakar267" }
[]
open
false
null
[]
null
1
"2023-02-20T21:33:45"
"2023-02-21T13:16:02"
null
NONE
null
### Describe the bug ``` --------------------------------------------------------------------------- ValueError Traceback (most recent call last) Cell In [16], line 1 ----> 1 train_dataset = train_dataset.shuffle() File /opt/conda/envs/pytorch/lib/python3.9/site-packages/datasets/arrow_dataset.py:551, in transmit_format.<locals>.wrapper(*args, **kwargs) 544 self_format = { 545 "type": self._format_type, 546 "format_kwargs": self._format_kwargs, 547 "columns": self._format_columns, 548 "output_all_columns": self._output_all_columns, 549 } 550 # apply actual function --> 551 out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs) 552 datasets: List["Dataset"] = list(out.values()) if isinstance(out, dict) else [out] 553 # re-apply format to the output File /opt/conda/envs/pytorch/lib/python3.9/site-packages/datasets/fingerprint.py:480, in fingerprint_transform.<locals>._fingerprint.<locals>.wrapper(*args, **kwargs) 476 validate_fingerprint(kwargs[fingerprint_name]) 478 # Call actual function --> 480 out = func(self, *args, **kwargs) 482 # Update fingerprint of in-place transforms + update in-place history of transforms 484 if inplace: # update after calling func so that the fingerprint doesn't change if the function fails File /opt/conda/envs/pytorch/lib/python3.9/site-packages/datasets/arrow_dataset.py:3616, in Dataset.shuffle(self, seed, generator, keep_in_memory, load_from_cache_file, indices_cache_file_name, writer_batch_size, new_fingerprint) 3610 return self._new_dataset_with_indices( 3611 fingerprint=new_fingerprint, indices_cache_file_name=indices_cache_file_name 3612 ) 3614 permutation = generator.permutation(len(self)) -> 3616 return self.select( 3617 indices=permutation, 3618 keep_in_memory=keep_in_memory, 3619 indices_cache_file_name=indices_cache_file_name if not keep_in_memory else None, 3620 writer_batch_size=writer_batch_size, 3621 new_fingerprint=new_fingerprint, 3622 ) File /opt/conda/envs/pytorch/lib/python3.9/site-packages/datasets/arrow_dataset.py:551, in transmit_format.<locals>.wrapper(*args, **kwargs) 544 self_format = { 545 "type": self._format_type, 546 "format_kwargs": self._format_kwargs, 547 "columns": self._format_columns, 548 "output_all_columns": self._output_all_columns, 549 } 550 # apply actual function --> 551 out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs) 552 datasets: List["Dataset"] = list(out.values()) if isinstance(out, dict) else [out] 553 # re-apply format to the output File /opt/conda/envs/pytorch/lib/python3.9/site-packages/datasets/fingerprint.py:480, in fingerprint_transform.<locals>._fingerprint.<locals>.wrapper(*args, **kwargs) 476 validate_fingerprint(kwargs[fingerprint_name]) 478 # Call actual function --> 480 out = func(self, *args, **kwargs) 482 # Update fingerprint of in-place transforms + update in-place history of transforms 484 if inplace: # update after calling func so that the fingerprint doesn't change if the function fails File /opt/conda/envs/pytorch/lib/python3.9/site-packages/datasets/arrow_dataset.py:3266, in Dataset.select(self, indices, keep_in_memory, indices_cache_file_name, writer_batch_size, new_fingerprint) 3263 return self._select_contiguous(start, length, new_fingerprint=new_fingerprint) 3265 # If not contiguous, we need to create a new indices mapping -> 3266 return self._select_with_indices_mapping( 3267 indices, 3268 keep_in_memory=keep_in_memory, 3269 indices_cache_file_name=indices_cache_file_name, 3270 writer_batch_size=writer_batch_size, 3271 new_fingerprint=new_fingerprint, 3272 ) 
File /opt/conda/envs/pytorch/lib/python3.9/site-packages/datasets/arrow_dataset.py:551, in transmit_format.<locals>.wrapper(*args, **kwargs) 544 self_format = { 545 "type": self._format_type, 546 "format_kwargs": self._format_kwargs, 547 "columns": self._format_columns, 548 "output_all_columns": self._output_all_columns, 549 } 550 # apply actual function --> 551 out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs) 552 datasets: List["Dataset"] = list(out.values()) if isinstance(out, dict) else [out] 553 # re-apply format to the output File /opt/conda/envs/pytorch/lib/python3.9/site-packages/datasets/fingerprint.py:480, in fingerprint_transform.<locals>._fingerprint.<locals>.wrapper(*args, **kwargs) 476 validate_fingerprint(kwargs[fingerprint_name]) 478 # Call actual function --> 480 out = func(self, *args, **kwargs) 482 # Update fingerprint of in-place transforms + update in-place history of transforms 484 if inplace: # update after calling func so that the fingerprint doesn't change if the function fails File /opt/conda/envs/pytorch/lib/python3.9/site-packages/datasets/arrow_dataset.py:3389, in Dataset._select_with_indices_mapping(self, indices, keep_in_memory, indices_cache_file_name, writer_batch_size, new_fingerprint) 3387 logger.info(f"Caching indices mapping at {indices_cache_file_name}") 3388 tmp_file = tempfile.NamedTemporaryFile("wb", dir=os.path.dirname(indices_cache_file_name), delete=False) -> 3389 writer = ArrowWriter( 3390 path=tmp_file.name, writer_batch_size=writer_batch_size, fingerprint=new_fingerprint, unit="indices" 3391 ) 3393 indices = indices if isinstance(indices, list) else list(indices) 3395 size = len(self) File /opt/conda/envs/pytorch/lib/python3.9/site-packages/datasets/arrow_writer.py:315, in ArrowWriter.__init__(self, schema, features, path, stream, fingerprint, writer_batch_size, hash_salt, check_duplicates, disable_nullable, update_features, with_metadata, unit, embed_local_files, storage_options) 312 self._disable_nullable = disable_nullable 314 if stream is None: --> 315 fs_token_paths = fsspec.get_fs_token_paths(path, storage_options=storage_options) 316 self._fs: fsspec.AbstractFileSystem = fs_token_paths[0] 317 self._path = ( 318 fs_token_paths[2][0] 319 if not is_remote_filesystem(self._fs) 320 else self._fs.unstrip_protocol(fs_token_paths[2][0]) 321 ) File /opt/conda/envs/pytorch/lib/python3.9/site-packages/fsspec/core.py:593, in get_fs_token_paths(urlpath, mode, num, name_function, storage_options, protocol, expand) 591 else: 592 urlpath = stringify_path(urlpath) --> 593 chain = _un_chain(urlpath, storage_options or {}) 594 if len(chain) > 1: 595 inkwargs = {} File /opt/conda/envs/pytorch/lib/python3.9/site-packages/fsspec/core.py:330, in _un_chain(path, kwargs) 328 for bit in reversed(bits): 329 protocol = split_protocol(bit)[0] or "file" --> 330 cls = get_filesystem_class(protocol) 331 extra_kwargs = cls._get_kwargs_from_urls(bit) 332 kws = kwargs.get(protocol, {}) File /opt/conda/envs/pytorch/lib/python3.9/site-packages/fsspec/registry.py:240, in get_filesystem_class(protocol) 238 if protocol not in registry: 239 if protocol not in known_implementations: --> 240 raise ValueError("Protocol not known: %s" % protocol) 241 bit = known_implementations[protocol] 242 try: ValueError: Protocol not known: parent ``` This is what the `train_dataset` object looks like ``` Dataset({ features: ['label', 'input_ids', 'attention_mask'], num_rows: 364166 }) ``` ### Steps to reproduce the bug The `train_dataset` obj is created by concatenating two 
datasets And then shuffle is called, but it throws the mentioned error. ### Expected behavior Should shuffle the dataset properly. ### Environment info - `datasets` version: 2.6.1 - Platform: Linux-5.15.0-1022-aws-x86_64-with-glibc2.31 - Python version: 3.9.13 - PyArrow version: 10.0.0 - Pandas version: 1.4.4
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5555/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5555/timeline
null
null
null
null
false
null
https://api.github.com/repos/huggingface/datasets/issues/5554
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5554/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5554/comments
https://api.github.com/repos/huggingface/datasets/issues/5554/events
https://github.com/huggingface/datasets/pull/5554
1,592,285,062
PR_kwDODunzps5KXhZh
5,554
Add resampy dep
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
5
"2023-02-20T18:15:43"
"2023-02-21T12:46:10"
"2023-02-21T12:43:38"
MEMBER
null
In librosa 0.10 they made the `resampy` dependency optional. However, it is necessary for resampling. I added it to the "audio" extra dependencies.
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/5554/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5554/timeline
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/5554.diff", "html_url": "https://github.com/huggingface/datasets/pull/5554", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/5554.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5554" }
true
66,475
https://api.github.com/repos/huggingface/datasets/issues/5553
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5553/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5553/comments
https://api.github.com/repos/huggingface/datasets/issues/5553/events
https://github.com/huggingface/datasets/pull/5553
1,592,236,998
PR_kwDODunzps5KXXUq
5,553
improved message error row formatting
{ "avatar_url": "https://avatars.githubusercontent.com/u/26489385?v=4", "events_url": "https://api.github.com/users/Plutone11011/events{/privacy}", "followers_url": "https://api.github.com/users/Plutone11011/followers", "following_url": "https://api.github.com/users/Plutone11011/following{/other_user}", "gists_url": "https://api.github.com/users/Plutone11011/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Plutone11011", "id": 26489385, "login": "Plutone11011", "node_id": "MDQ6VXNlcjI2NDg5Mzg1", "organizations_url": "https://api.github.com/users/Plutone11011/orgs", "received_events_url": "https://api.github.com/users/Plutone11011/received_events", "repos_url": "https://api.github.com/users/Plutone11011/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Plutone11011/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Plutone11011/subscriptions", "type": "User", "url": "https://api.github.com/users/Plutone11011" }
[]
closed
false
null
[]
null
2
"2023-02-20T17:29:14"
"2023-02-21T13:08:25"
"2023-02-21T12:58:12"
CONTRIBUTOR
null
Solves #5539
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5553/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5553/timeline
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/5553.diff", "html_url": "https://github.com/huggingface/datasets/pull/5553", "merged_at": "2023-02-21T12:58:12Z", "patch_url": "https://github.com/huggingface/datasets/pull/5553.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5553" }
true
70,138
https://api.github.com/repos/huggingface/datasets/issues/5552
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5552/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5552/comments
https://api.github.com/repos/huggingface/datasets/issues/5552/events
https://github.com/huggingface/datasets/pull/5552
1,592,186,703
PR_kwDODunzps5KXMjA
5,552
Make tiktoken tokenizers hashable
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko" }
[]
closed
false
null
[]
null
4
"2023-02-20T16:50:09"
"2023-02-21T13:20:42"
"2023-02-21T13:13:05"
CONTRIBUTOR
null
Fix for https://discord.com/channels/879548962464493619/1075729627546406912/1075729627546406912
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5552/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5552/timeline
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/5552.diff", "html_url": "https://github.com/huggingface/datasets/pull/5552", "merged_at": "2023-02-21T13:13:05Z", "patch_url": "https://github.com/huggingface/datasets/pull/5552.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5552" }
true
73,376
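PR 5552 above makes `tiktoken` tokenizers hashable. Hashability matters because `map` fingerprints the function it is given, including captured objects, to decide whether cached results can be reused; a small sketch of the pattern that fix unblocks:

```python
import tiktoken
from datasets import Dataset

enc = tiktoken.get_encoding("gpt2")
ds = Dataset.from_dict({"text": ["hello world", "datasets and tokenizers"]})

# `enc` is captured by the lambda, so it takes part in the cache fingerprint.
tokenized = ds.map(lambda ex: {"input_ids": enc.encode(ex["text"])})
print(tokenized[0]["input_ids"])
```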
https://api.github.com/repos/huggingface/datasets/issues/5551
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5551/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5551/comments
https://api.github.com/repos/huggingface/datasets/issues/5551/events
https://github.com/huggingface/datasets/pull/5551
1,592,140,836
PR_kwDODunzps5KXCof
5,551
Suggest scikit-learn instead of sklearn
{ "avatar_url": "https://avatars.githubusercontent.com/u/74963545?v=4", "events_url": "https://api.github.com/users/osbm/events{/privacy}", "followers_url": "https://api.github.com/users/osbm/followers", "following_url": "https://api.github.com/users/osbm/following{/other_user}", "gists_url": "https://api.github.com/users/osbm/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/osbm", "id": 74963545, "login": "osbm", "node_id": "MDQ6VXNlcjc0OTYzNTQ1", "organizations_url": "https://api.github.com/users/osbm/orgs", "received_events_url": "https://api.github.com/users/osbm/received_events", "repos_url": "https://api.github.com/users/osbm/repos", "site_admin": false, "starred_url": "https://api.github.com/users/osbm/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/osbm/subscriptions", "type": "User", "url": "https://api.github.com/users/osbm" }
[]
closed
false
null
[]
null
4
"2023-02-20T16:16:57"
"2023-02-21T13:27:57"
"2023-02-21T13:21:07"
CONTRIBUTOR
null
This is a kinda unimportant fix, but the suggested `pip install sklearn` does not work. The current error message if sklearn is not installed: ``` ImportError: To be able to use [dataset name], you need to install the following dependency: sklearn. Please install it using 'pip install sklearn' for instance. ```
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/5551/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5551/timeline
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/5551.diff", "html_url": "https://github.com/huggingface/datasets/pull/5551", "merged_at": "2023-02-21T13:21:07Z", "patch_url": "https://github.com/huggingface/datasets/pull/5551.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5551" }
true
75,850
https://api.github.com/repos/huggingface/datasets/issues/5550
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5550/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5550/comments
https://api.github.com/repos/huggingface/datasets/issues/5550/events
https://github.com/huggingface/datasets/pull/5550
1,591,409,475
PR_kwDODunzps5KUl5i
5,550
Resolve four broken refs in the docs
{ "avatar_url": "https://avatars.githubusercontent.com/u/37621491?v=4", "events_url": "https://api.github.com/users/tomaarsen/events{/privacy}", "followers_url": "https://api.github.com/users/tomaarsen/followers", "following_url": "https://api.github.com/users/tomaarsen/following{/other_user}", "gists_url": "https://api.github.com/users/tomaarsen/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/tomaarsen", "id": 37621491, "login": "tomaarsen", "node_id": "MDQ6VXNlcjM3NjIxNDkx", "organizations_url": "https://api.github.com/users/tomaarsen/orgs", "received_events_url": "https://api.github.com/users/tomaarsen/received_events", "repos_url": "https://api.github.com/users/tomaarsen/repos", "site_admin": false, "starred_url": "https://api.github.com/users/tomaarsen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/tomaarsen/subscriptions", "type": "User", "url": "https://api.github.com/users/tomaarsen" }
[]
closed
false
null
[]
null
3
"2023-02-20T08:52:11"
"2023-02-20T15:16:13"
"2023-02-20T15:09:13"
CONTRIBUTOR
null
Hello! ## Pull Request overview * Resolve 4 broken references in the docs ## The problems Two broken references [here](https://huggingface.co/docs/datasets/package_reference/main_classes#datasets.Dataset.class_encode_column): ![image](https://user-images.githubusercontent.com/37621491/220056232-366b64dc-33c9-461b-8f82-1ac4aa570280.png) --- One broken reference [here](https://huggingface.co/docs/datasets/package_reference/main_classes#datasets.Dataset.unique): ![image](https://user-images.githubusercontent.com/37621491/220057135-2f249d60-c01d-48b5-82bb-5085a7635198.png) --- One missing reference [here](https://huggingface.co/docs/datasets/v2.9.0/en/package_reference/main_classes#datasets.DatasetDict.class_encode_column): ![image](https://user-images.githubusercontent.com/37621491/220057025-4a8e5556-5041-4ec7-b8d8-ed4fdc266495.png) - Tom Aarsen
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5550/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5550/timeline
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/5550.diff", "html_url": "https://github.com/huggingface/datasets/pull/5550", "merged_at": "2023-02-20T15:09:13Z", "patch_url": "https://github.com/huggingface/datasets/pull/5550.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5550" }
true
22,622
https://api.github.com/repos/huggingface/datasets/issues/5549
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5549/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5549/comments
https://api.github.com/repos/huggingface/datasets/issues/5549/events
https://github.com/huggingface/datasets/pull/5549
1,590,836,848
PR_kwDODunzps5KSsi3
5,549
Apply ruff flake8-comprehension checks
{ "avatar_url": "https://avatars.githubusercontent.com/u/2053727?v=4", "events_url": "https://api.github.com/users/Skylion007/events{/privacy}", "followers_url": "https://api.github.com/users/Skylion007/followers", "following_url": "https://api.github.com/users/Skylion007/following{/other_user}", "gists_url": "https://api.github.com/users/Skylion007/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Skylion007", "id": 2053727, "login": "Skylion007", "node_id": "MDQ6VXNlcjIwNTM3Mjc=", "organizations_url": "https://api.github.com/users/Skylion007/orgs", "received_events_url": "https://api.github.com/users/Skylion007/received_events", "repos_url": "https://api.github.com/users/Skylion007/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Skylion007/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Skylion007/subscriptions", "type": "User", "url": "https://api.github.com/users/Skylion007" }
[]
open
false
null
[]
null
1
"2023-02-19T20:09:28"
"2023-02-22T16:45:10"
null
NONE
null
Fix #5548. Apply ruff's flake8-comprehensions checks for better performance and more readable code.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5549/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5549/timeline
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/5549.diff", "html_url": "https://github.com/huggingface/datasets/pull/5549", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/5549.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5549" }
true
null
https://api.github.com/repos/huggingface/datasets/issues/5548
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5548/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5548/comments
https://api.github.com/repos/huggingface/datasets/issues/5548/events
https://github.com/huggingface/datasets/issues/5548
1,590,835,479
I_kwDODunzps5e0jkX
5,548
Apply flake8-comprehensions to codebase
{ "avatar_url": "https://avatars.githubusercontent.com/u/2053727?v=4", "events_url": "https://api.github.com/users/Skylion007/events{/privacy}", "followers_url": "https://api.github.com/users/Skylion007/followers", "following_url": "https://api.github.com/users/Skylion007/following{/other_user}", "gists_url": "https://api.github.com/users/Skylion007/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Skylion007", "id": 2053727, "login": "Skylion007", "node_id": "MDQ6VXNlcjIwNTM3Mjc=", "organizations_url": "https://api.github.com/users/Skylion007/orgs", "received_events_url": "https://api.github.com/users/Skylion007/received_events", "repos_url": "https://api.github.com/users/Skylion007/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Skylion007/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Skylion007/subscriptions", "type": "User", "url": "https://api.github.com/users/Skylion007" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
open
false
null
[]
null
0
"2023-02-19T20:05:38"
"2023-02-19T20:05:38"
null
NONE
null
### Feature request Apply ruff's flake8-comprehensions checks to the codebase. ### Motivation This should strictly improve the performance / readability of the codebase by removing unnecessary iteration, function calls, etc. This should generate better Python bytecode, which should strictly improve performance. I already applied these fixes to PyTorch and SymPy with little issue and have opened PRs to do this for diffusers and transformers as well. ### Your contribution Making a PR.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5548/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5548/timeline
null
null
null
null
false
null
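Issue 5548 and PR 5549 above apply ruff's flake8-comprehensions rules. A few representative rewrites of the kind those rules make, shown as generic before/after Python rather than actual diffs from the PR:

```python
pairs = [("a", 1), ("b", 2)]
items = [0, 1, 2, 3]

# Before: patterns the flake8-comprehensions rules flag
squares = list(x * x for x in items)       # unnecessary generator passed to list()
lookup = dict([(k, v) for k, v in pairs])  # unnecessary list comprehension inside dict()
count = len([x for x in items if x])       # builds a throwaway list just to count

# After: the equivalent, cheaper forms
squares = [x * x for x in items]
lookup = {k: v for k, v in pairs}
count = sum(1 for x in items if x)
```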
https://api.github.com/repos/huggingface/datasets/issues/5547
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5547/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5547/comments
https://api.github.com/repos/huggingface/datasets/issues/5547/events
https://github.com/huggingface/datasets/pull/5547
1,590,468,200
PR_kwDODunzps5KRmcf
5,547
Add JAX device selection when formatting
{ "avatar_url": "https://avatars.githubusercontent.com/u/36760800?v=4", "events_url": "https://api.github.com/users/alvarobartt/events{/privacy}", "followers_url": "https://api.github.com/users/alvarobartt/followers", "following_url": "https://api.github.com/users/alvarobartt/following{/other_user}", "gists_url": "https://api.github.com/users/alvarobartt/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/alvarobartt", "id": 36760800, "login": "alvarobartt", "node_id": "MDQ6VXNlcjM2NzYwODAw", "organizations_url": "https://api.github.com/users/alvarobartt/orgs", "received_events_url": "https://api.github.com/users/alvarobartt/received_events", "repos_url": "https://api.github.com/users/alvarobartt/repos", "site_admin": false, "starred_url": "https://api.github.com/users/alvarobartt/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/alvarobartt/subscriptions", "type": "User", "url": "https://api.github.com/users/alvarobartt" }
[]
closed
false
null
[]
null
9
"2023-02-18T20:57:40"
"2023-02-21T16:10:55"
"2023-02-21T16:04:03"
CONTRIBUTOR
null
## What's in this PR? After exploring the JAX integration in 🤗`datasets` for a while, I found out that, even though JAX prioritizes the TPU and GPU as the default device when available, the `JaxFormatter` doesn't let you specify the device where you want to place the `jax.Array`s in case you don't want to rely on JAX's default array placement. So I've included the `device` param in `JaxFormatter`, but there are some things to take into consideration: * A formatted `Dataset` is copied with `copy.deepcopy`, which means that if one adds the param `device` in `JaxFormatter` as a `jaxlib.xla_extension.Device`, it "fails" because that object cannot be serialized (instead of serializing the param, a random hash is added). That's the reason why I added a function `_map_devices_to_str` to basically create a mapping of strings to `jaxlib.xla_extension.Device`s so that `self.device` is a string and not a `jaxlib.xla_extension.Device`. * To create a `jax.Array` on a device you need to either create it on the default device and then move it to the desired device with `jax.device_put`, or directly create it on the device you want with the `jax.default_device()` context manager. * JAX will create an array by default in `jax.devices()[0]` More information on JAX device management is available at https://jax.readthedocs.io/en/latest/faq.html#controlling-data-and-computation-placement-on-devices ## What's missing in this PR? I've tested it both locally on CPU (Mac M2 and Mac M1, as there is no GPU support for Mac yet), and on GPU and TPU in Google Colab; let me know if you want me to provide the notebook for the latter. But I did not implement any integration test, as I wanted to get your feedback first.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5547/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5547/timeline
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/5547.diff", "html_url": "https://github.com/huggingface/datasets/pull/5547", "merged_at": "2023-02-21T16:04:03Z", "patch_url": "https://github.com/huggingface/datasets/pull/5547.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5547" }
true
241,583
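PR 5547 above adds a `device` argument to the JAX formatter. Below is a sketch of the user-facing call, assuming `with_format` forwards `device` to the formatter and that devices are addressed by their string form, as the PR description suggests:

```python
import jax
from datasets import Dataset

ds = Dataset.from_dict({"x": [[1.0, 2.0], [3.0, 4.0]]})

# Pick a device explicitly instead of relying on JAX's default placement.
device = str(jax.devices()[0])
ds = ds.with_format("jax", device=device)

batch = ds[:2]            # columns come back as JAX arrays placed on the chosen device
print(type(batch["x"]))
```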
https://api.github.com/repos/huggingface/datasets/issues/5546
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5546/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5546/comments
https://api.github.com/repos/huggingface/datasets/issues/5546/events
https://github.com/huggingface/datasets/issues/5546
1,590,346,349
I_kwDODunzps5eysJt
5,546
Downloaded datasets do not cache at $HF_HOME
{ "avatar_url": "https://avatars.githubusercontent.com/u/79091831?v=4", "events_url": "https://api.github.com/users/ErfanMoosaviMonazzah/events{/privacy}", "followers_url": "https://api.github.com/users/ErfanMoosaviMonazzah/followers", "following_url": "https://api.github.com/users/ErfanMoosaviMonazzah/following{/other_user}", "gists_url": "https://api.github.com/users/ErfanMoosaviMonazzah/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/ErfanMoosaviMonazzah", "id": 79091831, "login": "ErfanMoosaviMonazzah", "node_id": "MDQ6VXNlcjc5MDkxODMx", "organizations_url": "https://api.github.com/users/ErfanMoosaviMonazzah/orgs", "received_events_url": "https://api.github.com/users/ErfanMoosaviMonazzah/received_events", "repos_url": "https://api.github.com/users/ErfanMoosaviMonazzah/repos", "site_admin": false, "starred_url": "https://api.github.com/users/ErfanMoosaviMonazzah/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ErfanMoosaviMonazzah/subscriptions", "type": "User", "url": "https://api.github.com/users/ErfanMoosaviMonazzah" }
[]
open
false
null
[]
null
1
"2023-02-18T13:30:35"
"2023-02-21T13:18:04"
null
NONE
null
### Describe the bug In the huggingface course (https://huggingface.co/course/chapter3/2?fw=pt) it says that if we set HF_HOME, downloaded datasets will be cached at the specified location, but they are not. Models downloaded from checkpoint names are downloaded and cached at HF_HOME, but this is not the case for datasets; they are still cached at ~/.cache/huggingface/datasets. ### Steps to reproduce the bug Run the following code: ``` from datasets import load_dataset raw_datasets = load_dataset("glue", "mrpc") raw_datasets ``` It downloads and stores the dataset at ~/.cache/huggingface/datasets ### Expected behavior The dataset should be cached at HF_HOME. ### Environment info python 3.10.6 Kubuntu 22.04 HF_HOME located on a separate partition
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5546/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5546/timeline
null
null
null
null
false
null
https://api.github.com/repos/huggingface/datasets/issues/5545
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5545/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5545/comments
https://api.github.com/repos/huggingface/datasets/issues/5545/events
https://github.com/huggingface/datasets/pull/5545
1,590,315,972
PR_kwDODunzps5KRKct
5,545
Added return methods for URL-references to the pushed dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/25269220?v=4", "events_url": "https://api.github.com/users/davidberenstein1957/events{/privacy}", "followers_url": "https://api.github.com/users/davidberenstein1957/followers", "following_url": "https://api.github.com/users/davidberenstein1957/following{/other_user}", "gists_url": "https://api.github.com/users/davidberenstein1957/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/davidberenstein1957", "id": 25269220, "login": "davidberenstein1957", "node_id": "MDQ6VXNlcjI1MjY5MjIw", "organizations_url": "https://api.github.com/users/davidberenstein1957/orgs", "received_events_url": "https://api.github.com/users/davidberenstein1957/received_events", "repos_url": "https://api.github.com/users/davidberenstein1957/repos", "site_admin": false, "starred_url": "https://api.github.com/users/davidberenstein1957/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/davidberenstein1957/subscriptions", "type": "User", "url": "https://api.github.com/users/davidberenstein1957" }
[]
open
false
null
[]
null
4
"2023-02-18T11:26:25"
"2023-02-21T14:17:28"
null
NONE
null
Hi, I was missing the ability to easily open the pushed dataset and it seemed like a quick fix. Maybe we also want to log this info somewhere, but let me know if I need to add that too. Cheers, David
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5545/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5545/timeline
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/5545.diff", "html_url": "https://github.com/huggingface/datasets/pull/5545", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/5545.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5545" }
true
null
https://api.github.com/repos/huggingface/datasets/issues/5543
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5543/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5543/comments
https://api.github.com/repos/huggingface/datasets/issues/5543/events
https://github.com/huggingface/datasets/issues/5543
1,588,951,379
I_kwDODunzps5etXlT
5,543
the pile datasets url seems to change back
{ "avatar_url": "https://avatars.githubusercontent.com/u/5126316?v=4", "events_url": "https://api.github.com/users/wjfwzzc/events{/privacy}", "followers_url": "https://api.github.com/users/wjfwzzc/followers", "following_url": "https://api.github.com/users/wjfwzzc/following{/other_user}", "gists_url": "https://api.github.com/users/wjfwzzc/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/wjfwzzc", "id": 5126316, "login": "wjfwzzc", "node_id": "MDQ6VXNlcjUxMjYzMTY=", "organizations_url": "https://api.github.com/users/wjfwzzc/orgs", "received_events_url": "https://api.github.com/users/wjfwzzc/received_events", "repos_url": "https://api.github.com/users/wjfwzzc/repos", "site_admin": false, "starred_url": "https://api.github.com/users/wjfwzzc/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/wjfwzzc/subscriptions", "type": "User", "url": "https://api.github.com/users/wjfwzzc" }
[]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" } ]
null
2
"2023-02-17T08:40:11"
"2023-02-21T06:37:00"
"2023-02-20T08:41:33"
NONE
null
### Describe the bug in #3627, the host url of the pile dataset became `https://mystic.the-eye.eu`. Now the new url is broken, but `https://the-eye.eu` seems to work again. ### Steps to reproduce the bug ```python3 from datasets import load_dataset dataset = load_dataset("bookcorpusopen") ``` shows ```python3 ConnectionError: Couldn't reach https://mystic.the-eye.eu/public/AI/pile_preliminary_components/books1.tar.gz (ProxyError(MaxRetryError("HTTPSConnectionPool(host='mystic.the-eye.eu', port=443): Max retries exceeded with url: /public/AI/pile_pr eliminary_components/books1.tar.gz (Caused by ProxyError('Cannot connect to proxy.', OSError('Tunnel connection failed: 504 Gateway Timeout')))"))) ``` ### Expected behavior Downloading as normal. ### Environment info - `datasets` version: 2.9.0 - Platform: Linux-5.4.143.bsk.7-amd64-x86_64-with-glibc2.31 - Python version: 3.9.2 - PyArrow version: 6.0.1 - Pandas version: 1.5.3
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5543/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5543/timeline
null
completed
null
null
false
259,282
https://api.github.com/repos/huggingface/datasets/issues/5542
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5542/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5542/comments
https://api.github.com/repos/huggingface/datasets/issues/5542/events
https://github.com/huggingface/datasets/pull/5542
1,588,633,724
PR_kwDODunzps5KLjMl
5,542
Avoid saving sparse ChunkedArrays in pyarrow tables
{ "avatar_url": "https://avatars.githubusercontent.com/u/6591505?v=4", "events_url": "https://api.github.com/users/marioga/events{/privacy}", "followers_url": "https://api.github.com/users/marioga/followers", "following_url": "https://api.github.com/users/marioga/following{/other_user}", "gists_url": "https://api.github.com/users/marioga/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/marioga", "id": 6591505, "login": "marioga", "node_id": "MDQ6VXNlcjY1OTE1MDU=", "organizations_url": "https://api.github.com/users/marioga/orgs", "received_events_url": "https://api.github.com/users/marioga/received_events", "repos_url": "https://api.github.com/users/marioga/repos", "site_admin": false, "starred_url": "https://api.github.com/users/marioga/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/marioga/subscriptions", "type": "User", "url": "https://api.github.com/users/marioga" }
[]
closed
false
null
[]
null
2
"2023-02-17T01:52:38"
"2023-02-17T19:20:49"
"2023-02-17T11:12:32"
CONTRIBUTOR
null
Fixes https://github.com/huggingface/datasets/issues/5541
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 1, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/5542/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5542/timeline
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/5542.diff", "html_url": "https://github.com/huggingface/datasets/pull/5542", "merged_at": "2023-02-17T11:12:32Z", "patch_url": "https://github.com/huggingface/datasets/pull/5542.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5542" }
true
33,594
https://api.github.com/repos/huggingface/datasets/issues/5541
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5541/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5541/comments
https://api.github.com/repos/huggingface/datasets/issues/5541/events
https://github.com/huggingface/datasets/issues/5541
1,588,633,555
I_kwDODunzps5esJ_T
5,541
Flattening indices in selected datasets is extremely inefficient
{ "avatar_url": "https://avatars.githubusercontent.com/u/6591505?v=4", "events_url": "https://api.github.com/users/marioga/events{/privacy}", "followers_url": "https://api.github.com/users/marioga/followers", "following_url": "https://api.github.com/users/marioga/following{/other_user}", "gists_url": "https://api.github.com/users/marioga/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/marioga", "id": 6591505, "login": "marioga", "node_id": "MDQ6VXNlcjY1OTE1MDU=", "organizations_url": "https://api.github.com/users/marioga/orgs", "received_events_url": "https://api.github.com/users/marioga/received_events", "repos_url": "https://api.github.com/users/marioga/repos", "site_admin": false, "starred_url": "https://api.github.com/users/marioga/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/marioga/subscriptions", "type": "User", "url": "https://api.github.com/users/marioga" }
[]
closed
false
null
[]
null
3
"2023-02-17T01:52:24"
"2023-02-22T13:15:20"
"2023-02-17T11:12:33"
CONTRIBUTOR
null
### Describe the bug If we perform a `select` (or `shuffle`, `train_test_split`, etc.) operation on a dataset , we end up with a dataset with an `indices_table`. Currently, flattening such dataset consumes a lot of memory and the resulting flat dataset contains ChunkedArrays with as many chunks as there are rows. This is extremely inefficient and slows down the operations on the flat dataset, e.g., saving/loading the dataset to disk becomes really slow. Perhaps more importantly, loading the dataset back from disk basically loads the whole table into RAM, as it cannot take advantage of memory mapping. ### Steps to reproduce the bug The following script reproduces the issue: ```python import gc import os import psutil import tempfile import time from datasets import Dataset DATASET_SIZE = 5000000 def profile(func): def wrapper(*args, **kwargs): mem_before = psutil.Process(os.getpid()).memory_info().rss / (1024 * 1024) start = time.time() # Run function here out = func(*args, **kwargs) end = time.time() mem_after = psutil.Process(os.getpid()).memory_info().rss / (1024 * 1024) print(f"{func.__name__} -- RAM memory used: {mem_after - mem_before} MB -- Total time: {end - start:.6f} s") return out return wrapper def main(): ds = Dataset.from_list([{'col': i} for i in range(DATASET_SIZE)]) print(f"Num chunks for original ds: {ds.data['col'].num_chunks}") with tempfile.TemporaryDirectory() as tmpdir: path1 = os.path.join(tmpdir, 'ds1') print("Original ds save/load") profile(ds.save_to_disk)(path1) ds_loaded = profile(Dataset.load_from_disk)(path1) print(f"Num chunks for original ds after reloading: {ds_loaded.data['col'].num_chunks}") print("") ds_select = ds.select(reversed(range(len(ds)))) print(f"Num chunks for selected ds: {ds_select.data['col'].num_chunks}") del ds del ds_loaded gc.collect() # This would happen anyway when we call save_to_disk ds_select = profile(ds_select.flatten_indices)() print(f"Num chunks for selected ds after flattening: {ds_select.data['col'].num_chunks}") print("") path2 = os.path.join(tmpdir, 'ds2') print("Selected ds save/load") profile(ds_select.save_to_disk)(path2) del ds_select gc.collect() ds_select_loaded = profile(Dataset.load_from_disk)(path2) print(f"Num chunks for selected ds after reloading: {ds_select_loaded.data['col'].num_chunks}") if __name__ == '__main__': main() ``` Sample result: ``` Num chunks for original ds: 1 Original ds save/load save_to_disk -- RAM memory used: 0.515625 MB -- Total time: 0.253888 s load_from_disk -- RAM memory used: 42.765625 MB -- Total time: 0.015176 s Num chunks for original ds after reloading: 5000 Num chunks for selected ds: 1 flatten_indices -- RAM memory used: 4852.609375 MB -- Total time: 46.116774 s Num chunks for selected ds after flattening: 5000000 Selected ds save/load save_to_disk -- RAM memory used: 1326.65625 MB -- Total time: 42.309825 s load_from_disk -- RAM memory used: 2085.953125 MB -- Total time: 11.659137 s Num chunks for selected ds after reloading: 5000000 ``` ### Expected behavior Saving/loading the dataset should be much faster and consume almost no extra memory thanks to pyarrow memory mapping. ### Environment info - `datasets` version: 2.9.1.dev0 - Platform: macOS-13.1-arm64-arm-64bit - Python version: 3.10.8 - PyArrow version: 11.0.0 - Pandas version: 1.5.3
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5541/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5541/timeline
null
completed
null
null
false
33,609
https://api.github.com/repos/huggingface/datasets/issues/5540
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5540/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5540/comments
https://api.github.com/repos/huggingface/datasets/issues/5540/events
https://github.com/huggingface/datasets/pull/5540
1,588,438,344
PR_kwDODunzps5KK5qz
5,540
Tutorial for creating a dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4", "events_url": "https://api.github.com/users/stevhliu/events{/privacy}", "followers_url": "https://api.github.com/users/stevhliu/followers", "following_url": "https://api.github.com/users/stevhliu/following{/other_user}", "gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/stevhliu", "id": 59462357, "login": "stevhliu", "node_id": "MDQ6VXNlcjU5NDYyMzU3", "organizations_url": "https://api.github.com/users/stevhliu/orgs", "received_events_url": "https://api.github.com/users/stevhliu/received_events", "repos_url": "https://api.github.com/users/stevhliu/repos", "site_admin": false, "starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions", "type": "User", "url": "https://api.github.com/users/stevhliu" }
[]
closed
false
null
[]
null
2
"2023-02-16T22:09:35"
"2023-02-17T18:50:46"
"2023-02-17T18:41:28"
MEMBER
null
A tutorial for creating datasets based on the folder-based builders and `from_dict` and `from_generator` methods. I've also mentioned loading scripts as a next step, but I think we should keep the tutorial focused on the low-code methods. Let me know what you think! 🙂
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5540/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5540/timeline
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/5540.diff", "html_url": "https://github.com/huggingface/datasets/pull/5540", "merged_at": "2023-02-17T18:41:28Z", "patch_url": "https://github.com/huggingface/datasets/pull/5540.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5540" }
true
73,913
https://api.github.com/repos/huggingface/datasets/issues/5539
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5539/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5539/comments
https://api.github.com/repos/huggingface/datasets/issues/5539/events
https://github.com/huggingface/datasets/issues/5539
1,587,970,083
I_kwDODunzps5epoAj
5,539
IndexError: invalid index of a 0-dim tensor. Use `tensor.item()` in Python or `tensor.item<T>()` in C++ to convert a 0-dim tensor to a number
{ "avatar_url": "https://avatars.githubusercontent.com/u/41912135?v=4", "events_url": "https://api.github.com/users/aalbersk/events{/privacy}", "followers_url": "https://api.github.com/users/aalbersk/followers", "following_url": "https://api.github.com/users/aalbersk/following{/other_user}", "gists_url": "https://api.github.com/users/aalbersk/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/aalbersk", "id": 41912135, "login": "aalbersk", "node_id": "MDQ6VXNlcjQxOTEyMTM1", "organizations_url": "https://api.github.com/users/aalbersk/orgs", "received_events_url": "https://api.github.com/users/aalbersk/received_events", "repos_url": "https://api.github.com/users/aalbersk/repos", "site_admin": false, "starred_url": "https://api.github.com/users/aalbersk/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/aalbersk/subscriptions", "type": "User", "url": "https://api.github.com/users/aalbersk" }
[ { "color": "7057ff", "default": true, "description": "Good for newcomers", "id": 1935892877, "name": "good first issue", "node_id": "MDU6TGFiZWwxOTM1ODkyODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/good%20first%20issue" } ]
closed
false
null
[]
null
4
"2023-02-16T16:08:51"
"2023-02-22T10:30:30"
"2023-02-21T13:03:57"
NONE
null
### Describe the bug When dataset contains a 0-dim tensor, formatting.py raises a following error and fails. ```bash Traceback (most recent call last): File "<path>/lib/python3.8/site-packages/datasets/formatting/formatting.py", line 501, in format_row return _unnest(formatted_batch) File "<path>/lib/python3.8/site-packages/datasets/formatting/formatting.py", line 137, in _unnest return {key: array[0] for key, array in py_dict.items()} File "<path>/lib/python3.8/site-packages/datasets/formatting/formatting.py", line 137, in <dictcomp> return {key: array[0] for key, array in py_dict.items()} IndexError: invalid index of a 0-dim tensor. Use `tensor.item()` in Python or `tensor.item<T>()` in C++ to convert a 0-dim tensor to a number ``` ### Steps to reproduce the bug Load whichever dataset and add transform method to add 0-dim tensor. Or create/find a dataset containing 0-dim tensor. E.g. ```python from datasets import load_dataset import torch dataset = load_dataset("lambdalabs/pokemon-blip-captions", split='train') def t(batch): return {"test": torch.tensor(1)} dataset.set_transform(t) d_0 = dataset[0] ``` ### Expected behavior Extractor will correctly get a row from the dataset, even if it contains 0-dim tensor. ### Environment info `datasets==2.8.0`, but it looks like it is also applicable to main branch version (as of 16th February)
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5539/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5539/timeline
null
completed
null
null
false
420,906
https://api.github.com/repos/huggingface/datasets/issues/5538
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5538/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5538/comments
https://api.github.com/repos/huggingface/datasets/issues/5538/events
https://github.com/huggingface/datasets/issues/5538
1,587,732,596
I_kwDODunzps5eouB0
5,538
load_dataset in seaborn is not working for me. getting this error.
{ "avatar_url": "https://avatars.githubusercontent.com/u/125575109?v=4", "events_url": "https://api.github.com/users/reemaranibarik/events{/privacy}", "followers_url": "https://api.github.com/users/reemaranibarik/followers", "following_url": "https://api.github.com/users/reemaranibarik/following{/other_user}", "gists_url": "https://api.github.com/users/reemaranibarik/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/reemaranibarik", "id": 125575109, "login": "reemaranibarik", "node_id": "U_kgDOB3wfxQ", "organizations_url": "https://api.github.com/users/reemaranibarik/orgs", "received_events_url": "https://api.github.com/users/reemaranibarik/received_events", "repos_url": "https://api.github.com/users/reemaranibarik/repos", "site_admin": false, "starred_url": "https://api.github.com/users/reemaranibarik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/reemaranibarik/subscriptions", "type": "User", "url": "https://api.github.com/users/reemaranibarik" }
[]
closed
false
null
[]
null
1
"2023-02-16T14:01:58"
"2023-02-16T14:44:36"
"2023-02-16T14:44:36"
NONE
null
TimeoutError Traceback (most recent call last) ~\anaconda3\lib\urllib\request.py in do_open(self, http_class, req, **http_conn_args) 1345 try: -> 1346 h.request(req.get_method(), req.selector, req.data, headers, 1347 encode_chunked=req.has_header('Transfer-encoding')) ~\anaconda3\lib\http\client.py in request(self, method, url, body, headers, encode_chunked) 1278 """Send a complete request to the server.""" -> 1279 self._send_request(method, url, body, headers, encode_chunked) 1280 ~\anaconda3\lib\http\client.py in _send_request(self, method, url, body, headers, encode_chunked) 1324 body = _encode(body, 'body') -> 1325 self.endheaders(body, encode_chunked=encode_chunked) 1326 ~\anaconda3\lib\http\client.py in endheaders(self, message_body, encode_chunked) 1273 raise CannotSendHeader() -> 1274 self._send_output(message_body, encode_chunked=encode_chunked) 1275 ~\anaconda3\lib\http\client.py in _send_output(self, message_body, encode_chunked) 1033 del self._buffer[:] -> 1034 self.send(msg) 1035 ~\anaconda3\lib\http\client.py in send(self, data) 973 if self.auto_open: --> 974 self.connect() 975 else: ~\anaconda3\lib\http\client.py in connect(self) 1440 -> 1441 super().connect() 1442 ~\anaconda3\lib\http\client.py in connect(self) 944 """Connect to the host and port specified in __init__.""" --> 945 self.sock = self._create_connection( 946 (self.host,self.port), self.timeout, self.source_address) ~\anaconda3\lib\socket.py in create_connection(address, timeout, source_address) 843 try: --> 844 raise err 845 finally: ~\anaconda3\lib\socket.py in create_connection(address, timeout, source_address) 831 sock.bind(source_address) --> 832 sock.connect(sa) 833 # Break explicitly a reference cycle TimeoutError: [WinError 10060] A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond During handling of the above exception, another exception occurred: URLError Traceback (most recent call last) ~\AppData\Local\Temp/ipykernel_12220/2927704185.py in <module> 1 import seaborn as sn ----> 2 iris = sn.load_dataset('iris') ~\anaconda3\lib\site-packages\seaborn\utils.py in load_dataset(name, cache, data_home, **kws) 594 if name not in get_dataset_names(): 595 raise ValueError(f"'{name}' is not one of the example datasets.") --> 596 urlretrieve(url, cache_path) 597 full_path = cache_path 598 else: ~\anaconda3\lib\urllib\request.py in urlretrieve(url, filename, reporthook, data) 237 url_type, path = _splittype(url) 238 --> 239 with contextlib.closing(urlopen(url, data)) as fp: 240 headers = fp.info() 241 ~\anaconda3\lib\urllib\request.py in urlopen(url, data, timeout, cafile, capath, cadefault, context) 212 else: 213 opener = _opener --> 214 return opener.open(url, data, timeout) 215 216 def install_opener(opener): ~\anaconda3\lib\urllib\request.py in open(self, fullurl, data, timeout) 515 516 sys.audit('urllib.Request', req.full_url, req.data, req.headers, req.get_method()) --> 517 response = self._open(req, data) 518 519 # post-process response ~\anaconda3\lib\urllib\request.py in _open(self, req, data) 532 533 protocol = req.type --> 534 result = self._call_chain(self.handle_open, protocol, protocol + 535 '_open', req) 536 if result: ~\anaconda3\lib\urllib\request.py in _call_chain(self, chain, kind, meth_name, *args) 492 for handler in handlers: 493 func = getattr(handler, meth_name) --> 494 result = func(*args) 495 if result is not None: 496 return result ~\anaconda3\lib\urllib\request.py in https_open(self, req) 1387 1388 def https_open(self, req): -> 1389 return self.do_open(http.client.HTTPSConnection, req, 1390 context=self._context, check_hostname=self._check_hostname) 1391 ~\anaconda3\lib\urllib\request.py in do_open(self, http_class, req, **http_conn_args) 1347 encode_chunked=req.has_header('Transfer-encoding')) 1348 except OSError as err: # timeout error -> 1349 raise URLError(err) 1350 r = h.getresponse() 1351 except: URLError: <urlopen error [WinError 10060] A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond>
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5538/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5538/timeline
null
completed
null
null
false
2,558
https://api.github.com/repos/huggingface/datasets/issues/5537
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5537/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5537/comments
https://api.github.com/repos/huggingface/datasets/issues/5537/events
https://github.com/huggingface/datasets/issues/5537
1,587,567,464
I_kwDODunzps5eoFto
5,537
Increase speed of data files resolution
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" }, { "color": "BDE59C", "default": false, "description": "Issues a bit more difficult than \"Good First\" issues", "id": 3761482852, "name": "good second issue", "node_id": "LA_kwDODunzps7gM6xk", "url": "https://api.github.com/repos/huggingface/datasets/labels/good%20second%20issue" } ]
open
false
null
[]
null
0
"2023-02-16T12:11:45"
"2023-02-16T12:11:45"
null
MEMBER
null
Certain datasets like `bigcode/the-stack-dedup` have so many files that loading them takes forever right from the data files resolution step. `datasets` uses file patterns to check the structure of the repository but it takes too much time to iterate over and over again on all the data files. This comes from `resolve_patterns_in_dataset_repository` which calls `_resolve_single_pattern_in_dataset_repository`, which iterates over all the files at ```python glob_iter = [PurePath(filepath) for filepath in fs.glob(PurePath(pattern).as_posix()) if fs.isfile(filepath)] ``` but calling `glob` on such a dataset is too expensive. Indeed it calls `ls()` in `hffilesystem.py` too many times. Maybe `glob` can be further optimized in `hffilesystem.py`, or the data files resolution can directly be implemented in the filesystem by checking its `dir_cache`?
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5537/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5537/timeline
null
null
null
null
false
null
https://api.github.com/repos/huggingface/datasets/issues/5536
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5536/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5536/comments
https://api.github.com/repos/huggingface/datasets/issues/5536/events
https://github.com/huggingface/datasets/issues/5536
1,586,930,643
I_kwDODunzps5elqPT
5,536
Failure to hash function when using .map()
{ "avatar_url": "https://avatars.githubusercontent.com/u/6916056?v=4", "events_url": "https://api.github.com/users/venzen/events{/privacy}", "followers_url": "https://api.github.com/users/venzen/followers", "following_url": "https://api.github.com/users/venzen/following{/other_user}", "gists_url": "https://api.github.com/users/venzen/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/venzen", "id": 6916056, "login": "venzen", "node_id": "MDQ6VXNlcjY5MTYwNTY=", "organizations_url": "https://api.github.com/users/venzen/orgs", "received_events_url": "https://api.github.com/users/venzen/received_events", "repos_url": "https://api.github.com/users/venzen/repos", "site_admin": false, "starred_url": "https://api.github.com/users/venzen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/venzen/subscriptions", "type": "User", "url": "https://api.github.com/users/venzen" }
[]
closed
false
null
[]
null
3
"2023-02-16T03:12:07"
"2023-02-22T13:11:14"
"2023-02-16T14:56:41"
NONE
null
### Describe the bug _Parameter 'function'=<function process at 0x7f1ec4388af0> of the transform datasets.arrow_dataset.Dataset.\_map_single couldn't be hashed properly, a random hash was used instead. Make sure your transforms and parameters are serializable with pickle or dill for the dataset fingerprinting and caching to work. If you reuse this transform, the caching mechanism will consider it to be different from the previous calls and recompute everything. This warning is only showed once. Subsequent hashing failures won't be showed._ This issue with `.map()` happens for me consistently, as also described in closed issue #4506 Dataset indices can be individually serialized using dill and pickle without any errors. I'm using tiktoken to encode in the function passed to map(). Similarly, indices can be individually encoded without error. ### Steps to reproduce the bug ```py from datasets import load_dataset import tiktoken dataset = load_dataset("stas/openwebtext-10k") enc = tiktoken.get_encoding("gpt2") tokenized = dataset.map( process, remove_columns=['text'], desc="tokenizing the OWT splits", ) def process(example): ids = enc.encode(example['text']) ids.append(enc.eot_token) out = {'ids': ids, 'len': len(ids)} return out ``` ### Expected behavior Should encode simple text objects. ### Environment info Python versions tried: both 3.8 and 3.10.10 `PYTHONUTF8=1` as env variable Datasets tried: - stas/openwebtext-10k - rotten_tomatoes - local text file OS: Ubuntu Linux 20.04 Package versions: - torch 1.13.1 - dill 0.3.4 (if using 0.3.6 - same issue) - datasets 2.9.0 - tiktoken 0.2.0
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5536/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5536/timeline
null
completed
null
null
false
42,274
https://api.github.com/repos/huggingface/datasets/issues/5535
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5535/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5535/comments
https://api.github.com/repos/huggingface/datasets/issues/5535/events
https://github.com/huggingface/datasets/pull/5535
1,586,520,369
PR_kwDODunzps5KEb5L
5,535
Add JAX-formatting documentation
{ "avatar_url": "https://avatars.githubusercontent.com/u/36760800?v=4", "events_url": "https://api.github.com/users/alvarobartt/events{/privacy}", "followers_url": "https://api.github.com/users/alvarobartt/followers", "following_url": "https://api.github.com/users/alvarobartt/following{/other_user}", "gists_url": "https://api.github.com/users/alvarobartt/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/alvarobartt", "id": 36760800, "login": "alvarobartt", "node_id": "MDQ6VXNlcjM2NzYwODAw", "organizations_url": "https://api.github.com/users/alvarobartt/orgs", "received_events_url": "https://api.github.com/users/alvarobartt/received_events", "repos_url": "https://api.github.com/users/alvarobartt/repos", "site_admin": false, "starred_url": "https://api.github.com/users/alvarobartt/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/alvarobartt/subscriptions", "type": "User", "url": "https://api.github.com/users/alvarobartt" }
[]
closed
false
null
[]
null
9
"2023-02-15T20:35:11"
"2023-02-20T10:39:42"
"2023-02-20T10:32:39"
CONTRIBUTOR
null
## What's in this PR? As a follow-up to #5522, I've created this entry in the documentation to explain how to use `.with_format("jax")` and why it is useful. @lhoestq Feel free to drop any feedback and/or suggestions, as more useful features can probably be included there!
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 1, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/5535/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5535/timeline
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/5535.diff", "html_url": "https://github.com/huggingface/datasets/pull/5535", "merged_at": "2023-02-20T10:32:39Z", "patch_url": "https://github.com/huggingface/datasets/pull/5535.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5535" }
true
395,848
https://api.github.com/repos/huggingface/datasets/issues/5534
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5534/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5534/comments
https://api.github.com/repos/huggingface/datasets/issues/5534/events
https://github.com/huggingface/datasets/issues/5534
1,586,177,862
I_kwDODunzps5eiydG
5,534
map() breaks at certain dataset size when using Array3D
{ "avatar_url": "https://avatars.githubusercontent.com/u/3375489?v=4", "events_url": "https://api.github.com/users/ArneBinder/events{/privacy}", "followers_url": "https://api.github.com/users/ArneBinder/followers", "following_url": "https://api.github.com/users/ArneBinder/following{/other_user}", "gists_url": "https://api.github.com/users/ArneBinder/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/ArneBinder", "id": 3375489, "login": "ArneBinder", "node_id": "MDQ6VXNlcjMzNzU0ODk=", "organizations_url": "https://api.github.com/users/ArneBinder/orgs", "received_events_url": "https://api.github.com/users/ArneBinder/received_events", "repos_url": "https://api.github.com/users/ArneBinder/repos", "site_admin": false, "starred_url": "https://api.github.com/users/ArneBinder/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ArneBinder/subscriptions", "type": "User", "url": "https://api.github.com/users/ArneBinder" }
[]
open
false
null
[]
null
0
"2023-02-15T16:34:25"
"2023-02-15T17:12:02"
null
NONE
null
### Describe the bug `map()` magically breaks when using a `Array3D` feature and mapping it. I created a very simple dummy dataset (see below). When filtering it down to 95 elements I can apply map, but it breaks when filtering it down to just 96 entries with the following exception: ``` Traceback (most recent call last): File "/home/arbi01/miniconda3/envs/tmp9/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 3255, in _map_single writer.finalize() # close_stream=bool(buf_writer is None)) # We only close if we are writing in a file File "/home/arbi01/miniconda3/envs/tmp9/lib/python3.9/site-packages/datasets/arrow_writer.py", line 581, in finalize self.write_examples_on_file() File "/home/arbi01/miniconda3/envs/tmp9/lib/python3.9/site-packages/datasets/arrow_writer.py", line 440, in write_examples_on_file batch_examples[col] = array_concat(arrays) File "/home/arbi01/miniconda3/envs/tmp9/lib/python3.9/site-packages/datasets/table.py", line 1931, in array_concat return _concat_arrays(arrays) File "/home/arbi01/miniconda3/envs/tmp9/lib/python3.9/site-packages/datasets/table.py", line 1901, in _concat_arrays return array_type.wrap_array(_concat_arrays([array.storage for array in arrays])) File "/home/arbi01/miniconda3/envs/tmp9/lib/python3.9/site-packages/datasets/table.py", line 1922, in _concat_arrays _concat_arrays([array.values for array in arrays]), File "/home/arbi01/miniconda3/envs/tmp9/lib/python3.9/site-packages/datasets/table.py", line 1922, in _concat_arrays _concat_arrays([array.values for array in arrays]), File "/home/arbi01/miniconda3/envs/tmp9/lib/python3.9/site-packages/datasets/table.py", line 1920, in _concat_arrays return pa.ListArray.from_arrays( File "pyarrow/array.pxi", line 1997, in pyarrow.lib.ListArray.from_arrays File "pyarrow/array.pxi", line 1527, in pyarrow.lib.Array.validate File "pyarrow/error.pxi", line 100, in pyarrow.lib.check_status pyarrow.lib.ArrowInvalid: Negative offsets in list array During handling of the above exception, another exception occurred: Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/arbi01/miniconda3/envs/tmp9/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 2815, in map return self._map_single( File "/home/arbi01/miniconda3/envs/tmp9/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 546, in wrapper out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs) File "/home/arbi01/miniconda3/envs/tmp9/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 513, in wrapper out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs) File "/home/arbi01/miniconda3/envs/tmp9/lib/python3.9/site-packages/datasets/fingerprint.py", line 480, in wrapper out = func(self, *args, **kwargs) File "/home/arbi01/miniconda3/envs/tmp9/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 3259, in _map_single writer.finalize() File "/home/arbi01/miniconda3/envs/tmp9/lib/python3.9/site-packages/datasets/arrow_writer.py", line 581, in finalize self.write_examples_on_file() File "/home/arbi01/miniconda3/envs/tmp9/lib/python3.9/site-packages/datasets/arrow_writer.py", line 440, in write_examples_on_file batch_examples[col] = array_concat(arrays) File "/home/arbi01/miniconda3/envs/tmp9/lib/python3.9/site-packages/datasets/table.py", line 1931, in array_concat return _concat_arrays(arrays) File "/home/arbi01/miniconda3/envs/tmp9/lib/python3.9/site-packages/datasets/table.py", line 1901, in _concat_arrays return array_type.wrap_array(_concat_arrays([array.storage for array in arrays])) File "/home/arbi01/miniconda3/envs/tmp9/lib/python3.9/site-packages/datasets/table.py", line 1922, in _concat_arrays _concat_arrays([array.values for array in arrays]), File "/home/arbi01/miniconda3/envs/tmp9/lib/python3.9/site-packages/datasets/table.py", line 1922, in _concat_arrays _concat_arrays([array.values for array in arrays]), File "/home/arbi01/miniconda3/envs/tmp9/lib/python3.9/site-packages/datasets/table.py", line 1920, in _concat_arrays return pa.ListArray.from_arrays( File "pyarrow/array.pxi", line 1997, in pyarrow.lib.ListArray.from_arrays File "pyarrow/array.pxi", line 1527, in pyarrow.lib.Array.validate File "pyarrow/error.pxi", line 100, in pyarrow.lib.check_status pyarrow.lib.ArrowInvalid: Negative offsets in list array ``` ### Steps to reproduce the bug 1. put following dataset loading script into: debug/debug.py ```python import datasets import numpy as np class DEBUG(datasets.GeneratorBasedBuilder): """DEBUG dataset.""" def _info(self): return datasets.DatasetInfo( features=datasets.Features( { "id": datasets.Value("uint8"), "img_data": datasets.Array3D(shape=(3, 224, 224), dtype="uint8"), }, ), supervised_keys=None, ) def _split_generators(self, dl_manager): return [datasets.SplitGenerator(name=datasets.Split.TRAIN)] def _generate_examples(self): for i in range(149): image_np = np.zeros(shape=(3, 224, 224), dtype=np.int8).tolist() yield f"id_{i}", {"id": i, "img_data": image_np} ``` 2. try the following code: ```python import datasets def add_dummy_col(ex): ex["dummy"] = "test" return ex ds = datasets.load_dataset(path="debug", split="train") # works ds_filtered_works = ds.filter(lambda example: example["id"] < 95) print(f"filtered result size: {len(ds_filtered_works)}") # output: # filtered result size: 95 ds_mapped_works = ds_filtered_works.map(add_dummy_col) # fails ds_filtered_error = ds.filter(lambda example: example["id"] < 96) print(f"filtered result size: {len(ds_filtered_error)}") # output: # filtered result size: 96 ds_mapped_error = ds_filtered_error.map(add_dummy_col) ``` ### Expected behavior The example code does not fail. ### Environment info Python 3.9.16 (main, Jan 11 2023, 16:05:54); [GCC 11.2.0] :: Anaconda, Inc. on linux datasets 2.9.0
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5534/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5534/timeline
null
null
null
null
false
null
https://api.github.com/repos/huggingface/datasets/issues/5533
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5533/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5533/comments
https://api.github.com/repos/huggingface/datasets/issues/5533/events
https://github.com/huggingface/datasets/pull/5533
1,585,885,871
PR_kwDODunzps5KCR5I
5,533
Add reduce function
{ "avatar_url": "https://avatars.githubusercontent.com/u/38854604?v=4", "events_url": "https://api.github.com/users/AJDERS/events{/privacy}", "followers_url": "https://api.github.com/users/AJDERS/followers", "following_url": "https://api.github.com/users/AJDERS/following{/other_user}", "gists_url": "https://api.github.com/users/AJDERS/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/AJDERS", "id": 38854604, "login": "AJDERS", "node_id": "MDQ6VXNlcjM4ODU0NjA0", "organizations_url": "https://api.github.com/users/AJDERS/orgs", "received_events_url": "https://api.github.com/users/AJDERS/received_events", "repos_url": "https://api.github.com/users/AJDERS/repos", "site_admin": false, "starred_url": "https://api.github.com/users/AJDERS/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/AJDERS/subscriptions", "type": "User", "url": "https://api.github.com/users/AJDERS" }
[]
open
false
null
[]
null
15
"2023-02-15T13:44:01"
"2023-02-22T19:05:00"
null
NONE
null
This PR closes #5496. I tried to imitate the `reduce` method from `functools`, i.e. the function input must be a binary operation. I assume that the input type has an empty element, i.e. `input_type()` is defined, as the accumulator is instantiated as this object - I'm not sure whether this is a reasonable assumption? If `batched=True`, the reduction of each shard is _not_ returned, but the reduction of the entire dataset. I was unsure whether this was an intuitive API, or whether it would make more sense to return the reduction of each shard.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5533/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5533/timeline
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/5533.diff", "html_url": "https://github.com/huggingface/datasets/pull/5533", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/5533.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5533" }
true
null
https://api.github.com/repos/huggingface/datasets/issues/5532
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5532/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5532/comments
https://api.github.com/repos/huggingface/datasets/issues/5532/events
https://github.com/huggingface/datasets/issues/5532
1,584,505,128
I_kwDODunzps5ecaEo
5,532
train_test_split in arrow_dataset does not ensure to keep single classes in test set
{ "avatar_url": "https://avatars.githubusercontent.com/u/37191008?v=4", "events_url": "https://api.github.com/users/Ulipenitz/events{/privacy}", "followers_url": "https://api.github.com/users/Ulipenitz/followers", "following_url": "https://api.github.com/users/Ulipenitz/following{/other_user}", "gists_url": "https://api.github.com/users/Ulipenitz/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Ulipenitz", "id": 37191008, "login": "Ulipenitz", "node_id": "MDQ6VXNlcjM3MTkxMDA4", "organizations_url": "https://api.github.com/users/Ulipenitz/orgs", "received_events_url": "https://api.github.com/users/Ulipenitz/received_events", "repos_url": "https://api.github.com/users/Ulipenitz/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Ulipenitz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Ulipenitz/subscriptions", "type": "User", "url": "https://api.github.com/users/Ulipenitz" }
[]
closed
false
null
[]
null
1
"2023-02-14T16:52:29"
"2023-02-15T16:09:19"
"2023-02-15T16:09:19"
NONE
null
### Describe the bug When I have a dataset with very few (e.g. 1) examples per class and I call the train_test_split function on it, sometimes the single class will end up in the test set and thus will never be considered for training. ### Steps to reproduce the bug ``` import numpy as np from datasets import Dataset data = [ {'label': 0, 'text': "example1"}, {'label': 1, 'text': "example2"}, {'label': 1, 'text': "example3"}, {'label': 1, 'text': "example4"}, {'label': 0, 'text': "example5"}, {'label': 1, 'text': "example6"}, {'label': 2, 'text': "example7"}, {'label': 2, 'text': "example8"} ] for _ in range(10): data_set = Dataset.from_list(data) data_set = data_set.train_test_split(test_size=0.5) data_set["train"] unique_labels_train = np.unique(data_set["train"][:]["label"]) unique_labels_test = np.unique(data_set["test"][:]["label"]) assert len(unique_labels_train) >= len(unique_labels_test) ``` ### Expected behavior I expect to have every available class at least once in my training set. ### Environment info - `datasets` version: 2.9.0 - Platform: Linux-5.15.65+-x86_64-with-debian-bullseye-sid - Python version: 3.7.12 - PyArrow version: 11.0.0 - Pandas version: 1.3.5
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5532/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5532/timeline
null
completed
null
null
false
83,810
https://api.github.com/repos/huggingface/datasets/issues/5531
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5531/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5531/comments
https://api.github.com/repos/huggingface/datasets/issues/5531/events
https://github.com/huggingface/datasets/issues/5531
1,584,387,276
I_kwDODunzps5eb9TM
5,531
Invalid Arrow data from JSONL
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
open
false
null
[]
null
0
"2023-02-14T15:39:49"
"2023-02-14T15:46:09"
null
MEMBER
null
This code fails: ```python from datasets import Dataset ds = Dataset.from_json(path_to_file) ds.data.validate() ``` raises ```python ArrowInvalid: Column 2: In chunk 1: Invalid: Struct child array #3 invalid: Invalid: Length spanned by list offsets (4064) larger than values array (length 4063) ``` This causes many issues for @TevenLeScao: - `map` fails because it fails to rewrite invalid arrow arrays ```python ~/Desktop/hf/datasets/src/datasets/arrow_writer.py in write_examples_on_file(self) 438 if all(isinstance(row[0][col], (pa.Array, pa.ChunkedArray)) for row in self.current_examples): 439 arrays = [row[0][col] for row in self.current_examples] --> 440 batch_examples[col] = array_concat(arrays) 441 else: 442 batch_examples[col] = [ ~/Desktop/hf/datasets/src/datasets/table.py in array_concat(arrays) 1885 1886 if not _is_extension_type(array_type): -> 1887 return pa.concat_arrays(arrays) 1888 1889 def _offsets_concat(offsets): ~/.virtualenvs/hf-datasets/lib/python3.7/site-packages/pyarrow/array.pxi in pyarrow.lib.concat_arrays() ~/.virtualenvs/hf-datasets/lib/python3.7/site-packages/pyarrow/error.pxi in pyarrow.lib.pyarrow_internal_check_status() ~/.virtualenvs/hf-datasets/lib/python3.7/site-packages/pyarrow/error.pxi in pyarrow.lib.check_status() ArrowIndexError: array slice would exceed array length ``` - `to_dict()` **segfaults** ⚠️ ```python /Users/runner/work/crossbow/crossbow/arrow/cpp/src/arrow/array/data.cc:99: Check failed: (off) <= (length) Slice offset greater than array length ``` To reproduce: unzip the archive and run the above code using `sanity_oscar_en.jsonl` [sanity_oscar_en.jsonl.zip](https://github.com/huggingface/datasets/files/10734124/sanity_oscar_en.jsonl.zip) PS: reading using pandas and converting to Arrow works though (note that the dataset lives in RAM in this case): ```python ds = Dataset.from_pandas(pd.read_json(path_to_file, lines=True)) ds.data.validate() ```
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 1, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/5531/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5531/timeline
null
null
null
null
false
null
https://api.github.com/repos/huggingface/datasets/issues/5530
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5530/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5530/comments
https://api.github.com/repos/huggingface/datasets/issues/5530/events
https://github.com/huggingface/datasets/pull/5530
1,582,938,241
PR_kwDODunzps5J4W_4
5,530
Add missing license in `NumpyFormatter`
{ "avatar_url": "https://avatars.githubusercontent.com/u/36760800?v=4", "events_url": "https://api.github.com/users/alvarobartt/events{/privacy}", "followers_url": "https://api.github.com/users/alvarobartt/followers", "following_url": "https://api.github.com/users/alvarobartt/following{/other_user}", "gists_url": "https://api.github.com/users/alvarobartt/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/alvarobartt", "id": 36760800, "login": "alvarobartt", "node_id": "MDQ6VXNlcjM2NzYwODAw", "organizations_url": "https://api.github.com/users/alvarobartt/orgs", "received_events_url": "https://api.github.com/users/alvarobartt/received_events", "repos_url": "https://api.github.com/users/alvarobartt/repos", "site_admin": false, "starred_url": "https://api.github.com/users/alvarobartt/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/alvarobartt/subscriptions", "type": "User", "url": "https://api.github.com/users/alvarobartt" }
[]
closed
false
null
[]
null
2
"2023-02-13T19:33:23"
"2023-02-14T14:40:41"
"2023-02-14T12:23:58"
CONTRIBUTOR
null
## What's in this PR? As discussed with @lhoestq in https://github.com/huggingface/datasets/pull/5522, the license for `NumpyFormatter` at `datasets/formatting/np_formatter.py` was missing, but present on the rest of the `formatting/*.py` files. So this PR is basically to include it there.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5530/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5530/timeline
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/5530.diff", "html_url": "https://github.com/huggingface/datasets/pull/5530", "merged_at": "2023-02-14T12:23:58Z", "patch_url": "https://github.com/huggingface/datasets/pull/5530.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5530" }
true
60,635
https://api.github.com/repos/huggingface/datasets/issues/5529
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5529/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5529/comments
https://api.github.com/repos/huggingface/datasets/issues/5529/events
https://github.com/huggingface/datasets/pull/5529
1,582,501,233
PR_kwDODunzps5J26Fq
5,529
Fix `datasets.load_from_disk`, `DatasetDict.load_from_disk` and `Dataset.load_from_disk`
{ "avatar_url": "https://avatars.githubusercontent.com/u/36760800?v=4", "events_url": "https://api.github.com/users/alvarobartt/events{/privacy}", "followers_url": "https://api.github.com/users/alvarobartt/followers", "following_url": "https://api.github.com/users/alvarobartt/following{/other_user}", "gists_url": "https://api.github.com/users/alvarobartt/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/alvarobartt", "id": 36760800, "login": "alvarobartt", "node_id": "MDQ6VXNlcjM2NzYwODAw", "organizations_url": "https://api.github.com/users/alvarobartt/orgs", "received_events_url": "https://api.github.com/users/alvarobartt/received_events", "repos_url": "https://api.github.com/users/alvarobartt/repos", "site_admin": false, "starred_url": "https://api.github.com/users/alvarobartt/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/alvarobartt/subscriptions", "type": "User", "url": "https://api.github.com/users/alvarobartt" }
[]
open
false
null
[]
null
10
"2023-02-13T14:54:55"
"2023-02-23T11:14:39"
null
CONTRIBUTOR
null
## What's in this PR? After playing around a little bit with 🤗`datasets` in Google Cloud Storage (GCS), I found out some things that should be fixed IMO in the code: * `datasets.load_from_disk` is not checking whether `state.json` is there too when trying to load a `Dataset`, just `dataset_info.json` is checked * `DatasetDict.load_from_disk` is not checking whether `state.json` is there too when redirecting the user to load it as `datasets.load_from_disk`, just `dataset_info.json` is checked, which is misleading, as it won't be loadable that way either * `Dataset.load_from_disk` is missing the `extract_path_from_uri` call before checking in the `fs` whether `dataset_info.json` and `dataset_dict.json` exist, which when using `gcsfs` leads to 400 error code (not blocking) due to `gcsfs.retry.HttpError: Invalid bucket name: 'gs:', 400` * And, finally, the exception messages are a little bit misleading / incomplete IMO so I've tried to include all the relevant information in the messages to avoid issues when interpreting the exceptions
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5529/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5529/timeline
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/5529.diff", "html_url": "https://github.com/huggingface/datasets/pull/5529", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/5529.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5529" }
true
null
https://api.github.com/repos/huggingface/datasets/issues/5528
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5528/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5528/comments
https://api.github.com/repos/huggingface/datasets/issues/5528/events
https://github.com/huggingface/datasets/pull/5528
1,582,195,085
PR_kwDODunzps5J13wC
5,528
Push to hub in a pull request
{ "avatar_url": "https://avatars.githubusercontent.com/u/38854604?v=4", "events_url": "https://api.github.com/users/AJDERS/events{/privacy}", "followers_url": "https://api.github.com/users/AJDERS/followers", "following_url": "https://api.github.com/users/AJDERS/following{/other_user}", "gists_url": "https://api.github.com/users/AJDERS/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/AJDERS", "id": 38854604, "login": "AJDERS", "node_id": "MDQ6VXNlcjM4ODU0NjA0", "organizations_url": "https://api.github.com/users/AJDERS/orgs", "received_events_url": "https://api.github.com/users/AJDERS/received_events", "repos_url": "https://api.github.com/users/AJDERS/repos", "site_admin": false, "starred_url": "https://api.github.com/users/AJDERS/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/AJDERS/subscriptions", "type": "User", "url": "https://api.github.com/users/AJDERS" }
[]
open
false
null
[]
null
9
"2023-02-13T11:43:47"
"2023-02-21T21:13:28"
null
NONE
null
Fixes #5492. Introduces a new kwarg `create_pr` in `push_to_hub`, which is passed to `HfApi.upload_file`.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5528/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5528/timeline
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/5528.diff", "html_url": "https://github.com/huggingface/datasets/pull/5528", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/5528.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5528" }
true
null
https://api.github.com/repos/huggingface/datasets/issues/5527
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5527/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5527/comments
https://api.github.com/repos/huggingface/datasets/issues/5527/events
https://github.com/huggingface/datasets/pull/5527
1,581,228,531
PR_kwDODunzps5JysSM
5,527
Fix benchmarks CI - pin protobuf
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
5
"2023-02-12T11:51:25"
"2023-02-13T10:29:03"
"2023-02-13T09:24:16"
MEMBER
null
fix https://github.com/huggingface/datasets/actions/runs/4156059127/jobs/7189576331
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5527/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5527/timeline
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/5527.diff", "html_url": "https://github.com/huggingface/datasets/pull/5527", "merged_at": "2023-02-13T09:24:16Z", "patch_url": "https://github.com/huggingface/datasets/pull/5527.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5527" }
true
77,571
https://api.github.com/repos/huggingface/datasets/issues/5526
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5526/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5526/comments
https://api.github.com/repos/huggingface/datasets/issues/5526/events
https://github.com/huggingface/datasets/pull/5526
1,580,488,133
PR_kwDODunzps5JwVol
5,526
Allow loading/saving of FAISS index using fsspec
{ "avatar_url": "https://avatars.githubusercontent.com/u/8976546?v=4", "events_url": "https://api.github.com/users/Dref360/events{/privacy}", "followers_url": "https://api.github.com/users/Dref360/followers", "following_url": "https://api.github.com/users/Dref360/following{/other_user}", "gists_url": "https://api.github.com/users/Dref360/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Dref360", "id": 8976546, "login": "Dref360", "node_id": "MDQ6VXNlcjg5NzY1NDY=", "organizations_url": "https://api.github.com/users/Dref360/orgs", "received_events_url": "https://api.github.com/users/Dref360/received_events", "repos_url": "https://api.github.com/users/Dref360/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Dref360/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Dref360/subscriptions", "type": "User", "url": "https://api.github.com/users/Dref360" }
[]
open
false
null
[]
null
3
"2023-02-10T23:37:14"
"2023-02-22T16:37:18"
null
CONTRIBUTOR
null
Fixes #5428 Allow loading/saving of FAISS index using fsspec: 1. Simply use BufferedIOWriter/Reader to Read/Write indices on fsspec stream. 2. Needed `mockfs` in the test, so I took it out of the `TestCase`. Let me know if that makes sense. I can work on the documentation once the code changes are approved.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5526/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5526/timeline
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/5526.diff", "html_url": "https://github.com/huggingface/datasets/pull/5526", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/5526.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5526" }
true
null
https://api.github.com/repos/huggingface/datasets/issues/5525
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5525/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5525/comments
https://api.github.com/repos/huggingface/datasets/issues/5525/events
https://github.com/huggingface/datasets/issues/5525
1,580,342,729
I_kwDODunzps5eMh3J
5,525
TypeError: Couldn't cast array of type string to null
{ "avatar_url": "https://avatars.githubusercontent.com/u/74564958?v=4", "events_url": "https://api.github.com/users/TJ-Solergibert/events{/privacy}", "followers_url": "https://api.github.com/users/TJ-Solergibert/followers", "following_url": "https://api.github.com/users/TJ-Solergibert/following{/other_user}", "gists_url": "https://api.github.com/users/TJ-Solergibert/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/TJ-Solergibert", "id": 74564958, "login": "TJ-Solergibert", "node_id": "MDQ6VXNlcjc0NTY0OTU4", "organizations_url": "https://api.github.com/users/TJ-Solergibert/orgs", "received_events_url": "https://api.github.com/users/TJ-Solergibert/received_events", "repos_url": "https://api.github.com/users/TJ-Solergibert/repos", "site_admin": false, "starred_url": "https://api.github.com/users/TJ-Solergibert/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/TJ-Solergibert/subscriptions", "type": "User", "url": "https://api.github.com/users/TJ-Solergibert" }
[]
closed
false
null
[]
null
6
"2023-02-10T21:12:36"
"2023-02-14T17:41:08"
"2023-02-14T09:35:49"
NONE
null
### Describe the bug Processing a dataset I already uploaded to the Hub (https://huggingface.co/datasets/tj-solergibert/Europarl-ST) I found that for some splits and some languages (test split, source_lang = "nl") after applying a map function I get the mentioned error. I already tried resetting the shorter strings (reset_cortas function). It only happens with NL, PL, RO and PT. It does not make sense since when processing the other languages I also use the corpus of those that fail and it does not cause any errors. I suspect that the error may be in this direction: We use cast_array_to_feature to support casting to custom types like Audio and Image # Also, when trying type "string", we don't want to convert integers or floats to "string". # We only do it if trying_type is False - since this is what the user asks for. ### Steps to reproduce the bug Here I link a colab notebook to reproduce the error: https://colab.research.google.com/drive/1JCrS7FlGfu_kFqChMrwKZ_bpabnIMqbP?authuser=1#scrollTo=FBAvlhMxIzpA ### Expected behavior Data processing does not fail. A correct example can be seen here: https://huggingface.co/datasets/tj-solergibert/Europarl-ST-processed-mt-en ### Environment info - `datasets` version: 2.9.0 - Platform: Linux-5.10.147+-x86_64-with-glibc2.29 - Python version: 3.8.10 - PyArrow version: 9.0.0 - Pandas version: 1.3.5
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5525/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5525/timeline
null
completed
null
null
false
303,793
https://api.github.com/repos/huggingface/datasets/issues/5524
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5524/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5524/comments
https://api.github.com/repos/huggingface/datasets/issues/5524/events
https://github.com/huggingface/datasets/pull/5524
1,580,219,454
PR_kwDODunzps5JvbMw
5,524
[INVALID PR]
{ "avatar_url": "https://avatars.githubusercontent.com/u/36760800?v=4", "events_url": "https://api.github.com/users/alvarobartt/events{/privacy}", "followers_url": "https://api.github.com/users/alvarobartt/followers", "following_url": "https://api.github.com/users/alvarobartt/following{/other_user}", "gists_url": "https://api.github.com/users/alvarobartt/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/alvarobartt", "id": 36760800, "login": "alvarobartt", "node_id": "MDQ6VXNlcjM2NzYwODAw", "organizations_url": "https://api.github.com/users/alvarobartt/orgs", "received_events_url": "https://api.github.com/users/alvarobartt/received_events", "repos_url": "https://api.github.com/users/alvarobartt/repos", "site_admin": false, "starred_url": "https://api.github.com/users/alvarobartt/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/alvarobartt/subscriptions", "type": "User", "url": "https://api.github.com/users/alvarobartt" }
[]
closed
false
null
[]
null
1
"2023-02-10T19:35:50"
"2023-02-10T19:51:45"
"2023-02-10T19:49:12"
CONTRIBUTOR
null
Hi to whoever is reading this! 🤗 ## What's in this PR? ~~Basically, I've removed the 🤗`datasets` installation as `python -m pip install ".[quality]" in the `check_code_quality` job in `.github/workflows/ci.yaml`, as we don't need to install the whole package to run the CI, unless that's done on purpose e.g. to check that the Python package installation succeeds before running the tests over the matrix of os?~~ ~~So I just wanted to check whether the time was reduced doing this (which I assume it will), plus whether this is something that can be improved, or just discarded in case you're also using that step to make sure that the package can be installed.~~ ## What's missing? ~~I was just wondering whether you consider replacing `isort` and `flake8` with `ruff` (if possible), since it's way faster, more information at [`ruff`](https://github.com/charliermarsh/ruff). Before creating this PR the average time of the `check_code_quality` job was around 40s.~~ ## Edit Sorry for the inconvenience this may have caused, didn't realise that the config is defined in `setup.cfg` and `pyproject.toml`, so running those without installing the Python package leads to failure, my bad 😞
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5524/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5524/timeline
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/5524.diff", "html_url": "https://github.com/huggingface/datasets/pull/5524", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/5524.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5524" }
true
802
https://api.github.com/repos/huggingface/datasets/issues/5523
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5523/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5523/comments
https://api.github.com/repos/huggingface/datasets/issues/5523/events
https://github.com/huggingface/datasets/issues/5523
1,580,193,015
I_kwDODunzps5eL9T3
5,523
Checking that split name is correct happens only after the data is downloaded
{ "avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4", "events_url": "https://api.github.com/users/polinaeterna/events{/privacy}", "followers_url": "https://api.github.com/users/polinaeterna/followers", "following_url": "https://api.github.com/users/polinaeterna/following{/other_user}", "gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/polinaeterna", "id": 16348744, "login": "polinaeterna", "node_id": "MDQ6VXNlcjE2MzQ4NzQ0", "organizations_url": "https://api.github.com/users/polinaeterna/orgs", "received_events_url": "https://api.github.com/users/polinaeterna/received_events", "repos_url": "https://api.github.com/users/polinaeterna/repos", "site_admin": false, "starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions", "type": "User", "url": "https://api.github.com/users/polinaeterna" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
open
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4", "events_url": "https://api.github.com/users/polinaeterna/events{/privacy}", "followers_url": "https://api.github.com/users/polinaeterna/followers", "following_url": "https://api.github.com/users/polinaeterna/following{/other_user}", "gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/polinaeterna", "id": 16348744, "login": "polinaeterna", "node_id": "MDQ6VXNlcjE2MzQ4NzQ0", "organizations_url": "https://api.github.com/users/polinaeterna/orgs", "received_events_url": "https://api.github.com/users/polinaeterna/received_events", "repos_url": "https://api.github.com/users/polinaeterna/repos", "site_admin": false, "starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions", "type": "User", "url": "https://api.github.com/users/polinaeterna" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4", "events_url": "https://api.github.com/users/polinaeterna/events{/privacy}", "followers_url": "https://api.github.com/users/polinaeterna/followers", "following_url": "https://api.github.com/users/polinaeterna/following{/other_user}", "gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/polinaeterna", "id": 16348744, "login": "polinaeterna", "node_id": "MDQ6VXNlcjE2MzQ4NzQ0", "organizations_url": "https://api.github.com/users/polinaeterna/orgs", "received_events_url": "https://api.github.com/users/polinaeterna/received_events", "repos_url": "https://api.github.com/users/polinaeterna/repos", "site_admin": false, "starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions", "type": "User", "url": "https://api.github.com/users/polinaeterna" } ]
null
0
"2023-02-10T19:13:03"
"2023-02-10T19:14:50"
null
CONTRIBUTOR
null
### Describe the bug Verification of split names (=indexing data by split) happens after downloading the data. So when the split name is incorrect, users learn about that only after the data is fully downloaded, which for large datasets might take a lot of time. ### Steps to reproduce the bug Load any dataset with a random split name, for example: ```python from datasets import load_dataset load_dataset("mozilla-foundation/common_voice_11_0", "en", split="blabla") ``` and the download will start smoothly, even though there is no split named "blabla". ### Expected behavior Raise an error when the split name is incorrect. ### Environment info `datasets==2.9.1.dev0`
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5523/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5523/timeline
null
null
null
null
false
null
https://api.github.com/repos/huggingface/datasets/issues/5522
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5522/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5522/comments
https://api.github.com/repos/huggingface/datasets/issues/5522/events
https://github.com/huggingface/datasets/pull/5522
1,580,183,124
PR_kwDODunzps5JvTVp
5,522
Minor changes in JAX-formatting docstrings & type-hints
{ "avatar_url": "https://avatars.githubusercontent.com/u/36760800?v=4", "events_url": "https://api.github.com/users/alvarobartt/events{/privacy}", "followers_url": "https://api.github.com/users/alvarobartt/followers", "following_url": "https://api.github.com/users/alvarobartt/following{/other_user}", "gists_url": "https://api.github.com/users/alvarobartt/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/alvarobartt", "id": 36760800, "login": "alvarobartt", "node_id": "MDQ6VXNlcjM2NzYwODAw", "organizations_url": "https://api.github.com/users/alvarobartt/orgs", "received_events_url": "https://api.github.com/users/alvarobartt/received_events", "repos_url": "https://api.github.com/users/alvarobartt/repos", "site_admin": false, "starred_url": "https://api.github.com/users/alvarobartt/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/alvarobartt/subscriptions", "type": "User", "url": "https://api.github.com/users/alvarobartt" }
[]
closed
false
null
[]
null
16
"2023-02-10T19:05:00"
"2023-02-15T14:48:27"
"2023-02-15T13:19:06"
CONTRIBUTOR
null
Hi to whoever is reading this! 🤗 ## What's in this PR? I was exploring the code regarding the `JaxFormatter` implemented in 🤗`datasets`, and found some things that IMO could be changed. Those are mainly regarding the docstrings and the type-hints based on `jax`'s 0.4.1 release where `jax.Array` was introduced as the default type for JAX-arrays (instead of `jnp.DeviceArray`, `jnp.SharedDeviceArray`, and `jnp.GlobalDeviceArray`). Even though `isinstance(..., jax.Array)` also works with lower versions such as e.g. `0.3.25`. More information about the latter at [`jax` v0.4.1 - Release Notes](https://github.com/google/jax/releases/tag/jax-v0.4.1) and [jax.Array migration - JAX documentation](https://jax.readthedocs.io/en/latest/jax_array_migration.html). ## What's missing? * Do you want me to write an entry in the documentation on how to use 🤗`datasets` with JAX as https://huggingface.co/docs/datasets/use_with_pytorch with PyTorch? * Do we need to actually include `pyarrow` under the `TYPE_CHECKING` when needed? I just did it for JAX, but if we are OK with that, I can do that with the rest of the formatters, just LMK. * Should the License header be included in `datasets.formatting.np_formatter`? If so, do I include the one from 2020 e.g. https://github.com/huggingface/datasets/blob/b065547654efa0ec633cf373ac1512884c68b2e1/src/datasets/formatting/tf_formatter.py#L1-L13 * Is there any reason why `jnp.array` is being used instead of `jnp.asarray`? There's no difference between both, just that `jnp.asarray` has `copy=False` as default, even though `numpy` to `jax.numpy` conversion is not zero-copy, but just asking :)
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5522/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5522/timeline
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/5522.diff", "html_url": "https://github.com/huggingface/datasets/pull/5522", "merged_at": "2023-02-15T13:19:06Z", "patch_url": "https://github.com/huggingface/datasets/pull/5522.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5522" }
true
411,246
https://api.github.com/repos/huggingface/datasets/issues/5521
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5521/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5521/comments
https://api.github.com/repos/huggingface/datasets/issues/5521/events
https://github.com/huggingface/datasets/pull/5521
1,578,418,289
PR_kwDODunzps5JpWnp
5,521
Fix bug when casting empty array to class labels
{ "avatar_url": "https://avatars.githubusercontent.com/u/6591505?v=4", "events_url": "https://api.github.com/users/marioga/events{/privacy}", "followers_url": "https://api.github.com/users/marioga/followers", "following_url": "https://api.github.com/users/marioga/following{/other_user}", "gists_url": "https://api.github.com/users/marioga/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/marioga", "id": 6591505, "login": "marioga", "node_id": "MDQ6VXNlcjY1OTE1MDU=", "organizations_url": "https://api.github.com/users/marioga/orgs", "received_events_url": "https://api.github.com/users/marioga/received_events", "repos_url": "https://api.github.com/users/marioga/repos", "site_admin": false, "starred_url": "https://api.github.com/users/marioga/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/marioga/subscriptions", "type": "User", "url": "https://api.github.com/users/marioga" }
[]
closed
false
null
[]
null
1
"2023-02-09T18:47:59"
"2023-02-13T20:40:48"
"2023-02-12T11:17:17"
CONTRIBUTOR
null
Fix https://github.com/huggingface/datasets/issues/5520.
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/5521/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5521/timeline
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/5521.diff", "html_url": "https://github.com/huggingface/datasets/pull/5521", "merged_at": "2023-02-12T11:17:17Z", "patch_url": "https://github.com/huggingface/datasets/pull/5521.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5521" }
true
232,158
https://api.github.com/repos/huggingface/datasets/issues/5520
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5520/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5520/comments
https://api.github.com/repos/huggingface/datasets/issues/5520/events
https://github.com/huggingface/datasets/issues/5520
1,578,417,074
I_kwDODunzps5eFLuy
5,520
ClassLabel.cast_storage raises TypeError when called on an empty IntegerArray
{ "avatar_url": "https://avatars.githubusercontent.com/u/6591505?v=4", "events_url": "https://api.github.com/users/marioga/events{/privacy}", "followers_url": "https://api.github.com/users/marioga/followers", "following_url": "https://api.github.com/users/marioga/following{/other_user}", "gists_url": "https://api.github.com/users/marioga/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/marioga", "id": 6591505, "login": "marioga", "node_id": "MDQ6VXNlcjY1OTE1MDU=", "organizations_url": "https://api.github.com/users/marioga/orgs", "received_events_url": "https://api.github.com/users/marioga/received_events", "repos_url": "https://api.github.com/users/marioga/repos", "site_admin": false, "starred_url": "https://api.github.com/users/marioga/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/marioga/subscriptions", "type": "User", "url": "https://api.github.com/users/marioga" }
[]
closed
false
null
[]
null
0
"2023-02-09T18:46:52"
"2023-02-12T11:17:18"
"2023-02-12T11:17:18"
CONTRIBUTOR
null
### Describe the bug `ClassLabel.cast_storage` raises `TypeError` when called on an empty `IntegerArray`. ### Steps to reproduce the bug Minimal steps: ```python import pyarrow as pa from datasets import ClassLabel ClassLabel(names=['foo', 'bar']).cast_storage(pa.array([], pa.int64())) ``` In practice, this bug arises in situations like the one below: ```python from datasets import ClassLabel, Dataset, Features, Sequence dataset = Dataset.from_dict({'labels': [[], []]}, features=Features({'labels': Sequence(ClassLabel(names=['foo', 'bar']))})) # this raises TypeError dataset.map(batched=True, batch_size=1) ``` ### Expected behavior `ClassLabel.cast_storage` should return an empty Int64Array. ### Environment info - `datasets` version: 2.9.1.dev0 - Platform: Linux-4.15.0-1032-aws-x86_64-with-glibc2.27 - Python version: 3.10.6 - PyArrow version: 11.0.0 - Pandas version: 1.5.3
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5520/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5520/timeline
null
completed
null
null
false
232,226
https://api.github.com/repos/huggingface/datasets/issues/5519
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5519/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5519/comments
https://api.github.com/repos/huggingface/datasets/issues/5519/events
https://github.com/huggingface/datasets/pull/5519
1,578,341,785
PR_kwDODunzps5JpGPl
5,519
Format code with `ruff`
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko" }
[]
closed
false
null
[]
null
5
"2023-02-09T17:50:21"
"2023-02-14T16:28:27"
"2023-02-14T16:18:38"
CONTRIBUTOR
null
Use `ruff` for formatting instead of `isort` and `black` to be consistent with [`transformers`](https://github.com/huggingface/transformers/pull/21480) and [`hfh`](https://github.com/huggingface/huggingface_hub/pull/1323). TODO: - [x] ~Merge the community contributors' PR to avoid having to run `make style` on their PR branches~ (we have some new PRs, but fixing those shouldn't be too big of a problem)
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5519/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5519/timeline
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/5519.diff", "html_url": "https://github.com/huggingface/datasets/pull/5519", "merged_at": "2023-02-14T16:18:38Z", "patch_url": "https://github.com/huggingface/datasets/pull/5519.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5519" }
true
426,497
https://api.github.com/repos/huggingface/datasets/issues/5518
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5518/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5518/comments
https://api.github.com/repos/huggingface/datasets/issues/5518/events
https://github.com/huggingface/datasets/pull/5518
1,578,203,962
PR_kwDODunzps5Joom3
5,518
Remove py.typed
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko" }
[]
closed
false
null
[]
null
3
"2023-02-09T16:22:29"
"2023-02-13T13:55:49"
"2023-02-13T13:48:40"
CONTRIBUTOR
null
Fix https://github.com/huggingface/datasets/issues/3841
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5518/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5518/timeline
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/5518.diff", "html_url": "https://github.com/huggingface/datasets/pull/5518", "merged_at": "2023-02-13T13:48:40Z", "patch_url": "https://github.com/huggingface/datasets/pull/5518.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5518" }
true
336,371
https://api.github.com/repos/huggingface/datasets/issues/5517
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5517/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5517/comments
https://api.github.com/repos/huggingface/datasets/issues/5517/events
https://github.com/huggingface/datasets/issues/5517
1,577,976,608
I_kwDODunzps5eDgMg
5,517
`with_format("numpy")` silently downcasts float64 to float32 features
{ "avatar_url": "https://avatars.githubusercontent.com/u/1250234?v=4", "events_url": "https://api.github.com/users/ernestum/events{/privacy}", "followers_url": "https://api.github.com/users/ernestum/followers", "following_url": "https://api.github.com/users/ernestum/following{/other_user}", "gists_url": "https://api.github.com/users/ernestum/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/ernestum", "id": 1250234, "login": "ernestum", "node_id": "MDQ6VXNlcjEyNTAyMzQ=", "organizations_url": "https://api.github.com/users/ernestum/orgs", "received_events_url": "https://api.github.com/users/ernestum/received_events", "repos_url": "https://api.github.com/users/ernestum/repos", "site_admin": false, "starred_url": "https://api.github.com/users/ernestum/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ernestum/subscriptions", "type": "User", "url": "https://api.github.com/users/ernestum" }
[]
open
false
null
[]
{ "closed_at": null, "closed_issues": 0, "created_at": "2023-02-13T16:22:42Z", "creator": { "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko" }, "description": "Next major release", "due_on": null, "html_url": "https://github.com/huggingface/datasets/milestone/10", "id": 9038583, "labels_url": "https://api.github.com/repos/huggingface/datasets/milestones/10/labels", "node_id": "MI_kwDODunzps4Aier3", "number": 10, "open_issues": 1, "state": "open", "title": "3.0", "updated_at": "2023-02-13T16:23:25Z", "url": "https://api.github.com/repos/huggingface/datasets/milestones/10" }
10
"2023-02-09T14:18:00"
"2023-02-14T15:38:54"
null
NONE
null
### Describe the bug When I create a dataset with a `float64` feature, then apply numpy formatting the returned numpy arrays are silently downcasted to `float32`. ### Steps to reproduce the bug ```python import datasets dataset = datasets.Dataset.from_dict({'a': [1.0, 2.0, 3.0]}).with_format("numpy") print("feature dtype:", dataset.features['a'].dtype) print("array dtype:", dataset['a'].dtype) ``` output: ``` feature dtype: float64 array dtype: float32 ``` ### Expected behavior ``` feature dtype: float64 array dtype: float64 ``` ### Environment info - `datasets` version: 2.8.0 - Platform: Linux-5.4.0-135-generic-x86_64-with-glibc2.29 - Python version: 3.8.10 - PyArrow version: 10.0.1 - Pandas version: 1.4.4 ### Suggested Fix Changing [the `_tensorize` function of the numpy formatter](https://github.com/huggingface/datasets/blob/b065547654efa0ec633cf373ac1512884c68b2e1/src/datasets/formatting/np_formatter.py#L32) to ```python def _tensorize(self, value): if isinstance(value, (str, bytes, type(None))): return value elif isinstance(value, (np.character, np.ndarray)) and np.issubdtype(value.dtype, np.character): return value elif isinstance(value, np.number): return value return np.asarray(value, **self.np_array_kwargs) ``` fixes this particular issue for me. Not sure if this would break other tests. This should also avoid unnecessary copying of the array.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5517/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5517/timeline
null
null
null
null
false
null
https://api.github.com/repos/huggingface/datasets/issues/5516
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5516/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5516/comments
https://api.github.com/repos/huggingface/datasets/issues/5516/events
https://github.com/huggingface/datasets/pull/5516
1,577,661,640
PR_kwDODunzps5JmzPQ
5,516
Reload features from Parquet metadata
{ "avatar_url": "https://avatars.githubusercontent.com/u/6368040?v=4", "events_url": "https://api.github.com/users/MFreidank/events{/privacy}", "followers_url": "https://api.github.com/users/MFreidank/followers", "following_url": "https://api.github.com/users/MFreidank/following{/other_user}", "gists_url": "https://api.github.com/users/MFreidank/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/MFreidank", "id": 6368040, "login": "MFreidank", "node_id": "MDQ6VXNlcjYzNjgwNDA=", "organizations_url": "https://api.github.com/users/MFreidank/orgs", "received_events_url": "https://api.github.com/users/MFreidank/received_events", "repos_url": "https://api.github.com/users/MFreidank/repos", "site_admin": false, "starred_url": "https://api.github.com/users/MFreidank/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/MFreidank/subscriptions", "type": "User", "url": "https://api.github.com/users/MFreidank" }
[]
closed
false
null
[]
null
4
"2023-02-09T10:52:15"
"2023-02-12T16:00:00"
"2023-02-12T15:57:01"
CONTRIBUTOR
null
Resolves #5482. Attaches feature metadata to parquet files serialised using `Dataset.to_parquet`. This allows retrieving data with "rich" feature types (e.g., `datasets.features.image.Image` or `datasets.features.audio.Audio`) from parquet files without cumbersome casting (for an example, see #5482). @lhoestq It seems that it is sufficient to attach metadata to the schema prior to serialising and features are loaded back with correct types afterwards automatically. I used the following script to test the implementation: ```python from pathlib import Path import datasets dataset_name = "Maysee/tiny-imagenet" ds = datasets.load_dataset(dataset_name, split=datasets.Split.TRAIN) output_directory_path = Path(__file__).parent.joinpath("example_test_outputs", dataset_name.replace("/", "_")) output_directory_path.mkdir(exist_ok=True, parents=True) output_filepath = output_directory_path.joinpath("ds.parquet") ds.to_parquet(str(output_filepath)) reloaded_ds = datasets.load_dataset(str(output_directory_path), split=datasets.Split.TRAIN) assert ds.features == reloaded_ds.features ``` Prior to the change in this PR this script raises an `AssertionError` and the `Image` features lose their type after serialisation. After the change in this PR, the assertion does not raise an error and manual inspection of the features shows type `Image` for the respective columns of `reloaded_ds`. Some open questions: * How/where can I best add new unit tests for this implementation? * What dataset would I best use in the tests? I chose `Maysee/tiny-imagenet` mainly because it is small and contains an `Image` feature that can be used to test, but I'd be happy for suggestions on a suitable data source to use. * Currently I'm calling `datasets.arrow_writer.ArrowWriter._build_metadata` as I need the same logic. However, I'm not happy with the coupling between `datasets.io.parquet` and `datasets.arrow_writer` it leaves me with. Suggest to factor this common logic out into a helper function and reuse it from both of these. Do you agree and if yes, could you please guide me where I would best place this function? Many thanks in advance and kind regards, MFreidank
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5516/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5516/timeline
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/5516.diff", "html_url": "https://github.com/huggingface/datasets/pull/5516", "merged_at": "2023-02-12T15:57:01Z", "patch_url": "https://github.com/huggingface/datasets/pull/5516.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5516" }
true
277,486
https://api.github.com/repos/huggingface/datasets/issues/5515
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5515/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5515/comments
https://api.github.com/repos/huggingface/datasets/issues/5515/events
https://github.com/huggingface/datasets/pull/5515
1,577,590,611
PR_kwDODunzps5Jmj5X
5,515
Unify `load_from_cache_file` type and logic
{ "avatar_url": "https://avatars.githubusercontent.com/u/22773355?v=4", "events_url": "https://api.github.com/users/HallerPatrick/events{/privacy}", "followers_url": "https://api.github.com/users/HallerPatrick/followers", "following_url": "https://api.github.com/users/HallerPatrick/following{/other_user}", "gists_url": "https://api.github.com/users/HallerPatrick/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/HallerPatrick", "id": 22773355, "login": "HallerPatrick", "node_id": "MDQ6VXNlcjIyNzczMzU1", "organizations_url": "https://api.github.com/users/HallerPatrick/orgs", "received_events_url": "https://api.github.com/users/HallerPatrick/received_events", "repos_url": "https://api.github.com/users/HallerPatrick/repos", "site_admin": false, "starred_url": "https://api.github.com/users/HallerPatrick/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/HallerPatrick/subscriptions", "type": "User", "url": "https://api.github.com/users/HallerPatrick" }
[]
closed
false
null
[]
null
4
"2023-02-09T10:04:46"
"2023-02-14T15:38:13"
"2023-02-14T14:26:42"
CONTRIBUTOR
null
* Updated type annotations for `load_from_cache_file` * Added logic for cache checking if needed * Updated documentation following the wording of `Dataset.map`
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5515/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5515/timeline
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/5515.diff", "html_url": "https://github.com/huggingface/datasets/pull/5515", "merged_at": "2023-02-14T14:26:42Z", "patch_url": "https://github.com/huggingface/datasets/pull/5515.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5515" }
true
447,716
https://api.github.com/repos/huggingface/datasets/issues/5514
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5514/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5514/comments
https://api.github.com/repos/huggingface/datasets/issues/5514/events
https://github.com/huggingface/datasets/issues/5514
1,576,453,837
I_kwDODunzps5d9sbN
5,514
Improve inconsistency of `Dataset.map` interface for `load_from_cache_file`
{ "avatar_url": "https://avatars.githubusercontent.com/u/22773355?v=4", "events_url": "https://api.github.com/users/HallerPatrick/events{/privacy}", "followers_url": "https://api.github.com/users/HallerPatrick/followers", "following_url": "https://api.github.com/users/HallerPatrick/following{/other_user}", "gists_url": "https://api.github.com/users/HallerPatrick/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/HallerPatrick", "id": 22773355, "login": "HallerPatrick", "node_id": "MDQ6VXNlcjIyNzczMzU1", "organizations_url": "https://api.github.com/users/HallerPatrick/orgs", "received_events_url": "https://api.github.com/users/HallerPatrick/received_events", "repos_url": "https://api.github.com/users/HallerPatrick/repos", "site_admin": false, "starred_url": "https://api.github.com/users/HallerPatrick/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/HallerPatrick/subscriptions", "type": "User", "url": "https://api.github.com/users/HallerPatrick" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
closed
false
null
[]
null
4
"2023-02-08T16:40:44"
"2023-02-14T14:26:44"
"2023-02-14T14:26:44"
CONTRIBUTOR
null
### Feature request 1. Replace the `load_from_cache_file` default value to `True`. 2. Remove or alter checks from `is_caching_enabled` logic. ### Motivation I stumbled over an inconsistency in the `Dataset.map` interface. The documentation (and source) states for the parameter `load_from_cache_file`: ``` load_from_cache_file (`bool`, defaults to `True` if caching is enabled): If a cache file storing the current computation from `function` can be identified, use it instead of recomputing. ``` 1. `load_from_cache_file` default value is `None`, while being annotated as `bool` 2. It is inconsistent with other method signatures like `filter`, that have the default value `True` 3. The logic is inconsistent, as the `map` method checks if caching is enabled through `is_caching_enabled`. This logic is not used for other similar methods. ### Your contribution I am not fully aware of the logic behind caching checks. If this is just a inconsistency that historically grew, I would suggest to remove the `is_caching_enabled` logic as the "default" logic. Maybe someone can give insights, if environment variables have a higher priority than local variables or vice versa. If this is clarified, I could adjust the source according to the "Feature request" section of this issue.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5514/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5514/timeline
null
completed
null
null
false
510,360

# Dataset Card for "github-issues"

More Information needed
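A minimal usage sketch is shown below. Note that the repo id is a placeholder (the namespace hosting this dataset is not stated on this card), and the column names used in the example (`title`, `is_pull_request`) are assumptions based on the GitHub issue fields visible in the records above.

```python
from datasets import load_dataset

# Placeholder repo id -- replace "<namespace>" with the account that hosts
# this "github-issues" dataset on the Hugging Face Hub.
repo_id = "<namespace>/github-issues"

# Load the "train" split (assumed; adjust if the dataset defines other splits).
ds = load_dataset(repo_id, split="train")

# Each record mirrors a GitHub issue/PR payload. Assuming a boolean
# `is_pull_request` column is present, keep only plain issues:
issues_only = ds.filter(lambda example: not example["is_pull_request"])

print(issues_only.num_rows)
print(issues_only[0]["title"])
```

Apart from the placeholder repo id, the sketch only relies on the standard `datasets.load_dataset` and `Dataset.filter` APIs.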
