| Column | Type |
|---|---|
| url | stringlengths 58-61 |
| repository_url | stringclasses (1 value) |
| labels_url | stringlengths 72-75 |
| comments_url | stringlengths 67-70 |
| events_url | stringlengths 65-68 |
| html_url | stringlengths 46-51 |
| id | int64 600M-2.05B |
| node_id | stringlengths 18-32 |
| number | int64 2-6.51k |
| title | stringlengths 1-290 |
| user | dict |
| labels | listlengths 0-4 |
| state | stringclasses (2 values) |
| locked | bool (1 class) |
| assignee | dict |
| assignees | listlengths 0-4 |
| milestone | dict |
| comments | sequencelengths 0-30 |
| created_at | unknown |
| updated_at | unknown |
| closed_at | unknown |
| author_association | stringclasses (3 values) |
| active_lock_reason | float64 |
| draft | float64 (0, 1, or ⌀) |
| pull_request | dict |
| body | stringlengths 0-228k (⌀ allowed) |
| reactions | dict |
| timeline_url | stringlengths 67-70 |
| performed_via_github_app | float64 |
| state_reason | stringclasses (3 values) |
| is_pull_request | bool (2 classes) |

Each record below lists its cells for these columns in this order, separated by `|`.
https://api.github.com/repos/huggingface/datasets/issues/1730 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1730/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1730/comments | https://api.github.com/repos/huggingface/datasets/issues/1730/events | https://github.com/huggingface/datasets/pull/1730 | 784,617,525 | MDExOlB1bGxSZXF1ZXN0NTUzNzgxMDY0 | 1,730 | Add MNIST dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/sgugger",
"id": 35901082,
"login": "sgugger",
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"repos_url": "https://api.github.com/users/sgugger/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"type": "User",
"url": "https://api.github.com/users/sgugger"
} | [] | closed | false | null | [] | null | [] | "2021-01-12T21:48:02Z" | "2021-01-13T10:19:47Z" | "2021-01-13T10:19:46Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1730.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1730",
"merged_at": "2021-01-13T10:19:46Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1730.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1730"
} | This PR adds the MNIST dataset to the library. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 1,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1730/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1730/timeline | null | null | true |
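A minimal usage sketch for the dataset this PR adds; the `mnist` identifier and the train/test split names are the standard ones and are assumed here rather than taken from the PR diff.

```python
from datasets import load_dataset

# Load the MNIST dataset added by this PR (identifier assumed to be "mnist")
mnist = load_dataset("mnist")
print(mnist)                    # DatasetDict with "train" and "test" splits
print(mnist["train"].features)  # an image column and an integer class label
```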
https://api.github.com/repos/huggingface/datasets/issues/3969 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3969/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3969/comments | https://api.github.com/repos/huggingface/datasets/issues/3969/events | https://github.com/huggingface/datasets/issues/3969 | 1,174,273,824 | I_kwDODunzps5F_f8g | 3,969 | Cannot preview cnn_dailymail dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/75482871?v=4",
"events_url": "https://api.github.com/users/hasan-besh/events{/privacy}",
"followers_url": "https://api.github.com/users/hasan-besh/followers",
"following_url": "https://api.github.com/users/hasan-besh/following{/other_user}",
"gists_url": "https://api.github.com/users/hasan-besh/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/hasan-besh",
"id": 75482871,
"login": "hasan-besh",
"node_id": "MDQ6VXNlcjc1NDgyODcx",
"organizations_url": "https://api.github.com/users/hasan-besh/orgs",
"received_events_url": "https://api.github.com/users/hasan-besh/received_events",
"repos_url": "https://api.github.com/users/hasan-besh/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/hasan-besh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hasan-besh/subscriptions",
"type": "User",
"url": "https://api.github.com/users/hasan-besh"
} | [] | closed | false | null | [] | null | [
"I guess the cache got corrupted due to a previous issue with Google Drive service.\r\n\r\nThe cache should be regenerated, e.g. by passing `download_mode=\"force_redownload\"`.\r\n\r\nCC: @severo ",
"Note that the dataset preview uses its own cache, not `datasets`' cache. So `download_mode=\"force_redownload\"` doesn't help. But yes indeed the cache must be refreshed.\r\n\r\nThe CNN Dailymail dataste is currently hosted on Google Drive, which is an unreliable host and we've had many issues with it. Unless we found another most reliable host for the data, we will keep running into issues from time to time.\r\n\r\nAt Hugging Face we're not allowed to host the CNN Dailymail data by ourselves AFAIK",
"Yes @lhoestq, I didn't explain myself well: my previous message was addressed to @severo. ",
"I remove the tag dataset-viewer, since it's more an issue with the hosting on Google Drive",
"Sounds good. I was looking for another host of this dataset but couldn't find any (yet)",
"It seems like the issue is with the streaming mode, not with the hosting:\r\n\r\n```python\r\n>>> import datasets\r\n>>> dataset = datasets.load_dataset('cnn_dailymail', name=\"3.0.0\", split=\"train\", streaming=True, download_mode=\"force_redownload\")\r\nDownloading builder script: 9.35kB [00:00, 10.2MB/s]\r\nDownloading metadata: 9.50kB [00:00, 12.2MB/s]\r\n>>> len(list(dataset))\r\n0\r\n>>> dataset = datasets.load_dataset('cnn_dailymail', name=\"3.0.0\", split=\"train\", streaming=False)\r\nReusing dataset cnn_dailymail (/home/slesage/.cache/huggingface/datasets/cnn_dailymail/3.0.0/3.0.0/3cb851bf7cf5826e45d49db2863f627cba583cbc32342df7349dfe6c38060234)\r\n>>> len(dataset)\r\n287113\r\n```\r\n\r\nNote, in particular, that the streaming mode is failing silently, returning 0 row while I would have expected an exception instead. The result is that the dataset viewer shows `No data` instead of a detailed error.\r\n\r\n<img width=\"1511\" alt=\"Capture d’écran 2022-04-12 à 11 50 46\" src=\"https://user-images.githubusercontent.com/1676121/162935341-d50f1e73-d053-41d4-917f-e79708a0ca23.png\">\r\n",
"Well this is because the host (Google Drive) returns a document that is not the actual data, but an error page",
"Do you think that `datasets` should detect this anyway and throw an exception?",
"Yes it definitely should ! I don't have the bandwidth to work on this right now though",
"Indeed, streaming was not supported: tgz archives were not properly iterated.\r\n\r\nI've opened a PR to support streaming.\r\n\r\nHowever, keep in mind that Google Drive will keep generating issues from time to time, like 403,..."
] | "2022-03-19T14:08:57Z" | "2022-04-20T15:52:49Z" | "2022-04-20T15:52:49Z" | NONE | null | null | null | ## Dataset viewer issue for '*cnn_dailymail*'
**Link:** https://huggingface.co/datasets/cnn_dailymail
*short description of the issue*
Am I the one who added this dataset ? Yes-No
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3969/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3969/timeline | null | completed | false |
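A small illustrative guard for the silent streaming failure described in the comments above: if the host (here Google Drive) serves an error page instead of the archive, the stream yields zero rows, so checking the first element makes the problem visible instead of showing "No data". This is a user-side workaround sketch, not the fix that was merged.

```python
import datasets

ds = datasets.load_dataset("cnn_dailymail", name="3.0.0", split="train", streaming=True)

# A healthy stream yields articles immediately; an empty stream points at a bad download
first = next(iter(ds), None)
if first is None:
    raise RuntimeError("Streaming returned no rows; the host may have served an error page instead of the data.")
print(list(first.keys()))  # expected fields: article, highlights, id
```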
https://api.github.com/repos/huggingface/datasets/issues/1544 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1544/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1544/comments | https://api.github.com/repos/huggingface/datasets/issues/1544/events | https://github.com/huggingface/datasets/pull/1544 | 765,514,828 | MDExOlB1bGxSZXF1ZXN0NTM4OTc5MjIz | 1,544 | Added Wiki Summary Dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/33005287?v=4",
"events_url": "https://api.github.com/users/tanmoyio/events{/privacy}",
"followers_url": "https://api.github.com/users/tanmoyio/followers",
"following_url": "https://api.github.com/users/tanmoyio/following{/other_user}",
"gists_url": "https://api.github.com/users/tanmoyio/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/tanmoyio",
"id": 33005287,
"login": "tanmoyio",
"node_id": "MDQ6VXNlcjMzMDA1Mjg3",
"organizations_url": "https://api.github.com/users/tanmoyio/orgs",
"received_events_url": "https://api.github.com/users/tanmoyio/received_events",
"repos_url": "https://api.github.com/users/tanmoyio/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/tanmoyio/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tanmoyio/subscriptions",
"type": "User",
"url": "https://api.github.com/users/tanmoyio"
} | [] | closed | false | null | [] | null | [
"@lhoestq why my tests are not running?",
"Maybe an issue with CircleCI, let me try to make them run",
"The CI error `tests/test_file_utils.py::TempSeedTest::test_tensorflow` is not related to this dataset and is fixed on master, you can ignore it",
"what I need to do now",
"Now the delimiter of the csv reader is fixed, thanks :) \r\n\r\nI just added a comment suggesting to try using actual URLS instead of a manual download if possible.\r\nThis would make things more convenient for the users. Can you try using the `dl_manager` to download the train/dev/test csv files instead of requiring manual download ?",
"Also pinging @m3hrdadfi , since I just noticed that there's already a dataset script that was created 3 weeks ago for this dataset here: https://github.com/m3hrdadfi/wiki-summary/tree/master/datasets/wiki_summary_persian",
"@lhoestq I am getting this error while generating the dummy data\r\n![Screenshot (181)](https://user-images.githubusercontent.com/33005287/102628819-50a40080-4170-11eb-9e96-efce74b45ff4.png)\r\n",
"Can you try by adding the flag `--match_text_files \"*\"` ?",
"now it worked",
"@lhoestq pytest on dummy data passed, but on real data raising this issue\r\n![Screenshot (196)](https://user-images.githubusercontent.com/33005287/102630784-fa848c80-4172-11eb-9f7e-e5a58dcf7abe.png)\r\nhow to resolve it\r\n",
"I see ! This is because the library did some verification to make sure it downloads the same files as in the first time you ran the `datasets-cli test` command with `--save_infos`. Since we're now downloading files, the verification fails. \r\n\r\nTo fix that you just need to regenerate the dataset_infos.json file:\r\n```\r\ndatasets-cli test ./datasets/wiki_summary --save_infos --all_configs --ignore_verifications\r\n```",
"@lhoestq I have modified everything and It worked fine, dont know why it is not passing the tests ",
"Awesome thank you !\r\n\r\nThe CI error `tests/test_file_utils.py::TempSeedTest::test_tensorflow` is not related to your dataset and is fixed on master.\r\nYou can ignore it :) ",
"@lhoestq anything left to do ?",
"The dataset script is all good now ! The dummy data and the dataset_infos.json file are good too :) ",
"@lhoestq yay, thanks for helping me out , ",
"merging since the CI is fixed on master",
"@tanmoyio @lhoestq \r\n\r\nThank you both!"
] | "2020-12-13T16:33:46Z" | "2020-12-18T16:20:06Z" | "2020-12-18T16:17:18Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1544.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1544",
"merged_at": "2020-12-18T16:17:18Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1544.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1544"
} | Wiki Summary: Dataset extracted from Persian Wikipedia into the form of articles and highlights.
Link: https://github.com/m3hrdadfi/wiki-summary | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1544/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1544/timeline | null | null | true |
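An illustrative sketch of the `dl_manager`-based download suggested in the review comments above, so users do not need a manual download step. The URLs, feature names, and CSV column layout are placeholders, not the actual Wiki Summary files.

```python
import csv
import datasets

# Hypothetical URLs standing in for the real train/dev/test CSV locations
_URLS = {
    "train": "https://example.com/wiki_summary/train.csv",
    "dev": "https://example.com/wiki_summary/dev.csv",
    "test": "https://example.com/wiki_summary/test.csv",
}

class WikiSummary(datasets.GeneratorBasedBuilder):
    VERSION = datasets.Version("1.0.0")

    def _info(self):
        return datasets.DatasetInfo(
            description="Wiki Summary: articles and highlights extracted from Persian Wikipedia.",
            features=datasets.Features(
                {"article": datasets.Value("string"), "highlights": datasets.Value("string")}
            ),
        )

    def _split_generators(self, dl_manager):
        # Let the download manager fetch the files instead of requiring a manual download
        paths = dl_manager.download_and_extract(_URLS)
        return [
            datasets.SplitGenerator(name=datasets.Split.TRAIN, gen_kwargs={"filepath": paths["train"]}),
            datasets.SplitGenerator(name=datasets.Split.VALIDATION, gen_kwargs={"filepath": paths["dev"]}),
            datasets.SplitGenerator(name=datasets.Split.TEST, gen_kwargs={"filepath": paths["test"]}),
        ]

    def _generate_examples(self, filepath):
        with open(filepath, encoding="utf-8") as f:
            for idx, row in enumerate(csv.reader(f, delimiter="\t")):
                # Column order is an assumption; the real files may differ
                yield idx, {"article": row[0], "highlights": row[1]}
```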
https://api.github.com/repos/huggingface/datasets/issues/3250 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3250/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3250/comments | https://api.github.com/repos/huggingface/datasets/issues/3250/events | https://github.com/huggingface/datasets/pull/3250 | 1,050,541,348 | PR_kwDODunzps4uYmkr | 3,250 | Add ETHICS dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/7088559?v=4",
"events_url": "https://api.github.com/users/ssss1029/events{/privacy}",
"followers_url": "https://api.github.com/users/ssss1029/followers",
"following_url": "https://api.github.com/users/ssss1029/following{/other_user}",
"gists_url": "https://api.github.com/users/ssss1029/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ssss1029",
"id": 7088559,
"login": "ssss1029",
"node_id": "MDQ6VXNlcjcwODg1NTk=",
"organizations_url": "https://api.github.com/users/ssss1029/orgs",
"received_events_url": "https://api.github.com/users/ssss1029/received_events",
"repos_url": "https://api.github.com/users/ssss1029/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ssss1029/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ssss1029/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ssss1029"
} | [
{
"color": "0e8a16",
"default": false,
"description": "Contribution to a dataset script",
"id": 4564477500,
"name": "dataset contribution",
"node_id": "LA_kwDODunzps8AAAABEBBmPA",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20contribution"
}
] | closed | false | null | [] | null | [
"Thanks for your contribution, @ssss1029. Are you still interested in adding this dataset?\r\n\r\nWe are removing the dataset scripts from this GitHub repo and moving them to the Hugging Face Hub: https://huggingface.co/datasets\r\n\r\nWe would suggest you create this dataset there. Please, feel free to tell us if you need some help."
] | "2021-11-11T03:45:34Z" | "2022-10-03T09:37:25Z" | "2022-10-03T09:37:25Z" | NONE | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/3250.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3250",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/3250.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3250"
} | This PR adds the ETHICS dataset, including all 5 sub-datasets.
From https://arxiv.org/abs/2008.02275 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 1,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3250/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3250/timeline | null | null | true |
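A brief sketch of the Hub-first workflow the maintainer points to above, assuming the ETHICS CSV files are already available locally; the file names and repository id are placeholders, and pushing requires being logged in (`huggingface-cli login`).

```python
from datasets import load_dataset

# Load locally prepared files (placeholder paths for one ETHICS sub-dataset)
ethics_cm = load_dataset("csv", data_files={"train": "cm_train.csv", "test": "cm_test.csv"})

# Upload to the Hugging Face Hub instead of adding a loading script to this repository
ethics_cm.push_to_hub("your-username/ethics-commonsense")  # placeholder repo id
```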
https://api.github.com/repos/huggingface/datasets/issues/3741 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3741/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3741/comments | https://api.github.com/repos/huggingface/datasets/issues/3741/events | https://github.com/huggingface/datasets/pull/3741 | 1,141,132,649 | PR_kwDODunzps4y-syt | 3,741 | Rm sphinx doc | {
"avatar_url": "https://avatars.githubusercontent.com/u/11827707?v=4",
"events_url": "https://api.github.com/users/mishig25/events{/privacy}",
"followers_url": "https://api.github.com/users/mishig25/followers",
"following_url": "https://api.github.com/users/mishig25/following{/other_user}",
"gists_url": "https://api.github.com/users/mishig25/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mishig25",
"id": 11827707,
"login": "mishig25",
"node_id": "MDQ6VXNlcjExODI3NzA3",
"organizations_url": "https://api.github.com/users/mishig25/orgs",
"received_events_url": "https://api.github.com/users/mishig25/received_events",
"repos_url": "https://api.github.com/users/mishig25/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mishig25/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mishig25/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mishig25"
} | [] | closed | false | null | [] | null | [] | "2022-02-17T10:11:37Z" | "2022-02-17T10:15:17Z" | "2022-02-17T10:15:12Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/3741.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3741",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/3741.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3741"
} | Checklist
- [x] Update circle ci yaml
- [x] Delete sphinx static & python files in docs dir
- [x] Update readme in docs dir
- [ ] Update docs config in setup.py | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3741/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3741/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5688 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5688/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5688/comments | https://api.github.com/repos/huggingface/datasets/issues/5688/events | https://github.com/huggingface/datasets/issues/5688 | 1,648,463,504 | I_kwDODunzps5iQY6Q | 5,688 | Wikipedia download_and_prepare for GCS | {
"avatar_url": "https://avatars.githubusercontent.com/u/25522531?v=4",
"events_url": "https://api.github.com/users/adrianfagerland/events{/privacy}",
"followers_url": "https://api.github.com/users/adrianfagerland/followers",
"following_url": "https://api.github.com/users/adrianfagerland/following{/other_user}",
"gists_url": "https://api.github.com/users/adrianfagerland/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/adrianfagerland",
"id": 25522531,
"login": "adrianfagerland",
"node_id": "MDQ6VXNlcjI1NTIyNTMx",
"organizations_url": "https://api.github.com/users/adrianfagerland/orgs",
"received_events_url": "https://api.github.com/users/adrianfagerland/received_events",
"repos_url": "https://api.github.com/users/adrianfagerland/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/adrianfagerland/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/adrianfagerland/subscriptions",
"type": "User",
"url": "https://api.github.com/users/adrianfagerland"
} | [] | open | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
}
] | null | [
"Hi @adrianfagerland, thanks for reporting.\r\n\r\nPlease note that \"wikipedia\" is a special dataset, with an Apache Beam builder: https://beam.apache.org/\r\nYou can find more info about Beam datasets in our docs: https://huggingface.co/docs/datasets/beam\r\n\r\nIt was implemented to be run in parallel processing, using one of the distributed back-ends supported by Apache Beam: https://beam.apache.org/get-started/beam-overview/#apache-beam-pipeline-runners\r\n\r\nThat is, you are trying to process the source wikipedia data on your machine (not distributed) when passing `beam_runner=\"DirectRunner\"`.\r\n\r\nAs documented in the wikipedia dataset page (https://huggingface.co/datasets/wikipedia):\r\n\r\n Some subsets of Wikipedia have already been processed by HuggingFace, and you can load them just with:\r\n \r\n from datasets import load_dataset\r\n \r\n load_dataset(\"wikipedia\", \"20220301.en\")\r\n\r\n The list of pre-processed subsets is:\r\n - \"20220301.de\"\r\n - \"20220301.en\"\r\n - \"20220301.fr\"\r\n - \"20220301.frr\"\r\n - \"20220301.it\"\r\n - \"20220301.simple\"\r\n\r\nTo download the available processed data (in Arrow format):\r\n```python\r\nbuilder = datasets.load_dataset_builder(\"wikipedia\", \"20220301.en\")\r\nbuilder.download_and_prepare(your_path)\r\n```",
"When running this using :\r\n```\r\nimport datasets\r\nfrom apache_beam.options.pipeline_options import PipelineOptions\r\nfrom gcsfs import GCSFileSystem\r\n\r\nstorage_options = {\"project\":\"tdt4310\", \"token\":\"cloud\"}\r\nfs = GCSFileSystem(**storage_options)\r\n\r\noutput_dir = \"gcs://quiz_transformer/\"\r\nbeam_options = PipelineOptions(\r\n region=\"europe-west4\",\r\n project=\"tdt4310\",\r\n temp_location=output_dir+\"tmp/\")\r\n\r\n\r\nbuilder = datasets.load_dataset_builder(\"wikipedia\", \"20220301.en\", beam_runner=\"dataflow\", beam_options=beam_options)\r\nbuilder.download_and_prepare(\r\n output_dir, storage_options=storage_options, file_format=\"parquet\")\r\n```\r\nI now get this error:\r\n```\r\nraise FileNotFoundError(f\"Couldn't find file at {url}\")\r\nFileNotFoundError: Couldn't find file at https://dumps.wikimedia.org/enwiki/20220301/dumpstatus.json\r\nDownloading data files: 0%| | 0/1 [00:00<?, ?it/s]\r\n```\r\n\r\nI get the same error for this:\r\n```\r\nimport datasets\r\nfrom gcsfs import GCSFileSystem\r\n\r\nstorage_options = {\"project\":\"tdt4310\", \"token\":\"cloud\"}\r\nfs = GCSFileSystem(**storage_options)\r\n\r\noutput_dir = \"gcs://quiz_transformer/\"\r\nbuilder = datasets.load_dataset_builder(\"wikipedia\", \"20220301.en\")\r\nbuilder.download_and_prepare(\r\n output_dir, storage_options=storage_options, file_format=\"parquet\")\r\n```\r\n\r\n\r\n\r\n"
] | "2023-03-30T23:43:22Z" | "2023-03-31T13:31:32Z" | null | NONE | null | null | null | ### Describe the bug
I am unable to download the wikipedia dataset onto GCS.
When I run the script provided the memory firstly gets eaten up, then it crashes.
I tried running this on a VM with 128GB RAM and all I got was a two empty files: _data_builder.lock_, _data.incomplete/beam-temp-wikipedia-train-1ab2039acf3611ed87a9893475de0093_
I have troubleshot this for two straight days now, but I am just unable to get the dataset into storage.
### Steps to reproduce the bug
Run this and insert a path:
```
import datasets
builder = datasets.load_dataset_builder(
"wikipedia", language="en", date="20230320", beam_runner="DirectRunner")
builder.download_and_prepare({path}, file_format="parquet")
```
This is where the problem of it eating RAM occurs.
I have also tried several versions of this, based on the docs:
```
import gcsfs
import datasets
storage_options = {"project": "tdt4310", "token": "cloud"}
fs = gcsfs.GCSFileSystem(**storage_options)
output_dir = "gcs://wikipediadata/"
builder = datasets.load_dataset_builder(
"wikipedia", date="20230320", language="en", beam_runner="DirectRunner")
builder.download_and_prepare(
output_dir, storage_options=storage_options, file_format="parquet")
```
The error message that is received here is:
> ValueError: Unable to get filesystem from specified path, please use the correct path or ensure the required dependency is installed, e.g., pip install apache-beam[gcp]. Path specified: gcs://wikipediadata/wikipedia-train [while running 'train/Save to parquet/Write/WriteImpl/InitializeWrite']
I have ran `pip install apache-beam[gcp]`
### Expected behavior
The wikipedia data loaded into GCS
Everything worked when testing with a smaller demo dataset found somewhere in the docs
### Environment info
Newest published version of datasets. Python 3.9. Also tested with Python 3.7. 128GB RAM Google Cloud VM instance. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5688/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5688/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/1332 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1332/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1332/comments | https://api.github.com/repos/huggingface/datasets/issues/1332/events | https://github.com/huggingface/datasets/pull/1332 | 759,679,135 | MDExOlB1bGxSZXF1ZXN0NTM0NjQxOTE5 | 1,332 | Add Open Subtitles Dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/1183441?v=4",
"events_url": "https://api.github.com/users/abhishekkrthakur/events{/privacy}",
"followers_url": "https://api.github.com/users/abhishekkrthakur/followers",
"following_url": "https://api.github.com/users/abhishekkrthakur/following{/other_user}",
"gists_url": "https://api.github.com/users/abhishekkrthakur/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/abhishekkrthakur",
"id": 1183441,
"login": "abhishekkrthakur",
"node_id": "MDQ6VXNlcjExODM0NDE=",
"organizations_url": "https://api.github.com/users/abhishekkrthakur/orgs",
"received_events_url": "https://api.github.com/users/abhishekkrthakur/received_events",
"repos_url": "https://api.github.com/users/abhishekkrthakur/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/abhishekkrthakur/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/abhishekkrthakur/subscriptions",
"type": "User",
"url": "https://api.github.com/users/abhishekkrthakur"
} | [] | closed | false | null | [] | null | [] | "2020-12-08T18:31:45Z" | "2020-12-10T11:17:38Z" | "2020-12-10T11:13:18Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1332.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1332",
"merged_at": "2020-12-10T11:13:18Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1332.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1332"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1332/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1332/timeline | null | null | true |
|
https://api.github.com/repos/huggingface/datasets/issues/3953 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3953/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3953/comments | https://api.github.com/repos/huggingface/datasets/issues/3953/events | https://github.com/huggingface/datasets/issues/3953 | 1,172,123,736 | I_kwDODunzps5F3TBY | 3,953 | Add ImageNet Sketch | {
"avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4",
"events_url": "https://api.github.com/users/NielsRogge/events{/privacy}",
"followers_url": "https://api.github.com/users/NielsRogge/followers",
"following_url": "https://api.github.com/users/NielsRogge/following{/other_user}",
"gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/NielsRogge",
"id": 48327001,
"login": "NielsRogge",
"node_id": "MDQ6VXNlcjQ4MzI3MDAx",
"organizations_url": "https://api.github.com/users/NielsRogge/orgs",
"received_events_url": "https://api.github.com/users/NielsRogge/received_events",
"repos_url": "https://api.github.com/users/NielsRogge/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions",
"type": "User",
"url": "https://api.github.com/users/NielsRogge"
} | [
{
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset",
"id": 2067376369,
"name": "dataset request",
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request"
},
{
"color": "bfdadc",
"default": false,
"description": "Vision datasets",
"id": 3608941089,
"name": "vision",
"node_id": "LA_kwDODunzps7XHBIh",
"url": "https://api.github.com/repos/huggingface/datasets/labels/vision"
}
] | closed | false | null | [] | null | [
"Can you assign this task to me? @nreimers @mariosasko ",
"Hi! Sure! Let us know if you need any pointers."
] | "2022-03-17T09:20:31Z" | "2022-05-23T18:05:29Z" | "2022-05-23T18:05:29Z" | CONTRIBUTOR | null | null | null | ## Adding a Dataset
- **Name:** ImageNet Sketch
- **Description:** ImageNet-Sketch is a dataset consisting of sketch-like images, that matches the ImageNet classification validation set in categories and scale.
- **Paper:** [Learning Robust Global Representations by Penalizing Local Predictive Power](https://arxiv.org/abs/1905.13549)
- **Data:** https://github.com/HaohanWang/ImageNet-Sketch
- **Motivation:** Allows for evaluating the robustness of vision models.
Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
| {
"+1": 2,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 2,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3953/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3953/timeline | null | completed | false |
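A short usage sketch, assuming the request was fulfilled by publishing the data on the Hub under the `imagenet_sketch` identifier (the name and split are assumptions here).

```python
from datasets import load_dataset

# Identifier assumed; adjust it if the dataset was published under a different name
sketch = load_dataset("imagenet_sketch", split="train")
print(sketch.features)  # an image column plus a label column with the ImageNet categories
```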
https://api.github.com/repos/huggingface/datasets/issues/540 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/540/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/540/comments | https://api.github.com/repos/huggingface/datasets/issues/540/events | https://github.com/huggingface/datasets/pull/540 | 688,475,884 | MDExOlB1bGxSZXF1ZXN0NDc1NzMzNzMz | 540 | [BUGFIX] Fix Race Dataset Checksum bug | {
"avatar_url": "https://avatars.githubusercontent.com/u/6608232?v=4",
"events_url": "https://api.github.com/users/abarbosa94/events{/privacy}",
"followers_url": "https://api.github.com/users/abarbosa94/followers",
"following_url": "https://api.github.com/users/abarbosa94/following{/other_user}",
"gists_url": "https://api.github.com/users/abarbosa94/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/abarbosa94",
"id": 6608232,
"login": "abarbosa94",
"node_id": "MDQ6VXNlcjY2MDgyMzI=",
"organizations_url": "https://api.github.com/users/abarbosa94/orgs",
"received_events_url": "https://api.github.com/users/abarbosa94/received_events",
"repos_url": "https://api.github.com/users/abarbosa94/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/abarbosa94/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/abarbosa94/subscriptions",
"type": "User",
"url": "https://api.github.com/users/abarbosa94"
} | [] | closed | false | null | [] | null | [
"I'm not sure this would fix #537 .\r\nHowever your point about the missing `middle` data is right and we probably want to include these data as well.\r\nDo you think it would we worth having different configurations for this dataset for users who want to only load part of it (`high school` or `middle` or `all`) ?",
"This has fixed #537 at least on my machine hahaha.\r\n\r\nNice point! I think it would totally worth it :) What the best implementation approach would you suggest?\r\n\r\nWould it be possible to have `high school`, `middle` and `all` inside each portion of `train`, `validation` and `test`? Would this make sense?",
"I think we could have one dataset configuration for `high school`, one for `middle` and one for `all`.\r\nYou just need to add\r\n```python\r\n BUILDER_CONFIGS = [\r\n nlp.BuilderConfig(\r\n name=\"high school\",\r\n description=\"insert description here\",\r\n ),\r\n nlp.BuilderConfig(\r\n name=\"middle\",\r\n description=\"insert description here\",\r\n ),\r\n nlp.BuilderConfig(\r\n name=\"all\",\r\n description=\"insert description here\",\r\n ),\r\n ]\r\n```\r\nas a class attribute for the `Race` class.\r\n\r\nThen in `generate_examples` you can check the value of `self.config.name` and choose which files to include when generating examples.\r\n\r\nYou can check [mlsum](https://github.com/huggingface/nlp/blob/master/datasets/mlsum/mlsum.py) for example if you want to see how it done in general, it's a dataset that has five configurations, and each config has train/val/test splits.",
"Hi @lhoestq sorry for the delay in addressing your comments. Thanks for your assistance :)\r\n\r\nYou were correct as well, as I was using the script without the `datasets/race/dataset_infos.json` file, it did not verify the checksum. I already fix it as well :)\r\n\r\nI managed to get everything running smoothly by now. Please let me know if you think that I could improve my solution"
] | "2020-08-29T07:00:10Z" | "2020-09-18T11:42:20Z" | "2020-09-18T11:42:20Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/540.diff",
"html_url": "https://github.com/huggingface/datasets/pull/540",
"merged_at": "2020-09-18T11:42:20Z",
"patch_url": "https://github.com/huggingface/datasets/pull/540.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/540"
} | In #537 I noticed that there was a bug in checksum checking when I have tried to download the race dataset. The reason for this is that the current preprocessing was just considering the `high school` data and it was ignoring the `middle` one. This PR just fixes it :)
Moreover, I have added some descriptions. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/540/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/540/timeline | null | null | true |
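A short end-user sketch of the configuration scheme discussed above, assuming the configurations were ultimately published as `high`, `middle`, and `all` (slightly shorter names than the ones floated in the comments).

```python
from datasets import load_dataset

# Load only the middle-school portion; "high" and "all" are the other assumed config names
race_middle = load_dataset("race", "middle", split="train")
print(race_middle[0]["question"])
```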
https://api.github.com/repos/huggingface/datasets/issues/4073 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4073/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4073/comments | https://api.github.com/repos/huggingface/datasets/issues/4073/events | https://github.com/huggingface/datasets/pull/4073 | 1,188,364,711 | PR_kwDODunzps41adPA | 4,073 | Create a metric card for Competition MATH | {
"avatar_url": "https://avatars.githubusercontent.com/u/14205986?v=4",
"events_url": "https://api.github.com/users/sashavor/events{/privacy}",
"followers_url": "https://api.github.com/users/sashavor/followers",
"following_url": "https://api.github.com/users/sashavor/following{/other_user}",
"gists_url": "https://api.github.com/users/sashavor/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/sashavor",
"id": 14205986,
"login": "sashavor",
"node_id": "MDQ6VXNlcjE0MjA1OTg2",
"organizations_url": "https://api.github.com/users/sashavor/orgs",
"received_events_url": "https://api.github.com/users/sashavor/received_events",
"repos_url": "https://api.github.com/users/sashavor/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/sashavor/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sashavor/subscriptions",
"type": "User",
"url": "https://api.github.com/users/sashavor"
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | "2022-03-31T16:48:59Z" | "2022-04-01T19:02:39Z" | "2022-04-01T18:57:13Z" | NONE | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/4073.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4073",
"merged_at": "2022-04-01T18:57:12Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4073.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4073"
} | Proposing metric card for Competition MATH | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4073/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4073/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3672 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3672/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3672/comments | https://api.github.com/repos/huggingface/datasets/issues/3672/events | https://github.com/huggingface/datasets/pull/3672 | 1,122,980,556 | PR_kwDODunzps4yBUrZ | 3,672 | Prioritize `module.builder_kwargs` over defaults in `TestCommand` | {
"avatar_url": "https://avatars.githubusercontent.com/u/8264887?v=4",
"events_url": "https://api.github.com/users/lvwerra/events{/privacy}",
"followers_url": "https://api.github.com/users/lvwerra/followers",
"following_url": "https://api.github.com/users/lvwerra/following{/other_user}",
"gists_url": "https://api.github.com/users/lvwerra/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lvwerra",
"id": 8264887,
"login": "lvwerra",
"node_id": "MDQ6VXNlcjgyNjQ4ODc=",
"organizations_url": "https://api.github.com/users/lvwerra/orgs",
"received_events_url": "https://api.github.com/users/lvwerra/received_events",
"repos_url": "https://api.github.com/users/lvwerra/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lvwerra/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lvwerra/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lvwerra"
} | [] | closed | false | null | [] | null | [] | "2022-02-03T11:38:42Z" | "2022-02-04T12:37:20Z" | "2022-02-04T12:37:19Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/3672.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3672",
"merged_at": "2022-02-04T12:37:19Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3672.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3672"
} | This fixes a bug in the `TestCommand` where multiple kwargs for `name` were passed if it was set in both default and `module.builder_kwargs`. Example error:
```Python
Traceback (most recent call last):
File "create_metadata.py", line 96, in <module>
main(**vars(args))
File "create_metadata.py", line 86, in main
metadata_command.run()
File "/opt/conda/lib/python3.7/site-packages/datasets/commands/test.py", line 144, in run
for j, builder in enumerate(get_builders()):
File "/opt/conda/lib/python3.7/site-packages/datasets/commands/test.py", line 141, in get_builders
name=name, cache_dir=self._cache_dir, data_dir=self._data_dir, **module.builder_kwargs
TypeError: type object got multiple values for keyword argument 'name'
```
Let me know what you think. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3672/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3672/timeline | null | null | true |
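A minimal, self-contained sketch of the merge pattern that avoids the duplicate-keyword error shown above: defaults go into a dict first and `module.builder_kwargs` overrides them, so `name` ends up being passed exactly once. The names here are illustrative, not the exact patch.

```python
def make_builder(builder_cls, module_builder_kwargs, name, cache_dir=None, data_dir=None):
    # Collect defaults, then let module-level kwargs take precedence; each key is passed once
    kwargs = {"name": name, "cache_dir": cache_dir, "data_dir": data_dir}
    kwargs.update(module_builder_kwargs)
    return builder_cls(**kwargs)


class DummyBuilder:
    """Stand-in builder used only to demonstrate that no keyword is duplicated."""
    def __init__(self, name, cache_dir=None, data_dir=None):
        self.name = name


print(make_builder(DummyBuilder, {"name": "from_module"}, name="default").name)  # -> from_module
```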
https://api.github.com/repos/huggingface/datasets/issues/5249 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5249/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5249/comments | https://api.github.com/repos/huggingface/datasets/issues/5249/events | https://github.com/huggingface/datasets/issues/5249 | 1,451,692,247 | I_kwDODunzps5WhxDX | 5,249 | Protect the main branch from inadvertent direct pushes | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [
{
"color": "d4c5f9",
"default": false,
"description": "Maintenance tasks",
"id": 4296013012,
"name": "maintenance",
"node_id": "LA_kwDODunzps8AAAABAA_01A",
"url": "https://api.github.com/repos/huggingface/datasets/labels/maintenance"
}
] | open | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
}
] | null | [
"It seems all the tasks have been addressed, meaning this issue can be closed, no?"
] | "2022-11-16T14:19:03Z" | "2023-07-21T14:34:44Z" | null | MEMBER | null | null | null | We have decided to implement a protection mechanism in this repository, so that nobody (not even administrators) can inadvertently push accidentally directly to the main branch.
See context here:
- d7c942228b8dcf4de64b00a3053dce59b335f618
To do:
- [x] Protect main branch
- Settings > Branches > Branch protection rules > main > Edit
- [x] Check: Do not allow bypassing the above settings
- The above settings will apply to administrators and custom roles with the "bypass branch protections" permission.
- [x] Additionally, uncheck: Require approvals [under "Require a pull request before merging", which was already checked]
- Before, we could exceptionally merge a non-approved PR, using Administrator bypass
- Now that Administrator bypass is no longer possible, we would always need an approval to be able to merge; and pull request authors cannot approve their own pull requests. This could be an inconvenient in some exceptional circumstances when an urgent fix is needed
- Nevertheless, although it is no longer enforced, it is strongly recommended to merge PRs only if they have at least one approval
- [ ] #5250
- So that direct pushes to main branch are no longer necessary | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5249/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5249/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/1094 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1094/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1094/comments | https://api.github.com/repos/huggingface/datasets/issues/1094/events | https://github.com/huggingface/datasets/pull/1094 | 756,927,060 | MDExOlB1bGxSZXF1ZXN0NTMyMzg5MDQ4 | 1,094 | add urdu fake news dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/44389205?v=4",
"events_url": "https://api.github.com/users/chaitnayabasava/events{/privacy}",
"followers_url": "https://api.github.com/users/chaitnayabasava/followers",
"following_url": "https://api.github.com/users/chaitnayabasava/following{/other_user}",
"gists_url": "https://api.github.com/users/chaitnayabasava/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/chaitnayabasava",
"id": 44389205,
"login": "chaitnayabasava",
"node_id": "MDQ6VXNlcjQ0Mzg5MjA1",
"organizations_url": "https://api.github.com/users/chaitnayabasava/orgs",
"received_events_url": "https://api.github.com/users/chaitnayabasava/received_events",
"repos_url": "https://api.github.com/users/chaitnayabasava/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/chaitnayabasava/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/chaitnayabasava/subscriptions",
"type": "User",
"url": "https://api.github.com/users/chaitnayabasava"
} | [] | closed | false | null | [] | null | [] | "2020-12-04T08:57:38Z" | "2020-12-04T09:20:56Z" | "2020-12-04T09:20:56Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1094.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1094",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/1094.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1094"
} | Added Urdu fake news dataset. The dataset can be found <a href="https://github.com/MaazAmjad/Datasets-for-Urdu-news">here</a>. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1094/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1094/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3153 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3153/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3153/comments | https://api.github.com/repos/huggingface/datasets/issues/3153/events | https://github.com/huggingface/datasets/pull/3153 | 1,034,179,198 | PR_kwDODunzps4tlEVE | 3,153 | Add TER (as implemented in sacrebleu) | {
"avatar_url": "https://avatars.githubusercontent.com/u/2779410?v=4",
"events_url": "https://api.github.com/users/BramVanroy/events{/privacy}",
"followers_url": "https://api.github.com/users/BramVanroy/followers",
"following_url": "https://api.github.com/users/BramVanroy/following{/other_user}",
"gists_url": "https://api.github.com/users/BramVanroy/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/BramVanroy",
"id": 2779410,
"login": "BramVanroy",
"node_id": "MDQ6VXNlcjI3Nzk0MTA=",
"organizations_url": "https://api.github.com/users/BramVanroy/orgs",
"received_events_url": "https://api.github.com/users/BramVanroy/received_events",
"repos_url": "https://api.github.com/users/BramVanroy/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/BramVanroy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BramVanroy/subscriptions",
"type": "User",
"url": "https://api.github.com/users/BramVanroy"
} | [] | closed | false | null | [] | null | [
"The problem appears to stem from the omission of the lines that you mentioned. If you add them back and try examples from [this](https://huggingface.co/docs/datasets/using_metrics.html) tutorial (sacrebleu metric example) the code you implemented works fine.\r\n\r\nI think the purpose of these lines is follows:\r\n\r\n1. Sacrebleu metrics confusingly expect a nested list of strings when you have just one reference for each hypothesis (i.e. `[[\"example1\", \"example2\", \"example3]]`), while for cases with more than one reference a _nested list of lists of strings_ (i.e. `[[\"ref1a\", \"ref1b\"], [\"ref2a\", \"ref2b\"], [\"ref3a\", \"ref3b\"]]`) is expected instead. So `transformed_references` line outputs the required single reference format for sacrebleu's ter implementation which you can't pass directly to `compute`.\r\n2. I'm assuming that an additional check is also related to that confusing format with one/many references, because it's really difficult to tell what exactly you're doing wrong if you're not aware of that issue."
] | "2021-10-23T14:26:45Z" | "2021-11-02T11:04:11Z" | "2021-11-02T11:04:11Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/3153.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3153",
"merged_at": "2021-11-02T11:04:11Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3153.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3153"
} | Implements TER (Translation Edit Rate) as per its implementation in sacrebleu. Sacrebleu for BLEU scores is already implemented in `datasets` so I thought this would be a nice addition.
I started from the sacrebleu implementation, as the two metrics have a lot in common.
Verified with sacrebleu's [testing suite](https://github.com/mjpost/sacrebleu/blob/078c440168c6adc89ba75fe6d63f0d922d42bcfe/test/test_ter.py) that this indeed works as intended.
```python
import datasets
test_cases = [
(['aaaa bbbb cccc dddd'], ['aaaa bbbb cccc dddd'], 0), # perfect match
(['dddd eeee ffff'], ['aaaa bbbb cccc'], 1), # no overlap
([''], ['a'], 1), # corner case, empty hypothesis
(['d e f g h a b c'], ['a b c d e f g h'], 1 / 8), # a single shift fixes MT
(
[
'wählen Sie " Bild neu berechnen , " um beim Ändern der Bildgröße Pixel hinzuzufügen oder zu entfernen , damit das Bild ungefähr dieselbe Größe aufweist wie die andere Größe .',
'wenn Sie alle Aufgaben im aktuellen Dokument aktualisieren möchten , wählen Sie im Menü des Aufgabenbedienfelds die Option " Alle Aufgaben aktualisieren . "',
'klicken Sie auf der Registerkarte " Optionen " auf die Schaltfläche " Benutzerdefiniert " und geben Sie Werte für " Fehlerkorrektur-Level " und " Y / X-Verhältnis " ein .',
'Sie können beispielsweise ein Dokument erstellen , das ein Auto über die Bühne enthält .',
'wählen Sie im Dialogfeld " Neu aus Vorlage " eine Vorlage aus und klicken Sie auf " Neu . "',
],
[
'wählen Sie " Bild neu berechnen , " um beim Ändern der Bildgröße Pixel hinzuzufügen oder zu entfernen , damit die Darstellung des Bildes in einer anderen Größe beibehalten wird .',
'wenn Sie alle Aufgaben im aktuellen Dokument aktualisieren möchten , wählen Sie im Menü des Aufgabenbedienfelds die Option " Alle Aufgaben aktualisieren . "',
'klicken Sie auf der Registerkarte " Optionen " auf die Schaltfläche " Benutzerdefiniert " und geben Sie für " Fehlerkorrektur-Level " und " Y / X-Verhältnis " niedrigere Werte ein .',
'Sie können beispielsweise ein Dokument erstellen , das ein Auto enthalt , das sich über die Bühne bewegt .',
'wählen Sie im Dialogfeld " Neu aus Vorlage " eine Vorlage aus und klicken Sie auf " Neu . "',
],
0.136 # realistic example from WMT dev data (2019)
),
]
ter = datasets.load_metric(r"path\to\datasets\metrics\ter")
predictions = ["hello there general kenobi", "foo bar foobar"]
references = [["hello there general kenobi", "hello there !"], ["foo bar foobar", "foo bar foobar"]]
print(ter.compute(predictions=predictions, references=references))
for hyp, ref, score in test_cases:
# Note the reference transformation which is different from scarebleu's input format
results = ter.compute(predictions=hyp, references=[[r] for r in ref])
assert 100*score == results["score"], f"expected {100*score}, got {results['score']}"
```
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3153/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3153/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2427 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2427/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2427/comments | https://api.github.com/repos/huggingface/datasets/issues/2427/events | https://github.com/huggingface/datasets/pull/2427 | 907,162,923 | MDExOlB1bGxSZXF1ZXN0NjU4MDUwMjAx | 2,427 | Add copyright info to MLSUM dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/229382?v=4",
"events_url": "https://api.github.com/users/PhilipMay/events{/privacy}",
"followers_url": "https://api.github.com/users/PhilipMay/followers",
"following_url": "https://api.github.com/users/PhilipMay/following{/other_user}",
"gists_url": "https://api.github.com/users/PhilipMay/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/PhilipMay",
"id": 229382,
"login": "PhilipMay",
"node_id": "MDQ6VXNlcjIyOTM4Mg==",
"organizations_url": "https://api.github.com/users/PhilipMay/orgs",
"received_events_url": "https://api.github.com/users/PhilipMay/received_events",
"repos_url": "https://api.github.com/users/PhilipMay/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/PhilipMay/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/PhilipMay/subscriptions",
"type": "User",
"url": "https://api.github.com/users/PhilipMay"
} | [] | closed | false | null | [] | null | [
"Build fails but this change should not be the reason...",
"rebased on master"
] | "2021-05-31T07:15:57Z" | "2021-06-04T09:53:50Z" | "2021-06-04T09:53:50Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/2427.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2427",
"merged_at": "2021-06-04T09:53:49Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2427.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2427"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2427/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2427/timeline | null | null | true |
|
https://api.github.com/repos/huggingface/datasets/issues/729 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/729/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/729/comments | https://api.github.com/repos/huggingface/datasets/issues/729/events | https://github.com/huggingface/datasets/issues/729 | 719,558,876 | MDU6SXNzdWU3MTk1NTg4NzY= | 729 | Better error message when one forgets to call `add_batch` before `compute` | {
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/sgugger",
"id": 35901082,
"login": "sgugger",
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"repos_url": "https://api.github.com/users/sgugger/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"type": "User",
"url": "https://api.github.com/users/sgugger"
} | [] | closed | false | null | [] | null | [] | "2020-10-12T17:59:22Z" | "2020-10-29T15:18:24Z" | "2020-10-29T15:18:24Z" | CONTRIBUTOR | null | null | null | When using metrics, if for some reason a user forgets to call `add_batch` on a metric before calling `compute` (with no arguments), the error message is a bit cryptic and could probably be made clearer.
## Reproducer
```python
import datasets
import torch
from datasets import Metric
class GatherMetric(Metric):
def _info(self):
return datasets.MetricInfo(
description="description",
citation="citation",
inputs_description="kwargs",
features=datasets.Features({
'predictions': datasets.Value('int64'),
'references': datasets.Value('int64'),
}),
codebase_urls=[],
reference_urls=[],
format='numpy'
)
def _compute(self, predictions, references):
return {"predictions": predictions, "labels": references}
metric = GatherMetric(cache_dir="test-metric")
inputs = torch.randint(0, 2, (1024,))
targets = torch.randint(0, 2, (1024,))
batch_size = 8
for i in range(0, 1024, batch_size):
pass # User forgets to call `add_batch`
result = metric.compute()
```
## Stack trace:
```
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-13-267729d187fa> in <module>
3 pass
4 # metric.add_batch(predictions=inputs[i:i+batch_size], references=targets[i:i+batch_size])
----> 5 result = metric.compute()
~/git/datasets/src/datasets/metric.py in compute(self, *args, **kwargs)
380 if predictions is not None:
381 self.add_batch(predictions=predictions, references=references)
--> 382 self._finalize()
383
384 self.cache_file_name = None
~/git/datasets/src/datasets/metric.py in _finalize(self)
343 elif self.process_id == 0:
344 # Let's acquire a lock on each node files to be sure they are finished writing
--> 345 file_paths, filelocks = self._get_all_cache_files()
346
347 # Read the predictions and references
~/git/datasets/src/datasets/metric.py in _get_all_cache_files(self)
280 filelocks = []
281 for process_id, file_path in enumerate(file_paths):
--> 282 filelock = FileLock(file_path + ".lock")
283 try:
284 filelock.acquire(timeout=self.timeout)
TypeError: unsupported operand type(s) for +: 'NoneType' and 'str'
```
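For reference, a minimal sketch of the intended flow — essentially uncommenting the `add_batch` call shown in the traceback above — which avoids the error:

```python
# Accumulate predictions/references batch by batch, then compute once at the end.
for i in range(0, 1024, batch_size):
    metric.add_batch(
        predictions=inputs[i : i + batch_size],
        references=targets[i : i + batch_size],
    )
result = metric.compute()
```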
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/729/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/729/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/4365 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4365/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4365/comments | https://api.github.com/repos/huggingface/datasets/issues/4365/events | https://github.com/huggingface/datasets/pull/4365 | 1,239,109,943 | PR_kwDODunzps43-4fC | 4,365 | Remove dots in config names | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Closing in favor of https://github.com/huggingface/datasets/pull/4367"
] | "2022-05-17T20:12:57Z" | "2023-09-24T10:02:53Z" | "2022-05-18T13:59:41Z" | MEMBER | null | 1 | {
"diff_url": "https://github.com/huggingface/datasets/pull/4365.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4365",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/4365.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4365"
} | 20+ datasets have dots in their config names. However, this causes issues with the YAML tags of the dataset cards, since we can't have dots in YAML keys.
This is related to https://github.com/huggingface/datasets/pull/2362 (internal https://github.com/huggingface/moon-landing/issues/946).
Also removing the dots in the config names would allow us to merge https://github.com/huggingface/datasets/pull/4302 which removes a hack that replaces all the dots by underscores in the YAML tags.
I also added a test in the CI that checks all the YAML tags to make sure that:
- they can be parsed using a YAML parser
- they contain only valid YAML tags like `languages` or `task_ids`
- they contain valid config names (no invalid characters `<>:/\|?*.`) | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4365/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4365/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1066 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1066/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1066/comments | https://api.github.com/repos/huggingface/datasets/issues/1066/events | https://github.com/huggingface/datasets/pull/1066 | 756,391,957 | MDExOlB1bGxSZXF1ZXN0NTMxOTQ0MDc0 | 1,066 | Add ChrEn | {
"avatar_url": "https://avatars.githubusercontent.com/u/10469459?v=4",
"events_url": "https://api.github.com/users/yjernite/events{/privacy}",
"followers_url": "https://api.github.com/users/yjernite/followers",
"following_url": "https://api.github.com/users/yjernite/following{/other_user}",
"gists_url": "https://api.github.com/users/yjernite/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/yjernite",
"id": 10469459,
"login": "yjernite",
"node_id": "MDQ6VXNlcjEwNDY5NDU5",
"organizations_url": "https://api.github.com/users/yjernite/orgs",
"received_events_url": "https://api.github.com/users/yjernite/received_events",
"repos_url": "https://api.github.com/users/yjernite/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/yjernite/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yjernite/subscriptions",
"type": "User",
"url": "https://api.github.com/users/yjernite"
} | [] | closed | false | null | [] | null | [
"I just saw your PR actually ^^",
"> I just saw your PR actually ^^\r\n\r\nSomehow that still doesn't work, lmk if you have any ideas.",
"Did you rebase from master ?"
] | "2020-12-03T17:17:48Z" | "2020-12-03T21:49:39Z" | "2020-12-03T21:49:39Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1066.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1066",
"merged_at": "2020-12-03T21:49:39Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1066.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1066"
} | Adding the Cherokee English machine translation dataset of https://github.com/ZhangShiyue/ChrEn | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1066/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1066/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1382 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1382/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1382/comments | https://api.github.com/repos/huggingface/datasets/issues/1382/events | https://github.com/huggingface/datasets/pull/1382 | 760,325,077 | MDExOlB1bGxSZXF1ZXN0NTM1MTc1NzMx | 1,382 | adding UNPC | {
"avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4",
"events_url": "https://api.github.com/users/patil-suraj/events{/privacy}",
"followers_url": "https://api.github.com/users/patil-suraj/followers",
"following_url": "https://api.github.com/users/patil-suraj/following{/other_user}",
"gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/patil-suraj",
"id": 27137566,
"login": "patil-suraj",
"node_id": "MDQ6VXNlcjI3MTM3NTY2",
"organizations_url": "https://api.github.com/users/patil-suraj/orgs",
"received_events_url": "https://api.github.com/users/patil-suraj/received_events",
"repos_url": "https://api.github.com/users/patil-suraj/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions",
"type": "User",
"url": "https://api.github.com/users/patil-suraj"
} | [] | closed | false | null | [] | null | [
"merging since the CI just had a connection error"
] | "2020-12-09T13:21:41Z" | "2020-12-09T17:53:06Z" | "2020-12-09T17:53:06Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1382.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1382",
"merged_at": "2020-12-09T17:53:06Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1382.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1382"
} | Adding United Nations Parallel Corpus
http://opus.nlpl.eu/UNPC.php | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1382/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1382/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/559 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/559/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/559/comments | https://api.github.com/repos/huggingface/datasets/issues/559/events | https://github.com/huggingface/datasets/pull/559 | 690,411,263 | MDExOlB1bGxSZXF1ZXN0NDc3MzAzOTM2 | 559 | Adding the KILT knowledge source and tasks | {
"avatar_url": "https://avatars.githubusercontent.com/u/10469459?v=4",
"events_url": "https://api.github.com/users/yjernite/events{/privacy}",
"followers_url": "https://api.github.com/users/yjernite/followers",
"following_url": "https://api.github.com/users/yjernite/following{/other_user}",
"gists_url": "https://api.github.com/users/yjernite/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/yjernite",
"id": 10469459,
"login": "yjernite",
"node_id": "MDQ6VXNlcjEwNDY5NDU5",
"organizations_url": "https://api.github.com/users/yjernite/orgs",
"received_events_url": "https://api.github.com/users/yjernite/received_events",
"repos_url": "https://api.github.com/users/yjernite/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/yjernite/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yjernite/subscriptions",
"type": "User",
"url": "https://api.github.com/users/yjernite"
} | [] | closed | false | null | [] | null | [
"Feel free to merge when you are happy with it @yjernite :-)"
] | "2020-09-01T20:05:13Z" | "2020-09-04T18:05:47Z" | "2020-09-04T18:05:47Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/559.diff",
"html_url": "https://github.com/huggingface/datasets/pull/559",
"merged_at": "2020-09-04T18:05:47Z",
"patch_url": "https://github.com/huggingface/datasets/pull/559.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/559"
} | This adds Wikipedia pre-processed for KILT, as well as the task data. Only the question IDs are provided for TriviaQA, but they can easily be mapped back with:
```
import nlp
kilt_wikipedia = nlp.load_dataset('kilt_wikipedia')
kilt_tasks = nlp.load_dataset('kilt_tasks')
triviaqa = nlp.load_dataset('trivia_qa', 'unfiltered.nocontext')
triviaqa_map = {}
for k in ['train', 'validation', 'test']:
    triviaqa_map = dict([(q_id, i) for i, q_id in enumerate(triviaqa[k]['question_id'])])
    kilt_tasks[k + '_triviaqa'] = kilt_tasks[k + '_triviaqa'].filter(lambda x: x['id'] in triviaqa_map)
    # add the original TriviaQA question text back as the KILT input
    kilt_tasks[k + '_triviaqa'] = kilt_tasks[k + '_triviaqa'].map(lambda x: {'input': triviaqa[k][triviaqa_map[x['id']]]['question']})
```
It would be great to have the dataset by Monday, which is when the paper should land on arXiv and @fabiopetroni is planning on tweeting about the paper and the `facebookresearch` repository for the dataset. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/559/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/559/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/717 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/717/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/717/comments | https://api.github.com/repos/huggingface/datasets/issues/717/events | https://github.com/huggingface/datasets/pull/717 | 714,959,268 | MDExOlB1bGxSZXF1ZXN0NDk3OTUwOTA2 | 717 | Fixes #712 Error in the Overview.ipynb notebook | {
"avatar_url": "https://avatars.githubusercontent.com/u/850012?v=4",
"events_url": "https://api.github.com/users/subhrm/events{/privacy}",
"followers_url": "https://api.github.com/users/subhrm/followers",
"following_url": "https://api.github.com/users/subhrm/following{/other_user}",
"gists_url": "https://api.github.com/users/subhrm/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/subhrm",
"id": 850012,
"login": "subhrm",
"node_id": "MDQ6VXNlcjg1MDAxMg==",
"organizations_url": "https://api.github.com/users/subhrm/orgs",
"received_events_url": "https://api.github.com/users/subhrm/received_events",
"repos_url": "https://api.github.com/users/subhrm/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/subhrm/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/subhrm/subscriptions",
"type": "User",
"url": "https://api.github.com/users/subhrm"
} | [] | closed | false | null | [] | null | [] | "2020-10-05T15:50:41Z" | "2020-10-06T06:31:43Z" | "2020-10-05T16:25:41Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/717.diff",
"html_url": "https://github.com/huggingface/datasets/pull/717",
"merged_at": "2020-10-05T16:25:40Z",
"patch_url": "https://github.com/huggingface/datasets/pull/717.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/717"
} | Fixes #712 Error in the Overview.ipynb notebook by adding `with_details=True` parameter to `list_datasets` function in Cell 3 of **overview** notebook | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/717/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/717/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3651 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3651/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3651/comments | https://api.github.com/repos/huggingface/datasets/issues/3651/events | https://github.com/huggingface/datasets/pull/3651 | 1,118,597,647 | PR_kwDODunzps4xy3De | 3,651 | Update link in wiki_bio dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/13238952?v=4",
"events_url": "https://api.github.com/users/jxmorris12/events{/privacy}",
"followers_url": "https://api.github.com/users/jxmorris12/followers",
"following_url": "https://api.github.com/users/jxmorris12/following{/other_user}",
"gists_url": "https://api.github.com/users/jxmorris12/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/jxmorris12",
"id": 13238952,
"login": "jxmorris12",
"node_id": "MDQ6VXNlcjEzMjM4OTUy",
"organizations_url": "https://api.github.com/users/jxmorris12/orgs",
"received_events_url": "https://api.github.com/users/jxmorris12/received_events",
"repos_url": "https://api.github.com/users/jxmorris12/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/jxmorris12/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jxmorris12/subscriptions",
"type": "User",
"url": "https://api.github.com/users/jxmorris12"
} | [] | closed | false | null | [] | null | [
"> all the tests pass, but I'm still not able to import the dataset\r\n\r\nSince it's not merged on `master` yet, you have to provide the path to your local `wiki_bio.py` to use it.\r\nIndeed the library downloads the dataset files from `master` if you have a dev installation of the library.\r\n\r\nI agree it would be nice to change that, and use the local dataset scripts from the `datasets` directory - it feels definitely more natural.",
"Cool, thanks for your help and I agree!"
] | "2022-01-30T16:28:54Z" | "2022-01-31T14:50:48Z" | "2022-01-31T08:38:09Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/3651.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3651",
"merged_at": "2022-01-31T08:38:09Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3651.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3651"
} | Fixes #3580 and makes the wiki_bio dataset work again. I changed the link and some documentation, and all the tests pass. Thanks @lhoestq for uploading the dataset to the HuggingFace data bucket.
@lhoestq -- all the tests pass, but I'm still not able to import the dataset, as the old Google Drive link is cached somewhere:
```python
>>> from datasets import load_dataset
load_dataset("wiki_bio>>> load_dataset("wiki_bio")
Using custom data configuration default
Downloading and preparing dataset wiki_bio/default (download: 318.53 MiB, generated: 736.94 MiB, post-processed: Unknown size, total: 1.03 GiB) to /home/jxm3/.cache/huggingface/datasets/wiki_bio/default/1.1.0/5293ce565954ba965dada626f1e79684e98172d950371d266bf3caaf87e911c9...
Traceback (most recent call last):
...
File "/home/jxm3/random/datasets/src/datasets/utils/file_utils.py", line 612, in get_from_cache
raise FileNotFoundError(f"Couldn't find file at {url}")
FileNotFoundError: Couldn't find file at https://drive.google.com/uc?export=download&id=1L7aoUXzHPzyzQ0ns4ApBbYepsjFOtXil
```
what do I have to do to invalidate the cache and actually import the dataset? It's clearly set up correctly, since the data is downloaded and processed by the tests.
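A sketch of the workaround, per the reply above: point `load_dataset` at the local script instead of the hosted copy (the path assumes the command is run from the root of a local clone of this repository):

```python
from datasets import load_dataset

# Load the updated script directly from the local checkout rather than the cached hosted version.
dataset = load_dataset("./datasets/wiki_bio/wiki_bio.py")
```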
As an aside, this caching-loading-scripts behavior makes for a really bad developer experience. I just wasted an hour trying to figure out where the caching was happening and how to disable it, and I don't know. All I wanted to do was update the link and submit a pull request! I recommend that you all either change this behavior (i.e. updating the link to a dataset should "just work") or document it, since I couldn't find any information about this in the contributing.md or readme or anywhere else! Thanks! | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3651/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3651/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/6247 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6247/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6247/comments | https://api.github.com/repos/huggingface/datasets/issues/6247/events | https://github.com/huggingface/datasets/pull/6247 | 1,901,390,945 | PR_kwDODunzps5amAQ1 | 6,247 | Update create_dataset.mdx | {
"avatar_url": "https://avatars.githubusercontent.com/u/76403422?v=4",
"events_url": "https://api.github.com/users/EswarDivi/events{/privacy}",
"followers_url": "https://api.github.com/users/EswarDivi/followers",
"following_url": "https://api.github.com/users/EswarDivi/following{/other_user}",
"gists_url": "https://api.github.com/users/EswarDivi/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/EswarDivi",
"id": 76403422,
"login": "EswarDivi",
"node_id": "MDQ6VXNlcjc2NDAzNDIy",
"organizations_url": "https://api.github.com/users/EswarDivi/orgs",
"received_events_url": "https://api.github.com/users/EswarDivi/received_events",
"repos_url": "https://api.github.com/users/EswarDivi/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/EswarDivi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/EswarDivi/subscriptions",
"type": "User",
"url": "https://api.github.com/users/EswarDivi"
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008892 / 0.011353 (-0.002461) | 0.005140 / 0.011008 (-0.005868) | 0.110951 / 0.038508 (0.072442) | 0.086159 / 0.023109 (0.063050) | 0.391117 / 0.275898 (0.115218) | 0.440884 / 0.323480 (0.117404) | 0.006562 / 0.007986 (-0.001423) | 0.003711 / 0.004328 (-0.000618) | 0.081848 / 0.004250 (0.077598) | 0.063187 / 0.037052 (0.026135) | 0.369771 / 0.258489 (0.111282) | 0.447685 / 0.293841 (0.153844) | 0.046623 / 0.128546 (-0.081923) | 0.014024 / 0.075646 (-0.061622) | 0.418556 / 0.419271 (-0.000715) | 0.064660 / 0.043533 (0.021127) | 0.379416 / 0.255139 (0.124277) | 0.415800 / 0.283200 (0.132600) | 0.036899 / 0.141683 (-0.104784) | 1.710280 / 1.452155 (0.258125) | 1.932326 / 1.492716 (0.439610) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.311351 / 0.018006 (0.293345) | 0.621121 / 0.000490 (0.620631) | 0.013677 / 0.000200 (0.013477) | 0.000543 / 0.000054 (0.000488) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031310 / 0.037411 (-0.006102) | 0.099546 / 0.014526 (0.085020) | 0.122100 / 0.176557 (-0.054457) | 0.186477 / 0.737135 (-0.550659) | 0.116634 / 0.296338 (-0.179704) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.574639 / 0.215209 (0.359430) | 5.976678 / 2.077655 (3.899023) | 2.535482 / 1.504120 (1.031362) | 2.248873 / 1.541195 (0.707678) | 2.361696 / 1.468490 
(0.893205) | 0.866700 / 4.584777 (-3.718077) | 5.298018 / 3.745712 (1.552306) | 4.753240 / 5.269862 (-0.516622) | 3.124698 / 4.565676 (-1.440979) | 0.101852 / 0.424275 (-0.322423) | 0.009117 / 0.007607 (0.001510) | 0.723730 / 0.226044 (0.497685) | 7.172649 / 2.268929 (4.903720) | 3.400410 / 55.444624 (-52.044214) | 2.626619 / 6.876477 (-4.249857) | 2.948692 / 2.142072 (0.806620) | 0.991589 / 4.805227 (-3.813638) | 0.208902 / 6.500664 (-6.291762) | 0.076172 / 0.075469 (0.000703) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.621880 / 1.841788 (-0.219907) | 22.735673 / 8.074308 (14.661365) | 20.376990 / 10.191392 (10.185598) | 0.232219 / 0.680424 (-0.448204) | 0.028616 / 0.534201 (-0.505585) | 0.455725 / 0.579283 (-0.123558) | 0.562796 / 0.434364 (0.128432) | 0.545344 / 0.540337 (0.005007) | 0.759440 / 1.386936 (-0.627496) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009845 / 0.011353 (-0.001508) | 0.005289 / 0.011008 (-0.005719) | 0.083117 / 0.038508 (0.044609) | 0.098467 / 0.023109 (0.075357) | 0.532345 / 0.275898 (0.256447) | 0.571000 / 0.323480 (0.247520) | 0.007223 / 0.007986 (-0.000763) | 0.004442 / 0.004328 (0.000114) | 0.081710 / 0.004250 (0.077459) | 0.071132 / 0.037052 (0.034080) | 0.540093 / 0.258489 (0.281604) | 0.582244 / 0.293841 (0.288403) | 0.048509 / 0.128546 (-0.080038) | 0.013897 / 0.075646 (-0.061749) | 0.092579 / 0.419271 (-0.326692) | 0.073409 / 0.043533 (0.029876) | 0.537369 / 0.255139 (0.282230) | 0.551403 / 0.283200 (0.268203) | 0.038847 / 0.141683 (-0.102835) | 1.940848 / 1.452155 (0.488693) | 2.045597 / 1.492716 (0.552881) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.303883 / 0.018006 (0.285877) | 0.600237 / 0.000490 (0.599748) | 0.006030 / 0.000200 (0.005830) | 0.000124 / 0.000054 (0.000070) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.036633 / 0.037411 (-0.000778) | 0.105853 / 0.014526 (0.091327) | 0.126289 / 0.176557 (-0.050267) | 0.190022 / 0.737135 (-0.547113) | 0.123251 / 0.296338 (-0.173087) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.711893 / 0.215209 (0.496684) | 6.979781 / 2.077655 (4.902126) | 3.491514 / 1.504120 (1.987394) | 3.268077 / 1.541195 (1.726882) | 3.241777 / 1.468490 (1.773287) | 0.875913 / 4.584777 (-3.708864) | 5.458421 / 3.745712 (1.712709) | 4.818355 / 5.269862 (-0.451507) | 3.256046 / 4.565676 (-1.309631) | 0.095000 / 0.424275 (-0.329275) | 0.009072 / 0.007607 (0.001465) | 0.818468 / 0.226044 (0.592424) | 8.027702 / 2.268929 (5.758773) | 4.363234 / 55.444624 (-51.081390) | 3.695269 / 6.876477 (-3.181207) | 3.902601 / 2.142072 (1.760528) | 1.039007 / 4.805227 (-3.766220) | 0.212050 / 6.500664 (-6.288614) | 0.081438 / 0.075469 (0.005969) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.746945 / 1.841788 (-0.094842) | 25.274283 / 8.074308 (17.199975) | 23.514717 / 10.191392 (13.323325) | 0.232580 / 0.680424 (-0.447843) | 0.032083 / 0.534201 (-0.502118) | 0.482873 / 0.579283 (-0.096410) | 0.585730 / 0.434364 (0.151366) | 0.602066 / 0.540337 (0.061729) | 0.796391 / 1.386936 (-0.590546) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#0d7cb68fe37dbfd81e5f82e19d8f9847c337788d \"CML watermark\")\n"
] | "2023-09-18T17:06:29Z" | "2023-09-19T18:51:49Z" | "2023-09-19T18:40:10Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/6247.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6247",
"merged_at": "2023-09-19T18:40:10Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6247.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6247"
} | Modified, as `AudioFolder` and `ImageFolder` cannot be imported from the `datasets` library.
Changed `from datasets import AudioFolder` and `from datasets import ImageFolder` to `from datasets import load_dataset`.
```
cannot import name 'AudioFolder' from 'datasets' (/home/eswardivi/miniconda3/envs/Hugformers/lib/python3.10/site-packages/datasets/__init__.py)
``` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6247/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6247/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2292 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2292/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2292/comments | https://api.github.com/repos/huggingface/datasets/issues/2292/events | https://github.com/huggingface/datasets/pull/2292 | 871,230,183 | MDExOlB1bGxSZXF1ZXN0NjI2MjgzNTYy | 2,292 | Fixed typo seperate->separate | {
"avatar_url": "https://avatars.githubusercontent.com/u/32505743?v=4",
"events_url": "https://api.github.com/users/laksh9950/events{/privacy}",
"followers_url": "https://api.github.com/users/laksh9950/followers",
"following_url": "https://api.github.com/users/laksh9950/following{/other_user}",
"gists_url": "https://api.github.com/users/laksh9950/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/laksh9950",
"id": 32505743,
"login": "laksh9950",
"node_id": "MDQ6VXNlcjMyNTA1NzQz",
"organizations_url": "https://api.github.com/users/laksh9950/orgs",
"received_events_url": "https://api.github.com/users/laksh9950/received_events",
"repos_url": "https://api.github.com/users/laksh9950/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/laksh9950/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/laksh9950/subscriptions",
"type": "User",
"url": "https://api.github.com/users/laksh9950"
} | [] | closed | false | null | [] | null | [] | "2021-04-29T16:40:53Z" | "2021-04-30T13:29:18Z" | "2021-04-30T13:03:12Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/2292.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2292",
"merged_at": "2021-04-30T13:03:12Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2292.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2292"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2292/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2292/timeline | null | null | true |
|
https://api.github.com/repos/huggingface/datasets/issues/5292 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5292/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5292/comments | https://api.github.com/repos/huggingface/datasets/issues/5292/events | https://github.com/huggingface/datasets/issues/5292 | 1,463,053,832 | I_kwDODunzps5XNG4I | 5,292 | Missing documentation build for versions 2.7.1 and 2.6.2 | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [
{
"color": "d4c5f9",
"default": false,
"description": "Maintenance tasks",
"id": 4296013012,
"name": "maintenance",
"node_id": "LA_kwDODunzps8AAAABAA_01A",
"url": "https://api.github.com/repos/huggingface/datasets/labels/maintenance"
}
] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
}
] | null | [
"- Build docs for 2.6.2:\r\n - Commit: a6a5a1cf4cdf1e0be65168aed5a327f543001fe8\r\n - Build docs GH Action: https://github.com/huggingface/datasets/actions/runs/3539470622/jobs/5941404044\r\n- Build docs for 2.7.1:\r\n - Commit: 5ef1ab1cc06c2b7a574bf2df454cd9fcb071ccb2\r\n - Build docs GH Action: https://github.com/huggingface/datasets/actions/runs/3539574442/jobs/5941636792"
] | "2022-11-24T09:42:10Z" | "2022-11-24T10:10:02Z" | "2022-11-24T10:10:02Z" | MEMBER | null | null | null | After the patch releases [2.7.1](https://github.com/huggingface/datasets/releases/tag/2.7.1) and [2.6.2](https://github.com/huggingface/datasets/releases/tag/2.6.2), the online docs were not properly built (the build_documentation workflow was not triggered).
There was a fix by:
- #5291
However, both sets of docs were built from the main branch instead of their corresponding version branches.
We are rebuilding them. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5292/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5292/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/3319 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3319/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3319/comments | https://api.github.com/repos/huggingface/datasets/issues/3319/events | https://github.com/huggingface/datasets/pull/3319 | 1,062,749,654 | PR_kwDODunzps4u-xdv | 3,319 | Add push_to_hub docs | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | [
"Looks good to me! :)\r\n\r\nMaybe we can mention that users can also set the `private` argument if they want to keep their dataset private? It would lead nicely into the next section on Privacy.",
"Thanks for your comments, I fixed the capitalization for consistency and added an passage to mention the `private` parameter and to have a nice transition to the Privacy section :)\r\n\r\nI also added the login instruction that was missing before the user can actually upload a dataset."
] | "2021-11-24T18:21:11Z" | "2021-11-25T14:47:46Z" | "2021-11-25T14:47:46Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/3319.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3319",
"merged_at": "2021-11-25T14:47:46Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3319.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3319"
} | Since #3098 it's now possible to upload a dataset to the Hub directly from Python using the `push_to_hub` method.
I just added a section in the "Upload a dataset to the Hub" tutorial.
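The call the new section documents boils down to something like this (a minimal sketch — the file name and repo id are placeholders, and `private=True` is optional):

```python
from datasets import load_dataset

ds = load_dataset("csv", data_files="my_file.csv")
# Requires being logged in first, e.g. via `huggingface-cli login`.
ds.push_to_hub("my-username/my-dataset", private=True)
```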
I kept the section quite simple but let me know if it sounds good to you @LysandreJik @stevhliu :) | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 1,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3319/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3319/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2410 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2410/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2410/comments | https://api.github.com/repos/huggingface/datasets/issues/2410/events | https://github.com/huggingface/datasets/pull/2410 | 903,613,676 | MDExOlB1bGxSZXF1ZXN0NjU0ODUwMjY4 | 2,410 | fix #2391 add original answers in kilt-TriviaQA | {
"avatar_url": "https://avatars.githubusercontent.com/u/25532159?v=4",
"events_url": "https://api.github.com/users/PaulLerner/events{/privacy}",
"followers_url": "https://api.github.com/users/PaulLerner/followers",
"following_url": "https://api.github.com/users/PaulLerner/following{/other_user}",
"gists_url": "https://api.github.com/users/PaulLerner/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/PaulLerner",
"id": 25532159,
"login": "PaulLerner",
"node_id": "MDQ6VXNlcjI1NTMyMTU5",
"organizations_url": "https://api.github.com/users/PaulLerner/orgs",
"received_events_url": "https://api.github.com/users/PaulLerner/received_events",
"repos_url": "https://api.github.com/users/PaulLerner/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/PaulLerner/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/PaulLerner/subscriptions",
"type": "User",
"url": "https://api.github.com/users/PaulLerner"
} | [] | closed | false | null | [] | null | [
"LGTM, but I'm not sure what's going on with the Unix tests @lhoestq ",
"The CI error is unrelated to this PR, it's been fixed now on master.",
"Thanks @PaulLerner !",
"> #- [ ] - Hey![image](https://user-images.githubusercontent.com/71971234/121969638-00030e00-cd75-11eb-9512-25d32ac08051.jpeg)@fr[fr_fr**fr~~fr `fr```\nFR\n````~~**_]()",
"Oh that was unexpected. I didn't know pokemons were into NLP"
] | "2021-05-27T11:54:29Z" | "2021-06-15T12:35:57Z" | "2021-06-14T17:29:10Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/2410.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2410",
"merged_at": "2021-06-14T17:29:10Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2410.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2410"
} | cc @yjernite is it ok like this? | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2410/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2410/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4984 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4984/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4984/comments | https://api.github.com/repos/huggingface/datasets/issues/4984/events | https://github.com/huggingface/datasets/pull/4984 | 1,375,690,330 | PR_kwDODunzps4_FhTm | 4,984 | docs: ✏️ add links to the Datasets API | {
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/severo",
"id": 1676121,
"login": "severo",
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"organizations_url": "https://api.github.com/users/severo/orgs",
"received_events_url": "https://api.github.com/users/severo/received_events",
"repos_url": "https://api.github.com/users/severo/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"type": "User",
"url": "https://api.github.com/users/severo"
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"OK, thanks @lhoestq. I'll close this PR, and come back to it with @stevhliu once we work on https://github.com/huggingface/datasets-server/issues/568"
] | "2022-09-16T09:34:12Z" | "2022-09-16T13:10:14Z" | "2022-09-16T13:07:33Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/4984.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4984",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/4984.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4984"
} | I added some links to the Datasets API in the docs. See https://github.com/huggingface/datasets-server/pull/566 for a companion PR in the datasets-server. The idea is to improve the discovery of the API through the docs.
I'm a bit shy about pasting a lot of links to the API in the docs, so it's minimal for now. I'm interested in ideas for integrating the API into these docs more fully without overdoing it. cc @lhoestq @julien-c @albertvillanova @stevhliu. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4984/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4984/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5965 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5965/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5965/comments | https://api.github.com/repos/huggingface/datasets/issues/5965/events | https://github.com/huggingface/datasets/issues/5965 | 1,763,648,540 | I_kwDODunzps5pHyQc | 5,965 | "Couldn't cast array of type" in complex datasets | {
"avatar_url": "https://avatars.githubusercontent.com/u/1712066?v=4",
"events_url": "https://api.github.com/users/piercefreeman/events{/privacy}",
"followers_url": "https://api.github.com/users/piercefreeman/followers",
"following_url": "https://api.github.com/users/piercefreeman/following{/other_user}",
"gists_url": "https://api.github.com/users/piercefreeman/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/piercefreeman",
"id": 1712066,
"login": "piercefreeman",
"node_id": "MDQ6VXNlcjE3MTIwNjY=",
"organizations_url": "https://api.github.com/users/piercefreeman/orgs",
"received_events_url": "https://api.github.com/users/piercefreeman/received_events",
"repos_url": "https://api.github.com/users/piercefreeman/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/piercefreeman/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/piercefreeman/subscriptions",
"type": "User",
"url": "https://api.github.com/users/piercefreeman"
} | [] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko"
}
] | null | [
"Thanks for reporting! \r\n\r\nSpecifying the target features explicitly should avoid this error:\r\n```python\r\ndataset = dataset.map(\r\n batch_process,\r\n batched=True,\r\n batch_size=1,\r\n num_proc=1,\r\n remove_columns=dataset.column_names,\r\n features=datasets.Features({\"texts\": datasets.Sequence(datasets.Value(\"string\"))})\r\n)\r\n```\r\n\r\nThis error stems from our type promotion not handling the nested case. But this promotion/casting allocates memory in most scenarios, which can be problematic for large datasets, so explicitly passing the features is the optimal solution.",
"Hi @mariosasko thanks for the context, this is helpful to know. Would it be worth having some logic to generate this explicit feature specification automatically if a type annotation for a .map returns a dataclass that can be inferred?\r\n\r\nFeels like something that would be easy to implement and could save memory / deal with this case in a standardized way.",
"> . Would it be worth having some logic to generate this explicit feature specification automatically if a type annotation for a .map returns a dataclass that can be inferred?\r\n\r\nInteresting proposal! Yes, we could consider doing this if the (return) type hint is `TypedDict`, and raise an error that type hints are incorrect if the cast using the inferred types fails.",
"@mariosasko Put up an initial PR to implement this proposal. Let me know your thoughts on direction and what else should be in-scope here."
] | "2023-06-19T14:16:14Z" | "2023-07-26T15:13:53Z" | "2023-07-26T15:13:53Z" | NONE | null | null | null | ### Describe the bug
When mapping a dataset with complex types, `datasets` is sometimes unable to infer a valid schema for the batches returned by `datasets.map()`. This often comes from conflicting types, for example when both empty lists and filled lists compete for the same field value.
This is prone to happen in batch mapping, when the mapper returns a sequence of null/empty values and other batches are non-null. A workaround is to manually cast the new batch to a pyarrow table (like implemented in this [workaround](https://github.com/piercefreeman/lassen/pull/3)) but it feels like this ideally should be solved at the core library level.
Note that the reproduction case only throws this error if the first datapoint has the empty list. If it is processed later, datasets already detects its representation as list-type and therefore allows the empty list to be provided.
### Steps to reproduce the bug
A trivial reproduction case:
```python
from typing import Iterator, Any
import pandas as pd
import pytest
from datasets import Dataset
def batch_to_examples(batch: dict[str, list[Any]]) -> Iterator[dict[str, Any]]:
    # every column in a batch has the same length, so any of them gives the example count
    lengths = [len(values) for values in batch.values()]
    for i in range(next(iter(lengths))):
        yield {feature: values[i] for feature, values in batch.items()}
def examples_to_batch(examples) -> dict[str, list[Any]]:
batch = {}
for example in examples:
for feature, value in example.items():
if feature not in batch:
batch[feature] = []
batch[feature].append(value)
return batch
def batch_process(examples, explicit_schema: bool = False):
new_examples = []
for example in batch_to_examples(examples):
new_examples.append(dict(texts=example["raw_text"].split()))
return examples_to_batch(new_examples)
df = pd.DataFrame(
[
{"raw_text": ""},
{"raw_text": "This is a test"},
{"raw_text": "This is another test"},
]
)
dataset = Dataset.from_pandas(df)
# datasets won't be able to infer a type for a dataset whose first example contains the empty list.
with pytest.raises(TypeError, match="Couldn't cast array of type"):
dataset = dataset.map(
batch_process,
batched=True,
batch_size=1,
num_proc=1,
remove_columns=dataset.column_names,
)
```
This results in crashes like:
```bash
File "/Users/piercefreeman/Library/Caches/pypoetry/virtualenvs/example-9kBqeSPy-py3.11/lib/python3.11/site-packages/datasets/table.py", line 1819, in wrapper
return func(array, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/piercefreeman/Library/Caches/pypoetry/virtualenvs/example-9kBqeSPy-py3.11/lib/python3.11/site-packages/datasets/table.py", line 2109, in cast_array_to_feature
return array_cast(array, feature(), allow_number_to_str=allow_number_to_str)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/piercefreeman/Library/Caches/pypoetry/virtualenvs/example-9kBqeSPy-py3.11/lib/python3.11/site-packages/datasets/table.py", line 1819, in wrapper
return func(array, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/piercefreeman/Library/Caches/pypoetry/virtualenvs/example-9kBqeSPy-py3.11/lib/python3.11/site-packages/datasets/table.py", line 1998, in array_cast
raise TypeError(f"Couldn't cast array of type {array.type} to {pa_type}")
TypeError: Couldn't cast array of type string to null
```
### Expected behavior
The code should successfully map and create a new dataset without error.
### Environment info
Mac OSX, Linux | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5965/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5965/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/2869 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2869/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2869/comments | https://api.github.com/repos/huggingface/datasets/issues/2869/events | https://github.com/huggingface/datasets/issues/2869 | 987,676,420 | MDU6SXNzdWU5ODc2NzY0MjA= | 2,869 | TypeError: 'NoneType' object is not callable | {
"avatar_url": "https://avatars.githubusercontent.com/u/40911446?v=4",
"events_url": "https://api.github.com/users/Chenfei-Kang/events{/privacy}",
"followers_url": "https://api.github.com/users/Chenfei-Kang/followers",
"following_url": "https://api.github.com/users/Chenfei-Kang/following{/other_user}",
"gists_url": "https://api.github.com/users/Chenfei-Kang/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Chenfei-Kang",
"id": 40911446,
"login": "Chenfei-Kang",
"node_id": "MDQ6VXNlcjQwOTExNDQ2",
"organizations_url": "https://api.github.com/users/Chenfei-Kang/orgs",
"received_events_url": "https://api.github.com/users/Chenfei-Kang/received_events",
"repos_url": "https://api.github.com/users/Chenfei-Kang/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Chenfei-Kang/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Chenfei-Kang/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Chenfei-Kang"
} | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | [] | null | [
"Hi, @Chenfei-Kang.\r\n\r\nI'm sorry, but I'm not able to reproduce your bug:\r\n```python\r\nfrom datasets import load_dataset\r\n\r\nds = load_dataset(\"glue\", 'cola')\r\nds\r\n```\r\n```\r\nDatasetDict({\r\n train: Dataset({\r\n features: ['sentence', 'label', 'idx'],\r\n num_rows: 8551\r\n })\r\n validation: Dataset({\r\n features: ['sentence', 'label', 'idx'],\r\n num_rows: 1043\r\n })\r\n test: Dataset({\r\n features: ['sentence', 'label', 'idx'],\r\n num_rows: 1063\r\n })\r\n})\r\n```\r\n\r\nCould you please give more details and environment info (platform, PyArrow version)?",
"> Hi, @Chenfei-Kang.\r\n> \r\n> I'm sorry, but I'm not able to reproduce your bug:\r\n> \r\n> ```python\r\n> from datasets import load_dataset\r\n> \r\n> ds = load_dataset(\"glue\", 'cola')\r\n> ds\r\n> ```\r\n> \r\n> ```\r\n> DatasetDict({\r\n> train: Dataset({\r\n> features: ['sentence', 'label', 'idx'],\r\n> num_rows: 8551\r\n> })\r\n> validation: Dataset({\r\n> features: ['sentence', 'label', 'idx'],\r\n> num_rows: 1043\r\n> })\r\n> test: Dataset({\r\n> features: ['sentence', 'label', 'idx'],\r\n> num_rows: 1063\r\n> })\r\n> })\r\n> ```\r\n> \r\n> Could you please give more details and environment info (platform, PyArrow version)?\r\n\r\nSorry to reply you so late.\r\nplatform: pycharm 2021 + anaconda with python 3.7\r\nPyArrow version: 5.0.0\r\nhuggingface-hub: 0.0.16\r\ndatasets: 1.9.0\r\n",
"- For the platform, we need to know the operating system of your machine. Could you please run the command `datasets-cli env` and copy-and-paste its output below?\r\n- In relation with the error, you just gave us the error type and message (`TypeError: 'NoneType' object is not callable`). Could you please copy-paste the complete stack trace, so that we know exactly which part of the code threw the error?",
"> * For the platform, we need to know the operating system of your machine. Could you please run the command `datasets-cli env` and copy-and-paste its output below?\r\n> * In relation with the error, you just gave us the error type and message (`TypeError: 'NoneType' object is not callable`). Could you please copy-paste the complete stack trace, so that we know exactly which part of the code threw the error?\r\n\r\n1. For the platform, here are the output:\r\n - datasets` version: 1.11.0\r\n - Platform: Windows-10-10.0.19041-SP0\r\n - Python version: 3.7.10\r\n - PyArrow version: 5.0.0\r\n2. For the code and error:\r\n ```python\r\n from datasets import load_dataset, load_metric\r\n dataset = load_dataset(\"glue\", \"cola\")\r\n ```\r\n ```python\r\n Traceback (most recent call last):\r\n ....\r\n ....\r\n File \"my_file.py\", line 2, in <module>\r\n dataset = load_dataset(\"glue\", \"cola\")\r\n File \"My environments\\lib\\site-packages\\datasets\\load.py\", line 830, in load_dataset\r\n **config_kwargs,\r\n File \"My environments\\lib\\site-packages\\datasets\\load.py\", line 710, in load_dataset_builder\r\n **config_kwargs,\r\n TypeError: 'NoneType' object is not callable\r\n ```\r\n Thank you!",
"For that environment, I am sorry but I can't reproduce the bug: I can load the dataset without any problem.",
"One naive question: do you have internet access from the machine where you execute the code?",
"> For that environment, I am sorry but I can't reproduce the bug: I can load the dataset without any problem.\r\n\r\nBut I can download other task dataset such as `dataset = load_dataset('squad')`. I don't know what went wrong. Thank you so much!",
"Hi,friends. I meet the same problem. Do you have a way to fix this? Thanks!\r\n"
] | "2021-09-03T11:27:39Z" | "2022-03-30T05:30:38Z" | "2021-09-08T09:24:55Z" | NONE | null | null | null | ## Describe the bug
TypeError: 'NoneType' object is not callable
## Steps to reproduce the bug
```python
from datasets import load_dataset, load_metric
dataset = load_dataset("glue", "cola")
```
## Expected results
A clear and concise description of the expected results.
## Actual results
Specify the actual results or traceback.
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.11.0
- Platform:
- Python version: 3.7
- PyArrow version:
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2869/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2869/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/4041 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4041/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4041/comments | https://api.github.com/repos/huggingface/datasets/issues/4041/events | https://github.com/huggingface/datasets/issues/4041 | 1,183,599,461 | I_kwDODunzps5GjEtl | 4,041 | Add support for IIIF in datasets | {
"avatar_url": "https://avatars.githubusercontent.com/u/8995957?v=4",
"events_url": "https://api.github.com/users/davanstrien/events{/privacy}",
"followers_url": "https://api.github.com/users/davanstrien/followers",
"following_url": "https://api.github.com/users/davanstrien/following{/other_user}",
"gists_url": "https://api.github.com/users/davanstrien/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/davanstrien",
"id": 8995957,
"login": "davanstrien",
"node_id": "MDQ6VXNlcjg5OTU5NTc=",
"organizations_url": "https://api.github.com/users/davanstrien/orgs",
"received_events_url": "https://api.github.com/users/davanstrien/received_events",
"repos_url": "https://api.github.com/users/davanstrien/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/davanstrien/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/davanstrien/subscriptions",
"type": "User",
"url": "https://api.github.com/users/davanstrien"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | open | false | null | [] | null | [
"Hi! Thanks for the detailed analysis of adding IIIF support. I like the idea of \"using IIIF through datasets scripts\" due to its ease of use. Another approach that I like is yielding image ids and using the `piffle` library (which offers a bit more flexibility) + `map` to download + cache images. We can handle bad URLs in `map` by returning `None`. Plus, we can add a `Dataset Preprocessing` section with the code that explains this approach to the card of such datasets. WDYT?\r\n\r\n> currently, IIIF is mainly used by cultural heritage organizations (museums, archives etc.) The adoption of IIIF in this sector has been growing but it's possible that adoption won't be extended to other industries which may also be a source of image data for training ML models.\r\n\r\nThis is why (currently) adding a new feature type would be overkill, IMO.\r\n"
] | "2022-03-28T15:19:25Z" | "2022-04-05T18:20:53Z" | null | MEMBER | null | null | null | This is a feature request for support for IIIF in `datasets`. Apologies for the long issue. I have also used a different format to the usual feature request since I think that makes more sense but happy to use the standard template if preferred.
## What is [IIIF](https://iiif.io/)?
IIIF (International Image Interoperability Framework)
> is a set of open standards for delivering high-quality, attributed digital objects online at scale. It’s also an international community developing and implementing the IIIF APIs. IIIF is backed by a consortium of leading cultural institutions.
The tl;dr is that IIIF provides various specifications for implementing useful functionality for:
- Institutions to make available images for various use cases
- Users to have a consistent way of interacting/requesting these images
- For developers to have a common standard for developing tools for working with IIIF images that will work across all institutions that implement a particular IIIF standard (for example the image viewer for the BNF can also work for the Library of Congress if they both use IIIF).
Some institutions with various levels of IIIF support include: The British Library, Internet Archive, Library of Congress, Wikidata. There are also many smaller institutions that have IIIF support. An incomplete list can be found here: https://iiif.io/guides/finding_resources/
## IIIF APIs
IIIF consists of a number of APIs which could be integrated with datasets. I think the most obvious candidate for inclusion would be the [Image API](https://iiif.io/api/image/3.0/)
### IIIF Image API
The Image API https://iiif.io/api/image/3.0/ is likely the most suitable first candidate for integration with datasets. The Image API offers a consistent protocol for requesting images via a URL:
```{scheme}://{server}{/prefix}/{identifier}/{region}/{size}/{rotation}/{quality}.{format}```
A concrete example of this:
```https://stacks.stanford.edu/image/iiif/hg676jb4964%2F0380_796-44/full/full/0/default.jpg```
As you can see the scheme offers a number of options that can be specified in the URL, for example, size. Using the example URL we return:
![](https://stacks.stanford.edu/image/iiif/hg676jb4964%2F0380_796-44/full/full/0/default.jpg)
We can change the size to request a size of 250 by 250, this is done by changing the size from `full` to `250,250` i.e. switching the URL to `https://stacks.stanford.edu/image/iiif/hg676jb4964%2F0380_796-44/full/250,250/0/default.jpg`
![](https://stacks.stanford.edu/image/iiif/hg676jb4964%2F0380_796-44/full/250,250/0/default.jpg)
We can also request the image with max width 250, max height 250 whilst maintaining the aspect ratio using `!w,h`. i.e. change the url to `https://stacks.stanford.edu/image/iiif/hg676jb4964%2F0380_796-44/full/!250,250/0/default.jpg`
![](https://stacks.stanford.edu/image/iiif/hg676jb4964%2F0380_796-44/full/!250,250/0/default.jpg)
A full overview of the options for size can be found here: https://iiif.io/api/image/3.0/#42-size
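For concreteness, a small hedged sketch that fills in the template from Python (the values correspond to the Stanford example above; only `size` changes to request the bounded thumbnail):
```python
# IIIF Image API URL template, with the spec's "{/prefix}" written as a plain "{prefix}" component
IIIF_IMAGE_TEMPLATE = "{scheme}://{server}{prefix}/{identifier}/{region}/{size}/{rotation}/{quality}.{format}"

thumbnail_url = IIIF_IMAGE_TEMPLATE.format(
    scheme="https",
    server="stacks.stanford.edu",
    prefix="/image/iiif",
    identifier="hg676jb4964%2F0380_796-44",
    region="full",
    size="!250,250",  # at most 250x250, preserving the aspect ratio
    rotation="0",
    quality="default",
    format="jpg",
)
```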
## Why would/could this be useful for datasets?
There are a few reasons why support for the IIIF Image API could be useful. Broadly the ability to have more control over how an image is returned from a server is useful for many ML workflows:
- images can be requested in the right size, this prevents having to download/stream large images when the actual desired size is much smaller
- can select a subset of an image: it is possible to select a sub-region of an image, this could be useful for example when you already have a bounding box for a subset of an image and then want to use this subset of an image for another task. For example, https://github.com/Living-with-machines/nnanno uses IIIF to request parts of a newspaper image that have been detected as 'photograph', 'illustration' etc for downstream use.
- options for quality, rotation, and format can all be encoded in the URL request.
These may become particularly useful when pre-training models on large image datasets, where the cost of downloading 1600-pixel-wide images when you actually want 240-pixel-wide ones has a larger impact.
## What could this look like in datasets?
I think there are various ways in which support for IIIF could potentially be included in `datasets`. These suggestions aren't fully fleshed out but hopefully, give a sense of possible approaches that match existing `datasets` methods in their approach.
### Use through datasets scripts
Loading images via URL is already supported. There are a few possible 'extras' that could be included when using IIIF. One option is to leverage the IIIF protocol in datasets scripts, i.e. the dataset script can expose the IIIF options via the dataset script:
```python
ds = load_dataset("iiif_dataset", image_size="250,250", fmt="jpg")
```
This is already possible. The approach to parsing the IIIF URLs would be left to the person creating the dataset script.
### Support through dataset scripts (with some datasets support)
This is similar to the above, but `datasets` would offer some way of saying this is an IIIF URL and then expose the options associated with IIIF images automatically, i.e. if you did something like:
```python
features = {"label": ClassLabel(names=['dog','cat']),
"url": datasets.IIIFURL()}
```
inside your loading script, you would automatically have exposed `size`, `fmt` etc. options when loading the dataset.
### Other possible integrations
Some other possible pseudocode ways that a user could interact with IIIF URLs:
The ability to cast to an `IIIFImage` feature type:
```
ds.cast_column('url', IIIFImage, download=False)
```
The ability to specify some options associated with IIIF urls.
```
ds = ds.set_iiif_options(column='url', size="250,250")
```
I think all of these would rely on having an `IIIFImage` feature type - this would be a little bit of a Frankenstein between a `string` and `datasets.Image`. I think most of the actual image behaviour would be exactly the same as `datasets.Image`, the difference would be that the underlying URL could be modified in various ways.
## prerequisite requirements
There are a few pre-requisites that I can anticipate. This doesn't cover a full implementation of IIIF support which would have different requirements depending on the approach taken to implementing IIIF. Some of these features would be useful independently of adding IIIF support:
### support for handling failed images loaded via a URL (or a specific IIIFImage feature).
Working with images via web requests will inevitably hit the odd failed request. If these images are requested and don't return, it would be useful to have `None` returned instead of an error. For example, when using `push_to_hub`, `datasets` will try to include the image but currently fails with bad URLs.
```python
from datasets import Dataset
import datasets
urls = ['https://stacks.stanford.edu/image/iiif/hg676jb4964%2F0380_796-44/full/!250,250/0/default.jpg']*3
urls.append("badurl.com/image.jpg")
data = {"url":urls}
ds = Dataset.from_dict(data)
ds = ds.cast_column('url', datasets.Image())
ds[3]['url']
```
returns a `FileNotFoundError`; for streaming large datasets of images via their URLs, it could be useful to have `None` returned instead. This has implications for the actual training loop, i.e. you now need to somehow skip those examples, so it might not be desirable to support this.
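For illustration, a minimal sketch of the "return `None` instead of raising" behaviour described above, written as a plain helper rather than a `datasets` feature (the helper name is made up, and `requests`/`Pillow` are assumed to be installed):
```python
import io

import requests
from PIL import Image


def load_image_or_none(url: str, timeout: float = 10.0):
    """Fetch and decode an image, returning None on any network or decoding error."""
    try:
        response = requests.get(url, timeout=timeout)
        response.raise_for_status()
        return Image.open(io.BytesIO(response.content))
    except Exception:
        return None


# e.g. in a collate function: fetch the batch, then drop the failures
# images = [load_image_or_none(u) for u in batch["url"]]
# images = [image for image in images if image is not None]
```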
### Caching support
Since IIIF requests images via a URL it would be great to have a way of not requesting the images multiple times. This is tracked in https://github.com/huggingface/datasets/issues/3142 and I think this would also be very desirable to have here particularly as one of the primary use cases of IIIF may be to do unsupervised pre-training on large datasets of IIIF URLs.
### Support for Parsing IIIF URLs
This gets closer to the actual implementation. Here the requirement would be some way for `datasets` to parse a URL that the users specify is an IIIF URL. An example of a Python library that does this: https://github.com/Princeton-CDH/piffle. I also have a rough version that uses `dataclasses` which I can share.
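For reference, a rough hedged sketch of that dataclass idea (this is not the `piffle` API, just an illustration of splitting an Image API URL into its components so that, for example, the size can be rewritten):
```python
from dataclasses import dataclass


@dataclass
class IIIFImageURL:
    base: str  # {scheme}://{server}{/prefix}/{identifier}
    region: str
    size: str
    rotation: str
    quality: str
    fmt: str

    @classmethod
    def parse(cls, url: str) -> "IIIFImageURL":
        base, region, size, rotation, last = url.rsplit("/", 4)
        quality, fmt = last.rsplit(".", 1)
        return cls(base, region, size, rotation, quality, fmt)

    def with_size(self, size: str) -> str:
        return f"{self.base}/{self.region}/{size}/{self.rotation}/{self.quality}.{self.fmt}"


url = "https://stacks.stanford.edu/image/iiif/hg676jb4964%2F0380_796-44/full/full/0/default.jpg"
thumbnail_url = IIIFImageURL.parse(url).with_size("!250,250")
```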
## Why it might not be worthwhile/suitable for datasets
There are some reasons that this might not be worth implementing:
- currently, IIIF is mainly used by cultural heritage organizations (museums, archives, etc.). The adoption of IIIF in this sector has been growing, but it's possible that adoption won't extend to other industries which may also be a source of image data for training ML models.
- It may end up being better to leave this to the user. It would for example be possible for someone to write map functions to change an IIIF URL to the correct size etc. Adding direct support for IIIF in datasets may potentially not be worth the trouble.
- The choice of image-scaling approach can impact the downstream model's performance, see: https://twitter.com/wightmanr/status/1479528581466243073?s=20. Since different IIIF image servers may implement different approaches to resizing images, this could have a downstream impact on model performance. I think this is something that could be flagged to the end-user in the documentation. This probably also falls into the general "gotchas" that aren't the `datasets` library's role to protect users from.
Some of the requirements outlined above would be useful for images anyway. These could be implemented prior to a final decision about whether IIIF support could/should be added to datasets.
## Suggested next steps:
I realise this is a long and slightly open-ended issue. I am happy to clarify/answer questions on IIIF and possible integrations. If the prerequisite requirements seem worth exploring/are better explored in their own issues let me know and I can open new issues for those.
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 2,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 2,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4041/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4041/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/274 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/274/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/274/comments | https://api.github.com/repos/huggingface/datasets/issues/274/events | https://github.com/huggingface/datasets/issues/274 | 639,156,625 | MDU6SXNzdWU2MzkxNTY2MjU= | 274 | PG-19 | {
"avatar_url": "https://avatars.githubusercontent.com/u/108653?v=4",
"events_url": "https://api.github.com/users/lucidrains/events{/privacy}",
"followers_url": "https://api.github.com/users/lucidrains/followers",
"following_url": "https://api.github.com/users/lucidrains/following{/other_user}",
"gists_url": "https://api.github.com/users/lucidrains/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lucidrains",
"id": 108653,
"login": "lucidrains",
"node_id": "MDQ6VXNlcjEwODY1Mw==",
"organizations_url": "https://api.github.com/users/lucidrains/orgs",
"received_events_url": "https://api.github.com/users/lucidrains/received_events",
"repos_url": "https://api.github.com/users/lucidrains/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lucidrains/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lucidrains/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lucidrains"
} | [
{
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset",
"id": 2067376369,
"name": "dataset request",
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request"
}
] | closed | false | null | [] | null | [
"Sounds good! Do you want to give it a try?",
"Ok, I'll see if I can figure it out tomorrow!",
"Got around to this today, and so far so good, I'm able to download and load pg19 locally. However, I think there may be an issue with the dummy data, and testing in general.\r\n\r\nThe problem lies in the fact that each book from pg19 actually resides as its own text file in a google cloud folder that denotes the split, where the book id is the name of the text file. https://console.cloud.google.com/storage/browser/deepmind-gutenberg/train/ I don't believe there's anywhere else (even in the supplied metadata), where the mapping of id -> split can be found.\r\n\r\nTherefore I end up making a network call `tf.io.gfile.listdir` to get all the files within each of the split directories. https://github.com/lucidrains/nlp/commit/adbacbd85decc80db2347d0882e7dab4faa6fd03#diff-cece8f166a85dd927caf574ba303d39bR78\r\n\r\nDoes this network call need to be eventually stubbed out for testing?",
"Ohh nevermind, I think I can use `download_custom` here with `listdir` as the custom function. Ok, I'll keep trying to make the dummy data work!"
] | "2020-06-15T21:02:26Z" | "2020-07-06T15:35:02Z" | "2020-07-06T15:35:02Z" | CONTRIBUTOR | null | null | null | Hi, and thanks for all your open-sourced work, as always!
I was wondering if you would be open to adding PG-19 to your collection of datasets. https://github.com/deepmind/pg19 It is often used for benchmarking long-range language modeling. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/274/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/274/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/6403 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6403/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6403/comments | https://api.github.com/repos/huggingface/datasets/issues/6403/events | https://github.com/huggingface/datasets/issues/6403 | 1,990,098,817 | I_kwDODunzps52nn-B | 6,403 | Cannot import datasets on google colab (python 3.10.12) | {
"avatar_url": "https://avatars.githubusercontent.com/u/15389235?v=4",
"events_url": "https://api.github.com/users/nabilaannisa/events{/privacy}",
"followers_url": "https://api.github.com/users/nabilaannisa/followers",
"following_url": "https://api.github.com/users/nabilaannisa/following{/other_user}",
"gists_url": "https://api.github.com/users/nabilaannisa/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/nabilaannisa",
"id": 15389235,
"login": "nabilaannisa",
"node_id": "MDQ6VXNlcjE1Mzg5MjM1",
"organizations_url": "https://api.github.com/users/nabilaannisa/orgs",
"received_events_url": "https://api.github.com/users/nabilaannisa/received_events",
"repos_url": "https://api.github.com/users/nabilaannisa/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/nabilaannisa/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nabilaannisa/subscriptions",
"type": "User",
"url": "https://api.github.com/users/nabilaannisa"
} | [] | closed | false | null | [] | null | [
"You are most likely using an outdated version of `datasets` in the notebook, which can be verified with the `!datasets-cli env` command. You can run `!pip install -U datasets` to update the installation.",
"okay, it works! thank you so much! 😄 "
] | "2023-11-13T08:14:43Z" | "2023-11-16T05:04:22Z" | "2023-11-16T05:04:21Z" | NONE | null | null | null | ### Describe the bug
I'm trying the full Colab demo notebook of zero-shot distillation from https://github.com/huggingface/transformers/tree/main/examples/research_projects/zero-shot-distillation, but I got this type of error when importing datasets on my Google Colab (the Python version is 3.10.12):
![image](https://github.com/huggingface/datasets/assets/15389235/6f7758a2-681d-4436-87d0-5e557838e368)
I found the same problem that was solved in [#3326], but it still seems to error on Google Colab. I can't try it locally in a Jupyter notebook because my laptop doesn't meet the resource requirements.
Can anyone please help me solve this problem? Thank you 😅
### Steps to reproduce the bug
Error:
```
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
[<ipython-input-8-b6e092f83978>](https://localhost:8080/#) in <cell line: 1>()
----> 1 from datasets import load_dataset
2
3 # Print all the available datasets
4 from huggingface_hub import list_datasets
5 print([dataset.id for dataset in list_datasets()])
6 frames
[/usr/lib/python3.10/functools.py](https://localhost:8080/#) in update_wrapper(wrapper, wrapped, assigned, updated)
59 # Issue #17482: set __wrapped__ last so we don't inadvertently copy it
60 # from the wrapped function when updating __dict__
---> 61 wrapper.__wrapped__ = wrapped
62 # Return the wrapper so this can be used as a decorator via partial()
63 return wrapper
AttributeError: readonly attribute
```
### Expected behavior
Run success on Google Colab (free)
### Environment info
Windows 11 x64, Google Colab free | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6403/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6403/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/3302 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3302/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3302/comments | https://api.github.com/repos/huggingface/datasets/issues/3302/events | https://github.com/huggingface/datasets/pull/3302 | 1,058,907,168 | PR_kwDODunzps4uynjc | 3,302 | fix old_val typo in f-string | {
"avatar_url": "https://avatars.githubusercontent.com/u/56029953?v=4",
"events_url": "https://api.github.com/users/Mehdi2402/events{/privacy}",
"followers_url": "https://api.github.com/users/Mehdi2402/followers",
"following_url": "https://api.github.com/users/Mehdi2402/following{/other_user}",
"gists_url": "https://api.github.com/users/Mehdi2402/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Mehdi2402",
"id": 56029953,
"login": "Mehdi2402",
"node_id": "MDQ6VXNlcjU2MDI5OTUz",
"organizations_url": "https://api.github.com/users/Mehdi2402/orgs",
"received_events_url": "https://api.github.com/users/Mehdi2402/received_events",
"repos_url": "https://api.github.com/users/Mehdi2402/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Mehdi2402/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Mehdi2402/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Mehdi2402"
} | [] | closed | false | null | [] | null | [] | "2021-11-19T20:51:08Z" | "2021-11-25T22:14:43Z" | "2021-11-22T17:04:19Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/3302.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3302",
"merged_at": "2021-11-22T17:04:19Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3302.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3302"
} |
This PR is to correct a typo in #3277 that @Carlosbogo revieled in a comment.
Related closed issue : #3257
Sorry about that 😅. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3302/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3302/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2493 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2493/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2493/comments | https://api.github.com/repos/huggingface/datasets/issues/2493/events | https://github.com/huggingface/datasets/pull/2493 | 919,833,281 | MDExOlB1bGxSZXF1ZXN0NjY5MDc4OTcw | 2,493 | add tensorflow-macos support | {
"avatar_url": "https://avatars.githubusercontent.com/u/12831254?v=4",
"events_url": "https://api.github.com/users/slayerjain/events{/privacy}",
"followers_url": "https://api.github.com/users/slayerjain/followers",
"following_url": "https://api.github.com/users/slayerjain/following{/other_user}",
"gists_url": "https://api.github.com/users/slayerjain/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/slayerjain",
"id": 12831254,
"login": "slayerjain",
"node_id": "MDQ6VXNlcjEyODMxMjU0",
"organizations_url": "https://api.github.com/users/slayerjain/orgs",
"received_events_url": "https://api.github.com/users/slayerjain/received_events",
"repos_url": "https://api.github.com/users/slayerjain/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/slayerjain/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/slayerjain/subscriptions",
"type": "User",
"url": "https://api.github.com/users/slayerjain"
} | [] | closed | false | null | [] | null | [
"@albertvillanova done!"
] | "2021-06-13T16:20:08Z" | "2021-06-15T08:53:06Z" | "2021-06-15T08:53:06Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/2493.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2493",
"merged_at": "2021-06-15T08:53:06Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2493.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2493"
} | ref - https://github.com/huggingface/datasets/issues/2068 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2493/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2493/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/263 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/263/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/263/comments | https://api.github.com/repos/huggingface/datasets/issues/263/events | https://github.com/huggingface/datasets/issues/263 | 637,028,015 | MDU6SXNzdWU2MzcwMjgwMTU= | 263 | [Feature request] Support for external modality for language datasets | {
"avatar_url": "https://avatars.githubusercontent.com/u/1479733?v=4",
"events_url": "https://api.github.com/users/aleSuglia/events{/privacy}",
"followers_url": "https://api.github.com/users/aleSuglia/followers",
"following_url": "https://api.github.com/users/aleSuglia/following{/other_user}",
"gists_url": "https://api.github.com/users/aleSuglia/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/aleSuglia",
"id": 1479733,
"login": "aleSuglia",
"node_id": "MDQ6VXNlcjE0Nzk3MzM=",
"organizations_url": "https://api.github.com/users/aleSuglia/orgs",
"received_events_url": "https://api.github.com/users/aleSuglia/received_events",
"repos_url": "https://api.github.com/users/aleSuglia/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/aleSuglia/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/aleSuglia/subscriptions",
"type": "User",
"url": "https://api.github.com/users/aleSuglia"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
},
{
"color": "c5def5",
"default": false,
"description": "Generic discussion on the library",
"id": 2067400324,
"name": "generic discussion",
"node_id": "MDU6TGFiZWwyMDY3NDAwMzI0",
"url": "https://api.github.com/repos/huggingface/datasets/labels/generic%20discussion"
}
] | closed | false | null | [] | null | [
"Thanks a lot, @aleSuglia for the very detailed and introductive feature request.\r\nIt seems like we could build something pretty useful here indeed.\r\n\r\nOne of the questions here is that Arrow doesn't have built-in support for generic \"tensors\" in records but there might be ways to do that in a clean way. We'll probably try to tackle this during the summer.",
"I was looking into Facebook MMF and apparently they decided to use LMDB to store additional features associated with every example: https://github.com/facebookresearch/mmf/blob/master/mmf/datasets/databases/features_database.py\r\n\r\n",
"I saw the Mozilla common_voice dataset in model hub, which has mp3 audio recordings as part it. It's use predominantly maybe in ASR and TTS, but dataset is a Language + Voice Dataset similar to @aleSuglia's point about Language + Vision. \r\n\r\nhttps://huggingface.co/datasets/common_voice",
"Hey @thomwolf, are there any updates on this? I would love to contribute if possible!\r\n\r\nThanks, \r\nAlessandro ",
"Hi @aleSuglia :) In today's new release 1.17 of `datasets` we introduce a new feature type `Image` that allows to store images directly in a dataset, next to text features and labels for example. There is also an `Audio` feature type, for datasets containing audio data. For tensors there are `Array2D`, `Array3D`, etc. feature types\r\n\r\nNote that both Image and Audio feature types take care of decoding the images/audio data if needed. The returned images are PIL images, and the audio signals are decoded as numpy arrays.\r\n\r\nAnd `datasets` also leverage end-to-end zero copy from the arrow data for all of them, for maximum speed :)"
] | "2020-06-11T13:42:18Z" | "2022-02-10T13:26:35Z" | "2022-02-10T13:26:35Z" | CONTRIBUTOR | null | null | null | # Background
In recent years many researchers have advocated that learning meaning from text-only datasets is just like asking a human to "learn to speak by listening to the radio" [[E. Bender and A. Koller,2020](https://openreview.net/forum?id=GKTvAcb12b), [Y. Bisk et. al, 2020](https://arxiv.org/abs/2004.10151)]. Therefore, multi-modal datasets are of paramount importance for the NLP community and for next-generation models. For this reason, I raised a [concern](https://github.com/huggingface/nlp/pull/236#issuecomment-639832029) related to the best way to integrate external features in NLP datasets (e.g., visual features associated with an image, audio features associated with a recording, etc.). This would be of great importance for a more systematic way of representing data for ML models that are learning from multi-modal data.
# Language + Vision
## Use case
Typically, people working on Language+Vision tasks, have a reference dataset (either in JSON or JSONL format) and for each example, they have an identifier that specifies the reference image. For a practical example, you can refer to the [GQA](https://cs.stanford.edu/people/dorarad/gqa/download.html#seconddown) dataset.
Currently, images are represented by either pooling-based features (average pooling of ResNet or VGGNet features, see [DeVries et.al, 2017](https://arxiv.org/abs/1611.08481), [Shekhar et.al, 2019](https://www.aclweb.org/anthology/N19-1265.pdf)) where you have a single vector for every image. Another option is to use a set of feature maps for every image extracted from a specific layer of a CNN (see [Xu et.al, 2015](https://arxiv.org/abs/1502.03044)). A more recent option, especially with large-scale multi-modal transformers [Li et. al, 2019](https://arxiv.org/abs/1908.03557), is to use FastRCNN features.
For all these types of features, people use one of the following formats:
1. [HDF5](https://pypi.org/project/h5py/)
2. [NumPy](https://numpy.org/doc/stable/reference/generated/numpy.savez.html)
3. [LMDB](https://lmdb.readthedocs.io/en/release/)
## Implementation considerations
I was thinking about possible ways of implementing this feature. As mentioned above, depending on the model, different visual features can be used. This step usually relies on another model (say ResNet-101) that is used to generate the visual features for each image used in the dataset. Typically, this step is done in a separate script that completes the feature generation procedure. The usual processing steps for these datasets are the following:
1. Download dataset
2. Download images associated with the dataset
3. Write a script that generates the visual features for every image and store them in a specific file
4. Create a DataLoader that maps the visual features to the corresponding language example
In my personal projects, I've decided to ignore HDF5 because it doesn't have out-of-the-box support for multi-processing (see this PyTorch [issue](https://github.com/pytorch/pytorch/issues/11929)). I've been successfully using a NumPy compressed file for each image so that I can store any sort of information in it.
For ease of use of all these Language+Vision datasets, it would be really handy to have a way to associate the visual features with the text and store them in an efficient way. That's why I immediately thought about the HuggingFace NLP backend based on Apache Arrow. The assumption here is that the external modality will be mapped to an N-dimensional tensor and therefore easily represented by a NumPy array.
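As a hedged sketch of what that could look like with the feature types available today (the 36×2048 region-feature shape below is just an assumption for illustration), the visual features could sit next to the text in the same Arrow-backed dataset:
```python
import numpy as np
import datasets

features = datasets.Features(
    {
        "question": datasets.Value("string"),
        "image_id": datasets.Value("string"),
        "visual_feats": datasets.Array2D(shape=(36, 2048), dtype="float32"),
    }
)

ds = datasets.Dataset.from_dict(
    {
        "question": ["Is there a dog in the picture?"],
        "image_id": ["img_0001"],
        "visual_feats": [np.zeros((36, 2048), dtype="float32")],
    },
    features=features,
)
```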
Looking forward to hearing your thoughts about it! | {
"+1": 18,
"-1": 0,
"confused": 0,
"eyes": 4,
"heart": 1,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 23,
"url": "https://api.github.com/repos/huggingface/datasets/issues/263/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/263/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/4750 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4750/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4750/comments | https://api.github.com/repos/huggingface/datasets/issues/4750/events | https://github.com/huggingface/datasets/issues/4750 | 1,319,333,645 | I_kwDODunzps5Oo28N | 4,750 | Easily create loading script for benchmark comprising multiple huggingface datasets | {
"avatar_url": "https://avatars.githubusercontent.com/u/3775944?v=4",
"events_url": "https://api.github.com/users/JoelNiklaus/events{/privacy}",
"followers_url": "https://api.github.com/users/JoelNiklaus/followers",
"following_url": "https://api.github.com/users/JoelNiklaus/following{/other_user}",
"gists_url": "https://api.github.com/users/JoelNiklaus/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/JoelNiklaus",
"id": 3775944,
"login": "JoelNiklaus",
"node_id": "MDQ6VXNlcjM3NzU5NDQ=",
"organizations_url": "https://api.github.com/users/JoelNiklaus/orgs",
"received_events_url": "https://api.github.com/users/JoelNiklaus/received_events",
"repos_url": "https://api.github.com/users/JoelNiklaus/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/JoelNiklaus/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/JoelNiklaus/subscriptions",
"type": "User",
"url": "https://api.github.com/users/JoelNiklaus"
} | [] | closed | false | null | [] | null | [
"Hi ! I think the simplest is to copy paste the `_split_generators` code from the other datasets and do a bunch of if-else, as in the glue dataset: https://huggingface.co/datasets/glue/blob/main/glue.py#L467",
"Ok, I see. Thank you"
] | "2022-07-27T10:13:38Z" | "2022-07-27T13:58:07Z" | "2022-07-27T13:58:07Z" | CONTRIBUTOR | null | null | null | Hi,
I would like to create a loading script for a benchmark comprising multiple huggingface datasets.
The function _split_generators needs to return the files for the respective dataset. However, the files are not always in the same location for each dataset. I want to just make a wrapper dataset that provides a single interface to all the underlying datasets.
I thought about downloading the files with the load_dataset function and then providing the link to the cached file. But this seems a bit inelegant to me. What approach would you propose to do this?
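For concreteness, a rough skeleton of the per-config branching approach (a hedged sketch; the builder name, config names, and URLs are placeholders rather than real sub-datasets):
```python
import datasets

# hypothetical per-dataset locations; each real sub-dataset lives somewhere else
_URLS = {
    "sub_dataset_a": "https://example.com/a/train.txt",
    "sub_dataset_b": "https://example.com/b/train.txt",
}


class MyBenchmark(datasets.GeneratorBasedBuilder):
    BUILDER_CONFIGS = [datasets.BuilderConfig(name=name) for name in _URLS]

    def _info(self):
        return datasets.DatasetInfo(
            features=datasets.Features({"text": datasets.Value("string")})
        )

    def _split_generators(self, dl_manager):
        # branch on the selected config, since every sub-dataset stores its files elsewhere
        path = dl_manager.download_and_extract(_URLS[self.config.name])
        return [
            datasets.SplitGenerator(
                name=datasets.Split.TRAIN, gen_kwargs={"filepath": path}
            )
        ]

    def _generate_examples(self, filepath):
        with open(filepath, encoding="utf-8") as f:
            for idx, line in enumerate(f):
                yield idx, {"text": line.strip()}
```
Each config then points at its own download location, so the wrapper stays a single loading script.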
Please let me know if you have any questions.
Cheers,
Joel | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4750/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4750/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/2539 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2539/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2539/comments | https://api.github.com/repos/huggingface/datasets/issues/2539/events | https://github.com/huggingface/datasets/pull/2539 | 927,952,429 | MDExOlB1bGxSZXF1ZXN0Njc2MDI5MDY5 | 2,539 | remove wi_locness dataset due to licensing issues | {
"avatar_url": "https://avatars.githubusercontent.com/u/4944799?v=4",
"events_url": "https://api.github.com/users/aseifert/events{/privacy}",
"followers_url": "https://api.github.com/users/aseifert/followers",
"following_url": "https://api.github.com/users/aseifert/following{/other_user}",
"gists_url": "https://api.github.com/users/aseifert/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/aseifert",
"id": 4944799,
"login": "aseifert",
"node_id": "MDQ6VXNlcjQ5NDQ3OTk=",
"organizations_url": "https://api.github.com/users/aseifert/orgs",
"received_events_url": "https://api.github.com/users/aseifert/received_events",
"repos_url": "https://api.github.com/users/aseifert/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/aseifert/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/aseifert/subscriptions",
"type": "User",
"url": "https://api.github.com/users/aseifert"
} | [] | closed | false | null | [] | null | [
"Hi ! I'm sorry to hear that.\r\nThough we are not redistributing the dataset, we just provide a python script that downloads and process the dataset from its original source hosted at https://www.cl.cam.ac.uk\r\n\r\nTherefore I'm not sure what's the issue with licensing. What do you mean exactly ?",
"I think that the main issue is that the licesenses of the data are not made clear in the huggingface hub – other people wrongly assumed that the data was license-free, which resulted in commercial use, which is against the licenses.\r\nIs it possible to add the licenses from the original download to huggingface? that would help clear any confusion (licenses can be found here: https://www.cl.cam.ac.uk/research/nl/bea2019st/data/wi+locness_v2.1.bea19.tar.gz)",
"Thanks for the clarification @SimonHFL \r\nYou're completely right, we need to show the licenses.\r\nI just added them here: https://huggingface.co/datasets/wi_locness#licensing-information",
"Hi guys, I'm one of the authors of this dataset. \r\n\r\nTo clarify, we're happy for you to keep the data in the repo on 2 conditions:\r\n1. You don't host the data yourself.\r\n2. You make it clear that anyone who downloads the data via HuggingFace should read and abide by the license. \r\n\r\nI think you've now met these conditions, so we're all good, but I just wanted to make it clear in case there are any issues in the future. Thanks again to @aseifert for bringing this to our attention! :)",
"Thanks for your message @chrisjbryant :)\r\nI'm closing this PR then.\r\n\r\nAnd thanks for reporting @aseifert"
] | "2021-06-23T07:35:32Z" | "2021-06-25T14:52:42Z" | "2021-06-25T14:52:42Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/2539.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2539",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/2539.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2539"
} | It was brought to my attention that this dataset's license is not only missing, but also prohibits redistribution. I contacted the original author to apologize for this oversight and asked if we could still use it, but unfortunately we can't and the author kindly asked to take down this dataset. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2539/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2539/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/212 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/212/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/212/comments | https://api.github.com/repos/huggingface/datasets/issues/212/events | https://github.com/huggingface/datasets/pull/212 | 626,580,198 | MDExOlB1bGxSZXF1ZXN0NDI0NTQ1NjAy | 212 | have 'add' and 'add_batch' for metrics | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | [] | "2020-05-28T14:56:47Z" | "2020-05-29T10:41:05Z" | "2020-05-29T10:41:04Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/212.diff",
"html_url": "https://github.com/huggingface/datasets/pull/212",
"merged_at": "2020-05-29T10:41:04Z",
"patch_url": "https://github.com/huggingface/datasets/pull/212.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/212"
} | This should fix #116
Previously the `.add` method of metrics expected a batch of examples.
Now `.add` expects one prediction/reference and `.add_batch` expects a batch.
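For illustration, a hedged usage sketch of the two entry points (the metric name and values are placeholders):
```python
from datasets import load_metric

metric = load_metric("accuracy")

# one prediction/reference pair at a time
metric.add(prediction=1, reference=1)

# or a whole batch at once
metric.add_batch(predictions=[0, 1, 1], references=[0, 1, 0])

score = metric.compute()
```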
I think it is more coherent with the way the ArrowWriter works. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/212/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/212/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2736 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2736/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2736/comments | https://api.github.com/repos/huggingface/datasets/issues/2736/events | https://github.com/huggingface/datasets/issues/2736 | 956,895,199 | MDU6SXNzdWU5NTY4OTUxOTk= | 2,736 | Add Microsoft Building Footprints dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [
{
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset",
"id": 2067376369,
"name": "dataset request",
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request"
},
{
"color": "bfdadc",
"default": false,
"description": "Vision datasets",
"id": 3608941089,
"name": "vision",
"node_id": "LA_kwDODunzps7XHBIh",
"url": "https://api.github.com/repos/huggingface/datasets/labels/vision"
}
] | open | false | null | [] | null | [
"Motivation: this can be a useful dataset for researchers working on climate change adaptation, urban studies, geography, etc. I'll see if I can figure out how to add it!"
] | "2021-07-30T16:17:08Z" | "2021-12-08T12:09:03Z" | null | MEMBER | null | null | null | ## Adding a Dataset
- **Name:** Microsoft Building Footprints
- **Description:** With the goal to increase the coverage of building footprint data available as open data for OpenStreetMap and humanitarian efforts, we have released millions of building footprints as open data available to download free of charge.
- **Paper:** *link to the dataset paper if available*
- **Data:** https://www.microsoft.com/en-us/maps/building-footprints
- **Motivation:** this can be a useful dataset for researchers working on climate change adaptation, urban studies, geography, etc.
Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
Reported by: @sashavor | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2736/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2736/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3263 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3263/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3263/comments | https://api.github.com/repos/huggingface/datasets/issues/3263/events | https://github.com/huggingface/datasets/issues/3263 | 1,052,552,516 | I_kwDODunzps4-vK1E | 3,263 | FET DATA | {
"avatar_url": "https://avatars.githubusercontent.com/u/90987031?v=4",
"events_url": "https://api.github.com/users/FStell01/events{/privacy}",
"followers_url": "https://api.github.com/users/FStell01/followers",
"following_url": "https://api.github.com/users/FStell01/following{/other_user}",
"gists_url": "https://api.github.com/users/FStell01/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/FStell01",
"id": 90987031,
"login": "FStell01",
"node_id": "MDQ6VXNlcjkwOTg3MDMx",
"organizations_url": "https://api.github.com/users/FStell01/orgs",
"received_events_url": "https://api.github.com/users/FStell01/received_events",
"repos_url": "https://api.github.com/users/FStell01/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/FStell01/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/FStell01/subscriptions",
"type": "User",
"url": "https://api.github.com/users/FStell01"
} | [
{
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset",
"id": 2067376369,
"name": "dataset request",
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request"
}
] | closed | false | null | [] | null | [] | "2021-11-13T05:46:06Z" | "2021-11-13T13:31:47Z" | "2021-11-13T13:31:47Z" | NONE | null | null | null | ## Adding a Dataset
- **Name:** *name of the dataset*
- **Description:** *short description of the dataset (or link to social media or blog post)*
- **Paper:** *link to the dataset paper if available*
- **Data:** *link to the Github repository or current dataset location*
- **Motivation:** *what are some good reasons to have this dataset*
Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md). | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3263/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3263/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/2878 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2878/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2878/comments | https://api.github.com/repos/huggingface/datasets/issues/2878/events | https://github.com/huggingface/datasets/issues/2878 | 990,093,316 | MDU6SXNzdWU5OTAwOTMzMTY= | 2,878 | NotADirectoryError: [WinError 267] During load_from_disk | {
"avatar_url": "https://avatars.githubusercontent.com/u/1875064?v=4",
"events_url": "https://api.github.com/users/Grassycup/events{/privacy}",
"followers_url": "https://api.github.com/users/Grassycup/followers",
"following_url": "https://api.github.com/users/Grassycup/following{/other_user}",
"gists_url": "https://api.github.com/users/Grassycup/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Grassycup",
"id": 1875064,
"login": "Grassycup",
"node_id": "MDQ6VXNlcjE4NzUwNjQ=",
"organizations_url": "https://api.github.com/users/Grassycup/orgs",
"received_events_url": "https://api.github.com/users/Grassycup/received_events",
"repos_url": "https://api.github.com/users/Grassycup/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Grassycup/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Grassycup/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Grassycup"
} | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | open | false | null | [] | null | [] | "2021-09-07T15:15:05Z" | "2021-09-07T15:15:05Z" | null | NONE | null | null | null | ## Describe the bug
Trying to load a saved dataset or dataset directory from Amazon S3 on a Windows machine fails.
Performing the same operation succeeds in a non-Windows environment (AWS SageMaker).
## Steps to reproduce the bug
```python
# Followed https://huggingface.co/docs/datasets/filesystems.html#loading-a-processed-dataset-from-s3
from datasets import load_from_disk
from datasets.filesystems import S3FileSystem
s3_file = "output of save_to_disk"
s3_filesystem = S3FileSystem()
load_from_disk(s3_file, fs=s3_filesystem)
```
## Expected results
load_from_disk succeeds without error
## Actual results
Seems like it succeeds in pulling the file into a windows temp directory, as it exists in my system, but fails to process it.
```
Exception ignored in: <finalize object at 0x26409231ce0; dead>
Traceback (most recent call last):
File "C:\Users\grassycup\Anaconda3\envs\hello.world\lib\weakref.py", line 566, in __call__
return info.func(*info.args, **(info.kwargs or {}))
File "C:\Users\grassycup\Anaconda3\envs\hello.world\lib\tempfile.py", line 817, in _cleanup
cls._rmtree(name)
File "C:\Users\grassycup\Anaconda3\envs\hello.world\lib\tempfile.py", line 813, in _rmtree
_shutil.rmtree(name, onerror=onerror)
File "C:\Users\grassycup\Anaconda3\envs\hello.world\lib\shutil.py", line 740, in rmtree
return _rmtree_unsafe(path, onerror)
File "C:\Users\grassycup\Anaconda3\envs\hello.world\lib\shutil.py", line 613, in _rmtree_unsafe
_rmtree_unsafe(fullname, onerror)
File "C:\Users\grassycup\Anaconda3\envs\hello.world\lib\shutil.py", line 613, in _rmtree_unsafe
_rmtree_unsafe(fullname, onerror)
File "C:\Users\grassycup\Anaconda3\envs\hello.world\lib\shutil.py", line 613, in _rmtree_unsafe
_rmtree_unsafe(fullname, onerror)
[Previous line repeated 2 more times]
File "C:\Users\grassycup\Anaconda3\envs\hello.world\lib\shutil.py", line 618, in _rmtree_unsafe
onerror(os.unlink, fullname, sys.exc_info())
File "C:\Users\grassycup\Anaconda3\envs\hello.world\lib\tempfile.py", line 805, in onerror
cls._rmtree(path)
File "C:\Users\grassycup\Anaconda3\envs\hello.world\lib\tempfile.py", line 813, in _rmtree
_shutil.rmtree(name, onerror=onerror)
File "C:\Users\grassycup\Anaconda3\envs\hello.world\lib\shutil.py", line 740, in rmtree
return _rmtree_unsafe(path, onerror)
File "C:\Users\grassycup\Anaconda3\envs\hello.world\lib\shutil.py", line 599, in _rmtree_unsafe
onerror(os.scandir, path, sys.exc_info())
File "C:\Users\grassycup\Anaconda3\envs\hello.world\lib\shutil.py", line 596, in _rmtree_unsafe
with os.scandir(path) as scandir_it:
NotADirectoryError: [WinError 267] The directory name is invalid: 'C:\\Users\\grassycup\\AppData\\Local\\Temp\\tmp45f_qbma\\tests3bucket\\output\\test_output\\train\\dataset.arrow'
Exception ignored in: <finalize object at 0x264091c7880; dead>
Traceback (most recent call last):
File "C:\Users\grassycup\Anaconda3\envs\hello.world\lib\weakref.py", line 566, in __call__
return info.func(*info.args, **(info.kwargs or {}))
File "C:\Users\grassycup\Anaconda3\envs\hello.world\lib\tempfile.py", line 817, in _cleanup
cls._rmtree(name)
File "C:\Users\grassycup\Anaconda3\envs\hello.world\lib\tempfile.py", line 813, in _rmtree
_shutil.rmtree(name, onerror=onerror)
File "C:\Users\grassycup\Anaconda3\envs\hello.world\lib\shutil.py", line 740, in rmtree
return _rmtree_unsafe(path, onerror)
File "C:\Users\grassycup\Anaconda3\envs\hello.world\lib\shutil.py", line 613, in _rmtree_unsafe
_rmtree_unsafe(fullname, onerror)
File "C:\Users\grassycup\Anaconda3\envs\hello.world\lib\shutil.py", line 613, in _rmtree_unsafe
_rmtree_unsafe(fullname, onerror)
File "C:\Users\grassycup\Anaconda3\envs\hello.world\lib\shutil.py", line 613, in _rmtree_unsafe
_rmtree_unsafe(fullname, onerror)
[Previous line repeated 2 more times]
File "C:\Users\grassycup\Anaconda3\envs\hello.world\lib\shutil.py", line 618, in _rmtree_unsafe
onerror(os.unlink, fullname, sys.exc_info())
File "C:\Users\grassycup\Anaconda3\envs\hello.world\lib\tempfile.py", line 805, in onerror
cls._rmtree(path)
File "C:\Users\grassycup\Anaconda3\envs\hello.world\lib\tempfile.py", line 813, in _rmtree
_shutil.rmtree(name, onerror=onerror)
File "C:\Users\grassycup\Anaconda3\envs\hello.world\lib\shutil.py", line 740, in rmtree
return _rmtree_unsafe(path, onerror)
File "C:\Users\grassycup\Anaconda3\envs\hello.world\lib\shutil.py", line 599, in _rmtree_unsafe
onerror(os.scandir, path, sys.exc_info())
File "C:\Users\grassycup\Anaconda3\envs\hello.world\lib\shutil.py", line 596, in _rmtree_unsafe
with os.scandir(path) as scandir_it:
NotADirectoryError: [WinError 267] The directory name is invalid:
'C:\\Users\\grassycup\\AppData\\Local\\Temp\\tmp45f_qbma\\tests3bucket\\output\\test_output\\train\\dataset.arrow'
```
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.11.0
- Platform: Windows-10-10.0.19042-SP0
- Python version: 3.8.11
- PyArrow version: 3.0.0
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2878/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2878/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5250 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5250/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5250/comments | https://api.github.com/repos/huggingface/datasets/issues/5250/events | https://github.com/huggingface/datasets/pull/5250 | 1,451,720,030 | PR_kwDODunzps5DB-1y | 5,250 | Change release procedure to use only pull requests | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5250). All of your documentation changes will be reflected on that endpoint.",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5250). All of your documentation changes will be reflected on that endpoint.",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5250). All of your documentation changes will be reflected on that endpoint.",
"Little recap:\r\n- The release-conda GH action was properly triggered by push-tag event: therefore I guess this event is also created when we publish a release and create a tag within it (as it is the case in the new procedure)\r\n - However, the package was only uploaded to huggingface channel and not to conda-forge channel\r\n - [x] Why? Need to address this.\r\n - Reply by @lhoestq: https://github.com/huggingface/datasets/pull/5250#discussion_r1025047531\r\n - We only maintain the huggingface channel\r\n - The conda-forge channel is maintained by the community; the 2.7.0 has been finally added as well to this channel \r\n- The generate-documentation GH action will be triggered by the push-to-branch event if we align the name of the release branch with the expected regex `v*-release`\r\n - [x] The naming has been aligned in the new procedure\r\n - [ ] Question: why do we have different triggering events for generate-doc and release-conda? Maybe we could set the same for both: either push-tag (when publishing the release), or push-to-branch\r\n - I think it will be better to use the push-tag event because in the new release procedure this happens later (when we publish the release), once we have already tested that everything works using the test-PyPI; on the contrary, the push-to-branch event happens before, even before opening the release PR: we could see afterwards that there is an issue, and cancel the Pull Request, but the docs and conda-package will already be published.\r\n- For the naming of the dev-version branch/PR, instead of having a complicated version naming, I'm proposing:\r\n - Using always the same branch name `dev-version`\r\n - Just include a step to delete this branch locally if it exists: `git branch -D dev-version`\r\n - The remote version will not exist because it is deleted once the PR is merged\r\n - This approach is approved by @lhoestq: https://github.com/huggingface/datasets/pull/5250#discussion_r1025048300",
"Just one question to be addressed: why do we have different triggering events for generate-doc and release-conda? Maybe we could set the same for both: either push-tag (when publishing the release), or push-to-branch\r\n\r\nI think it will be better to use the push-tag event because in the new release procedure this happens later (when we publish the release), once we have already tested that everything works using the test-PyPI; on the contrary, the push-to-branch event happens before, even before opening the release PR: we could see afterwards that there is an issue, and cancel the Pull Request, but the docs and conda-package will already be published.\r\n\r\nWe could even use the release-published event instead: [8694901](https://github.com/huggingface/datasets/pull/5250/commits/86949013c9dc59a07b55fad5b78104b8a03f60cd)\r\n",
"@lhoestq now that we have push-tag event for both build_documentation and release-conda, we have no constraint on the naming of the release branch:\r\n- we could name it simpler: maybe as you suggested above: https://github.com/huggingface/datasets/pull/5250#discussion_r1024119018\r\n `release-VERSION` instead of `vVERSION-release` (we do not use the prefix \"v\" anywhere in our repo)"
] | "2022-11-16T14:35:32Z" | "2022-11-22T16:30:58Z" | "2022-11-22T16:27:48Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/5250.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5250",
"merged_at": "2022-11-22T16:27:48Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5250.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5250"
} | This PR changes the release procedure so that:
- it only makes changes to the main branch via pull requests
- it is no longer necessary to directly commit/push to main branch
Close #5251.
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5250/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5250/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5144 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5144/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5144/comments | https://api.github.com/repos/huggingface/datasets/issues/5144/events | https://github.com/huggingface/datasets/issues/5144 | 1,417,974,731 | I_kwDODunzps5UhJPL | 5,144 | Inconsistent documentation on map remove_columns | {
"avatar_url": "https://avatars.githubusercontent.com/u/22047467?v=4",
"events_url": "https://api.github.com/users/zhaowei-wang-nlp/events{/privacy}",
"followers_url": "https://api.github.com/users/zhaowei-wang-nlp/followers",
"following_url": "https://api.github.com/users/zhaowei-wang-nlp/following{/other_user}",
"gists_url": "https://api.github.com/users/zhaowei-wang-nlp/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/zhaowei-wang-nlp",
"id": 22047467,
"login": "zhaowei-wang-nlp",
"node_id": "MDQ6VXNlcjIyMDQ3NDY3",
"organizations_url": "https://api.github.com/users/zhaowei-wang-nlp/orgs",
"received_events_url": "https://api.github.com/users/zhaowei-wang-nlp/received_events",
"repos_url": "https://api.github.com/users/zhaowei-wang-nlp/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/zhaowei-wang-nlp/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zhaowei-wang-nlp/subscriptions",
"type": "User",
"url": "https://api.github.com/users/zhaowei-wang-nlp"
} | [
{
"color": "0075ca",
"default": true,
"description": "Improvements or additions to documentation",
"id": 1935892861,
"name": "documentation",
"node_id": "MDU6TGFiZWwxOTM1ODkyODYx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/documentation"
},
{
"color": "cfd3d7",
"default": true,
"description": "This issue or pull request already exists",
"id": 1935892865,
"name": "duplicate",
"node_id": "MDU6TGFiZWwxOTM1ODkyODY1",
"url": "https://api.github.com/repos/huggingface/datasets/labels/duplicate"
},
{
"color": "7057ff",
"default": true,
"description": "Good for newcomers",
"id": 1935892877,
"name": "good first issue",
"node_id": "MDU6TGFiZWwxOTM1ODkyODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/good%20first%20issue"
},
{
"color": "DF8D62",
"default": false,
"description": "",
"id": 4614514401,
"name": "hacktoberfest",
"node_id": "LA_kwDODunzps8AAAABEwvm4Q",
"url": "https://api.github.com/repos/huggingface/datasets/labels/hacktoberfest"
}
] | closed | false | null | [] | null | [
"Thanks for reporting, @zhaowei-wang-nlp.\r\n\r\nYou are right, the documentation is confusing on the behavior of `remove_columns`. We should better explain it. ",
"This is a duplicate of https://github.com/huggingface/datasets/issues/2343.",
"I'm closing this issue because as @mariosasko pointed out, it is a duplicate of:\r\n- #2343"
] | "2022-10-21T08:37:53Z" | "2022-11-15T14:15:10Z" | "2022-11-15T14:15:10Z" | NONE | null | null | null | ### Describe the bug
The page [process](https://huggingface.co/docs/datasets/process) says this about the parameter `remove_columns` of the function `map`:
When you remove a column, it is only removed after the example has been provided to the mapped function.
So it seems that the `remove_columns` parameter removes columns after the mapped function runs.
However, another page, [the documentation of the function map](https://huggingface.co/docs/datasets/v2.6.1/en/package_reference/main_classes#datasets.Dataset.map.remove_columns) says:
Columns will be removed before updating the examples with the output of `function`, i.e. if `function` is adding columns with names in remove_columns, these columns will be kept.
So one page says "after the mapped function" and another says "before the mapped function."
Is there something wrong?
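(For context, here is a quick and purely illustrative way to observe the actual behavior; the report itself is about the documentation wording, and the column names below are made up:)
```python
from datasets import Dataset

ds = Dataset.from_dict({"text": ["a", "b"], "label": [0, 1]})

# "text" is listed in remove_columns, but the mapped function also returns a "text" key,
# so whichever description is correct decides whether the returned "text" is kept or dropped.
mapped = ds.map(
    lambda example: {"text": example["text"].upper(), "length": len(example["text"])},
    remove_columns=["text"],
)
print(mapped.column_names)
```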
### Steps to reproduce the bug
Not about code.
### Expected behavior
Consistent descriptions of the behavior of the `remove_columns` parameter of the `map` function across both pages.
### Environment info
datasets V2.6.0 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5144/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5144/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/3017 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3017/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3017/comments | https://api.github.com/repos/huggingface/datasets/issues/3017/events | https://github.com/huggingface/datasets/pull/3017 | 1,015,215,528 | PR_kwDODunzps4spE9m | 3,017 | Remove unused parameter in xdirname | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [] | closed | false | null | [] | null | [] | "2021-10-04T13:55:53Z" | "2021-10-05T11:37:01Z" | "2021-10-05T11:37:00Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/3017.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3017",
"merged_at": "2021-10-05T11:37:00Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3017.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3017"
} | Minor fix to remove unused args `*p` in `xdirname`. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3017/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3017/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4866 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4866/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4866/comments | https://api.github.com/repos/huggingface/datasets/issues/4866/events | https://github.com/huggingface/datasets/pull/4866 | 1,344,809,132 | PR_kwDODunzps49e1CP | 4,866 | amend docstring for dunder | {
"avatar_url": "https://avatars.githubusercontent.com/u/37704298?v=4",
"events_url": "https://api.github.com/users/schafsam/events{/privacy}",
"followers_url": "https://api.github.com/users/schafsam/followers",
"following_url": "https://api.github.com/users/schafsam/following{/other_user}",
"gists_url": "https://api.github.com/users/schafsam/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/schafsam",
"id": 37704298,
"login": "schafsam",
"node_id": "MDQ6VXNlcjM3NzA0Mjk4",
"organizations_url": "https://api.github.com/users/schafsam/orgs",
"received_events_url": "https://api.github.com/users/schafsam/received_events",
"repos_url": "https://api.github.com/users/schafsam/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/schafsam/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/schafsam/subscriptions",
"type": "User",
"url": "https://api.github.com/users/schafsam"
} | [] | open | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_4866). All of your documentation changes will be reflected on that endpoint."
] | "2022-08-19T19:09:15Z" | "2022-09-09T16:33:11Z" | null | NONE | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/4866.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4866",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/4866.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4866"
} | Display dunder methods in the docstring with underlines and not bold markdown. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4866/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4866/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1368 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1368/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1368/comments | https://api.github.com/repos/huggingface/datasets/issues/1368/events | https://github.com/huggingface/datasets/pull/1368 | 760,222,616 | MDExOlB1bGxSZXF1ZXN0NTM1MDkwMjM0 | 1,368 | Re-adding narrativeqa dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/13795113?v=4",
"events_url": "https://api.github.com/users/ghomasHudson/events{/privacy}",
"followers_url": "https://api.github.com/users/ghomasHudson/followers",
"following_url": "https://api.github.com/users/ghomasHudson/following{/other_user}",
"gists_url": "https://api.github.com/users/ghomasHudson/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ghomasHudson",
"id": 13795113,
"login": "ghomasHudson",
"node_id": "MDQ6VXNlcjEzNzk1MTEz",
"organizations_url": "https://api.github.com/users/ghomasHudson/orgs",
"received_events_url": "https://api.github.com/users/ghomasHudson/received_events",
"repos_url": "https://api.github.com/users/ghomasHudson/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ghomasHudson/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ghomasHudson/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ghomasHudson"
} | [] | closed | false | null | [] | null | [
"@lhoestq I think I've fixed the dummy data - it finally passes! I'll add the model card now.",
"@lhoestq - pretty happy with it now",
"> Awesome thank you !\r\n> \r\n> Could you try to reduce the size of the dummy_data.zip file before we merge ? (it's 300KB right now)\r\n> \r\n> To do so feel free to take a look inside it and remove all the unnecessary files and chunks of text, to only keep a few examples. The idea is to have a zip file that is only a few KB\r\n\r\nAh, it only contains 1 example for each split. I think the problem is that I include an entire story (like in the full dataset). We can probably get away with a summarised version.",
"> Nice thank you, can you make it even lighter if possible ? Something round 10KB would be awesone\r\n> We try to keep the repo light so that it doesn't take ages to clone. So we have to make sure the dummy data are as small as possible for every single dataset.\r\n\r\nHave trimmed a little more out of each example now."
] | "2020-12-09T10:53:09Z" | "2020-12-11T13:30:59Z" | "2020-12-11T13:30:59Z" | CONTRIBUTOR | null | 1 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1368.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1368",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/1368.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1368"
} | An update of #309. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1368/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1368/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5150 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5150/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5150/comments | https://api.github.com/repos/huggingface/datasets/issues/5150/events | https://github.com/huggingface/datasets/issues/5150 | 1,420,684,999 | I_kwDODunzps5Ure7H | 5,150 | Problems after upgrading to 2.6.1 | {
"avatar_url": "https://avatars.githubusercontent.com/u/61748653?v=4",
"events_url": "https://api.github.com/users/pietrolesci/events{/privacy}",
"followers_url": "https://api.github.com/users/pietrolesci/followers",
"following_url": "https://api.github.com/users/pietrolesci/following{/other_user}",
"gists_url": "https://api.github.com/users/pietrolesci/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/pietrolesci",
"id": 61748653,
"login": "pietrolesci",
"node_id": "MDQ6VXNlcjYxNzQ4NjUz",
"organizations_url": "https://api.github.com/users/pietrolesci/orgs",
"received_events_url": "https://api.github.com/users/pietrolesci/received_events",
"repos_url": "https://api.github.com/users/pietrolesci/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/pietrolesci/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pietrolesci/subscriptions",
"type": "User",
"url": "https://api.github.com/users/pietrolesci"
} | [] | open | false | null | [] | null | [
"Hi! I can't reproduce the error following these steps. Can you please provide a reproducible example?",
"I faced the same issue:\r\n\r\n### Repro\r\n```\r\n!pip install datasets==2.6.1\r\nimport datasets as Dataset\r\ndataset = Dataset.from_pandas(dataframe)\r\ndataset.save_to_disk(local)\r\n\r\n!pip install datasets==2.5.2\r\nimport datasets as Dataset\r\ndataset = Dataset.load_from_disk(local)\r\n```\r\n\r\n",
"@Lokiiiiii And what are the contents of the \"dataframe\" in your example?",
"I bumped into the issue too. @Lokiiiiii thanks for steps. I \"solved\" if for now by `pip install datasets>=2.6.1` everywhere.",
"Hi all, \r\nI experienced the same issue. \r\nPlease note that the pull request is related to the IMDB example provided in the doc, and is a fix for that, in that context, to make sure that people can follow the doc example and have a working system. \r\nIt does not provide a fix for Datasets itself. ",
"im getting the same error.\r\n- using the base AWS HF container that uses a datasets <2.\r\n- updating the AWS HF container to use dataset 2.4\r\n",
"Same here, running on our SageMaker pipelines. It's only happening for some but not all of our saved Datasets.",
"I am also receiving this error on Sagemaker but not locally, I have noticed that this occurs when the `.dataset/` folder does not contain a single file like:\r\n\r\n`dataset.arrow`\r\n\r\nbut instead contains multiple files like:\r\n\r\n`data-00000-of-00002.arrow`\r\n`data-00001-of-00002.arrow`\r\n\r\nI think that it may have something to do with this recent PR that updated the behaviour of `dataset.save_to_disk` by introducing sharding: https://github.com/huggingface/datasets/pull/5268\r\n\r\nFor now I can get around this by forcing datasets==2.8.0 on machine that creates dataset and in the huggingface instance for training (by running this at the start of training script `os.system(\"pip install datasets==2.8.0\")`)\r\n\r\nTo ensure the dataset is a single shard when saving the dataset locally:\r\n\r\n```python3\r\ndataset.flatten_indices().save_to_disk('path/to/dataset', num_shards=1)\r\n```\r\n\r\n and then manually changing the name afterwards from `path/to/dataset/data-00000-of-00001.arrow` to `path/to/dataset/dataset.arrow` and updating the `path/to/dataset/state.json` to reflect this name change. i.e. by changing `state.json` to this:\r\n\r\n```javascript\r\n{\r\n \"_data_files\": [\r\n {\r\n \"filename\": \"dataset.arrow\"\r\n }\r\n ],\r\n \"_fingerprint\": \"420086f0636f8727\",\r\n \"_format_columns\": null,\r\n \"_format_kwargs\": {},\r\n \"_format_type\": null,\r\n \"_output_all_columns\": false,\r\n \"_split\": null\r\n}\r\n```",
"Does anyone know if this has been resolved?"
] | "2022-10-24T11:32:36Z" | "2023-12-14T14:20:28Z" | null | NONE | null | null | null | ### Describe the bug
Loading a dataset_dict from disk with `load_from_disk` now raises a `KeyError: "length"` that did not occur in v2.5.2.
Context:
- Each individual dataset in the dict is created with `Dataset.from_pandas`
- The dataset_dict is created from a dict of `Dataset`s, e.g., `DatasetDict({"train": train_ds, "validation": val_ds})`
- The pandas dataframe, besides text columns, has a column with a dictionary inside and potentially different keys in each row. The `Dataset.from_pandas` function correctly adds `key: None` to all dictionaries in each row so that the schema can be correctly inferred.
### Steps to reproduce the bug
Steps to reproduce:
- Upgrade to datasets==2.6.1
- Create a dataset from pandas dataframe with `Dataset.from_pandas`
- Create a dataset_dict from a dict of `Dataset`s, e.g., `DatasetDict({"train": train_ds, "validation": val_ds})`
- Save to disk with the `save_to_disk` function (a minimal sketch of these steps is given after this list)
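A minimal sketch of these steps (the dataframe contents are only illustrative, not the real data):
```python
import pandas as pd
from datasets import Dataset, DatasetDict, load_from_disk

# toy dataframe with a dict column whose keys differ per row (illustrative only)
df = pd.DataFrame({
    "text": ["foo", "bar"],
    "meta": [{"a": 1}, {"b": 2}],
})

dataset_dict = DatasetDict({
    "train": Dataset.from_pandas(df),
    "validation": Dataset.from_pandas(df),
})
dataset_dict.save_to_disk("tmp_dataset_dict")

reloaded = load_from_disk("tmp_dataset_dict")  # the KeyError shows up here after upgrading
```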
### Expected behavior
Same as in v2.5.2, that is, loading from disk without errors.
### Environment info
- `datasets` version: 2.6.1
- Platform: Linux-5.4.209-129.367.amzn2int.x86_64-x86_64-with-glibc2.26
- Python version: 3.9.13
- PyArrow version: 9.0.0
- Pandas version: 1.5.1 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5150/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5150/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3143 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3143/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3143/comments | https://api.github.com/repos/huggingface/datasets/issues/3143/events | https://github.com/huggingface/datasets/issues/3143 | 1,033,569,655 | I_kwDODunzps49mwV3 | 3,143 | Provide a way to check if the features (in info) match with the data of a split | {
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/severo",
"id": 1676121,
"login": "severo",
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"organizations_url": "https://api.github.com/users/severo/orgs",
"received_events_url": "https://api.github.com/users/severo/received_events",
"repos_url": "https://api.github.com/users/severo/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"type": "User",
"url": "https://api.github.com/users/severo"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
},
{
"color": "E5583E",
"default": false,
"description": "Related to the dataset viewer on huggingface.co",
"id": 3470211881,
"name": "dataset-viewer",
"node_id": "LA_kwDODunzps7O1zsp",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer"
}
] | open | false | null | [] | null | [
"Related: #3144 "
] | "2021-10-22T13:13:36Z" | "2021-10-22T13:17:56Z" | null | CONTRIBUTOR | null | null | null | **Is your feature request related to a problem? Please describe.**
I understand that, currently, the loaded data does not always have the type described in the info features.
**Describe the solution you'd like**
Provide a way to check if the rows have the type described by info features
**Describe alternatives you've considered**
Always check it, and raise an error when loading the data if the types don't match the features.
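As a rough illustration of the kind of check meant here (only a sketch, not a proposed API; it re-encodes each row against the declared features and fails if a value doesn't fit, and the dataset used is arbitrary):
```python
from datasets import load_dataset

dataset = load_dataset("glue", "sst2", split="train[:100]")  # any dataset works for illustration

def check_split_matches_features(ds):
    # try to re-encode every row with the declared features;
    # an exception here means the data does not match the schema
    for i, example in enumerate(ds):
        try:
            ds.features.encode_example(example)
        except Exception as err:
            raise ValueError(f"row {i} does not match the declared features: {err}") from err

check_split_matches_features(dataset)
```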
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3143/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3143/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/1132 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1132/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1132/comments | https://api.github.com/repos/huggingface/datasets/issues/1132/events | https://github.com/huggingface/datasets/pull/1132 | 757,301,368 | MDExOlB1bGxSZXF1ZXN0NTMyNzAwNTY5 | 1,132 | Add Urdu Sentiment Corpus (USC). | {
"avatar_url": "https://avatars.githubusercontent.com/u/44389205?v=4",
"events_url": "https://api.github.com/users/chaitnayabasava/events{/privacy}",
"followers_url": "https://api.github.com/users/chaitnayabasava/followers",
"following_url": "https://api.github.com/users/chaitnayabasava/following{/other_user}",
"gists_url": "https://api.github.com/users/chaitnayabasava/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/chaitnayabasava",
"id": 44389205,
"login": "chaitnayabasava",
"node_id": "MDQ6VXNlcjQ0Mzg5MjA1",
"organizations_url": "https://api.github.com/users/chaitnayabasava/orgs",
"received_events_url": "https://api.github.com/users/chaitnayabasava/received_events",
"repos_url": "https://api.github.com/users/chaitnayabasava/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/chaitnayabasava/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/chaitnayabasava/subscriptions",
"type": "User",
"url": "https://api.github.com/users/chaitnayabasava"
} | [] | closed | false | null | [] | null | [] | "2020-12-04T18:12:24Z" | "2020-12-04T20:52:48Z" | "2020-12-04T20:52:48Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1132.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1132",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/1132.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1132"
} | Added Urdu Sentiment Corpus. More details about the dataset over <a href="https://github.com/MuhammadYaseenKhan/Urdu-Sentiment-Corpus">here</a>. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1132/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1132/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1422 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1422/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1422/comments | https://api.github.com/repos/huggingface/datasets/issues/1422/events | https://github.com/huggingface/datasets/issues/1422 | 760,707,113 | MDU6SXNzdWU3NjA3MDcxMTM= | 1,422 | Can't map dataset (loaded from csv) | {
"avatar_url": "https://avatars.githubusercontent.com/u/28161779?v=4",
"events_url": "https://api.github.com/users/SolomidHero/events{/privacy}",
"followers_url": "https://api.github.com/users/SolomidHero/followers",
"following_url": "https://api.github.com/users/SolomidHero/following{/other_user}",
"gists_url": "https://api.github.com/users/SolomidHero/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/SolomidHero",
"id": 28161779,
"login": "SolomidHero",
"node_id": "MDQ6VXNlcjI4MTYxNzc5",
"organizations_url": "https://api.github.com/users/SolomidHero/orgs",
"received_events_url": "https://api.github.com/users/SolomidHero/received_events",
"repos_url": "https://api.github.com/users/SolomidHero/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/SolomidHero/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SolomidHero/subscriptions",
"type": "User",
"url": "https://api.github.com/users/SolomidHero"
} | [] | closed | false | null | [] | null | [
"Please could you post the whole script? I can't reproduce your issue. After updating the feature names/labels to match with the data, everything works fine for me. Try to update datasets/transformers to the newest version.",
"Actually, the problem was how `tokenize` function was defined. This was completely my side mistake, so there are really no needs in this issue anymore"
] | "2020-12-09T22:05:42Z" | "2020-12-17T18:13:40Z" | "2020-12-17T18:13:40Z" | NONE | null | null | null | Hello! I am trying to load single csv file with two columns: ('label': str, 'text' str), where is label is str of two possible classes.
The steps below are similar to [this notebook](https://colab.research.google.com/drive/1-JIJlao4dI-Ilww_NnTc0rxtp-ymgDgM?usp=sharing), where a BERT model and tokenizer are used to classify the IMDB dataset. The only difference is that the dataset here is loaded from a .csv file.
Here is how I load it:
```python
data_path = 'data.csv'
data = pd.read_csv(data_path)
# process class name to indices
classes = ['neg', 'pos']
class_to_idx = { cl: i for i, cl in enumerate(classes) }
# now data is like {'label': int, 'text' str}
data['label'] = data['label'].apply(lambda x: class_to_idx[x])
# load dataset and map it with defined `tokenize` function
features = Features({
target: ClassLabel(num_classes=2, names=['neg', 'pos'], names_file=None, id=None),
feature: Value(dtype='string', id=None),
})
dataset = Dataset.from_pandas(data, features=features)
dataset.map(tokenize, batched=True, batch_size=len(dataset))
```
It fails on the last line with the following error:
```
---------------------------------------------------------------------------
AssertionError Traceback (most recent call last)
<ipython-input-112-32b6275ce418> in <module>()
9 })
10 dataset = Dataset.from_pandas(data, features=features)
---> 11 dataset.map(tokenizer, batched=True, batch_size=len(dataset))
2 frames
/usr/local/lib/python3.6/dist-packages/datasets/arrow_dataset.py in map(self, function, with_indices, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, num_proc, suffix_template, new_fingerprint)
1237 test_inputs = self[:2] if batched else self[0]
1238 test_indices = [0, 1] if batched else 0
-> 1239 update_data = does_function_return_dict(test_inputs, test_indices)
1240 logger.info("Testing finished, running the mapping function on the dataset")
1241
/usr/local/lib/python3.6/dist-packages/datasets/arrow_dataset.py in does_function_return_dict(inputs, indices)
1208 fn_args = [inputs] if input_columns is None else [inputs[col] for col in input_columns]
1209 processed_inputs = (
-> 1210 function(*fn_args, indices, **fn_kwargs) if with_indices else function(*fn_args, **fn_kwargs)
1211 )
1212 does_return_dict = isinstance(processed_inputs, Mapping)
/usr/local/lib/python3.6/dist-packages/transformers/tokenization_utils_base.py in __call__(self, text, text_pair, add_special_tokens, padding, truncation, max_length, stride, is_split_into_words, pad_to_multiple_of, return_tensors, return_token_type_ids, return_attention_mask, return_overflowing_tokens, return_special_tokens_mask, return_offsets_mapping, return_length, verbose, **kwargs)
2281 )
2282 ), (
-> 2283 "text input must of type `str` (single example), `List[str]` (batch or single pretokenized example) "
2284 "or `List[List[str]]` (batch of pretokenized examples)."
2285 )
AssertionError: text input must of type `str` (single example), `List[str]` (batch or single pretokenized example) or `List[List[str]]` (batch of pretokenized examples).
```
which I think is not expected. I also tried the same steps using `Dataset.from_csv` which resulted in the same error.
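For reference, `map` expects a callable that takes (a batch of) examples as a dict and returns a dict; a minimal `tokenize` of that shape (hypothetical names, shown only to illustrate what the snippet above assumes) would be:
```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")  # any pretrained checkpoint

def tokenize(batch):
    # with batched=True, `batch` is a dict of lists, so the text column
    # is passed to the tokenizer as a list of strings
    return tokenizer(batch["text"], padding=True, truncation=True)
```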
For reproducing this, I used [this dataset from kaggle](https://www.kaggle.com/team-ai/spam-text-message-classification) | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1422/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1422/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/1021 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1021/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1021/comments | https://api.github.com/repos/huggingface/datasets/issues/1021/events | https://github.com/huggingface/datasets/pull/1021 | 755,644,559 | MDExOlB1bGxSZXF1ZXN0NTMxMzE4MTQw | 1,021 | Add Gutenberg time references dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/26709476?v=4",
"events_url": "https://api.github.com/users/TevenLeScao/events{/privacy}",
"followers_url": "https://api.github.com/users/TevenLeScao/followers",
"following_url": "https://api.github.com/users/TevenLeScao/following{/other_user}",
"gists_url": "https://api.github.com/users/TevenLeScao/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/TevenLeScao",
"id": 26709476,
"login": "TevenLeScao",
"node_id": "MDQ6VXNlcjI2NzA5NDc2",
"organizations_url": "https://api.github.com/users/TevenLeScao/orgs",
"received_events_url": "https://api.github.com/users/TevenLeScao/received_events",
"repos_url": "https://api.github.com/users/TevenLeScao/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/TevenLeScao/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/TevenLeScao/subscriptions",
"type": "User",
"url": "https://api.github.com/users/TevenLeScao"
} | [] | closed | false | null | [] | null | [
"Description: \"A clean data resource containing all explicit time references in a dataset of 52,183 novels whose full text is available via Project Gutenberg and the Hathi Trust Digital Library 2.\" > This is just the Gutenberg part.\r\n\r\nAlso, the paragraph at the top of the file would make a good Dataset Summary in the README :) "
] | "2020-12-02T22:05:26Z" | "2020-12-03T10:33:39Z" | "2020-12-03T10:33:38Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1021.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1021",
"merged_at": "2020-12-03T10:33:38Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1021.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1021"
} | This PR adds the gutenberg_time dataset: https://arxiv.org/abs/2011.04124 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1021/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1021/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1561 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1561/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1561/comments | https://api.github.com/repos/huggingface/datasets/issues/1561/events | https://github.com/huggingface/datasets/pull/1561 | 765,831,436 | MDExOlB1bGxSZXF1ZXN0NTM5MTAwNjAy | 1,561 | Lama | {
"avatar_url": "https://avatars.githubusercontent.com/u/8900094?v=4",
"events_url": "https://api.github.com/users/ontocord/events{/privacy}",
"followers_url": "https://api.github.com/users/ontocord/followers",
"following_url": "https://api.github.com/users/ontocord/following{/other_user}",
"gists_url": "https://api.github.com/users/ontocord/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ontocord",
"id": 8900094,
"login": "ontocord",
"node_id": "MDQ6VXNlcjg5MDAwOTQ=",
"organizations_url": "https://api.github.com/users/ontocord/orgs",
"received_events_url": "https://api.github.com/users/ontocord/received_events",
"repos_url": "https://api.github.com/users/ontocord/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ontocord/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ontocord/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ontocord"
} | [] | closed | false | null | [] | null | [
"Let me know why the pyarrow test is failing. For one of the config \"trex\", I had to load an initial datafile for a dictionary which is used to augment the rest of the datasets. In the dummy data, the dictionary file was truncated so I had to fudge that. I'm not sure if that is the issue.\r\n",
"@ontocord it just needs a rerun and it will be good to go.",
"THanks @tanmoyio. How do I do a rerun?",
"@ontocord contributor can’t rerun it, the maintainers will rerun it, it may take lil bit of time as there are so many PRs left to be reviewed and merged ",
"@lhoestq not sure why it is failing. i've made all modifications. ",
"merging since the CI is fixed on master"
] | "2020-12-14T03:27:10Z" | "2020-12-28T09:51:47Z" | "2020-12-28T09:51:47Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1561.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1561",
"merged_at": "2020-12-28T09:51:47Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1561.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1561"
} | This is the LAMA dataset for probing facts and common sense from language models.
See https://github.com/facebookresearch/LAMA for more details. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1561/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1561/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/835 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/835/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/835/comments | https://api.github.com/repos/huggingface/datasets/issues/835/events | https://github.com/huggingface/datasets/issues/835 | 740,102,210 | MDU6SXNzdWU3NDAxMDIyMTA= | 835 | Wikipedia postprocessing | {
"avatar_url": "https://avatars.githubusercontent.com/u/13353204?v=4",
"events_url": "https://api.github.com/users/bminixhofer/events{/privacy}",
"followers_url": "https://api.github.com/users/bminixhofer/followers",
"following_url": "https://api.github.com/users/bminixhofer/following{/other_user}",
"gists_url": "https://api.github.com/users/bminixhofer/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/bminixhofer",
"id": 13353204,
"login": "bminixhofer",
"node_id": "MDQ6VXNlcjEzMzUzMjA0",
"organizations_url": "https://api.github.com/users/bminixhofer/orgs",
"received_events_url": "https://api.github.com/users/bminixhofer/received_events",
"repos_url": "https://api.github.com/users/bminixhofer/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/bminixhofer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bminixhofer/subscriptions",
"type": "User",
"url": "https://api.github.com/users/bminixhofer"
} | [] | closed | false | null | [] | null | [
"Hi @bminixhofer ! Parsing WikiMedia is notoriously difficult: this processing used [mwparserfromhell](https://github.com/earwig/mwparserfromhell) which is pretty good but not perfect.\r\n\r\nAs an alternative, you can also use the Wiki40b dataset which was pre-processed using an un-released Google internal tool",
"Ok, thanks! I'll try the Wiki40b dataset.",
"If anyone else is concerned about this, `wiki40b` does indeed seem very well cleaned."
] | "2020-11-10T17:26:38Z" | "2020-11-10T18:23:20Z" | "2020-11-10T17:49:21Z" | NONE | null | null | null | Hi, thanks for this library!
Running this code:
```py
import datasets
wikipedia = datasets.load_dataset("wikipedia", "20200501.de")
print(wikipedia['train']['text'][0])
```
I get:
```
mini|Ricardo Flores Magón
mini|Mexikanische Revolutionäre, Magón in der Mitte anführend, gegen die Diktatur von Porfirio Diaz, Ausschnitt des Gemälde „Tierra y Libertad“ von Idelfonso Carrara (?) von 1930.
Ricardo Flores Magón (* 16. September 1874 in San Antonio Eloxochitlán im mexikanischen Bundesstaat Oaxaca; † 22. November 1922 im Bundesgefängnis Leavenworth im US-amerikanischen Bundesstaat Kansas) war als Journalist, Gewerkschafter und Literat ein führender anarchistischer Theoretiker und Aktivist, der die revolutionäre mexikanische Bewegung radikal beeinflusste. Magón war Gründer der Partido Liberal Mexicano und Mitglied der Industrial Workers of the World.
Politische Biografie
Journalistisch und politisch kämpfte er und sein Bruder sehr kompromisslos gegen die Diktatur Porfirio Diaz. Philosophisch und politisch orientiert an radikal anarchistischen Idealen und den Erfahrungen seiner indigenen Vorfahren bei der gemeinschaftlichen Bewirtschaftung des Gemeindelandes, machte er die Forderung „Land und Freiheit“ (Tierra y Libertad) populär. Besonders Francisco Villa und Emiliano Zapata griffen die Forderung Land und Freiheit auf. Seine Philosophie hatte großen Einfluss auf die Landarbeiter. 1904 floh er in die USA und gründete 1906 die Partido Liberal Mexicano. Im Exil lernte er u. a. Emma Goldman kennen. Er verbrachte die meiste Zeit seines Lebens in Gefängnissen und im Exil und wurde 1918 in den USA wegen „Behinderung der Kriegsanstrengungen“ zu zwanzig Jahren Gefängnis verurteilt. Zu seinem Tod gibt es drei verschiedene Theorien. Offiziell starb er an Herzversagen. Librado Rivera, der die Leiche mit eigenen Augen gesehen hat, geht davon aus, dass Magón von einem Mitgefangenen erdrosselt wurde. Die staatstreue Gewerkschaftszeitung CROM veröffentlichte 1923 einen Beitrag, nachdem Magón von einem Gefängniswärter erschlagen wurde.
mini|Die Brüder Ricardo (links) und Enrique Flores Magón (rechts) vor dem Los Angeles County Jail, 1917
[...]
```
so some markup like `mini|` is still left. Should I run another parser on this text before feeding it to an ML model, or is this a known imperfection of parsing Wiki markup?
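In case it helps, this is the kind of lightweight post-processing I have in mind (a rough sketch with a hand-written regex, not an official cleaner):
```python
import re

def strip_leftover_markup(text: str) -> str:
    cleaned_lines = []
    for line in text.split("\n"):
        # drop leftover thumbnail/caption lines such as "mini|Ricardo Flores Magón"
        if re.match(r"^\s*mini\|", line):
            continue
        cleaned_lines.append(line)
    return "\n".join(cleaned_lines)

# `wikipedia` is the dataset loaded above with datasets.load_dataset("wikipedia", "20200501.de")
print(strip_leftover_markup(wikipedia["train"]["text"][0])[:500])
```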
Apologies if this has been asked before. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/835/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/835/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/507 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/507/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/507/comments | https://api.github.com/repos/huggingface/datasets/issues/507/events | https://github.com/huggingface/datasets/issues/507 | 679,400,683 | MDU6SXNzdWU2Nzk0MDA2ODM= | 507 | Errors when I use | {
"avatar_url": "https://avatars.githubusercontent.com/u/30506151?v=4",
"events_url": "https://api.github.com/users/mchari/events{/privacy}",
"followers_url": "https://api.github.com/users/mchari/followers",
"following_url": "https://api.github.com/users/mchari/following{/other_user}",
"gists_url": "https://api.github.com/users/mchari/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mchari",
"id": 30506151,
"login": "mchari",
"node_id": "MDQ6VXNlcjMwNTA2MTUx",
"organizations_url": "https://api.github.com/users/mchari/orgs",
"received_events_url": "https://api.github.com/users/mchari/received_events",
"repos_url": "https://api.github.com/users/mchari/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mchari/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mchari/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mchari"
} | [] | closed | false | null | [] | null | [
"Looks like an issue with 3.0.2 transformers version. Works fine when I use \"master\" version of transformers."
] | "2020-08-14T21:03:57Z" | "2020-08-14T21:39:10Z" | "2020-08-14T21:39:10Z" | NONE | null | null | null | I tried the following example code from https://huggingface.co/deepset/roberta-base-squad2 and got errors
I am using **transformers 3.0.2**.
```python
from transformers.pipelines import pipeline
from transformers.modeling_auto import AutoModelForQuestionAnswering
from transformers.tokenization_auto import AutoTokenizer

model_name = "deepset/roberta-base-squad2"
nlp = pipeline('question-answering', model=model_name, tokenizer=model_name)
QA_input = {
    'question': 'Why is model conversion important?',
    'context': 'The option to convert models between FARM and transformers gives freedom to the user and let people easily switch between frameworks.'
}
res = nlp(QA_input)
```
The errors are:
```
res = nlp(QA_input)
  File ".local/lib/python3.6/site-packages/transformers/pipelines.py", line 1316, in __call__
    for s, e, score in zip(starts, ends, scores)
  File ".local/lib/python3.6/site-packages/transformers/pipelines.py", line 1316, in <listcomp>
    for s, e, score in zip(starts, ends, scores)
KeyError: 0
```
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/507/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/507/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/3751 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3751/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3751/comments | https://api.github.com/repos/huggingface/datasets/issues/3751/events | https://github.com/huggingface/datasets/pull/3751 | 1,142,609,327 | PR_kwDODunzps4zDw9_ | 3,751 | Fix typo in train split name | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [] | closed | false | null | [] | null | [] | "2022-02-18T08:18:04Z" | "2022-02-18T14:28:52Z" | "2022-02-18T14:28:52Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/3751.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3751",
"merged_at": "2022-02-18T14:28:52Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3751.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3751"
} | In the README guide (and consequently in many datasets) there was a typo in the train split name:
```
| Tain | Valid | Test |
```
This PR:
- fixes the typo in the train split name
- fixes the column alignment of the split tables
in the README guide and in all datasets. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3751/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3751/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4283 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4283/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4283/comments | https://api.github.com/repos/huggingface/datasets/issues/4283/events | https://github.com/huggingface/datasets/pull/4283 | 1,225,686,988 | PR_kwDODunzps43Tnxo | 4,283 | Fix filesystem docstring | {
"avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4",
"events_url": "https://api.github.com/users/stevhliu/events{/privacy}",
"followers_url": "https://api.github.com/users/stevhliu/followers",
"following_url": "https://api.github.com/users/stevhliu/following{/other_user}",
"gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/stevhliu",
"id": 59462357,
"login": "stevhliu",
"node_id": "MDQ6VXNlcjU5NDYyMzU3",
"organizations_url": "https://api.github.com/users/stevhliu/orgs",
"received_events_url": "https://api.github.com/users/stevhliu/received_events",
"repos_url": "https://api.github.com/users/stevhliu/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions",
"type": "User",
"url": "https://api.github.com/users/stevhliu"
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | "2022-05-04T17:42:42Z" | "2022-05-06T16:32:02Z" | "2022-05-06T06:22:17Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/4283.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4283",
"merged_at": "2022-05-06T06:22:17Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4283.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4283"
} | This PR untangles the `S3FileSystem` docstring so the [parameters](https://huggingface.co/docs/datasets/master/en/package_reference/main_classes#parameters) are properly displayed. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4283/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4283/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3016 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3016/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3016/comments | https://api.github.com/repos/huggingface/datasets/issues/3016/events | https://github.com/huggingface/datasets/pull/3016 | 1,015,208,654 | PR_kwDODunzps4spDlX | 3,016 | Fix Windows paths in LJ Speech dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [] | closed | false | null | [] | null | [] | "2021-10-04T13:49:37Z" | "2021-10-04T15:23:05Z" | "2021-10-04T15:23:04Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/3016.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3016",
"merged_at": "2021-10-04T15:23:04Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3016.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3016"
} | Minor fix in LJ Speech dataset for Windows pathname component separator.
Related to #1878. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3016/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3016/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3529 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3529/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3529/comments | https://api.github.com/repos/huggingface/datasets/issues/3529/events | https://github.com/huggingface/datasets/pull/3529 | 1,093,846,356 | PR_kwDODunzps4wiPA9 | 3,529 | Update README.md | {
"avatar_url": "https://avatars.githubusercontent.com/u/90473723?v=4",
"events_url": "https://api.github.com/users/meg-huggingface/events{/privacy}",
"followers_url": "https://api.github.com/users/meg-huggingface/followers",
"following_url": "https://api.github.com/users/meg-huggingface/following{/other_user}",
"gists_url": "https://api.github.com/users/meg-huggingface/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/meg-huggingface",
"id": 90473723,
"login": "meg-huggingface",
"node_id": "MDQ6VXNlcjkwNDczNzIz",
"organizations_url": "https://api.github.com/users/meg-huggingface/orgs",
"received_events_url": "https://api.github.com/users/meg-huggingface/received_events",
"repos_url": "https://api.github.com/users/meg-huggingface/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/meg-huggingface/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/meg-huggingface/subscriptions",
"type": "User",
"url": "https://api.github.com/users/meg-huggingface"
} | [] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/90473723?v=4",
"events_url": "https://api.github.com/users/meg-huggingface/events{/privacy}",
"followers_url": "https://api.github.com/users/meg-huggingface/followers",
"following_url": "https://api.github.com/users/meg-huggingface/following{/other_user}",
"gists_url": "https://api.github.com/users/meg-huggingface/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/meg-huggingface",
"id": 90473723,
"login": "meg-huggingface",
"node_id": "MDQ6VXNlcjkwNDczNzIz",
"organizations_url": "https://api.github.com/users/meg-huggingface/orgs",
"received_events_url": "https://api.github.com/users/meg-huggingface/received_events",
"repos_url": "https://api.github.com/users/meg-huggingface/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/meg-huggingface/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/meg-huggingface/subscriptions",
"type": "User",
"url": "https://api.github.com/users/meg-huggingface"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/90473723?v=4",
"events_url": "https://api.github.com/users/meg-huggingface/events{/privacy}",
"followers_url": "https://api.github.com/users/meg-huggingface/followers",
"following_url": "https://api.github.com/users/meg-huggingface/following{/other_user}",
"gists_url": "https://api.github.com/users/meg-huggingface/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/meg-huggingface",
"id": 90473723,
"login": "meg-huggingface",
"node_id": "MDQ6VXNlcjkwNDczNzIz",
"organizations_url": "https://api.github.com/users/meg-huggingface/orgs",
"received_events_url": "https://api.github.com/users/meg-huggingface/received_events",
"repos_url": "https://api.github.com/users/meg-huggingface/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/meg-huggingface/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/meg-huggingface/subscriptions",
"type": "User",
"url": "https://api.github.com/users/meg-huggingface"
}
] | null | [] | "2022-01-04T23:52:47Z" | "2022-01-05T12:50:15Z" | "2022-01-05T12:50:14Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/3529.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3529",
"merged_at": "2022-01-05T12:50:14Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3529.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3529"
} | Updating licensing information & personal and sensitive information. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3529/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3529/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4638 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4638/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4638/comments | https://api.github.com/repos/huggingface/datasets/issues/4638/events | https://github.com/huggingface/datasets/pull/4638 | 1,295,233,315 | PR_kwDODunzps4656H9 | 4,638 | The speechocean762 dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/1777456?v=4",
"events_url": "https://api.github.com/users/jimbozhang/events{/privacy}",
"followers_url": "https://api.github.com/users/jimbozhang/followers",
"following_url": "https://api.github.com/users/jimbozhang/following{/other_user}",
"gists_url": "https://api.github.com/users/jimbozhang/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/jimbozhang",
"id": 1777456,
"login": "jimbozhang",
"node_id": "MDQ6VXNlcjE3Nzc0NTY=",
"organizations_url": "https://api.github.com/users/jimbozhang/orgs",
"received_events_url": "https://api.github.com/users/jimbozhang/received_events",
"repos_url": "https://api.github.com/users/jimbozhang/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/jimbozhang/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jimbozhang/subscriptions",
"type": "User",
"url": "https://api.github.com/users/jimbozhang"
} | [
{
"color": "0e8a16",
"default": false,
"description": "Contribution to a dataset script",
"id": 4564477500,
"name": "dataset contribution",
"node_id": "LA_kwDODunzps8AAAABEBBmPA",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20contribution"
}
] | closed | false | null | [] | null | [
"CircleCL reported two errors, but I didn't find the reason. The error message:\r\n```\r\n_________________ ERROR collecting tests/test_dataset_cards.py _________________\r\ntests/test_dataset_cards.py:53: in <module>\r\n @pytest.mark.parametrize(\"dataset_name\", get_changed_datasets(repo_path))\r\ntests/test_dataset_cards.py:35: in get_changed_datasets\r\n diff_output = check_output([\"git\", \"diff\", \"--name-only\", \"origin/master...HEAD\"], cwd=repo_path)\r\n../.pyenv/versions/3.6.15/lib/python3.6/subprocess.py:356: in check_output\r\n **kwargs).stdout\r\n../.pyenv/versions/3.6.15/lib/python3.6/subprocess.py:438: in run\r\n output=stdout, stderr=stderr)\r\nE subprocess.CalledProcessError: Command '['git', 'diff', '--name-only', 'origin/master...HEAD']' returned non-zero exit status 128.\r\n\r\n=========================== short test summary info ============================\r\nERROR tests/test_dataset_cards.py - subprocess.CalledProcessError: Command '[...\r\nERROR tests/test_dataset_cards.py - subprocess.CalledProcessError: Command '[...\r\n= 4011 passed, 2357 skipped, 2 xfailed, 1 xpassed, 116 warnings, 2 errors in 284.32s (0:04:44) =\r\n\r\nExited with code exit status 1\r\n```\r\nI'm not sure if it was caused by this PR ...\r\n\r\nI ran `tests/test_dataset_cards.py` in my local environment, and it passed:\r\n```\r\n(venv)$ pytest tests/test_dataset_cards.py\r\n============================== test session starts ==============================\r\nplatform linux -- Python 3.8.10, pytest-7.1.2, pluggy-1.0.0\r\nrootdir: /home/zhangjunbo/src/datasets\r\nplugins: forked-1.4.0, datadir-1.3.1, xdist-2.5.0\r\ncollected 1531 items\r\n\r\ntests/test_dataset_cards.py ..... [100%]\r\n======================= 766 passed, 765 skipped in 2.55s ========================\r\n```\r\n",
"@sanchit-gandhi could you also maybe take a quick look? :-)",
"Thanks for your contribution, @jimbozhang. Are you still interested in adding this dataset?\r\n\r\nWe are removing the dataset scripts from this GitHub repo and moving them to the Hugging Face Hub: https://huggingface.co/datasets\r\n\r\nWe would suggest you create this dataset there. Please, feel free to tell us if you need some help.",
"> Thanks for your contribution, @jimbozhang. Are you still interested in adding this dataset?\r\n> \r\n> We are removing the dataset scripts from this GitHub repo and moving them to the Hugging Face Hub: https://huggingface.co/datasets\r\n> \r\n> We would suggest you create this dataset there. Please, feel free to tell us if you need some help.\r\n\r\nYes, I just planned to finish this dataset these days, and this suggestion is just in time! Thanks a lot!\r\nI will create this dataset to Hugging Face Hub soon, maybe this week."
] | "2022-07-06T06:17:30Z" | "2022-10-03T09:34:36Z" | "2022-10-03T09:34:36Z" | NONE | null | 1 | {
"diff_url": "https://github.com/huggingface/datasets/pull/4638.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4638",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/4638.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4638"
} | [speechocean762](https://www.openslr.org/101/) is a non-native English corpus for pronunciation scoring tasks. It is free for both commercial and non-commercial use.
I believe it will be easier to use if it is available on Hugging Face. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 1,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4638/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4638/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1667 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1667/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1667/comments | https://api.github.com/repos/huggingface/datasets/issues/1667/events | https://github.com/huggingface/datasets/pull/1667 | 776,446,658 | MDExOlB1bGxSZXF1ZXN0NTQ2OTM4MjAy | 1,667 | Fix NER metric example in Overview notebook | {
"avatar_url": "https://avatars.githubusercontent.com/u/53588015?v=4",
"events_url": "https://api.github.com/users/jungwhank/events{/privacy}",
"followers_url": "https://api.github.com/users/jungwhank/followers",
"following_url": "https://api.github.com/users/jungwhank/following{/other_user}",
"gists_url": "https://api.github.com/users/jungwhank/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/jungwhank",
"id": 53588015,
"login": "jungwhank",
"node_id": "MDQ6VXNlcjUzNTg4MDE1",
"organizations_url": "https://api.github.com/users/jungwhank/orgs",
"received_events_url": "https://api.github.com/users/jungwhank/received_events",
"repos_url": "https://api.github.com/users/jungwhank/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/jungwhank/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jungwhank/subscriptions",
"type": "User",
"url": "https://api.github.com/users/jungwhank"
} | [] | closed | false | null | [] | null | [] | "2020-12-30T13:05:19Z" | "2020-12-31T01:12:08Z" | "2020-12-30T17:21:51Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1667.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1667",
"merged_at": "2020-12-30T17:21:51Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1667.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1667"
} | Fix errors in `NER metric example` section in `Overview.ipynb`.
```
---------------------------------------------------------------------------
ImportError Traceback (most recent call last)
<ipython-input-37-ee559b166e25> in <module>()
----> 1 ner_metric = load_metric('seqeval')
2 references = [['O', 'O', 'O', 'B-MISC', 'I-MISC', 'I-MISC', 'O'], ['B-PER', 'I-PER', 'O']]
3 predictions = [['O', 'O', 'B-MISC', 'I-MISC', 'I-MISC', 'I-MISC', 'O'], ['B-PER', 'I-PER', 'O']]
4 ner_metric.compute(predictions, references)
/usr/local/lib/python3.6/dist-packages/datasets/load.py in prepare_module(path, script_version, download_config, download_mode, dataset, force_local_path, **download_kwargs)
340 if needs_to_be_installed:
341 raise ImportError(
--> 342 f"To be able to use this {module_type}, you need to install the following dependencies"
343 f"{[lib_name for lib_name, lib_path in needs_to_be_installed]} using 'pip install "
344 f"{' '.join([lib_path for lib_name, lib_path in needs_to_be_installed])}' for instance'"
ImportError: To be able to use this metric, you need to install the following dependencies['seqeval'] using 'pip install seqeval' for instance'
```
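The corrected usage implied by these tracebacks (a sketch that reuses the notebook's own values) would presumably be to install `seqeval` and call `compute` with keyword arguments:
```python
# pip install seqeval
from datasets import load_metric

ner_metric = load_metric('seqeval')
references = [['O', 'O', 'O', 'B-MISC', 'I-MISC', 'I-MISC', 'O'], ['B-PER', 'I-PER', 'O']]
predictions = [['O', 'O', 'B-MISC', 'I-MISC', 'I-MISC', 'I-MISC', 'O'], ['B-PER', 'I-PER', 'O']]
ner_metric.compute(predictions=predictions, references=references)
```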
```
ValueError Traceback (most recent call last)
<ipython-input-39-ee559b166e25> in <module>()
2 references = [['O', 'O', 'O', 'B-MISC', 'I-MISC', 'I-MISC', 'O'], ['B-PER', 'I-PER', 'O']]
3 predictions = [['O', 'O', 'B-MISC', 'I-MISC', 'I-MISC', 'I-MISC', 'O'], ['B-PER', 'I-PER', 'O']]
----> 4 ner_metric.compute(predictions, references)
/usr/local/lib/python3.6/dist-packages/datasets/metric.py in compute(self, *args, **kwargs)
378 """
379 if args:
--> 380 raise ValueError("Please call `compute` using keyword arguments.")
381
382 predictions = kwargs.pop("predictions", None)
ValueError: Please call `compute` using keyword arguments.
``` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1667/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1667/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4621 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4621/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4621/comments | https://api.github.com/repos/huggingface/datasets/issues/4621/events | https://github.com/huggingface/datasets/issues/4621 | 1,293,030,128 | I_kwDODunzps5NEhLw | 4,621 | ImageFolder raises an error with parameters drop_metadata=True and drop_labels=False when metadata.jsonl is present | {
"avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4",
"events_url": "https://api.github.com/users/polinaeterna/events{/privacy}",
"followers_url": "https://api.github.com/users/polinaeterna/followers",
"following_url": "https://api.github.com/users/polinaeterna/following{/other_user}",
"gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/polinaeterna",
"id": 16348744,
"login": "polinaeterna",
"node_id": "MDQ6VXNlcjE2MzQ4NzQ0",
"organizations_url": "https://api.github.com/users/polinaeterna/orgs",
"received_events_url": "https://api.github.com/users/polinaeterna/received_events",
"repos_url": "https://api.github.com/users/polinaeterna/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions",
"type": "User",
"url": "https://api.github.com/users/polinaeterna"
} | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4",
"events_url": "https://api.github.com/users/polinaeterna/events{/privacy}",
"followers_url": "https://api.github.com/users/polinaeterna/followers",
"following_url": "https://api.github.com/users/polinaeterna/following{/other_user}",
"gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/polinaeterna",
"id": 16348744,
"login": "polinaeterna",
"node_id": "MDQ6VXNlcjE2MzQ4NzQ0",
"organizations_url": "https://api.github.com/users/polinaeterna/orgs",
"received_events_url": "https://api.github.com/users/polinaeterna/received_events",
"repos_url": "https://api.github.com/users/polinaeterna/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions",
"type": "User",
"url": "https://api.github.com/users/polinaeterna"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4",
"events_url": "https://api.github.com/users/polinaeterna/events{/privacy}",
"followers_url": "https://api.github.com/users/polinaeterna/followers",
"following_url": "https://api.github.com/users/polinaeterna/following{/other_user}",
"gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/polinaeterna",
"id": 16348744,
"login": "polinaeterna",
"node_id": "MDQ6VXNlcjE2MzQ4NzQ0",
"organizations_url": "https://api.github.com/users/polinaeterna/orgs",
"received_events_url": "https://api.github.com/users/polinaeterna/received_events",
"repos_url": "https://api.github.com/users/polinaeterna/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions",
"type": "User",
"url": "https://api.github.com/users/polinaeterna"
}
] | null | [] | "2022-07-04T11:21:44Z" | "2022-07-15T14:24:24Z" | "2022-07-15T14:24:24Z" | CONTRIBUTOR | null | null | null | ## Describe the bug
If you pass `drop_metadata=True` and `drop_labels=False` when a `data_dir` contains at least one `metadata.jsonl` file, you will get a KeyError. This is probably not a very useful case, but we shouldn't get an error anyway. Asking users to move metadata files manually outside `data_dir`, or to pass features manually (when there is a tool that can infer them automatically), doesn't look like a good idea to me either.
## Steps to reproduce the bug
### Clone an example dataset from the Hub
```bash
git clone https://huggingface.co/datasets/nateraw/test-imagefolder-metadata
```
### Try to load it
```python
from datasets import load_dataset
ds = load_dataset("test-imagefolder-metadata", drop_metadata=True, drop_labels=False)
```
or even just
```python
ds = load_dataset("test-imagefolder-metadata", drop_metadata=True)
```
as `drop_labels=False` is a default value.
## Expected results
A DatasetDict object with two features: `"image"` and `"label"`.
## Actual results
```
Traceback (most recent call last):
File "/home/polina/workspace/datasets/debug.py", line 18, in <module>
ds = load_dataset(
File "/home/polina/workspace/datasets/src/datasets/load.py", line 1732, in load_dataset
builder_instance.download_and_prepare(
File "/home/polina/workspace/datasets/src/datasets/builder.py", line 704, in download_and_prepare
self._download_and_prepare(
File "/home/polina/workspace/datasets/src/datasets/builder.py", line 1227, in _download_and_prepare
super()._download_and_prepare(dl_manager, verify_infos, check_duplicate_keys=verify_infos)
File "/home/polina/workspace/datasets/src/datasets/builder.py", line 793, in _download_and_prepare
self._prepare_split(split_generator, **prepare_split_kwargs)
File "/home/polina/workspace/datasets/src/datasets/builder.py", line 1218, in _prepare_split
example = self.info.features.encode_example(record)
File "/home/polina/workspace/datasets/src/datasets/features/features.py", line 1596, in encode_example
return encode_nested_example(self, example)
File "/home/polina/workspace/datasets/src/datasets/features/features.py", line 1165, in encode_nested_example
{
File "/home/polina/workspace/datasets/src/datasets/features/features.py", line 1165, in <dictcomp>
{
File "/home/polina/workspace/datasets/src/datasets/utils/py_utils.py", line 249, in zip_dict
yield key, tuple(d[key] for d in dicts)
File "/home/polina/workspace/datasets/src/datasets/utils/py_utils.py", line 249, in <genexpr>
yield key, tuple(d[key] for d in dicts)
KeyError: 'label'
```
## Environment info
`datasets` master branch
- `datasets` version: 2.3.3.dev0
- Platform: Linux-5.14.0-1042-oem-x86_64-with-glibc2.17
- Python version: 3.8.12
- PyArrow version: 6.0.1
- Pandas version: 1.4.1
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 1,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4621/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4621/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/1602 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1602/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1602/comments | https://api.github.com/repos/huggingface/datasets/issues/1602/events | https://github.com/huggingface/datasets/pull/1602 | 770,841,810 | MDExOlB1bGxSZXF1ZXN0NTQyNTA4NTM4 | 1,602 | second update of id_newspapers_2018 | {
"avatar_url": "https://avatars.githubusercontent.com/u/7669893?v=4",
"events_url": "https://api.github.com/users/cahya-wirawan/events{/privacy}",
"followers_url": "https://api.github.com/users/cahya-wirawan/followers",
"following_url": "https://api.github.com/users/cahya-wirawan/following{/other_user}",
"gists_url": "https://api.github.com/users/cahya-wirawan/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/cahya-wirawan",
"id": 7669893,
"login": "cahya-wirawan",
"node_id": "MDQ6VXNlcjc2Njk4OTM=",
"organizations_url": "https://api.github.com/users/cahya-wirawan/orgs",
"received_events_url": "https://api.github.com/users/cahya-wirawan/received_events",
"repos_url": "https://api.github.com/users/cahya-wirawan/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/cahya-wirawan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cahya-wirawan/subscriptions",
"type": "User",
"url": "https://api.github.com/users/cahya-wirawan"
} | [] | closed | false | null | [] | null | [] | "2020-12-18T12:16:37Z" | "2020-12-22T10:41:15Z" | "2020-12-22T10:41:14Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1602.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1602",
"merged_at": "2020-12-22T10:41:14Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1602.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1602"
} | The feature "url" is currently wrongly set to data["date"]; this PR fixes it to data["url"].
I also added an additional POC. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1602/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1602/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/45 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/45/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/45/comments | https://api.github.com/repos/huggingface/datasets/issues/45/events | https://github.com/huggingface/datasets/pull/45 | 612,386,583 | MDExOlB1bGxSZXF1ZXN0NDEzMzQzMjAy | 45 | [Load] Separate Module kwargs and builder kwargs. | {
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/patrickvonplaten",
"id": 23423619,
"login": "patrickvonplaten",
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"type": "User",
"url": "https://api.github.com/users/patrickvonplaten"
} | [] | closed | false | null | [] | null | [] | "2020-05-05T07:09:54Z" | "2022-10-04T09:32:11Z" | "2020-05-08T09:51:22Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/45.diff",
"html_url": "https://github.com/huggingface/datasets/pull/45",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/45.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/45"
} | Kwargs for the `load_module` fn should be passed with `module_xxxx` to `builder_kwargs` of `load` fn.
This is a follow-up PR of: https://github.com/huggingface/nlp/pull/41 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/45/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/45/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/6484 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6484/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6484/comments | https://api.github.com/repos/huggingface/datasets/issues/6484/events | https://github.com/huggingface/datasets/issues/6484 | 2,033,333,294 | I_kwDODunzps55MjQu | 6,484 | [Feature Request] Dataset versioning | {
"avatar_url": "https://avatars.githubusercontent.com/u/47979198?v=4",
"events_url": "https://api.github.com/users/kenfus/events{/privacy}",
"followers_url": "https://api.github.com/users/kenfus/followers",
"following_url": "https://api.github.com/users/kenfus/following{/other_user}",
"gists_url": "https://api.github.com/users/kenfus/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/kenfus",
"id": 47979198,
"login": "kenfus",
"node_id": "MDQ6VXNlcjQ3OTc5MTk4",
"organizations_url": "https://api.github.com/users/kenfus/orgs",
"received_events_url": "https://api.github.com/users/kenfus/received_events",
"repos_url": "https://api.github.com/users/kenfus/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/kenfus/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kenfus/subscriptions",
"type": "User",
"url": "https://api.github.com/users/kenfus"
} | [] | open | false | null | [] | null | [
"Hello @kenfus, this is meant to be possible to do yes. Let me ping @lhoestq or @mariosasko from the `datasets` team (`huggingface_hub` is only the underlying library to download files from the Hub but here it looks more like a `datasets` problem). ",
"Hi! https://github.com/huggingface/datasets/pull/6459 will fix this."
] | "2023-12-08T16:01:35Z" | "2023-12-11T19:13:46Z" | null | NONE | null | null | null | **Is your feature request related to a problem? Please describe.**
I am working on a project where I would like to test different preprocessing methods for my ML data, so I would like to work a lot with revisions and compare them. Currently, I was not able to make it work with the revision keyword: it was not redownloading the data but reading in some cached data, even though the revision was different, until I put `download_mode="force_redownload"`.
Of course, I may have done something wrong or missed a setting somewhere!
**Describe the solution you'd like**
The solution would allow me to easily work with revisions:
- create a new dataset (by combining things, different preprocessing, ..) and give it a new revision (v.1.2.3), maybe like this:
`dataset_audio.push_to_hub('kenfus/xy', revision='v1.0.2')`
- then, get the current revision as follows:
```
dataset = load_dataset(
'kenfus/xy', revision='v1.0.2',
)
```
this downloads the new version and does not load in a different revision, and all future map, filter, .. operations are done on this dataset rather than loaded from a cache produced by a different revision.
- if I rerun the script, the caching should be smart enough at every step not to reuse a mapping operation from a different revision.
**Describe alternatives you've considered**
I created my own caching, putting `download_mode="force_redownload"` and `load_from_cache_file=False,` everywhere.
**Additional context**
Thanks a lot for your great work! Creating NLP datasets and training a model with them is really easy and straightforward with huggingface.
This is the data loading in my script:
```
## CREATE PATHS
prepared_dataset_path = os.path.join(
DATA_FOLDER, str(DATA_VERSION), "prepared_dataset"
)
os.makedirs(os.path.join(DATA_FOLDER, str(DATA_VERSION)), exist_ok=True)
## LOAD DATASET
if os.path.exists(prepared_dataset_path):
print("Loading prepared dataset from disk...")
dataset_prepared = load_from_disk(prepared_dataset_path)
else:
print("Loading dataset from HuggingFace Datasets...")
dataset = load_dataset(
PATH_TO_DATASET, revision=DATA_VERSION, download_mode="force_redownload"
)
print("Preparing dataset...")
dataset_prepared = dataset.map(
prepare_dataset,
remove_columns=["audio", "transcription"],
num_proc=os.cpu_count(),
load_from_cache_file=False,
)
dataset_prepared.save_to_disk(prepared_dataset_path)
del dataset
if CHECK_DATASET:
## CHECK DATASET
dataset_prepared = dataset_prepared.map(
check_dimensions, num_proc=os.cpu_count(), load_from_cache_file=False
)
dataset_filtered = dataset_prepared.filter(
lambda example: not example["incorrect_dimension"],
load_from_cache_file=False,
)
for example in dataset_prepared.filter(
lambda example: example["incorrect_dimension"], load_from_cache_file=False
):
print(example["path"])
print(
f"Number of examples with incorrect dimension: {len(dataset_prepared) - len(dataset_filtered)}"
)
print("Number of examples train: ", len(dataset_filtered["train"]))
print("Number of examples test: ", len(dataset_filtered["test"]))
```
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6484/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6484/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3282 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3282/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3282/comments | https://api.github.com/repos/huggingface/datasets/issues/3282/events | https://github.com/huggingface/datasets/issues/3282 | 1,055,054,898 | I_kwDODunzps4-4twy | 3,282 | ConnectionError: Couldn't reach https://huggingface.co/datasets/oscar-corpus/OSCAR-2109/resolve/main/OSCAR-2109.py | {
"avatar_url": "https://avatars.githubusercontent.com/u/10078549?v=4",
"events_url": "https://api.github.com/users/MinionAttack/events{/privacy}",
"followers_url": "https://api.github.com/users/MinionAttack/followers",
"following_url": "https://api.github.com/users/MinionAttack/following{/other_user}",
"gists_url": "https://api.github.com/users/MinionAttack/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/MinionAttack",
"id": 10078549,
"login": "MinionAttack",
"node_id": "MDQ6VXNlcjEwMDc4NTQ5",
"organizations_url": "https://api.github.com/users/MinionAttack/orgs",
"received_events_url": "https://api.github.com/users/MinionAttack/received_events",
"repos_url": "https://api.github.com/users/MinionAttack/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/MinionAttack/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MinionAttack/subscriptions",
"type": "User",
"url": "https://api.github.com/users/MinionAttack"
} | [
{
"color": "E5583E",
"default": false,
"description": "Related to the dataset viewer on huggingface.co",
"id": 3470211881,
"name": "dataset-viewer",
"node_id": "LA_kwDODunzps7O1zsp",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer"
}
] | closed | false | null | [] | null | [
"Hi ! Thanks for reporting :)\r\nI think this is because the dataset is behind an access page. We can fix the dataset viewer\r\n\r\nIf you also have this error when you use the `datasets` library in python, you should probably pass `use_auth_token=True` to the `load_dataset()` function to use your account to access the dataset.",
"Ah ok, I didn't realise about the login page. I'll try `use_auth_token=True` and see if that solves it.\r\n\r\nRegards!",
"Hi, \r\n\r\nUsing `use_auth_token=True` and downloading the credentials with `huggingface-cli login` (stored in .huggingface/token) solved the issue.\r\n\r\nShould I leave the issue open until you fix the Dataset viewer issue?",
"Cool ! Yes let's keep this issue open until the viewer is fixed - I'll close it when this is fixed. Thanks",
"The error I get when trying to load OSCAR 21.09 is this\r\n```\r\nConnectionError: Couldn't reach https://huggingface.co/datasets/oscar-corpus/OSCAR-2109/resolve/main/OSCAR-2109.py\r\n```\r\n\r\nThe URL I get in the browser is this\r\n```\r\nhttps://huggingface.co/datasets/oscar-corpus/OSCAR-2109/blob/main/OSCAR-2109.py\r\n```\r\n\r\nMaybe URL is the issue? (resolve vs blob)",
"> The error I get when trying to load OSCAR 21.09 is this\r\n> \r\n> ```\r\n> ConnectionError: Couldn't reach https://huggingface.co/datasets/oscar-corpus/OSCAR-2109/resolve/main/OSCAR-2109.py\r\n> ```\r\n> \r\n> The URL I get in the browser is this\r\n> \r\n> ```\r\n> https://huggingface.co/datasets/oscar-corpus/OSCAR-2109/blob/main/OSCAR-2109.py\r\n> ```\r\n> \r\n> Maybe URL is the issue? (resolve vs blob)\r\n\r\nYou need to download your login credentials. See `huggingface-cli login` documentation and when loading the dataset use `use_auth_token=True`:\r\n`\r\nload_dataset(corpus, language, split=None, use_auth_token=True, cache_dir=cache_folder)`",
"Fixed.\r\n\r\n<img width=\"1542\" alt=\"Capture d’écran 2022-04-12 à 13 57 24\" src=\"https://user-images.githubusercontent.com/1676121/162957585-af96d19c-f86c-47fe-80c4-2b071083cee4.png\">\r\n"
] | "2021-11-16T16:05:19Z" | "2022-04-12T11:57:43Z" | "2022-04-12T11:57:43Z" | NONE | null | null | null | ## Dataset viewer issue for '*oscar-corpus/OSCAR-2109*'
**Link:** *[link to the dataset viewer page](https://huggingface.co/datasets/oscar-corpus/OSCAR-2109)*
*The dataset library cannot download any language from the oscar-corpus/OSCAR-2109 dataset. By entering the URL in your browser I can access the file.*
```
raise ConnectionError("Couldn't reach {}".format(url))
ConnectionError: Couldn't reach https://huggingface.co/datasets/oscar-corpus/OSCAR-2109/resolve/main/OSCAR-2109.py
```
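As the comments above point out, the dataset sits behind an access page, so a working call presumably looks like this (the language config here is only a placeholder):
```python
# huggingface-cli login  # run once so the access token is stored locally
from datasets import load_dataset

language_config = "..."  # whichever OSCAR-2109 language configuration you need
dataset = load_dataset("oscar-corpus/OSCAR-2109", language_config, use_auth_token=True)
```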
Am I the one who added this dataset ? No
Using the older version of [OSCAR](https://huggingface.co/datasets/oscar) I don't have any issues downloading languages with the dataset library. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3282/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3282/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/1508 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1508/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1508/comments | https://api.github.com/repos/huggingface/datasets/issues/1508/events | https://github.com/huggingface/datasets/pull/1508 | 763,908,724 | MDExOlB1bGxSZXF1ZXN0NTM4MjEyODUy | 1,508 | Fix namedsplit docs | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko"
} | [] | closed | false | null | [] | null | [
"Hii please follow me",
"Thanks @mariosasko!"
] | "2020-12-12T14:43:38Z" | "2021-03-11T02:18:39Z" | "2020-12-15T12:57:48Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1508.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1508",
"merged_at": "2020-12-15T12:57:48Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1508.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1508"
} | Fixes a broken link and `DatasetInfoMixin.split`'s docstring. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1508/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1508/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2510 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2510/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2510/comments | https://api.github.com/repos/huggingface/datasets/issues/2510/events | https://github.com/huggingface/datasets/pull/2510 | 923,735,485 | MDExOlB1bGxSZXF1ZXN0NjcyNDY3MzY3 | 2,510 | Add align_labels_with_mapping to DatasetDict | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | [] | "2021-06-17T10:03:35Z" | "2021-06-17T10:45:25Z" | "2021-06-17T10:45:24Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/2510.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2510",
"merged_at": "2021-06-17T10:45:24Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2510.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2510"
} | https://github.com/huggingface/datasets/pull/2457 added the `Dataset.align_labels_with_mapping` method.
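A minimal usage sketch (the MNLI config and this particular `label2id` mapping are just illustrative assumptions, not part of the PR):
```python
from datasets import load_dataset

# hypothetical mapping, e.g. taken from a model's config
label2id = {"CONTRADICTION": 0, "NEUTRAL": 1, "ENTAILMENT": 2}

ds = load_dataset("glue", "mnli", split="train")
ds = ds.align_labels_with_mapping(label2id, "label")  # realigns the label ids to match label2id
```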
In this PR I also added `DatasetDict.align_labels_with_mapping` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2510/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2510/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/368 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/368/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/368/comments | https://api.github.com/repos/huggingface/datasets/issues/368/events | https://github.com/huggingface/datasets/issues/368 | 654,087,251 | MDU6SXNzdWU2NTQwODcyNTE= | 368 | load_metric can't acquire lock anymore | {
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ydshieh",
"id": 2521628,
"login": "ydshieh",
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ydshieh"
} | [] | closed | false | null | [] | null | [
"I found that, in the same process (or the same interactive session), if I do\r\n\r\nimport nlp\r\n\r\nm1 = nlp.load_metric('glue', 'mrpc')\r\nm2 = nlp.load_metric('glue', 'sst2')\r\n\r\nI will get the same error `ValueError: Cannot acquire lock, caching file might be used by another process, you should setup a unique 'experiment_id'`."
] | "2020-07-09T14:04:09Z" | "2020-07-10T13:45:20Z" | "2020-07-10T13:45:20Z" | NONE | null | null | null | I can't load metric (glue) anymore after an error in a previous run. I even removed the whole cache folder `/home/XXX/.cache/huggingface/`, and the issue persisted. What are the steps to fix this?
Traceback (most recent call last):
File "/home/XXX/miniconda3/envs/ML-DL-py-3.7/lib/python3.7/site-packages/nlp/metric.py", line 101, in __init__
self.filelock.acquire(timeout=1)
File "/home/XXX/miniconda3/envs/ML-DL-py-3.7/lib/python3.7/site-packages/filelock.py", line 278, in acquire
raise Timeout(self._lock_file)
filelock.Timeout: The file lock '/home/XXX/.cache/huggingface/metrics/glue/1.0.0/1-glue-0.arrow.lock' could not be acquired.
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "examples_huggingface_nlp.py", line 268, in <module>
main()
File "examples_huggingface_nlp.py", line 242, in main
dataset, metric = get_dataset_metric(glue_task)
File "examples_huggingface_nlp.py", line 77, in get_dataset_metric
metric = nlp.load_metric('glue', glue_config, experiment_id=1)
File "/home/XXX/miniconda3/envs/ML-DL-py-3.7/lib/python3.7/site-packages/nlp/load.py", line 440, in load_metric
**metric_init_kwargs,
File "/home/XXX/miniconda3/envs/ML-DL-py-3.7/lib/python3.7/site-packages/nlp/metric.py", line 104, in __init__
"Cannot acquire lock, caching file might be used by another process, "
ValueError: Cannot acquire lock, caching file might be used by another process, you should setup a unique 'experiment_id' for this run.
I0709 15:54:41.008838 139854118430464 filelock.py:318] Lock 139852058030936 released on /home/XXX/.cache/huggingface/metrics/glue/1.0.0/1-glue-0.arrow.lock
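A possible workaround suggested by the error message itself (just a sketch, not a confirmed fix) is to pass a fresh, unique `experiment_id`:
```python
import uuid
import nlp

# assumption: a unique experiment_id per run avoids re-acquiring the stale lock file
metric = nlp.load_metric('glue', 'mrpc', experiment_id=str(uuid.uuid4()))
```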
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/368/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/368/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/231 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/231/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/231/comments | https://api.github.com/repos/huggingface/datasets/issues/231/events | https://github.com/huggingface/datasets/pull/231 | 629,988,694 | MDExOlB1bGxSZXF1ZXN0NDI3MTk3MTcz | 231 | Add .download to MockDownloadManager | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | [] | "2020-06-03T13:20:00Z" | "2020-06-03T14:25:56Z" | "2020-06-03T14:25:55Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/231.diff",
"html_url": "https://github.com/huggingface/datasets/pull/231",
"merged_at": "2020-06-03T14:25:54Z",
"patch_url": "https://github.com/huggingface/datasets/pull/231.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/231"
} | One method from the DownloadManager was missing and some users couldn't run the tests because of that.
@yjernite | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/231/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/231/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/731 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/731/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/731/comments | https://api.github.com/repos/huggingface/datasets/issues/731/events | https://github.com/huggingface/datasets/pull/731 | 721,142,985 | MDExOlB1bGxSZXF1ZXN0NTAzMTExNzc4 | 731 | dataset(aslg_pc12): initial loading script | {
"avatar_url": "https://avatars.githubusercontent.com/u/5757359?v=4",
"events_url": "https://api.github.com/users/AmitMY/events{/privacy}",
"followers_url": "https://api.github.com/users/AmitMY/followers",
"following_url": "https://api.github.com/users/AmitMY/following{/other_user}",
"gists_url": "https://api.github.com/users/AmitMY/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/AmitMY",
"id": 5757359,
"login": "AmitMY",
"node_id": "MDQ6VXNlcjU3NTczNTk=",
"organizations_url": "https://api.github.com/users/AmitMY/orgs",
"received_events_url": "https://api.github.com/users/AmitMY/received_events",
"repos_url": "https://api.github.com/users/AmitMY/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/AmitMY/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/AmitMY/subscriptions",
"type": "User",
"url": "https://api.github.com/users/AmitMY"
} | [] | closed | false | null | [] | null | [
"Thanks @lhoestq \r\nAre there any guidelines for the dummy data?\r\nIn this particular case for example, the dataset fetches from two hardcoded URLs. \r\nDo I just `head -n 10` both files and zip them?\r\n\r\n",
"> Thanks @lhoestq\r\n> Are there any guidelines for the dummy data?\r\n> In this particular case for example, the dataset fetches from two hardcoded URLs.\r\n> Do I just `head -n 10` both files and zip them?\r\n\r\nYes the idea is just to have a few examples to properly test the script and make sure it keeps working in the long run.\r\n\r\nAnd FYI there's a command to help you name the dummy data files correctly. More info in the documentation [here](https://huggingface.co/docs/datasets/share_dataset.html#adding-dummy-data)",
"@lhoestq passes all tests"
] | "2020-10-14T05:14:37Z" | "2020-10-28T15:27:06Z" | "2020-10-28T15:27:06Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/731.diff",
"html_url": "https://github.com/huggingface/datasets/pull/731",
"merged_at": "2020-10-28T15:27:06Z",
"patch_url": "https://github.com/huggingface/datasets/pull/731.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/731"
} | This contains the only currently public part of this corpus.
The rest of the corpus has not yet been made public, but this sample is still being used by researchers. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/731/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/731/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2483 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2483/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2483/comments | https://api.github.com/repos/huggingface/datasets/issues/2483/events | https://github.com/huggingface/datasets/pull/2483 | 918,871,712 | MDExOlB1bGxSZXF1ZXN0NjY4MjU1Mjg1 | 2,483 | Use gc.collect only when needed to avoid slow downs | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | [
"I continue thinking that the origin of the issue has to do with tqdm (and not with Arrow): this issue only arises for version 4.50.0 (and later) of tqdm, not for previous versions of tqdm.\r\n\r\nMy guess is that tqdm made a change from version 4.50.0 that does not properly release the iterable. ",
"FR"
] | "2021-06-11T15:09:30Z" | "2021-06-18T19:25:06Z" | "2021-06-11T15:31:36Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/2483.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2483",
"merged_at": "2021-06-11T15:31:35Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2483.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2483"
} | In https://github.com/huggingface/datasets/commit/42320a110d9d072703814e1f630a0d90d626a1e6 we added a call to gc.collect to resolve some issues on windows (see https://github.com/huggingface/datasets/pull/2482)
However, calling gc.collect too often causes significant slowdowns (the CI run time doubled).
So I just moved the gc.collect call to the exact place where it's actually needed: when post-processing a dataset | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2483/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2483/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2361 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2361/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2361/comments | https://api.github.com/repos/huggingface/datasets/issues/2361/events | https://github.com/huggingface/datasets/pull/2361 | 891,982,808 | MDExOlB1bGxSZXF1ZXN0NjQ0NzYzNTU4 | 2,361 | Preserve dtype for numpy/torch/tf/jax arrays | {
"avatar_url": "https://avatars.githubusercontent.com/u/19718818?v=4",
"events_url": "https://api.github.com/users/bhavitvyamalik/events{/privacy}",
"followers_url": "https://api.github.com/users/bhavitvyamalik/followers",
"following_url": "https://api.github.com/users/bhavitvyamalik/following{/other_user}",
"gists_url": "https://api.github.com/users/bhavitvyamalik/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/bhavitvyamalik",
"id": 19718818,
"login": "bhavitvyamalik",
"node_id": "MDQ6VXNlcjE5NzE4ODE4",
"organizations_url": "https://api.github.com/users/bhavitvyamalik/orgs",
"received_events_url": "https://api.github.com/users/bhavitvyamalik/received_events",
"repos_url": "https://api.github.com/users/bhavitvyamalik/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/bhavitvyamalik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bhavitvyamalik/subscriptions",
"type": "User",
"url": "https://api.github.com/users/bhavitvyamalik"
} | [] | closed | false | null | [] | null | [
"Hi @lhoestq, \r\nIt turns out that pyarrow `ListArray` are not recognized as list-like when we get output from `numpy_to_pyarrow_listarray`. This might cause tests to fail. If possible can we convert that `ListArray` output to list inorder for tests to pass? Under the hood it'll maintain the dtype as that of numpy array passed during input only",
"Brought down the failing tests from 7 to 4. Let me know if that part looks good. Failing tests are looking quite similar. In `test_map_torch` https://github.com/huggingface/datasets/blob/3d46bc384f811435e59e3916faa3aa20a1cf87bc/tests/test_arrow_dataset.py#L1039 and `test_map_tf`https://github.com/huggingface/datasets/blob/3d46bc384f811435e59e3916faa3aa20a1cf87bc/tests/test_arrow_dataset.py#L1056 \r\nthey're expecting `float64`. Shouldn't that be `float32` now?",
"It's normal: pytorch and tensorflow use `float32` by default, unlike numpy which uses `float64`.\r\n\r\nI think that we should always keep the precision of the original tensor (torch/tf/numpy).\r\nIt means that as it is in this PR it's fine (the precision is conserved when doing the torch/tf -> numpy conversion).\r\n\r\nThis is a breaking change but in my opinion the fact that we had Value(\"float64\") for torch.float32 tensors was an issue already.\r\n\r\nLet me know what you think. Cc @albertvillanova if you have an opinion on this\r\n\r\nIf we agree on doing this breaking change, we can just change the test. ",
"Hi @lhoestq, \r\nMerged master into this branch. Only changing the test is left for now (mentioned below) after which all tests should pass.\r\n\r\n> Brought down the failing tests from 7 to 4. Let me know if that part looks good. Failing tests are looking quite similar. In `test_map_torch`\r\n> \r\n> https://github.com/huggingface/datasets/blob/3d46bc384f811435e59e3916faa3aa20a1cf87bc/tests/test_arrow_dataset.py#L1039\r\n> \r\n> and `test_map_tf`\r\n> https://github.com/huggingface/datasets/blob/3d46bc384f811435e59e3916faa3aa20a1cf87bc/tests/test_arrow_dataset.py#L1056\r\n> \r\n> \r\n> they're expecting `float64`. Shouldn't that be `float32` now?\r\n\r\n",
"> they're expecting float64. Shouldn't that be float32 now?\r\n\r\nYes feel free to update those tests :)\r\n\r\nIt would be nice to have the same test for JAX as well",
"Added same test for for JAX too. Also, I saw that I missed changing `test_cast_to_python_objects_jax` like I did for TF and PyTorch. Finished that as well"
] | "2021-05-14T14:45:23Z" | "2021-08-17T08:30:04Z" | "2021-08-17T08:30:04Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/2361.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2361",
"merged_at": "2021-08-17T08:30:04Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2361.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2361"
} | Fixes #625. This lets the user preserve the dtype of numpy array to pyarrow array which was getting lost due to conversion of numpy array -> list -> pyarrow array. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 1,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2361/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2361/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/21 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/21/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/21/comments | https://api.github.com/repos/huggingface/datasets/issues/21/events | https://github.com/huggingface/datasets/pull/21 | 607,914,185 | MDExOlB1bGxSZXF1ZXN0NDA5Nzk2MTM4 | 21 | Cleanup Features - Updating convert command - Fix Download manager | {
"avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4",
"events_url": "https://api.github.com/users/thomwolf/events{/privacy}",
"followers_url": "https://api.github.com/users/thomwolf/followers",
"following_url": "https://api.github.com/users/thomwolf/following{/other_user}",
"gists_url": "https://api.github.com/users/thomwolf/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/thomwolf",
"id": 7353373,
"login": "thomwolf",
"node_id": "MDQ6VXNlcjczNTMzNzM=",
"organizations_url": "https://api.github.com/users/thomwolf/orgs",
"received_events_url": "https://api.github.com/users/thomwolf/received_events",
"repos_url": "https://api.github.com/users/thomwolf/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/thomwolf/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thomwolf/subscriptions",
"type": "User",
"url": "https://api.github.com/users/thomwolf"
} | [] | closed | false | null | [] | null | [
"For conflicts, I think the mention hint \"This should be modified because it mentions ...\" is missing.",
"Looks great!"
] | "2020-04-27T23:16:55Z" | "2020-05-01T09:29:47Z" | "2020-05-01T09:29:46Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/21.diff",
"html_url": "https://github.com/huggingface/datasets/pull/21",
"merged_at": "2020-05-01T09:29:46Z",
"patch_url": "https://github.com/huggingface/datasets/pull/21.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/21"
} | This PR makes a number of changes:
# Updating `Features`
Features are a complex mechanism provided in `tfds` to be able to modify a dataset on-the-fly when serializing to disk and when loading from disk.
We don't really need this because (1) it hides too much from the user and (2) our datatypes can be mapped directly to Arrow tables on disk, so we usually don't need to change the format before/after serialization.
This PR extracts and refactors these features into a single `features.py` file. It still keeps a number of feature classes for easy compatibility with tfds, namely the `Sequence`, `Tensor`, `ClassLabel` and `Translation` features.
Some more complex features involving on-the-fly pre-processing during serialization are kept (see the sketch below):
- `ClassLabel`, which can convert label strings to integers,
- `Translation`, which performs some checks on the languages.
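For readers unfamiliar with these feature classes, a minimal sketch of the string/integer conversion that `ClassLabel` provides (the label names here are invented purely for illustration):

```python
from datasets import ClassLabel

# hypothetical two-class label set, purely for illustration
labels = ClassLabel(names=["negative", "positive"])

print(labels.str2int("positive"))  # 1
print(labels.int2str(0))           # "negative"
```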
# Updating the `convert` command
We make a few updates here:
- following the simplification of the `features` (cf. above), conversions are updated
- we also make it simpler to convert a single file
- some code needs to be fixed manually after conversion (e.g. to remove some encoding processing in former tfds `Text` features). We highlight this code with a "git merge conflict" style syntax for easy manual fixing.
# Fix download manager iterator
You kept me up quite late on Tuesday night with this `os.scandir` change @lhoestq ;-)
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/21/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/21/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/908 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/908/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/908/comments | https://api.github.com/repos/huggingface/datasets/issues/908/events | https://github.com/huggingface/datasets/pull/908 | 752,428,652 | MDExOlB1bGxSZXF1ZXN0NTI4NzUzMjcz | 908 | Add dependency on black for tests | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [] | closed | false | null | [] | null | [
"Sorry, I have just seen that it was already in `QUALITY_REQUIRE`.\r\n\r\nFor some reason it did not get installed on my virtual environment..."
] | "2020-11-27T19:12:48Z" | "2020-11-27T21:46:53Z" | "2020-11-27T21:46:52Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/908.diff",
"html_url": "https://github.com/huggingface/datasets/pull/908",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/908.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/908"
} | Add package 'black' as an installation requirement for tests. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/908/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/908/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/6275 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6275/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6275/comments | https://api.github.com/repos/huggingface/datasets/issues/6275/events | https://github.com/huggingface/datasets/issues/6275 | 1,921,354,680 | I_kwDODunzps5yhYu4 | 6,275 | Would like to Contribute a dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/97907750?v=4",
"events_url": "https://api.github.com/users/vikas70607/events{/privacy}",
"followers_url": "https://api.github.com/users/vikas70607/followers",
"following_url": "https://api.github.com/users/vikas70607/following{/other_user}",
"gists_url": "https://api.github.com/users/vikas70607/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/vikas70607",
"id": 97907750,
"login": "vikas70607",
"node_id": "U_kgDOBdX0Jg",
"organizations_url": "https://api.github.com/users/vikas70607/orgs",
"received_events_url": "https://api.github.com/users/vikas70607/received_events",
"repos_url": "https://api.github.com/users/vikas70607/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/vikas70607/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vikas70607/subscriptions",
"type": "User",
"url": "https://api.github.com/users/vikas70607"
} | [] | closed | false | null | [] | null | [
"Hi! The process of contributing a dataset is explained here: https://huggingface.co/docs/datasets/upload_dataset. Also, check https://huggingface.co/docs/datasets/image_dataset for a more detailed explanation of how to share an image dataset."
] | "2023-10-02T07:00:21Z" | "2023-10-10T16:27:54Z" | "2023-10-10T16:27:54Z" | NONE | null | null | null | I have a dataset of 2500 images that can be used for color-blind machine-learning algorithms. Since , there was no dataset available online , I made this dataset myself and would like to contribute this now to community | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6275/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6275/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/229 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/229/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/229/comments | https://api.github.com/repos/huggingface/datasets/issues/229/events | https://github.com/huggingface/datasets/pull/229 | 629,956,490 | MDExOlB1bGxSZXF1ZXN0NDI3MTcxMzc5 | 229 | Rename dataset_infos.json to dataset_info.json | {
"avatar_url": "https://avatars.githubusercontent.com/u/11817160?v=4",
"events_url": "https://api.github.com/users/aswin-giridhar/events{/privacy}",
"followers_url": "https://api.github.com/users/aswin-giridhar/followers",
"following_url": "https://api.github.com/users/aswin-giridhar/following{/other_user}",
"gists_url": "https://api.github.com/users/aswin-giridhar/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/aswin-giridhar",
"id": 11817160,
"login": "aswin-giridhar",
"node_id": "MDQ6VXNlcjExODE3MTYw",
"organizations_url": "https://api.github.com/users/aswin-giridhar/orgs",
"received_events_url": "https://api.github.com/users/aswin-giridhar/received_events",
"repos_url": "https://api.github.com/users/aswin-giridhar/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/aswin-giridhar/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/aswin-giridhar/subscriptions",
"type": "User",
"url": "https://api.github.com/users/aswin-giridhar"
} | [] | closed | false | null | [] | null | [
"\r\nThis was actually the right name. `dataset_infos.json` is used to have the infos of all the dataset configurations.\r\n\r\nOn the other hand `dataset_info.json` (without 's') is a cache file with the info of one specific configuration.\r\n\r\nTo fix #228, we probably just have to clear and reload the nlp-viewer cache."
] | "2020-06-03T12:31:44Z" | "2020-06-03T12:52:54Z" | "2020-06-03T12:48:33Z" | NONE | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/229.diff",
"html_url": "https://github.com/huggingface/datasets/pull/229",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/229.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/229"
} | As the file required for viewing in the live nlp viewer is named dataset_info.json | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/229/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/229/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5920 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5920/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5920/comments | https://api.github.com/repos/huggingface/datasets/issues/5920/events | https://github.com/huggingface/datasets/pull/5920 | 1,736,196,991 | PR_kwDODunzps5R5TRB | 5,920 | Optimize IterableDataset.from_file using ArrowExamplesIterable | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007439 / 0.011353 (-0.003914) | 0.004884 / 0.011008 (-0.006124) | 0.098750 / 0.038508 (0.060242) | 0.040723 / 0.023109 (0.017613) | 0.347242 / 0.275898 (0.071344) | 0.381202 / 0.323480 (0.057722) | 0.006814 / 0.007986 (-0.001171) | 0.004543 / 0.004328 (0.000215) | 0.075338 / 0.004250 (0.071088) | 0.058976 / 0.037052 (0.021924) | 0.344746 / 0.258489 (0.086257) | 0.406761 / 0.293841 (0.112920) | 0.028961 / 0.128546 (-0.099585) | 0.009531 / 0.075646 (-0.066115) | 0.337324 / 0.419271 (-0.081947) | 0.051071 / 0.043533 (0.007538) | 0.341251 / 0.255139 (0.086112) | 0.362773 / 0.283200 (0.079573) | 0.109423 / 0.141683 (-0.032260) | 1.457420 / 1.452155 (0.005266) | 1.588824 / 1.492716 (0.096108) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.288620 / 0.018006 (0.270614) | 0.568975 / 0.000490 (0.568485) | 0.003350 / 0.000200 (0.003150) | 0.000088 / 0.000054 (0.000034) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.028732 / 0.037411 (-0.008680) | 0.117820 / 0.014526 (0.103294) | 0.120180 / 0.176557 (-0.056376) | 0.178736 / 0.737135 (-0.558399) | 0.126399 / 0.296338 (-0.169939) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.428357 / 0.215209 (0.213148) | 4.251989 / 2.077655 (2.174334) | 2.005239 / 1.504120 (0.501119) | 1.784009 / 1.541195 (0.242815) | 1.883763 / 1.468490 
(0.415272) | 0.555429 / 4.584777 (-4.029348) | 3.868146 / 3.745712 (0.122434) | 2.081896 / 5.269862 (-3.187965) | 1.126047 / 4.565676 (-3.439629) | 0.069496 / 0.424275 (-0.354779) | 0.012926 / 0.007607 (0.005318) | 0.536989 / 0.226044 (0.310944) | 5.256052 / 2.268929 (2.987124) | 2.526802 / 55.444624 (-52.917822) | 2.233346 / 6.876477 (-4.643131) | 2.389063 / 2.142072 (0.246990) | 0.677107 / 4.805227 (-4.128120) | 0.147212 / 6.500664 (-6.353452) | 0.067061 / 0.075469 (-0.008408) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.210651 / 1.841788 (-0.631137) | 17.236898 / 8.074308 (9.162589) | 14.427301 / 10.191392 (4.235909) | 0.207194 / 0.680424 (-0.473229) | 0.018079 / 0.534201 (-0.516122) | 0.398355 / 0.579283 (-0.180929) | 0.462453 / 0.434364 (0.028089) | 0.484544 / 0.540337 (-0.055794) | 0.590119 / 1.386936 (-0.796817) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007392 / 0.011353 (-0.003961) | 0.005614 / 0.011008 (-0.005394) | 0.075587 / 0.038508 (0.037079) | 0.040429 / 0.023109 (0.017320) | 0.389901 / 0.275898 (0.114003) | 0.429466 / 0.323480 (0.105986) | 0.006790 / 0.007986 (-0.001196) | 0.006627 / 0.004328 (0.002299) | 0.075227 / 0.004250 (0.070976) | 0.060298 / 0.037052 (0.023246) | 0.391905 / 0.258489 (0.133416) | 0.449385 / 0.293841 (0.155544) | 0.028794 / 0.128546 (-0.099753) | 0.009461 / 0.075646 (-0.066185) | 0.083386 / 0.419271 (-0.335886) | 0.057968 / 0.043533 (0.014435) | 0.377327 / 0.255139 (0.122188) | 0.402825 / 0.283200 (0.119626) | 0.125477 / 0.141683 (-0.016206) | 1.462986 / 1.452155 (0.010832) | 1.595959 / 1.492716 (0.103243) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.304179 / 0.018006 (0.286173) | 0.543113 / 0.000490 (0.542623) | 0.004136 / 0.000200 (0.003936) | 0.000109 / 0.000054 (0.000054) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032617 / 0.037411 (-0.004794) | 0.123596 / 0.014526 (0.109070) | 0.128714 / 0.176557 (-0.047842) | 0.176344 / 0.737135 (-0.560792) | 0.132525 / 0.296338 (-0.163813) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.446041 / 0.215209 (0.230832) | 4.438799 / 2.077655 (2.361144) | 2.210815 / 1.504120 (0.706695) | 2.052025 / 1.541195 (0.510830) | 2.204687 / 1.468490 (0.736197) | 0.535219 / 4.584777 (-4.049558) | 3.858407 / 3.745712 (0.112695) | 3.826043 / 5.269862 (-1.443819) | 1.334149 / 4.565676 (-3.231527) | 0.067454 / 0.424275 (-0.356821) | 0.012566 / 0.007607 (0.004958) | 0.551597 / 0.226044 (0.325553) | 5.520054 / 2.268929 (3.251126) | 2.817976 / 55.444624 (-52.626649) | 2.528074 / 6.876477 (-4.348403) | 2.622391 / 2.142072 (0.480319) | 0.657632 / 4.805227 (-4.147595) | 0.147039 / 6.500664 (-6.353625) | 0.069603 / 0.075469 (-0.005866) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.300140 / 1.841788 (-0.541648) | 17.303907 / 8.074308 (9.229599) | 15.657887 / 10.191392 (5.466495) | 0.168991 / 0.680424 (-0.511433) | 0.021332 / 0.534201 (-0.512869) | 0.487261 / 0.579283 (-0.092022) | 0.450073 / 0.434364 (0.015709) | 0.465865 / 0.540337 (-0.074473) | 0.565501 / 1.386936 (-0.821435) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#f1723ab75a6b3a5e156ea0a41651e80e91fa9cc6 \"CML watermark\")\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006536 / 0.011353 (-0.004817) | 0.004254 / 0.011008 (-0.006755) | 0.095387 / 0.038508 (0.056878) | 0.032885 / 0.023109 (0.009776) | 0.298580 / 0.275898 (0.022682) | 0.319771 / 0.323480 (-0.003709) | 0.005510 / 0.007986 (-0.002476) | 0.003891 / 0.004328 (-0.000437) | 0.073763 / 0.004250 (0.069513) | 0.041625 / 0.037052 (0.004573) | 0.294896 / 0.258489 (0.036407) | 0.341308 / 0.293841 (0.047467) | 0.027898 / 0.128546 (-0.100648) | 0.008837 / 0.075646 (-0.066809) | 0.325055 / 0.419271 (-0.094216) | 0.050652 / 0.043533 (0.007119) | 0.298756 / 0.255139 (0.043617) | 0.318261 / 0.283200 (0.035061) | 0.098927 / 0.141683 (-0.042756) | 1.450356 / 1.452155 (-0.001798) | 1.508034 / 1.492716 (0.015318) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.209009 / 0.018006 (0.191003) | 0.439154 / 0.000490 (0.438665) | 0.004299 / 0.000200 (0.004099) | 0.000142 / 0.000054 (0.000087) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025938 / 0.037411 (-0.011473) | 0.105954 / 0.014526 (0.091429) | 0.113858 / 0.176557 (-0.062698) | 0.168887 / 0.737135 (-0.568249) | 0.121292 / 0.296338 (-0.175046) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.402050 / 0.215209 (0.186841) | 4.002310 / 2.077655 (1.924655) | 1.816190 / 1.504120 (0.312070) | 1.634404 / 1.541195 (0.093209) | 1.713632 / 1.468490 
(0.245142) | 0.519633 / 4.584777 (-4.065144) | 3.740291 / 3.745712 (-0.005421) | 1.787602 / 5.269862 (-3.482260) | 1.038844 / 4.565676 (-3.526833) | 0.064973 / 0.424275 (-0.359302) | 0.012475 / 0.007607 (0.004868) | 0.498152 / 0.226044 (0.272108) | 4.970941 / 2.268929 (2.702013) | 2.287429 / 55.444624 (-53.157195) | 1.998050 / 6.876477 (-4.878427) | 2.091903 / 2.142072 (-0.050169) | 0.630363 / 4.805227 (-4.174864) | 0.138623 / 6.500664 (-6.362041) | 0.063293 / 0.075469 (-0.012176) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.201802 / 1.841788 (-0.639986) | 14.073836 / 8.074308 (5.999528) | 12.968665 / 10.191392 (2.777273) | 0.144653 / 0.680424 (-0.535771) | 0.017613 / 0.534201 (-0.516588) | 0.392067 / 0.579283 (-0.187216) | 0.416955 / 0.434364 (-0.017409) | 0.471492 / 0.540337 (-0.068845) | 0.554576 / 1.386936 (-0.832360) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006408 / 0.011353 (-0.004945) | 0.004452 / 0.011008 (-0.006556) | 0.073648 / 0.038508 (0.035140) | 0.032536 / 0.023109 (0.009427) | 0.358546 / 0.275898 (0.082648) | 0.387330 / 0.323480 (0.063850) | 0.005542 / 0.007986 (-0.002444) | 0.003882 / 0.004328 (-0.000447) | 0.073867 / 0.004250 (0.069617) | 0.044798 / 0.037052 (0.007746) | 0.362303 / 0.258489 (0.103814) | 0.400496 / 0.293841 (0.106655) | 0.028244 / 0.128546 (-0.100302) | 0.008931 / 0.075646 (-0.066715) | 0.080617 / 0.419271 (-0.338654) | 0.046575 / 0.043533 (0.003043) | 0.364283 / 0.255139 (0.109145) | 0.373215 / 0.283200 (0.090015) | 0.100080 / 0.141683 (-0.041603) | 1.430047 / 1.452155 (-0.022108) | 1.530957 / 1.492716 (0.038240) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.221061 / 0.018006 (0.203055) | 0.441753 / 0.000490 (0.441263) | 0.003626 / 0.000200 (0.003426) | 0.000088 / 0.000054 (0.000033) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029509 / 0.037411 (-0.007902) | 0.109578 / 0.014526 (0.095053) | 0.121009 / 0.176557 (-0.055548) | 0.168950 / 0.737135 (-0.568185) | 0.124475 / 0.296338 (-0.171864) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.431355 / 0.215209 (0.216146) | 4.295507 / 2.077655 (2.217852) | 2.167514 / 1.504120 (0.663394) | 2.013073 / 1.541195 (0.471879) | 1.973730 / 1.468490 (0.505240) | 0.529778 / 4.584777 (-4.054999) | 3.794702 / 3.745712 (0.048989) | 3.062940 / 5.269862 (-2.206922) | 1.503426 / 4.565676 (-3.062251) | 0.066692 / 0.424275 (-0.357583) | 0.011682 / 0.007607 (0.004075) | 0.539311 / 0.226044 (0.313266) | 5.406342 / 2.268929 (3.137414) | 2.652709 / 55.444624 (-52.791916) | 2.260066 / 6.876477 (-4.616410) | 2.295752 / 2.142072 (0.153680) | 0.647199 / 4.805227 (-4.158029) | 0.142981 / 6.500664 (-6.357683) | 0.065082 / 0.075469 (-0.010387) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.279788 / 1.841788 (-0.562000) | 14.982845 / 8.074308 (6.908536) | 14.277166 / 10.191392 (4.085774) | 0.145082 / 0.680424 (-0.535342) | 0.017885 / 0.534201 (-0.516316) | 0.392071 / 0.579283 (-0.187212) | 0.420425 / 0.434364 (-0.013939) | 0.461244 / 0.540337 (-0.079093) | 0.559956 / 1.386936 (-0.826980) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#651d96c1c4083a206c65f11602712d75f1f0453d \"CML watermark\")\n"
] | "2023-06-01T12:14:36Z" | "2023-06-01T12:42:10Z" | "2023-06-01T12:35:14Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/5920.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5920",
"merged_at": "2023-06-01T12:35:14Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5920.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5920"
} | following https://github.com/huggingface/datasets/pull/5893 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5920/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5920/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4328 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4328/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4328/comments | https://api.github.com/repos/huggingface/datasets/issues/4328/events | https://github.com/huggingface/datasets/pull/4328 | 1,233,856,690 | PR_kwDODunzps43trrd | 4,328 | Fix and clean Apache Beam functionality | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | "2022-05-12T11:41:07Z" | "2022-05-24T13:43:11Z" | "2022-05-24T13:34:32Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/4328.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4328",
"merged_at": "2022-05-24T13:34:32Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4328.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4328"
} | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4328/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4328/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5998 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5998/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5998/comments | https://api.github.com/repos/huggingface/datasets/issues/5998/events | https://github.com/huggingface/datasets/issues/5998 | 1,781,805,018 | I_kwDODunzps5qNC_a | 5,998 | The current implementation has a potential bug in the sort method | {
"avatar_url": "https://avatars.githubusercontent.com/u/22192665?v=4",
"events_url": "https://api.github.com/users/wangyuxinwhy/events{/privacy}",
"followers_url": "https://api.github.com/users/wangyuxinwhy/followers",
"following_url": "https://api.github.com/users/wangyuxinwhy/following{/other_user}",
"gists_url": "https://api.github.com/users/wangyuxinwhy/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/wangyuxinwhy",
"id": 22192665,
"login": "wangyuxinwhy",
"node_id": "MDQ6VXNlcjIyMTkyNjY1",
"organizations_url": "https://api.github.com/users/wangyuxinwhy/orgs",
"received_events_url": "https://api.github.com/users/wangyuxinwhy/received_events",
"repos_url": "https://api.github.com/users/wangyuxinwhy/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/wangyuxinwhy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/wangyuxinwhy/subscriptions",
"type": "User",
"url": "https://api.github.com/users/wangyuxinwhy"
} | [] | closed | false | null | [] | null | [
"Thanks for reporting, @wangyuxinwhy. "
] | "2023-06-30T03:16:57Z" | "2023-06-30T14:21:03Z" | "2023-06-30T14:11:25Z" | NONE | null | null | null | ### Describe the bug
In the sort method, here's a piece of code:
```python
# column_names: Union[str, Sequence_[str]]
# Check proper format of and for duplicates in column_names
if not isinstance(column_names, list):
column_names = [column_names]
```
The `column_names` type annotation implies that a tuple can be passed, but when I pass in a tuple it raises an error, as in the example below.
```python
from datasets import load_dataset
dataset = load_dataset('glue', 'ax')['test']
dataset.sort(column_names=('premise', 'hypothesis'))
# Raise ValueError: Column '('premise', 'hypothesis')' not found in the dataset.
```
Of course, after I changed the tuple into a list, everything worked fine.
Changing the code to the following avoids the problem:
```python
# Check proper format of and for duplicates in column_names
if not isinstance(column_names, list):
if isinstance(column_names, str):
column_names = [column_names]
else:
column_names = list(column_names)
```
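For completeness, a tiny self-contained check of the intended equivalence (the helper name is hypothetical, purely for illustration):

```python
def normalize_column_names(column_names):
    # accept a single string, a list, or any other sequence such as a tuple
    if isinstance(column_names, str):
        return [column_names]
    return list(column_names)


assert normalize_column_names("premise") == ["premise"]
assert normalize_column_names(("premise", "hypothesis")) == ["premise", "hypothesis"]
assert normalize_column_names(["premise", "hypothesis"]) == ["premise", "hypothesis"]
```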
### Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset('glue', 'ax')['test']
dataset.sort(column_names=('premise', 'hypothesis'))
# Raise ValueError: Column '('premise', 'hypothesis')' not found in the dataset.
```
### Expected behavior
Passing a tuple as column_names should be equivalent to passing a list.
### Environment info
- `datasets` version: 2.13.0
- Platform: macOS-13.1-arm64-arm-64bit
- Python version: 3.10.11
- Huggingface_hub version: 0.15.1
- PyArrow version: 12.0.1
- Pandas version: 2.0.2 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5998/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5998/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/625 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/625/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/625/comments | https://api.github.com/repos/huggingface/datasets/issues/625/events | https://github.com/huggingface/datasets/issues/625 | 701,057,799 | MDU6SXNzdWU3MDEwNTc3OTk= | 625 | dtype of tensors should be preserved | {
"avatar_url": "https://avatars.githubusercontent.com/u/2779410?v=4",
"events_url": "https://api.github.com/users/BramVanroy/events{/privacy}",
"followers_url": "https://api.github.com/users/BramVanroy/followers",
"following_url": "https://api.github.com/users/BramVanroy/following{/other_user}",
"gists_url": "https://api.github.com/users/BramVanroy/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/BramVanroy",
"id": 2779410,
"login": "BramVanroy",
"node_id": "MDQ6VXNlcjI3Nzk0MTA=",
"organizations_url": "https://api.github.com/users/BramVanroy/orgs",
"received_events_url": "https://api.github.com/users/BramVanroy/received_events",
"repos_url": "https://api.github.com/users/BramVanroy/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/BramVanroy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BramVanroy/subscriptions",
"type": "User",
"url": "https://api.github.com/users/BramVanroy"
} | [] | closed | false | null | [] | null | [
"Indeed we convert tensors to list to be able to write in arrow format. Because of this conversion we lose the dtype information. We should add the dtype detection when we do type inference. However it would require a bit of refactoring since currently the conversion happens before the type inference..\r\n\r\nAnd then for your information, when reading from arrow format we have to cast from arrow to numpy (which is fast since pyarrow has a numpy integration), and then to torch.\r\n\r\nHowever there's one thing that can help you: we make sure that the dtypes correspond to what is defined in `features`.\r\nTherefore what you can do is provide `features` in `.map(preprocess, feature=...)` to specify the output types.\r\n\r\nFor example in your case:\r\n```python\r\nfrom datasets import Features, Value, Sequence\r\n\r\nfeatures = Features({\r\n \"input_ids\": Sequence(Value(\"int32\")),\r\n \"sembedding\": Sequence(Value(\"float32\"))\r\n})\r\npreprocessed_dataset = dataset.map(preprocess, features=features)\r\n\r\npreprocessed_dataset.set_format(\"torch\", columns=[\"input_ids\", \"sembedding\"])\r\nprint(preprocessed_dataset[0][\"sembedding\"].dtype)\r\n# \"torch.float32\"\r\n```\r\n\r\nLet me know if it helps",
"If the arrow format is basically lists, why is the intermediate step to numpy necessary? I am a bit confused about that part.\r\n\r\nThanks for your suggestion. as I have currently implemented this, I cast to torch.Tensor in my collate_fn to save disk space (so I do not have to save padded tensors to max_len but can pad up to max batch len in collate_fn) at the cost of a bit slower processing. So for me this is not relevant anymore, but I am sure it is for others!",
"I'm glad you managed to figure something out :)\r\n\r\nCasting from arrow to numpy can be 100x faster than casting from arrow to list.\r\nThis is because arrow has an integration with numpy that allows it to instantiate numpy arrays with zero-copy from arrow.\r\nOn the other hand to create python lists it is slow since it has to recreate the list object by iterating through each element in python.",
"Ah that is interesting. I have no direct experience with arrow so I didn't know. ",
"I encountered a simliar issue: `datasets` converted my float numpy array to `torch.float64` tensors, while many pytorch operations require `torch.float32` inputs and it's very troublesome. \r\n\r\nI tried @lhoestq 's solution, but since it's mixed with the preprocess function, it's not very intuitive. \r\n\r\nI just want to share another possible simpler solution: directly cast the dtype of the processed dataset.\r\n\r\nNow I want to change the type of `labels` in `train_dataset` from float64 to float32, I can do this.\r\n\r\n```\r\nfrom datasets import Value, Sequence, Features\r\nfeats = train_dataset.features.copy()\r\nfeats['labels'].feature = Value(dtype='float32')\r\nfeats = Features(feats)\r\ntrain_dataset.cast_(feats)\r\n```\r\n",
"Reopening since @bhavitvyamalik started looking into it !\r\n\r\nAlso I'm posting here a function that could be helpful to support preserving the dtype of tensors.\r\n\r\nIt's used to build a pyarrow array out of a numpy array and:\r\n- it doesn't convert the numpy array to a python list\r\n- it keeps the precision of the numpy array for the pyarrow array\r\n- it works with multidimensional arrays (while `pa.array` can only take a 1D array as input)\r\n- it builds the pyarrow ListArray from offsets created on-the-fly and values that come from the flattened numpy array\r\n\r\n```python\r\nfrom functools import reduce\r\nfrom operator import mul\r\n\r\nimport numpy as np\r\nimport pyarrow as pa\r\n\r\ndef pa_ndarray(a):\r\n \"\"\"Build a PyArrow ListArray from a multidimensional NumPy array\"\"\"\r\n values = pa.array(a.flatten()) \r\n for i in range(a.ndim - 1): \r\n n_offsets = reduce(mul, a.shape[:a.ndim - i - 1], 1) \r\n step_offsets = a.shape[a.ndim - i - 1] \r\n offsets = pa.array(np.arange(n_offsets + 1) * step_offsets, type=pa.int32()) \r\n values = pa.ListArray.from_arrays(offsets, values) \r\n return values \r\n\r\nnarr = np.arange(42).reshape(7, 2, 3).astype(np.uint8)\r\nparr = pa_ndarray(narr)\r\nassert isinstance(parr, pa.Array)\r\nassert parr.type == pa.list_(pa.list_(pa.uint8()))\r\nassert narr.tolist() == parr.to_pylist()\r\n```\r\n\r\nThe only costly operation is the offsets computations. Since it doesn't iterate on the numpy array values this function is pretty fast.",
"@lhoestq Have you thought about this further?\r\n\r\nWe have a use case where we're attempting to load data containing numpy arrays using the `datasets` library.\r\n\r\nWhen using one of the \"standard\" methods (`[Value(...)]` or `Sequence()`) we see ~200 samples processed per second during the call to `_prepare_split`. This slowdown is caused by the vast number of calls to `encode_nested_example` (each sequence is converted to a list, and each element in the sequence...). \r\n\r\nUsing the `Feature` `ArrayND` improves this somewhat to ~500/s as it now uses numpy's `tolist()` rather than iterating over each value in the array and converting them individually.\r\n\r\nHowever, it's still pretty slow and in theory it should be possible to avoid the `numpy -> python -> arrow` dance altogether. To demonstrate this, if you keep the `Feature` set to an `ArrayND` but instead return a `pa_ndarray(...)` in `_generate_examples` it skips the conversion (`return obj, False`) and hits ~11_000/s. Two orders of magnitude speed up! The problem is this then fails later on when the `ArrowWriter` tries to write the examples to disk :-( \r\n\r\nIt would be nice to have first-class support for user-defined PyArrow objects. Is this a possibility? We have _large_ datasets where even an order of magnitude difference is important so settling on the middle ~500/s is less than ideal! \r\n\r\nIs there a workaround for this or another method that should be used instead that gets near-to or equal performance to returning PyArrow arrays?",
"Note that manually generating the table using `pyarrow` achieves ~30_000/s",
"Hi !\r\n\r\nIt would be awesome to achieve this speed for numpy arrays !\r\nFor now we have to use `encode_nested_example` to convert numpy arrays to python lists since pyarrow doesn't support multidimensional numpy arrays (only 1D).\r\n\r\nMaybe let's start a new PR from your PR @bhavitvyamalik (idk why we didn't answer your PR at that time, sorry about that).\r\nBasically the idea is to allow `TypedSequence` to support numpy arrays as you did, and remove the numpy->python casting in `_cast_to_python_objects`.\r\n\r\nThis is really important since we are starting to have a focus on other modalities than text as well (audio, images).\r\n\r\nThough until then @samgd, there is another feature that may interest you and that may give you the speed you want:\r\n\r\nIn a dataset script you can subclass either a GeneratorBasedBuilder (with the `_generate_examples ` method) or an ArrowBasedBuilder if you want. the ArrowBasedBuilder allows to yield arrow data by implementing the `_generate_tables` method (it's the same as `_generate_examples` except you must yield arrow tables). Since the data are already in arrow format, it doesn't call `encode_nested_example`. Let me know if that helps."
] | "2020-09-14T12:38:05Z" | "2021-08-17T08:30:04Z" | "2021-08-17T08:30:04Z" | CONTRIBUTOR | null | null | null | After switching to `datasets` my model just broke. After a weekend of debugging, the issue was that my model could not handle the double that the Dataset provided, as it expected a float (but didn't give a warning, which seems a [PyTorch issue](https://discuss.pytorch.org/t/is-it-required-that-input-and-hidden-for-gru-have-the-same-dtype-float32/96221)).
As a user I did not expect this bug. I have a `map` function that I call on the Dataset that looks like this:
```python
def preprocess(sentences: List[str]):
token_ids = [[vocab.to_index(t) for t in s.split()] for s in sentences]
sembeddings = stransformer.encode(sentences)
print(sembeddings.dtype)
return {"input_ids": token_ids, "sembedding": sembeddings}
```
Given a list of `sentences` (`List[str]`), it converts those into token_ids on the one hand (list of lists of ints; `List[List[int]]`) and into sentence embeddings on the other (Tensor of dtype `torch.float32`). That means that I actually set the column "sembedding" to a tensor that I as a user expect to be a float32.
It appears though that behind the scenes, this tensor is converted into a **list**. I did not find this documented anywhere but I might have missed it. From a user's perspective this is incredibly important though, because it means you cannot do any data_type or tensor casting yourself in a mapping function! Furthermore, this can lead to issues, as was my case.
My model expected float32 precision, which I thought `sembedding` was because that is what `stransformer.encode` outputs. But behind the scenes this tensor is first cast to a list, and when we then set its format, as below, this column is cast not to float32 but to double precision float64.
```python
dataset.set_format(type="torch", columns=["input_ids", "sembedding"])
```
This happens because apparently there is an intermediate step of casting to a **numpy** array (?) **whose dtype creation/deduction is different from torch dtypes** (see the snippet below). As you can see, this means that the dtype is not preserved: if I got it right, the dataset goes from torch.float32 -> list -> float64 (numpy) -> torch.float64.
```python
import torch
import numpy as np
l = [-0.03010837361216545, -0.035979013890028, -0.016949838027358055]
torch_tensor = torch.tensor(l)
np_array = np.array(l)
np_to_torch = torch.from_numpy(np_array)
print(torch_tensor.dtype)
# torch.float32
print(np_array.dtype)
# float64
print(np_to_torch.dtype)
# torch.float64
```
This might lead to unwanted behaviour. I understand that the whole library is probably built around casting from numpy to other frameworks, so this might be difficult to solve. Perhaps `set_format` should include a `dtypes` option where for each input column the user can specify the wanted precision.
The alternative is that the user needs to cast manually after loading data from the dataset but that does not seem user-friendly, makes the dataset less portable, and might use more space in memory as well as on disk than is actually needed. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/625/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/625/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/3588 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3588/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3588/comments | https://api.github.com/repos/huggingface/datasets/issues/3588/events | https://github.com/huggingface/datasets/pull/3588 | 1,106,749,000 | PR_kwDODunzps4xMdiC | 3,588 | Update HellaSwag README.md | {
"avatar_url": "https://avatars.githubusercontent.com/u/6416600?v=4",
"events_url": "https://api.github.com/users/borgr/events{/privacy}",
"followers_url": "https://api.github.com/users/borgr/followers",
"following_url": "https://api.github.com/users/borgr/following{/other_user}",
"gists_url": "https://api.github.com/users/borgr/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/borgr",
"id": 6416600,
"login": "borgr",
"node_id": "MDQ6VXNlcjY0MTY2MDA=",
"organizations_url": "https://api.github.com/users/borgr/orgs",
"received_events_url": "https://api.github.com/users/borgr/received_events",
"repos_url": "https://api.github.com/users/borgr/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/borgr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/borgr/subscriptions",
"type": "User",
"url": "https://api.github.com/users/borgr"
} | [] | closed | false | null | [] | null | [] | "2022-01-18T10:46:15Z" | "2022-01-20T16:57:43Z" | "2022-01-20T16:57:43Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/3588.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3588",
"merged_at": "2022-01-20T16:57:43Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3588.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3588"
} | Adding information from the git repo and paper that was missing | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3588/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3588/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1381 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1381/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1381/comments | https://api.github.com/repos/huggingface/datasets/issues/1381/events | https://github.com/huggingface/datasets/pull/1381 | 760,320,960 | MDExOlB1bGxSZXF1ZXN0NTM1MTcyMjkw | 1,381 | Add twi text c3 | {
"avatar_url": "https://avatars.githubusercontent.com/u/23586676?v=4",
"events_url": "https://api.github.com/users/dadelani/events{/privacy}",
"followers_url": "https://api.github.com/users/dadelani/followers",
"following_url": "https://api.github.com/users/dadelani/following{/other_user}",
"gists_url": "https://api.github.com/users/dadelani/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/dadelani",
"id": 23586676,
"login": "dadelani",
"node_id": "MDQ6VXNlcjIzNTg2Njc2",
"organizations_url": "https://api.github.com/users/dadelani/orgs",
"received_events_url": "https://api.github.com/users/dadelani/received_events",
"repos_url": "https://api.github.com/users/dadelani/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/dadelani/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dadelani/subscriptions",
"type": "User",
"url": "https://api.github.com/users/dadelani"
} | [] | closed | false | null | [] | null | [
"looks like this PR includes changes about other datasets\r\n\r\nCan you only include the changes related to twi text c3 please ?",
"Hi @lhoestq , I have removed the unnecessary files. Can you please confirm?",
"You might need to either find a way to go back to the commit before it changes 389 files or create a new branch.",
"okay, I have created another branch, see the latest pull https://github.com/huggingface/datasets/pull/1518 @cstorm125 ",
"Hii please follow me",
"Closing this one in favor of #1518"
] | "2020-12-09T13:16:38Z" | "2020-12-13T18:39:27Z" | "2020-12-13T18:39:27Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1381.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1381",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/1381.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1381"
} | Added Twi texts for training embeddings and language models based on the paper https://www.aclweb.org/anthology/2020.lrec-1.335/ | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1381/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1381/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2435 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2435/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2435/comments | https://api.github.com/repos/huggingface/datasets/issues/2435/events | https://github.com/huggingface/datasets/pull/2435 | 907,505,531 | MDExOlB1bGxSZXF1ZXN0NjU4MzQzNDE2 | 2,435 | Insert Extractive QA templates for SQuAD-like datasets | {
"avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4",
"events_url": "https://api.github.com/users/lewtun/events{/privacy}",
"followers_url": "https://api.github.com/users/lewtun/followers",
"following_url": "https://api.github.com/users/lewtun/following{/other_user}",
"gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lewtun",
"id": 26859204,
"login": "lewtun",
"node_id": "MDQ6VXNlcjI2ODU5MjA0",
"organizations_url": "https://api.github.com/users/lewtun/orgs",
"received_events_url": "https://api.github.com/users/lewtun/received_events",
"repos_url": "https://api.github.com/users/lewtun/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lewtun/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lewtun"
} | [] | closed | false | null | [] | null | [
"hi @lhoestq @SBrandeis i've now added the missing YAML tags, so this PR should be good to go :)",
"urgh, the windows tests are failing because of encoding issues 😢 \r\n\r\n```\r\ndataset_name = 'squad_kor_v1'\r\n\r\n @pytest.mark.parametrize(\"dataset_name\", get_changed_datasets(repo_path))\r\n def test_changed_dataset_card(dataset_name):\r\n card_path = repo_path / \"datasets\" / dataset_name / \"README.md\"\r\n assert card_path.exists()\r\n error_messages = []\r\n try:\r\n ReadMe.from_readme(card_path)\r\n except Exception as readme_error:\r\n error_messages.append(f\"The following issues have been found in the dataset cards:\\nREADME:\\n{readme_error}\")\r\n try:\r\n DatasetMetadata.from_readme(card_path)\r\n except Exception as metadata_error:\r\n error_messages.append(\r\n f\"The following issues have been found in the dataset cards:\\nYAML tags:\\n{metadata_error}\"\r\n )\r\n \r\n if error_messages:\r\n> raise ValueError(\"\\n\".join(error_messages))\r\nE ValueError: The following issues have been found in the dataset cards:\r\nE README:\r\nE 'charmap' codec can't decode byte 0x90 in position 2283: character maps to <undefined>\r\nE The following issues have been found in the dataset cards:\r\nE YAML tags:\r\nE 'charmap' codec can't decode byte 0x90 in position 2283: character maps to <undefined>\r\n```",
"Seems like the encoding issues on windows is also being tackled in #2418 - will see if this solves the problem in the current PR"
] | "2021-05-31T14:09:11Z" | "2021-06-03T14:34:30Z" | "2021-06-03T14:32:27Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/2435.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2435",
"merged_at": "2021-06-03T14:32:27Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2435.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2435"
} | This PR adds task templates for 9 SQuAD-like templates with the following properties:
* 1 config
* A schema that matches the `squad` one (i.e. the same column names, especially for the nested `answers` column, because the current implementation does not support casting with mismatched columns; see #2434)
* Less than 20GB (my laptop can't handle more right now)
The aim of this PR is to provide a few datasets to experiment with the task template integration in other libraries / services.
PR #2429 should be merged before this one.
cc @abhi1thakur | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 1,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2435/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2435/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/497 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/497/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/497/comments | https://api.github.com/repos/huggingface/datasets/issues/497/events | https://github.com/huggingface/datasets/pull/497 | 677,057,116 | MDExOlB1bGxSZXF1ZXN0NDY2MjQ2NDQ3 | 497 | skip header in PAWS-X | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | [] | "2020-08-11T17:26:25Z" | "2020-08-19T09:50:02Z" | "2020-08-19T09:50:01Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/497.diff",
"html_url": "https://github.com/huggingface/datasets/pull/497",
"merged_at": "2020-08-19T09:50:01Z",
"patch_url": "https://github.com/huggingface/datasets/pull/497.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/497"
} | This should fix #485
I also updated the `dataset_infos.json` file that is used to verify the integrity of the generated splits (the number of examples was reduced by one).
Note that there are new fields in `dataset_infos.json`, introduced in the latest release 0.4.0, corresponding to post-processing info. I removed them in this case when I ran `nlp-cli ./datasets/xtreme --save_infos` to keep backward compatibility (version 0.3.0 can't load these fields).
I think I'll change the logic so that `nlp-cli test` doesn't create these fields for datasets with no post-processing. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 1,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/497/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/497/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/511 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/511/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/511/comments | https://api.github.com/repos/huggingface/datasets/issues/511/events | https://github.com/huggingface/datasets/issues/511 | 681,055,553 | MDU6SXNzdWU2ODEwNTU1NTM= | 511 | dataset.shuffle() and select() resets format. Intended? | {
"avatar_url": "https://avatars.githubusercontent.com/u/24683907?v=4",
"events_url": "https://api.github.com/users/vegarab/events{/privacy}",
"followers_url": "https://api.github.com/users/vegarab/followers",
"following_url": "https://api.github.com/users/vegarab/following{/other_user}",
"gists_url": "https://api.github.com/users/vegarab/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/vegarab",
"id": 24683907,
"login": "vegarab",
"node_id": "MDQ6VXNlcjI0NjgzOTA3",
"organizations_url": "https://api.github.com/users/vegarab/orgs",
"received_events_url": "https://api.github.com/users/vegarab/received_events",
"repos_url": "https://api.github.com/users/vegarab/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/vegarab/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vegarab/subscriptions",
"type": "User",
"url": "https://api.github.com/users/vegarab"
} | [] | closed | false | null | [] | null | [
"Hi @vegarab yes feel free to open a discussion here.\r\n\r\nThis design choice was not very much thought about.\r\n\r\nSince `dataset.select()` (like all the method without a trailing underscore) is non-destructive and returns a new dataset it has most of its properties initialized from scratch (except the table and infos).\r\n\r\nThinking about it I don't see a strong reason against transmitting the format from the parent dataset to its newly created child. It's probably what's expected by the user in most cases. What do you think @lhoestq?\r\n\r\nBy the way, I've been working today on a refactoring of all the samples re-ordering/selection methods (`select`, `sort`, `shuffle`, `shard`, `train_test_split`). The idea is to speed them up by a lot (like, really a lot) by working as much as possible with an indices mapping table instead of doing a deep copy of the full dataset as we've been doing currently. You can give it a look and try it here: https://github.com/huggingface/nlp/pull/513\r\nFeedbacks are very much welcome",
"I think it's ok to keep the format.\r\nIf we want to have this behavior for `.map` too we just have to make sure it doesn't keep a column that's been removed.",
"Shall we have this in the coming release by the way @lhoestq ?",
"Yes sure !",
"Since datasets 1.0.0 the format is not reset anymore.\r\nClosing this one, but feel free to re-open if you have other questions"
] | "2020-08-18T13:46:01Z" | "2020-09-14T08:45:38Z" | "2020-09-14T08:45:38Z" | CONTRIBUTOR | null | null | null | Calling `dataset.shuffle()` or `dataset.select()` on a dataset resets its format set by `dataset.set_format()`. Is this intended or an oversight?
When working on quite large datasets that require a lot of preprocessing, I find it convenient to save the processed dataset to a file with `torch.save(dataset, "dataset.pt")` and later load it with `torch.load("dataset.pt")`, which preserves the format that was set before saving.
I do the shuffling and selecting (to control dataset size) after loading the data from the .pt file, as that is convenient when training multiple models on varying sizes of the same dataset.
The obvious workaround for this is to set the format again after using `dataset.select()` or `dataset.shuffle()`; a short sketch of this is included after this record.
_I guess this is more of a discussion on the design philosophy of the functions. Please let me know if this is not the right channel for these kinds of discussions or if they are not wanted at all!_
#### How to reproduce:
```python
import nlp
from transformers import T5Tokenizer
tokenizer = T5Tokenizer.from_pretrained("t5-base")
def create_features(batch):
context_encoding = tokenizer.batch_encode_plus(batch["context"])
return {"input_ids": context_encoding["input_ids"]}
dataset = nlp.load_dataset("cosmos_qa", split="train")
dataset = dataset.map(create_features, batched=True)
dataset.set_format(type="torch", columns=["input_ids"])
dataset[0]
# {'input_ids': tensor([ 1804, 3525, 1602, ... 0, 0])}
dataset = dataset.shuffle()
dataset[0]
# {'id': '3Q9(...)20', 'context': "Good Old War an (...) play ?', 'answer0': 'None of the above choices .', 'answer1': 'This person likes music and likes to see the show , they will see other bands play .', (...) 'input_ids': [1804, 3525, 1602, ... , 0, 0]}
``` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/511/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/511/timeline | null | completed | false |
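A short sketch of the workaround mentioned in the record above (issue 511): re-apply `set_format` after `shuffle()`/`select()`. It uses a tiny in-memory dataset instead of `cosmos_qa` so it runs stand-alone, and it uses the current `datasets` package name rather than the old `nlp` one; per the maintainers' comments, versions >= 1.0.0 carry the format over automatically, so the extra call is only needed on older releases.

```python
from datasets import Dataset

# Tiny stand-in for the tokenized dataset from the report.
dataset = Dataset.from_dict({"input_ids": [[1, 2, 3], [4, 5, 6], [7, 8, 9]]})
dataset.set_format(type="torch", columns=["input_ids"])

dataset = dataset.shuffle(seed=42)
# On versions where shuffle()/select() reset the format, simply set it again:
dataset.set_format(type="torch", columns=["input_ids"])

print(type(dataset[0]["input_ids"]))  # <class 'torch.Tensor'>
```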
https://api.github.com/repos/huggingface/datasets/issues/480 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/480/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/480/comments | https://api.github.com/repos/huggingface/datasets/issues/480/events | https://github.com/huggingface/datasets/pull/480 | 674,245,959 | MDExOlB1bGxSZXF1ZXN0NDYzOTcwNjQ2 | 480 | Column indexing hotfix | {
"avatar_url": "https://avatars.githubusercontent.com/u/26709476?v=4",
"events_url": "https://api.github.com/users/TevenLeScao/events{/privacy}",
"followers_url": "https://api.github.com/users/TevenLeScao/followers",
"following_url": "https://api.github.com/users/TevenLeScao/following{/other_user}",
"gists_url": "https://api.github.com/users/TevenLeScao/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/TevenLeScao",
"id": 26709476,
"login": "TevenLeScao",
"node_id": "MDQ6VXNlcjI2NzA5NDc2",
"organizations_url": "https://api.github.com/users/TevenLeScao/orgs",
"received_events_url": "https://api.github.com/users/TevenLeScao/received_events",
"repos_url": "https://api.github.com/users/TevenLeScao/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/TevenLeScao/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/TevenLeScao/subscriptions",
"type": "User",
"url": "https://api.github.com/users/TevenLeScao"
} | [] | closed | false | null | [] | null | [
"Looks good to me as well but we'll want to add a test indeed.\r\nYou can add one if you have time @TevenLeScao.\r\nOtherwise, we'll do it when we are back with Quentin. ",
"I fixed it in #494 "
] | "2020-08-06T11:37:05Z" | "2023-09-24T09:49:33Z" | "2020-08-12T08:36:10Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/480.diff",
"html_url": "https://github.com/huggingface/datasets/pull/480",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/480.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/480"
As observed for example in #469, `__getitem__` currently does not convert the data to the dataset format when indexing by column. This is a hotfix that imitates the working 0.3.0 code. In the future it would probably be nice to have a test for this; a small probe of the behaviour is sketched after this record. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/480/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/480/timeline | null | null | true |
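A small probe of the behaviour discussed in the record above (PR 480): with a format set, row indexing and column indexing can be compared directly. This is only a sketch on a toy in-memory dataset; it prints the returned types and makes no claim about which library versions apply the format to column queries.

```python
from datasets import Dataset

dataset = Dataset.from_dict({"input_ids": [[1, 2], [3, 4]]})
dataset.set_format(type="torch", columns=["input_ids"])

row_value = dataset[0]["input_ids"]  # row indexing
col_value = dataset["input_ids"]     # column indexing, the case discussed in the PR
print(type(row_value), type(col_value))
```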
https://api.github.com/repos/huggingface/datasets/issues/1443 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1443/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1443/comments | https://api.github.com/repos/huggingface/datasets/issues/1443/events | https://github.com/huggingface/datasets/pull/1443 | 761,033,061 | MDExOlB1bGxSZXF1ZXN0NTM1NzYyNTQ1 | 1,443 | Add OPUS Wikimedia Translations Dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/1183441?v=4",
"events_url": "https://api.github.com/users/abhishekkrthakur/events{/privacy}",
"followers_url": "https://api.github.com/users/abhishekkrthakur/followers",
"following_url": "https://api.github.com/users/abhishekkrthakur/following{/other_user}",
"gists_url": "https://api.github.com/users/abhishekkrthakur/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/abhishekkrthakur",
"id": 1183441,
"login": "abhishekkrthakur",
"node_id": "MDQ6VXNlcjExODM0NDE=",
"organizations_url": "https://api.github.com/users/abhishekkrthakur/orgs",
"received_events_url": "https://api.github.com/users/abhishekkrthakur/received_events",
"repos_url": "https://api.github.com/users/abhishekkrthakur/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/abhishekkrthakur/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/abhishekkrthakur/subscriptions",
"type": "User",
"url": "https://api.github.com/users/abhishekkrthakur"
} | [
{
"color": "0e8a16",
"default": false,
"description": "Contribution to a dataset script",
"id": 4564477500,
"name": "dataset contribution",
"node_id": "LA_kwDODunzps8AAAABEBBmPA",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20contribution"
}
] | closed | false | null | [] | null | [
"Thanks for your contribution, @abhishekkrthakur. Are you still interested in adding this dataset?\r\n\r\nWe are removing the dataset scripts from this GitHub repo and moving them to the Hugging Face Hub: https://huggingface.co/datasets\r\n\r\nWe would suggest you create this dataset there. Please, feel free to tell us if you need some help."
] | "2020-12-10T08:43:02Z" | "2023-09-24T09:40:41Z" | "2022-10-03T09:38:48Z" | MEMBER | null | 1 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1443.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1443",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/1443.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1443"
} | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1443/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1443/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2595 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2595/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2595/comments | https://api.github.com/repos/huggingface/datasets/issues/2595/events | https://github.com/huggingface/datasets/issues/2595 | 937,483,120 | MDU6SXNzdWU5Mzc0ODMxMjA= | 2,595 | ModuleNotFoundError: No module named 'datasets.tasks' while importing common voice datasets | {
"avatar_url": "https://avatars.githubusercontent.com/u/41314912?v=4",
"events_url": "https://api.github.com/users/profsatwinder/events{/privacy}",
"followers_url": "https://api.github.com/users/profsatwinder/followers",
"following_url": "https://api.github.com/users/profsatwinder/following{/other_user}",
"gists_url": "https://api.github.com/users/profsatwinder/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/profsatwinder",
"id": 41314912,
"login": "profsatwinder",
"node_id": "MDQ6VXNlcjQxMzE0OTEy",
"organizations_url": "https://api.github.com/users/profsatwinder/orgs",
"received_events_url": "https://api.github.com/users/profsatwinder/received_events",
"repos_url": "https://api.github.com/users/profsatwinder/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/profsatwinder/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/profsatwinder/subscriptions",
"type": "User",
"url": "https://api.github.com/users/profsatwinder"
} | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | [] | null | [
"Hi @profsatwinder.\r\n\r\nIt looks like you are using an old version of `datasets`. Please update it with `pip install -U datasets` and indicate if the problem persists.",
"@albertvillanova Thanks for the information. I updated it to 1.9.0 and the issue is resolved. Thanks again. "
] | "2021-07-06T03:20:55Z" | "2021-07-06T05:59:49Z" | "2021-07-06T05:59:49Z" | NONE | null | null | null | Error traceback:
---------------------------------------------------------------------------
ModuleNotFoundError Traceback (most recent call last)
<ipython-input-8-a7b592d3bca0> in <module>()
1 from datasets import load_dataset, load_metric
2
----> 3 common_voice_train = load_dataset("common_voice", "pa-IN", split="train+validation")
4 common_voice_test = load_dataset("common_voice", "pa-IN", split="test")
9 frames
/root/.cache/huggingface/modules/datasets_modules/datasets/common_voice/078d412587e9efeb0ae2e574da99c31e18844c496008d53dc5c60f4159ed639b/common_voice.py in <module>()
19
20 import datasets
---> 21 from datasets.tasks import AutomaticSpeechRecognition
22
23
ModuleNotFoundError: No module named 'datasets.tasks' | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2595/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2595/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/1787 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1787/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1787/comments | https://api.github.com/repos/huggingface/datasets/issues/1787/events | https://github.com/huggingface/datasets/pull/1787 | 795,485,842 | MDExOlB1bGxSZXF1ZXN0NTYyODI1NTI3 | 1,787 | Update the CommonGen citation information | {
"avatar_url": "https://avatars.githubusercontent.com/u/10104354?v=4",
"events_url": "https://api.github.com/users/yuchenlin/events{/privacy}",
"followers_url": "https://api.github.com/users/yuchenlin/followers",
"following_url": "https://api.github.com/users/yuchenlin/following{/other_user}",
"gists_url": "https://api.github.com/users/yuchenlin/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/yuchenlin",
"id": 10104354,
"login": "yuchenlin",
"node_id": "MDQ6VXNlcjEwMTA0MzU0",
"organizations_url": "https://api.github.com/users/yuchenlin/orgs",
"received_events_url": "https://api.github.com/users/yuchenlin/received_events",
"repos_url": "https://api.github.com/users/yuchenlin/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/yuchenlin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yuchenlin/subscriptions",
"type": "User",
"url": "https://api.github.com/users/yuchenlin"
} | [] | closed | false | null | [] | null | [] | "2021-01-27T22:12:47Z" | "2021-01-28T13:56:29Z" | "2021-01-28T13:56:29Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1787.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1787",
"merged_at": "2021-01-28T13:56:29Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1787.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1787"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1787/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1787/timeline | null | null | true |
|
https://api.github.com/repos/huggingface/datasets/issues/2741 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2741/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2741/comments | https://api.github.com/repos/huggingface/datasets/issues/2741/events | https://github.com/huggingface/datasets/issues/2741 | 957,979,559 | MDU6SXNzdWU5NTc5Nzk1NTk= | 2,741 | Add Hypersim dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/7246357?v=4",
"events_url": "https://api.github.com/users/osanseviero/events{/privacy}",
"followers_url": "https://api.github.com/users/osanseviero/followers",
"following_url": "https://api.github.com/users/osanseviero/following{/other_user}",
"gists_url": "https://api.github.com/users/osanseviero/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/osanseviero",
"id": 7246357,
"login": "osanseviero",
"node_id": "MDQ6VXNlcjcyNDYzNTc=",
"organizations_url": "https://api.github.com/users/osanseviero/orgs",
"received_events_url": "https://api.github.com/users/osanseviero/received_events",
"repos_url": "https://api.github.com/users/osanseviero/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/osanseviero/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/osanseviero/subscriptions",
"type": "User",
"url": "https://api.github.com/users/osanseviero"
} | [
{
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset",
"id": 2067376369,
"name": "dataset request",
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request"
},
{
"color": "bfdadc",
"default": false,
"description": "Vision datasets",
"id": 3608941089,
"name": "vision",
"node_id": "LA_kwDODunzps7XHBIh",
"url": "https://api.github.com/repos/huggingface/datasets/labels/vision"
}
] | open | false | null | [] | null | [] | "2021-08-02T10:06:50Z" | "2021-12-08T12:06:51Z" | null | MEMBER | null | null | null | ## Adding a Dataset
- **Name:** Hypersim
- **Description:** photorealistic synthetic dataset for holistic indoor scene understanding
- **Paper:** *link to the dataset paper if available*
- **Data:** https://github.com/apple/ml-hypersim
Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2741/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2741/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2828 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2828/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2828/comments | https://api.github.com/repos/huggingface/datasets/issues/2828/events | https://github.com/huggingface/datasets/pull/2828 | 977,181,517 | MDExOlB1bGxSZXF1ZXN0NzE3OTYwODg3 | 2,828 | Add code-mixed Kannada Hope speech dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/46108405?v=4",
"events_url": "https://api.github.com/users/adeepH/events{/privacy}",
"followers_url": "https://api.github.com/users/adeepH/followers",
"following_url": "https://api.github.com/users/adeepH/following{/other_user}",
"gists_url": "https://api.github.com/users/adeepH/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/adeepH",
"id": 46108405,
"login": "adeepH",
"node_id": "MDQ6VXNlcjQ2MTA4NDA1",
"organizations_url": "https://api.github.com/users/adeepH/orgs",
"received_events_url": "https://api.github.com/users/adeepH/received_events",
"repos_url": "https://api.github.com/users/adeepH/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/adeepH/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/adeepH/subscriptions",
"type": "User",
"url": "https://api.github.com/users/adeepH"
} | [] | closed | false | null | [] | null | [] | "2021-08-23T15:55:09Z" | "2021-10-01T17:21:03Z" | "2021-10-01T17:21:03Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/2828.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2828",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/2828.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2828"
} | ## Adding a Dataset
- **Name:** *KanHope*
- **Description:** *A code-mixed English-Kannada dataset for Hope speech detection*
- **Paper:** *https://arxiv.org/abs/2108.04616*
- **Data:** *https://github.com/adeepH/KanHope/tree/main/dataset*
- **Motivation:** *The dataset is amongst the very few resources available for code-mixed low-resourced Dravidian languages of India* | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2828/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2828/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5768 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5768/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5768/comments | https://api.github.com/repos/huggingface/datasets/issues/5768/events | https://github.com/huggingface/datasets/issues/5768 | 1,672,494,561 | I_kwDODunzps5jsD3h | 5,768 | load_dataset("squad") doesn't work in 2.7.1 and 2.10.1 | {
"avatar_url": "https://avatars.githubusercontent.com/u/57412770?v=4",
"events_url": "https://api.github.com/users/yaseen157/events{/privacy}",
"followers_url": "https://api.github.com/users/yaseen157/followers",
"following_url": "https://api.github.com/users/yaseen157/following{/other_user}",
"gists_url": "https://api.github.com/users/yaseen157/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/yaseen157",
"id": 57412770,
"login": "yaseen157",
"node_id": "MDQ6VXNlcjU3NDEyNzcw",
"organizations_url": "https://api.github.com/users/yaseen157/orgs",
"received_events_url": "https://api.github.com/users/yaseen157/received_events",
"repos_url": "https://api.github.com/users/yaseen157/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/yaseen157/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yaseen157/subscriptions",
"type": "User",
"url": "https://api.github.com/users/yaseen157"
} | [] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
}
] | null | [
"Thanks for reporting, @yaseen157.\r\n\r\nCould you please give the complete error stack trace?",
"I am not able to reproduce your issue: the dataset loads perfectly on my local machine and on a Colab notebook: https://colab.research.google.com/drive/1Fbdoa1JdNz8DOdX6gmIsOK1nCT8Abj4O?usp=sharing\r\n```python\r\nIn [1]: from datasets import load_dataset\r\n\r\nIn [2]: ds = load_dataset(\"squad\")\r\nDownloading builder script: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 5.27k/5.27k [00:00<00:00, 3.22MB/s]\r\nDownloading metadata: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2.36k/2.36k [00:00<00:00, 1.60MB/s]\r\nDownloading readme: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 7.67k/7.67k [00:00<00:00, 4.58MB/s]\r\nDownloading and preparing dataset squad/plain_text to ...t/.cache/huggingface/datasets/squad/plain_text/1.0.0/d6ec3ceb99ca480ce37cdd35555d6cb2511d223b9150cce08a837ef62ffea453...\r\nDownloading data: 30.3MB [00:00, 91.8MB/s] | 0/2 [00:00<?, ?it/s]\r\nDownloading data: 4.85MB [00:00, 75.3MB/s] \r\nDownloading data files: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2/2 [00:00<00:00, 2.31it/s]\r\nExtracting data files: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2/2 [00:00<00:00, 2157.01it/s]\r\nDataset squad downloaded and prepared to .../.cache/huggingface/datasets/squad/plain_text/1.0.0/d6ec3ceb99ca480ce37cdd35555d6cb2511d223b9150cce08a837ef62ffea453. Subsequent calls will reuse this data.\r\n100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2/2 [00:00<00:00, 463.95it/s]\r\n\r\nIn [3]: ds\r\nOut[3]: \r\nDatasetDict({\r\n train: Dataset({\r\n features: ['id', 'title', 'context', 'question', 'answers'],\r\n num_rows: 87599\r\n })\r\n validation: Dataset({\r\n features: ['id', 'title', 'context', 'question', 'answers'],\r\n num_rows: 10570\r\n })\r\n})\r\n```",
"I am at a complete loss for what's happening here. A quick summary, I have 3 machines to try this with:\r\n1) My windows 10 laptop\r\n2) Linux machine1, super computer login node\r\n3) Linux machine2, super computer compute node\r\n\r\nLet's define the following as a test script for the machines:\r\n\r\n```\r\nimport traceback\r\nimport datasets\r\nprint(f\"{datasets.__version__=}\")\r\ntry:\r\n ds = datasets.load_dataset(\"squad\")\r\nexcept:\r\n traceback.print_exc()\r\nelse:\r\n print(\"Success!\")\r\n```\r\n\r\nThe Windows laptop enters some sort of traceback recursion loop:\r\n\r\n> datasets.__version__='2.7.1'\r\n> Downloading and preparing dataset squad/plain_text to C:/Users/yr3g17/.cache/huggingface/datasets/squad/plain_text/1.0.0/d6ec3ceb99ca480ce37cdd35555d6cb2511d223b9150cce08a837ef62ffea453...\r\n> Downloading data files: 100%|██████████| 2/2 [00:00<?, ?it/s]\r\n> Traceback (most recent call last):\r\n> File \"<string>\", line 1, in <module>\r\n> File \"C:\\Users\\yr3g17\\AppData\\Local\\Programs\\Python\\Python39\\lib\\multiprocessing\\spawn.py\", line 116, in spawn_main\r\n> exitcode = _main(fd, parent_sentinel)\r\n> File \"C:\\Users\\yr3g17\\AppData\\Local\\Programs\\Python\\Python39\\lib\\multiprocessing\\spawn.py\", line 125, in _main\r\n> prepare(preparation_data)\r\n> File \"C:\\Users\\yr3g17\\AppData\\Local\\Programs\\Python\\Python39\\lib\\multiprocessing\\spawn.py\", line 236, in prepare\r\n> _fixup_main_from_path(data['init_main_from_path'])\r\n> File \"C:\\Users\\yr3g17\\AppData\\Local\\Programs\\Python\\Python39\\lib\\multiprocessing\\spawn.py\", line 287, in _fixup_main_from_path\r\n> main_content = runpy.run_path(main_path,\r\n> File \"C:\\Users\\yr3g17\\AppData\\Local\\Programs\\Python\\Python39\\lib\\runpy.py\", line 267, in run_path\r\n> code, fname = _get_code_from_file(run_name, path_name)\r\n> File \"C:\\Users\\yr3g17\\AppData\\Local\\Programs\\Python\\Python39\\lib\\runpy.py\", line 237, in _get_code_from_file\r\n> with io.open_code(decoded_path) as f:\r\n> OSError: [Errno 22] Invalid argument: 'C:\\\\Users\\\\yr3g17\\\\OneDrive - University of Southampton\\\\Documents\\\\PhD-repository\\\\<input>'\r\n> Traceback (most recent call last):\r\n> File \"<string>\", line 1, in <module>\r\n> File \"C:\\Users\\yr3g17\\AppData\\Local\\Programs\\Python\\Python39\\lib\\multiprocessing\\spawn.py\", line 116, in spawn_main\r\n> exitcode = _main(fd, parent_sentinel)\r\n> File \"C:\\Users\\yr3g17\\AppData\\Local\\Programs\\Python\\Python39\\lib\\multiprocessing\\spawn.py\", line 125, in _main\r\n> prepare(preparation_data)\r\n**this error traceback is endlessly recursive**\r\n\r\nThis is a brand new issue that started today and I didn't even realise was a thing, as I had been using my windows machine to follow tracebacks for the other machines...\r\n\r\nI suspect this issue had something to do with my filepath naming, but I couldn't confirm this when I spent time trying to debug this myself weeks ago, something to do with files being locked and never released. I'm not too concerned about my laptop not working here because I've had so many issues with Microsoft OneDrive and my filesystem.\r\n\r\nLinux machines 1 and 2 were working fine for months, but have all of a sudden stopped working. 
Trying to run linux machine 1 (login node), I get:\r\n\r\n> datasets.__version__='2.10.1'\r\n> Downloading and preparing dataset json/squad to /home/yr3g17/.cache/hugg\r\ningface/datasets/json/squad-d733af945be1d2c2/0.0.0/0f7e3662623656454fcd2\r\nb650f34e886a7db4b9104504885bd462096cc7a9f51...\r\n> Downloading data files: 100%|███████████████████████████████████████████\r\n█████████████████████████████████████████████| 2/2 [00:00<00:00, 4042.70\r\nit/s]\r\n>Extracting data files: 100%|███████████████████████████████████████\r\n███████████████████████████████████████████████████| 2/2 [00:00<00:00, 1\r\n11.15it/s]\r\n> Generating train split: 0 examples [00:00, ? examples/s]\r\n\r\n and hangs here. This has not happened to me before on the Linux machine. If I forcefully keyboard interrupt, I get:\r\n \r\n> Traceback (most recent call last):\r\n> File \"<stdin>\", line 2, in <module>\r\n> File \"/home/yr3g17/.conda/envs/arkroyal/lib/python3.10/site-packages/d\r\n> atasets/load.py\", line 1782, in load_dataset\r\n> builder_instance.download_and_prepare(\r\n> File \"/home/yr3g17/.conda/envs/arkroyal/lib/python3.10/site-packages/d\r\n> atasets/builder.py\", line 793, in download_and_prepare\r\n> with FileLock(lock_path) if is_local else contextlib.nullcontext():\r\n> File \"/home/yr3g17/.conda/envs/arkroyal/lib/python3.10/site-packages/d\r\n> atasets/utils/filelock.py\", line 320, in __enter__\r\n> self.acquire()\r\n> File \"/home/yr3g17/.conda/envs/arkroyal/lib/python3.10/site-packages/d\r\n> atasets/utils/filelock.py\", line 282, in acquire\r\n> time.sleep(poll_intervall)\r\n\r\nWhich also appears to be file lock related! I resolved this by navigating to my ~/.cache/huggingface/datasets directory and wiping out anything to do with the squad dataset in *.lock files. Now I get:\r\n\r\n```\r\nfrom datasets import load_dataset\r\ndataset_load(\"squad\")\r\n\r\n```\r\n> Downloading and preparing dataset squad/plain_text to /home/yr3g17/.cache/huggingface/datasets/squad/plain_text/1.0.0/d6ec3ceb99ca480ce37cdd35555d6cb\r\n> 2511d223b9150cce08a837ef62ffea453...\r\n> Downloading data files: 100%|██████████████████████████████████████████████████████████████████████████████████████████| 2/2 [00:00<00:00, 44.75it/s]\r\n> Extracting data files: 100%|███████████████████████████████████████████████████████████████████████████████████████████| 2/2 [00:00<00:00, 8.54it/s]\r\n> Dataset squad downloaded and prepared to /home/yr3g17/.cache/huggingface/datasets/squad/plain_text/1.0.0/d6ec3ceb99ca480ce37cdd35555d6cb2511d223b9150\r\n> cce08a837ef62ffea453. Subsequent calls will reuse this data.\r\n> 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2/2 [00:00<00:00, 19.77it/s]\r\n> DatasetDict({\r\n> train: Dataset({\r\n> features: ['id', 'title', 'context', 'question', 'answers'],\r\n> num_rows: 87599\r\n> })\r\n> validation: Dataset({\r\n> features: ['id', 'title', 'context', 'question', 'answers'],\r\n> num_rows: 10570\r\n> })\r\n> })\r\n> \r\n\r\nWhich all seems fine right, it's doing what it should be. But now, without ever leaving the IDE, I \"make a subsequent call\" to reuse the data by repeating the command. 
I encounter the following traceback\r\n\r\n`load_dataset(\"squad\")`\r\n\r\n> Traceback (most recent call last):\r\n> File \"<stdin>\", line 1, in <module>\r\n> File \"/home/yr3g17/.conda/envs/arkroyal/lib/python3.10/site-packages/datasets/load.py\", line 1759, in load_dataset\r\n> builder_instance = load_dataset_builder(\r\n> File \"/home/yr3g17/.conda/envs/arkroyal/lib/python3.10/site-packages/datasets/load.py\", line 1496, in load_dataset_builder\r\n> dataset_module = dataset_module_factory(\r\n> File \"/home/yr3g17/.conda/envs/arkroyal/lib/python3.10/site-packages/datasets/load.py\", line 1151, in dataset_module_factory\r\n> ).get_module()\r\n> File \"/home/yr3g17/.conda/envs/arkroyal/lib/python3.10/site-packages/datasets/load.py\", line 631, in get_module\r\n> data_files = DataFilesDict.from_local_or_remote(\r\n> File \"/home/yr3g17/.conda/envs/arkroyal/lib/python3.10/site-packages/datasets/data_files.py\", line 796, in from_local_or_remote\r\n> DataFilesList.from_local_or_remote(\r\n> File \"/home/yr3g17/.conda/envs/arkroyal/lib/python3.10/site-packages/datasets/data_files.py\", line 764, in from_local_or_remote\r\n> data_files = resolve_patterns_locally_or_by_urls(base_path, patterns, allowed_extensions)\r\n> File \"/home/yr3g17/.conda/envs/arkroyal/lib/python3.10/site-packages/datasets/data_files.py\", line 369, in resolve_patterns_locally_or_by_urls\r\n> raise FileNotFoundError(error_msg)\r\n> FileNotFoundError: Unable to resolve any data file that matches '['train[-._ 0-9/]**', '**[-._ 0-9/]train[-._ 0-9/]**', 'training[-._ 0-9/]**', '**[-\r\n> ._ 0-9/]training[-._ 0-9/]**']' at /mainfs/home/yr3g17/.cache/huggingface/datasets/squad with any supported extension ['csv', 'tsv', 'json', 'jsonl',\r\n> 'parquet', 'txt', 'blp', 'bmp', 'dib', 'bufr', 'cur', 'pcx', 'dcx', 'dds', 'ps', 'eps', 'fit', 'fits', 'fli', 'flc', 'ftc', 'ftu', 'gbr', 'gif', 'gr\r\n> ib', 'h5', 'hdf', 'png', 'apng', 'jp2', 'j2k', 'jpc', 'jpf', 'jpx', 'j2c', 'icns', 'ico', 'im', 'iim', 'tif', 'tiff', 'jfif', 'jpe', 'jpg', 'jpeg', '\r\n> mpg', 'mpeg', 'msp', 'pcd', 'pxr', 'pbm', 'pgm', 'ppm', 'pnm', 'psd', 'bw', 'rgb', 'rgba', 'sgi', 'ras', 'tga', 'icb', 'vda', 'vst', 'webp', 'wmf', '\r\n> emf', 'xbm', 'xpm', 'BLP', 'BMP', 'DIB', 'BUFR', 'CUR', 'PCX', 'DCX', 'DDS', 'PS', 'EPS', 'FIT', 'FITS', 'FLI', 'FLC', 'FTC', 'FTU', 'GBR', 'GIF', 'G\r\n> RIB', 'H5', 'HDF', 'PNG', 'APNG', 'JP2', 'J2K', 'JPC', 'JPF', 'JPX', 'J2C', 'ICNS', 'ICO', 'IM', 'IIM', 'TIF', 'TIFF', 'JFIF', 'JPE', 'JPG', 'JPEG',\r\n> 'MPG', 'MPEG', 'MSP', 'PCD', 'PXR', 'PBM', 'PGM', 'PPM', 'PNM', 'PSD', 'BW', 'RGB', 'RGBA', 'SGI', 'RAS', 'TGA', 'ICB', 'VDA', 'VST', 'WEBP', 'WMF',\r\n> 'EMF', 'XBM', 'XPM', 'aiff', 'au', 'avr', 'caf', 'flac', 'htk', 'svx', 'mat4', 'mat5', 'mpc2k', 'ogg', 'paf', 'pvf', 'raw', 'rf64', 'sd2', 'sds', 'ir\r\n> cam', 'voc', 'w64', 'wav', 'nist', 'wavex', 'wve', 'xi', 'mp3', 'opus', 'AIFF', 'AU', 'AVR', 'CAF', 'FLAC', 'HTK', 'SVX', 'MAT4', 'MAT5', 'MPC2K', 'O\r\n> GG', 'PAF', 'PVF', 'RAW', 'RF64', 'SD2', 'SDS', 'IRCAM', 'VOC', 'W64', 'WAV', 'NIST', 'WAVEX', 'WVE', 'XI', 'MP3', 'OPUS', 'zip']\r\n\r\nIt doesn't even appear like I can reliably repeat this process. I'll nuke squad files in my dataset cache and run the Python code again (which downloads a new copy of the dataset to cache). 
It will either fail (as it just did in the quote above), or it will successfully recall the dataset.\r\n\r\nI repeated this nuking process a few times until calling load_dataset was reliably giving me the correct result (no filelocking issues or tracebacks). I then sent the test script as a job to the supercomputer compute nodes (which do not have internet access and therefore depend on cached data from Linux machine 1 login nodes)\r\n\r\n> Using the latest cached version of the module from /home/yr3g17/.cache/huggingface/modules/datasets_modules/datasets/squad/8730650fed465361f38ac4d810\r\n> ccdd16e8fc87b56498e52fb7e2cadaefc1f177 (last modified on Tue Feb 14 10:12:56 2023) since it couldn't be found locally at squad., or remotely on the Hugging Face Hub.\r\n> Traceback (most recent call last):\r\n> File \"/mainfs/scratch/yr3g17/squad_qanswering/3054408/0/../../main.py\", line 5, in <module>\r\n> dataset = load_dataset(\"squad\")\r\n> File \"/home/yr3g17/.conda/envs/arkroyal/lib/python3.10/site-packages/datasets/load.py\", line 1759, in load_dataset\r\n> builder_instance = load_dataset_builder(\r\n> File \"/home/yr3g17/.conda/envs/arkroyal/lib/python3.10/site-packages/datasets/load.py\", line 1522, in load_dataset_builder\r\n> builder_instance: DatasetBuilder = builder_cls(\r\n> TypeError: 'NoneType' object is not callable\r\n\r\nand I have absolutely no idea why the second and third machines are producing different tracebacks. I have previously run these exact scripts successfully on the login and compute nodes of the supercomputer, this issue I'm raising has appeared fairly recently for me. This, is where I encounter the TypeError that I opened this issue with, which I was able to traceback (using my laptop before it too started not working) to whatever was dynamically importing \"builder_cls\". That bit of code wasn't doing importing builder_cls correctly and would effectively make the assignment \"builder_cls=None\" resulting in the TypeError. Does any of this help?",
"I'm back on linux machine 1 (login node) now. After submitting that as a job to machine 2 and it failing with TypeError, linux machine 1 now produces identical traceback to machine 2:\r\n\r\n> (arkroyal) [yr3g17@cyan52 squad_qanswering]$ python\r\n> Python 3.10.8 (main, Nov 24 2022, 14:13:03) [GCC 11.2.0] on linux\r\n> Type \"help\", \"copyright\", \"credits\" or \"license\" for more information.\r\n>\r\n> from datasets import load_dataset\r\n> load_dataset(\"squad\")\r\n>\r\n> Traceback (most recent call last):\r\n> File \"<stdin>\", line 1, in <module>\r\n> File \"/home/yr3g17/.conda/envs/arkroyal/lib/python3.10/site-packages/datasets/load.py\", line 1759, in load_dataset\r\n> builder_instance = load_dataset_builder(\r\n> File \"/home/yr3g17/.conda/envs/arkroyal/lib/python3.10/site-packages/datasets/load.py\", line 1522, in load_dataset_builder\r\n> builder_instance: DatasetBuilder = builder_cls(\r\n> TypeError: 'NoneType' object is not callable\r\n\r\nI thought it might be useful to provide you with my cache file structure:\r\n\r\n>_home_yr3g17_.cache_huggingface_datasets_casino_default_1.1.0_302c3b1ac78c48091deabe83a11f4003c7b472a4e11a8eb92799653785bd5da1.lock\r\n>_home_yr3g17_.cache_huggingface_datasets_imdb_plain_text_1.0.0_2fdd8b9bcadd6e7055e742a706876ba43f19faee861df134affd7a3f60fc38a1.lock\r\n>_home_yr3g17_.cache_huggingface_datasets_squad_plain_text_1.0.0_d6ec3ceb99ca480ce37cdd35555d6cb2511d223b9150cce08a837ef62ffea453.lock\r\n>_home_yr3g17_.cache_huggingface_datasets_yelp_review_full_yelp_review_full_1.0.0_e8e18e19d7be9e75642fc66b198abadb116f73599ec89a69ba5dd8d1e57ba0bf.lock\r\n> casino\r\n> downloads\r\n> imdb\r\n> json\r\n> squad\r\n> squad_v2\r\n> yelp_review_full\r\n\r\nThe inside of squad/plain_text/1.0.0/ looks like\r\n\r\n> d6ec3ceb99ca480ce37cdd35555d6cb2511d223b9150cce08a837ef62ffea453\r\n> d6ec3ceb99ca480ce37cdd35555d6cb2511d223b9150cce08a837ef62ffea453.incomplete_info.lock\r\n> d6ec3ceb99ca480ce37cdd35555d6cb2511d223b9150cce08a837ef62ffea453_builder.lock\r\n",
"I see this is quite a complex use case...\r\n\r\nLet's try multiple things:\r\n- First, update `datasets` and make sure you use the same version in all machines, so that we can easily compare different behaviors.\r\n ```\r\n pip install -U datasets\r\n ```\r\n- Second, wherever you run the `load_dataset(\"squad\")` command, make sure there is not a local directory named \"squad\". The datasets library gives priority to any local file/directory over the datasets on the Hugging Face Hub\r\n - I tell you this, because in one of your trace backs, it seems it refers to a local directory:\r\n ```\r\n Downloading and preparing dataset json/squad to /home/yr3g17/.cache/huggingface/datasets/json/squad-d733af945be1d2c2/0.0.0/0f7e3662623656454fcd2b650f34e886a7db4b9104504885bd462096cc7a9f51...\r\n ```\r\n- Third, to use the \"squad\" dataset from the Hub, you need to have internet connection, so that you can download the \"squad\" Python loading script from the Hub. Do all your machines have internet connection?\r\n - I ask this because of this error message:\r\n ```\r\n Using the latest cached version of the module from /home/yr3g17/.cache/huggingface/modules/datasets_modules/datasets/squad/8730650fed465361f38ac4d810ccdd16e8fc87b56498e52fb7e2cadaefc1f177 (last modified on Tue Feb 14 10:12:56 2023) since it couldn't be found locally at squad., or remotely on the Hugging Face Hub.\r\n ```\r\n- Fourth, to be sure that we avoid any issues with the cache, it is a good idea to remove it and regenerate it. Remove `.cache/huggingface/datasets` and also `.cache/huggingface/modules`\r\n- Fifth, as an additional debugging tool, let's be sure we use the latest \"squad\" Python loading script by passing the revision parameter:\r\n ```\r\n ds = load_dataset(\"squad\", revision=\"5fe18c4c680f9922d794e3f4dd673a751c74ee37\")\r\n ```",
"Additionally, we just had an infrastructure issue on the Hugging Face Hub at around 11:30 today. That might have contributed to the connectivity issue... It is fixed now.\r\n\r\nhttps://status.huggingface.co/",
"Hi again, thanks for your help and insight Albert Villanova.\r\n\r\nIt's all working now, so thank you for that. For the benefit of anyone else who ends up in this thread, I solved the problem by addressing Albert's advice:\r\n\r\n(1) Both Windows and Linux machine 1 (have internet access) and can now access the SQuAD dataset. The supercomputer login node can only access version 2.7.1, but my Windows laptop is running on datasets 2.11.0 just fine. I suspect it was just a perfect storm alongside the aforementioned \"infrastructure issue\".\r\n\r\n(2) I did have a local directory called squad, because I was using a local copy of evaluate's \"SQuAD\" metric. The supercomputer compute nodes do not have internet access and treat `metric = evaluate.load('<x>')` as a way of loading a metric at the local path `./<x>/<x>.py`, which could've been a related issue as I was storing the metric under `squad/squad.py`. Don't be lazy like me and store the evaluation code under a path with a name that can be misinterpreted.\r\n\r\n(3) I can't give internet access to the supercomputer compute nodes, so local files do just fine here.\r\n\r\n(4) The windows and Linux machine 1 can both access the internet and were getting fresh copies of the dataset from the huggingface hub. Linux machine 2 was working after I cleared the contents of ~/.cache/huggingface/....\r\n\r\nI feel silly now, knowing it was all so simple! Sorry about that Albert, and thanks again for the help. I've not raised a Github issue like this before, so I'm not sure if I should be close my own issues or if this is something you guys do?",
"Thanks for your detailed feedback which for sure will be useful to other community members."
] | "2023-04-18T07:10:56Z" | "2023-04-20T10:27:23Z" | "2023-04-20T10:27:22Z" | NONE | null | null | null | ### Describe the bug
There is an issue that seems to be unique to the "squad" dataset, in which it cannot be loaded using standard methods. This issue is most quickly reproduced from the command line, using the HF examples to verify a dataset is loaded properly.
This is not a problem with the "squad_v2" dataset, for example.
### Steps to reproduce the bug
cmd line
> $ python -c "from datasets import load_dataset; print(load_dataset('squad', split='train')[0])"
OR
Python IDE
> from datasets import load_dataset
> load_dataset("squad")
### Expected behavior
I expected either to see the output described in the installation guide ([https://huggingface.co/docs/datasets/installation]) from running the very same command on the command line, or at least output that does not raise Python's TypeError.
There is some funky behaviour in the dataset-builder portion of the codebase that means it is either trying to import the squad dataset from an incorrect path, or the squad dataset couldn't be downloaded. I'm not really sure what the problem is beyond that. While messing around with caching I did manage to get the dataset to load once, but then couldn't repeat it; a short sketch of the cache-related checks suggested in the thread follows this record.
### Environment info
datasets=2.7.1 **or** 2.10.1, python=3.10.8, Linux 3.10.0-1160.36.2.el7.x86_64 **or** Windows 10-64
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5768/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5768/timeline | null | completed | false |
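A sketch of the cache-related checks suggested in the thread above (issue 5768), written out as code. Both keyword arguments exist in recent `datasets` releases; the revision hash is the one quoted in the maintainer's comment. This assumes the machine has internet access and that no local directory named `squad` shadows the Hub dataset.

```python
from datasets import load_dataset

# 1) Bypass any stale or corrupted cache entries by forcing a fresh download.
ds = load_dataset("squad", download_mode="force_redownload")

# 2) Pin the loading script to a known revision of the Hub repository.
ds = load_dataset("squad", revision="5fe18c4c680f9922d794e3f4dd673a751c74ee37")

print(ds)
```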
https://api.github.com/repos/huggingface/datasets/issues/3391 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3391/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3391/comments | https://api.github.com/repos/huggingface/datasets/issues/3391/events | https://github.com/huggingface/datasets/issues/3391 | 1,072,849,055 | I_kwDODunzps4_8mCf | 3,391 | method to select columns | {
"avatar_url": "https://avatars.githubusercontent.com/u/31893406?v=4",
"events_url": "https://api.github.com/users/cccntu/events{/privacy}",
"followers_url": "https://api.github.com/users/cccntu/followers",
"following_url": "https://api.github.com/users/cccntu/following{/other_user}",
"gists_url": "https://api.github.com/users/cccntu/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/cccntu",
"id": 31893406,
"login": "cccntu",
"node_id": "MDQ6VXNlcjMxODkzNDA2",
"organizations_url": "https://api.github.com/users/cccntu/orgs",
"received_events_url": "https://api.github.com/users/cccntu/received_events",
"repos_url": "https://api.github.com/users/cccntu/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/cccntu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cccntu/subscriptions",
"type": "User",
"url": "https://api.github.com/users/cccntu"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | closed | false | null | [] | null | [
"duplicate of #2655"
] | "2021-12-07T02:44:19Z" | "2021-12-07T02:45:27Z" | "2021-12-07T02:45:27Z" | CONTRIBUTOR | null | null | null | **Is your feature request related to a problem? Please describe.**
* There is currently no way to select some columns of a dataset. In pandas, one can use `df[['col1', 'col2']]` to select columns, but in `datasets` it results in an error.
**Describe the solution you'd like**
* A new method that can be used to create a new dataset with only a list of specified columns.
**Describe alternatives you've considered**
`.remove_columns(self, columns: Union[str, List[str]], inverse: bool = False)`
Or
`.select(self, indices: Iterable = None, columns: List[str] = None)`
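For comparison, a sketch of what is possible with the current API (hypothetical column names; the selection is inverted by dropping every other column):

```python
from datasets import Dataset

ds = Dataset.from_dict({"col1": [1, 2], "col2": ["a", "b"], "col3": [0.1, 0.2]})

# Keep only the desired columns by removing everything else.
keep = ["col1", "col2"]
ds_selected = ds.remove_columns([c for c in ds.column_names if c not in keep])
print(ds_selected.column_names)  # ['col1', 'col2']
```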
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3391/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3391/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/1525 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1525/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1525/comments | https://api.github.com/repos/huggingface/datasets/issues/1525/events | https://github.com/huggingface/datasets/pull/1525 | 764,530,582 | MDExOlB1bGxSZXF1ZXN0NTM4NTUwMzI2 | 1,525 | Adding a second branch for Atomic to fix git errors | {
"avatar_url": "https://avatars.githubusercontent.com/u/8900094?v=4",
"events_url": "https://api.github.com/users/ontocord/events{/privacy}",
"followers_url": "https://api.github.com/users/ontocord/followers",
"following_url": "https://api.github.com/users/ontocord/following{/other_user}",
"gists_url": "https://api.github.com/users/ontocord/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ontocord",
"id": 8900094,
"login": "ontocord",
"node_id": "MDQ6VXNlcjg5MDAwOTQ=",
"organizations_url": "https://api.github.com/users/ontocord/orgs",
"received_events_url": "https://api.github.com/users/ontocord/received_events",
"repos_url": "https://api.github.com/users/ontocord/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ontocord/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ontocord/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ontocord"
} | [] | closed | false | null | [] | null | [] | "2020-12-12T22:54:50Z" | "2020-12-28T15:51:11Z" | "2020-12-28T15:51:11Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1525.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1525",
"merged_at": "2020-12-28T15:51:11Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1525.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1525"
} | Adding the Atomic common sense dataset.
See https://homes.cs.washington.edu/~msap/atomic/ | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1525/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1525/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4174 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4174/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4174/comments | https://api.github.com/repos/huggingface/datasets/issues/4174/events | https://github.com/huggingface/datasets/pull/4174 | 1,205,575,941 | PR_kwDODunzps42SnJS | 4,174 | Fix when map function modifies input in-place | {
"avatar_url": "https://avatars.githubusercontent.com/u/24695242?v=4",
"events_url": "https://api.github.com/users/thomasw21/events{/privacy}",
"followers_url": "https://api.github.com/users/thomasw21/followers",
"following_url": "https://api.github.com/users/thomasw21/following{/other_user}",
"gists_url": "https://api.github.com/users/thomasw21/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/thomasw21",
"id": 24695242,
"login": "thomasw21",
"node_id": "MDQ6VXNlcjI0Njk1MjQy",
"organizations_url": "https://api.github.com/users/thomasw21/orgs",
"received_events_url": "https://api.github.com/users/thomasw21/received_events",
"repos_url": "https://api.github.com/users/thomasw21/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/thomasw21/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thomasw21/subscriptions",
"type": "User",
"url": "https://api.github.com/users/thomasw21"
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | "2022-04-15T13:23:15Z" | "2022-04-15T14:52:07Z" | "2022-04-15T14:45:58Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/4174.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4174",
"merged_at": "2022-04-15T14:45:58Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4174.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4174"
} | When `function` modifies its input in-place, the guarantee that the columns listed in `remove_columns` are still contained in `input` no longer holds. Therefore we need to relax the way we pop elements, by checking whether each column exists before popping it (a short sketch of this failure mode follows this entry). | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4174/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4174/timeline | null | null | true |
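A hypothetical illustration of the edge case this PR addresses (not the PR's own test case; the dataset and function names are made up):

```python
from datasets import Dataset

ds = Dataset.from_dict({"text": ["a", "bb"], "label": [0, 1]})

def process(example):
    # The mapped function mutates its input dict in-place...
    example.pop("label")
    return {"length": len(example["text"])}

# ...so when `map` later drops the columns listed in `remove_columns`,
# "label" is already gone. Before this fix that second pop could raise a
# KeyError; with the fix, missing columns are simply skipped.
out = ds.map(process, remove_columns=["label"])
print(out.column_names)  # ['text', 'length']
```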
https://api.github.com/repos/huggingface/datasets/issues/244 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/244/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/244/comments | https://api.github.com/repos/huggingface/datasets/issues/244/events | https://github.com/huggingface/datasets/pull/244 | 631,869,155 | MDExOlB1bGxSZXF1ZXN0NDI4NjgxMTcx | 244 | Add Allociné Dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/37028092?v=4",
"events_url": "https://api.github.com/users/TheophileBlard/events{/privacy}",
"followers_url": "https://api.github.com/users/TheophileBlard/followers",
"following_url": "https://api.github.com/users/TheophileBlard/following{/other_user}",
"gists_url": "https://api.github.com/users/TheophileBlard/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/TheophileBlard",
"id": 37028092,
"login": "TheophileBlard",
"node_id": "MDQ6VXNlcjM3MDI4MDky",
"organizations_url": "https://api.github.com/users/TheophileBlard/orgs",
"received_events_url": "https://api.github.com/users/TheophileBlard/received_events",
"repos_url": "https://api.github.com/users/TheophileBlard/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/TheophileBlard/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/TheophileBlard/subscriptions",
"type": "User",
"url": "https://api.github.com/users/TheophileBlard"
} | [] | closed | false | null | [] | null | [
"great work @TheophileBlard ",
"LGTM, thanks a lot for adding dummy data tests :-) Was it difficult to create the correct dummy data folder? ",
"It was pretty easy actually. Documentation is on point !"
] | "2020-06-05T19:19:26Z" | "2020-06-11T07:47:26Z" | "2020-06-11T07:47:26Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/244.diff",
"html_url": "https://github.com/huggingface/datasets/pull/244",
"merged_at": "2020-06-11T07:47:26Z",
"patch_url": "https://github.com/huggingface/datasets/pull/244.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/244"
} | This is a French binary sentiment classification dataset, which was used to train this model: https://huggingface.co/tblard/tf-allocine.
Basically, it's a French "IMDB" dataset, with more reviews.
More info on [this repo](https://github.com/TheophileBlard/french-sentiment-analysis-with-bert). | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/244/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/244/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/237 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/237/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/237/comments | https://api.github.com/repos/huggingface/datasets/issues/237/events | https://github.com/huggingface/datasets/issues/237 | 631,199,940 | MDU6SXNzdWU2MzExOTk5NDA= | 237 | Can't download MultiNLI | {
"avatar_url": "https://avatars.githubusercontent.com/u/15801338?v=4",
"events_url": "https://api.github.com/users/patpizio/events{/privacy}",
"followers_url": "https://api.github.com/users/patpizio/followers",
"following_url": "https://api.github.com/users/patpizio/following{/other_user}",
"gists_url": "https://api.github.com/users/patpizio/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/patpizio",
"id": 15801338,
"login": "patpizio",
"node_id": "MDQ6VXNlcjE1ODAxMzM4",
"organizations_url": "https://api.github.com/users/patpizio/orgs",
"received_events_url": "https://api.github.com/users/patpizio/received_events",
"repos_url": "https://api.github.com/users/patpizio/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/patpizio/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patpizio/subscriptions",
"type": "User",
"url": "https://api.github.com/users/patpizio"
} | [] | closed | false | null | [] | null | [
"You should use `load_dataset('glue', 'mnli')`",
"Thanks! I thought I had to use the same code displayed in the live viewer:\r\n```python\r\n!pip install nlp\r\nfrom nlp import load_dataset\r\ndataset = load_dataset('multi_nli', 'plain_text')\r\n```\r\nYour suggestion works, even if then I got a different issue (#242). ",
"Glad it helps !\nThough I am not one of hf team, but maybe you should close this issue first."
] | "2020-06-04T23:05:21Z" | "2020-06-06T10:51:34Z" | "2020-06-06T10:51:34Z" | CONTRIBUTOR | null | null | null | When I try to download MultiNLI with
```python
dataset = load_dataset('multi_nli')
```
I get this long error:
```python
---------------------------------------------------------------------------
OSError Traceback (most recent call last)
<ipython-input-13-3b11f6be4cb9> in <module>
1 # Load a dataset and print the first examples in the training set
2 # nli_dataset = nlp.load_dataset('multi_nli')
----> 3 dataset = load_dataset('multi_nli')
4 # nli_dataset = nlp.load_dataset('multi_nli', split='validation_matched[:10%]')
5 # print(nli_dataset['train'][0])
~\Miniconda3\envs\nlp\lib\site-packages\nlp\load.py in load_dataset(path, name, version, data_dir, data_files, split, cache_dir, download_config, download_mode, ignore_verifications, save_infos, **config_kwargs)
514
515 # Download and prepare data
--> 516 builder_instance.download_and_prepare(
517 download_config=download_config,
518 download_mode=download_mode,
~\Miniconda3\envs\nlp\lib\site-packages\nlp\builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, save_infos, try_from_hf_gcs, dl_manager, **download_and_prepare_kwargs)
417 with utils.temporary_assignment(self, "_cache_dir", tmp_data_dir):
418 verify_infos = not save_infos and not ignore_verifications
--> 419 self._download_and_prepare(
420 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
421 )
~\Miniconda3\envs\nlp\lib\site-packages\nlp\builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs)
455 split_dict = SplitDict(dataset_name=self.name)
456 split_generators_kwargs = self._make_split_generators_kwargs(prepare_split_kwargs)
--> 457 split_generators = self._split_generators(dl_manager, **split_generators_kwargs)
458 # Checksums verification
459 if verify_infos:
~\Miniconda3\envs\nlp\lib\site-packages\nlp\datasets\multi_nli\60774175381b9f3f1e6ae1028229e3cdb270d50379f45b9f2c01008f50f09e6b\multi_nli.py in _split_generators(self, dl_manager)
99 def _split_generators(self, dl_manager):
100
--> 101 downloaded_dir = dl_manager.download_and_extract(
102 "http://storage.googleapis.com/tfds-data/downloads/multi_nli/multinli_1.0.zip"
103 )
~\Miniconda3\envs\nlp\lib\site-packages\nlp\utils\download_manager.py in download_and_extract(self, url_or_urls)
214 extracted_path(s): `str`, extracted paths of given URL(s).
215 """
--> 216 return self.extract(self.download(url_or_urls))
217
218 def get_recorded_sizes_checksums(self):
~\Miniconda3\envs\nlp\lib\site-packages\nlp\utils\download_manager.py in extract(self, path_or_paths)
194 path_or_paths.
195 """
--> 196 return map_nested(
197 lambda path: cached_path(path, extract_compressed_file=True, force_extract=False), path_or_paths,
198 )
~\Miniconda3\envs\nlp\lib\site-packages\nlp\utils\py_utils.py in map_nested(function, data_struct, dict_only, map_tuple)
168 return tuple(mapped)
169 # Singleton
--> 170 return function(data_struct)
171
172
~\Miniconda3\envs\nlp\lib\site-packages\nlp\utils\download_manager.py in <lambda>(path)
195 """
196 return map_nested(
--> 197 lambda path: cached_path(path, extract_compressed_file=True, force_extract=False), path_or_paths,
198 )
199
~\Miniconda3\envs\nlp\lib\site-packages\nlp\utils\file_utils.py in cached_path(url_or_filename, download_config, **download_kwargs)
231 if is_zipfile(output_path):
232 with ZipFile(output_path, "r") as zip_file:
--> 233 zip_file.extractall(output_path_extracted)
234 zip_file.close()
235 elif tarfile.is_tarfile(output_path):
~\Miniconda3\envs\nlp\lib\zipfile.py in extractall(self, path, members, pwd)
1644
1645 for zipinfo in members:
-> 1646 self._extract_member(zipinfo, path, pwd)
1647
1648 @classmethod
~\Miniconda3\envs\nlp\lib\zipfile.py in _extract_member(self, member, targetpath, pwd)
1698
1699 with self.open(member, pwd=pwd) as source, \
-> 1700 open(targetpath, "wb") as target:
1701 shutil.copyfileobj(source, target)
1702
OSError: [Errno 22] Invalid argument: 'C:\\Users\\Python\\.cache\\huggingface\\datasets\\3e12413b8ec69f22dfcfd54a79d1ba9e7aac2e18e334bbb6b81cca64fd16bffc\\multinli_1.0\\Icon\r'
```
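For reference, the fix suggested in the comments above, as a minimal sketch (this loads the MNLI data that ships with the GLUE benchmark and sidesteps the zip file that fails to extract here):

```python
from datasets import load_dataset

# MultiNLI via the GLUE benchmark, as recommended in the comments.
dataset = load_dataset("glue", "mnli")
print(dataset)
```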
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/237/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/237/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/3341 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3341/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3341/comments | https://api.github.com/repos/huggingface/datasets/issues/3341/events | https://github.com/huggingface/datasets/issues/3341 | 1,067,449,569 | I_kwDODunzps4_n_zh | 3,341 | Mirror the canonical datasets to the Hugging Face Hub | {
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/severo",
"id": 1676121,
"login": "severo",
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"organizations_url": "https://api.github.com/users/severo/orgs",
"received_events_url": "https://api.github.com/users/severo/received_events",
"repos_url": "https://api.github.com/users/severo/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"type": "User",
"url": "https://api.github.com/users/severo"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
}
] | null | [
"I created a GitHub project to keep track of what needs to be done:\r\nhttps://github.com/huggingface/datasets/projects/3\r\n\r\nI also store my code in a (private for now) repository at https://github.com/huggingface/mirror_canonical_datasets_on_hub",
"I understand that the datasets are mirrored on the Hub now, right? Might I close @lhoestq @SBrandeis?"
] | "2021-11-30T16:42:05Z" | "2022-01-26T14:47:37Z" | "2022-01-26T14:47:37Z" | CONTRIBUTOR | null | null | null | - [ ] create a repo on https://hf.co/datasets for every canonical dataset
- [ ] on every commit related to a dataset, update the hf.co repo
See https://github.com/huggingface/moon-landing/pull/1562
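A rough sketch (not the actual mirroring tooling) of what the two checklist items above could look like with the public `huggingface_hub` client; the repository name and local path here are placeholders:

```python
from huggingface_hub import HfApi

api = HfApi()

# (1) create a dataset repo on hf.co/datasets for a canonical dataset
api.create_repo(repo_id="squad", repo_type="dataset", exist_ok=True)

# (2) on a commit touching the dataset's folder, push the updated files to the Hub repo
api.upload_folder(
    folder_path="datasets/squad",  # path inside the GitHub repository
    repo_id="squad",
    repo_type="dataset",
    commit_message="Mirror canonical dataset from GitHub",
)
```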
@SBrandeis: I'll let you edit this description if needed to clarify the intent. | {
"+1": 2,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 2,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3341/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3341/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/1077 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1077/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1077/comments | https://api.github.com/repos/huggingface/datasets/issues/1077/events | https://github.com/huggingface/datasets/pull/1077 | 756,617,964 | MDExOlB1bGxSZXF1ZXN0NTMyMTM5ODMx | 1,077 | Added glucose dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/26709476?v=4",
"events_url": "https://api.github.com/users/TevenLeScao/events{/privacy}",
"followers_url": "https://api.github.com/users/TevenLeScao/followers",
"following_url": "https://api.github.com/users/TevenLeScao/following{/other_user}",
"gists_url": "https://api.github.com/users/TevenLeScao/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/TevenLeScao",
"id": 26709476,
"login": "TevenLeScao",
"node_id": "MDQ6VXNlcjI2NzA5NDc2",
"organizations_url": "https://api.github.com/users/TevenLeScao/orgs",
"received_events_url": "https://api.github.com/users/TevenLeScao/received_events",
"repos_url": "https://api.github.com/users/TevenLeScao/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/TevenLeScao/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/TevenLeScao/subscriptions",
"type": "User",
"url": "https://api.github.com/users/TevenLeScao"
} | [] | closed | false | null | [] | null | [] | "2020-12-03T21:49:01Z" | "2020-12-04T09:55:53Z" | "2020-12-04T09:55:52Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1077.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1077",
"merged_at": "2020-12-04T09:55:52Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1077.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1077"
} | This PR adds the [Glucose](https://github.com/ElementalCognition/glucose) dataset. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1077/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1077/timeline | null | null | true |