Dataset schema (column name, type, and length/value range):

| Column | Type | Lengths / values |
| --- | --- | --- |
| url | string | lengths 58–61 |
| repository_url | string | 1 value |
| labels_url | string | lengths 72–75 |
| comments_url | string | lengths 67–70 |
| events_url | string | lengths 65–68 |
| html_url | string | lengths 46–51 |
| id | int64 | 600M–2.05B |
| node_id | string | lengths 18–32 |
| number | int64 | 2–6.51k |
| title | string | lengths 1–290 |
| user | dict | |
| labels | list | lengths 0–4 |
| state | string | 2 values |
| locked | bool | 1 class |
| assignee | dict | |
| assignees | list | lengths 0–4 |
| milestone | dict | |
| comments | sequence | lengths 0–30 |
| created_at | unknown | |
| updated_at | unknown | |
| closed_at | unknown | |
| author_association | string | 3 values |
| active_lock_reason | float64 | |
| draft | float64 | 0–1 |
| pull_request | dict | |
| body | string | lengths 0–228k |
| reactions | dict | |
| timeline_url | string | lengths 67–70 |
| performed_via_github_app | float64 | |
| state_reason | string | 3 values |
| is_pull_request | bool | 2 classes |
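The sample rows below follow the column order of this schema. As a quick way to inspect the same structure programmatically, here is a minimal sketch; the repo id `"user/github-issues"` is a placeholder for whichever Hub dataset this dump was exported from:

```python
from datasets import load_dataset

# "user/github-issues" is a placeholder repo id, not the actual source of this dump.
issues = load_dataset("user/github-issues", split="train")

print(issues.features)      # column names and dtypes, as summarised in the table above
print(issues.num_rows)
print(issues[0]["title"])   # one record, laid out in the same column order as the rows below
```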
https://api.github.com/repos/huggingface/datasets/issues/5649
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5649/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5649/comments
https://api.github.com/repos/huggingface/datasets/issues/5649/events
https://github.com/huggingface/datasets/issues/5649
1,630,173,460
I_kwDODunzps5hKnkU
5,649
The index column created with .to_sql() is dependent on the batch_size when writing
{ "avatar_url": "https://avatars.githubusercontent.com/u/45281?v=4", "events_url": "https://api.github.com/users/lsb/events{/privacy}", "followers_url": "https://api.github.com/users/lsb/followers", "following_url": "https://api.github.com/users/lsb/following{/other_user}", "gists_url": "https://api.github.com/users/lsb/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lsb", "id": 45281, "login": "lsb", "node_id": "MDQ6VXNlcjQ1Mjgx", "organizations_url": "https://api.github.com/users/lsb/orgs", "received_events_url": "https://api.github.com/users/lsb/received_events", "repos_url": "https://api.github.com/users/lsb/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lsb/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lsb/subscriptions", "type": "User", "url": "https://api.github.com/users/lsb" }
[]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" } ]
null
[ "Thanks for reporting, @lsb. \r\n\r\nWe are investigating it.\r\n\r\nOn the other hand, please note that in the next `datasets` release, the index will not be created by default (see #5583). If you would like to have it, you will need to explicitly pass `index=True`. ", "I think this is low enough priority for me to close this as Won't Fix. If I need any primary keys I can generate them beforehand. Feel free to reopen." ]
"2023-03-18T05:25:17Z"
"2023-06-17T07:01:57Z"
"2023-06-17T07:01:57Z"
NONE
null
null
null
### Describe the bug It seems like the "index" column is designed to be unique? The values are only unique per batch. The SQL index is not a unique index. This can be a problem, for instance, when building a faiss index on a dataset and then trying to match up ids with a sql export. ### Steps to reproduce the bug ``` from datasets import Dataset import sqlite3 db = sqlite3.connect(":memory:") nice_numbers = Dataset.from_dict({"nice_number": range(101,106)}) nice_numbers.to_sql("nice1", db, batch_size=1) nice_numbers.to_sql("nice2", db, batch_size=2) print(db.execute("select * from nice1").fetchall()) # [(0, 101), (0, 102), (0, 103), (0, 104), (0, 105)] print(db.execute("select * from nice2").fetchall()) # [(0, 101), (1, 102), (0, 103), (1, 104), (0, 105)] ``` ### Expected behavior I expected the "index" column to be unique ### Environment info ``` % datasets-cli env Copy-and-paste the text below in your GitHub issue. - `datasets` version: 2.10.1 - Platform: macOS-13.2.1-arm64-arm-64bit - Python version: 3.9.6 - PyArrow version: 7.0.0 - Pandas version: 1.5.2 zsh: segmentation fault datasets-cli env ```
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5649/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5649/timeline
null
not_planned
false
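A hedged sketch of the workaround discussed in this issue: generate an explicit, batch-independent id column before calling `to_sql`, and skip the automatically written index (the thread notes that later `datasets` releases no longer write it unless `index=True` is passed, see #5583). That `index=False` is forwarded to `pandas.DataFrame.to_sql` is an assumption about the installed version:

```python
from datasets import Dataset
import sqlite3

db = sqlite3.connect(":memory:")
nice_numbers = Dataset.from_dict({"nice_number": range(101, 106)})

# Build the primary key explicitly so it does not depend on batch_size.
nice_numbers = nice_numbers.add_column("id", list(range(len(nice_numbers))))

# Assumption: index=False is forwarded to pandas.DataFrame.to_sql,
# so only the explicit "id" column is written alongside the data.
nice_numbers.to_sql("nice", db, batch_size=2, index=False)

print(db.execute("select * from nice").fetchall())
# expected: [(101, 0), (102, 1), (103, 2), (104, 3), (105, 4)], independent of batch_size
```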
https://api.github.com/repos/huggingface/datasets/issues/5252
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5252/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5252/comments
https://api.github.com/repos/huggingface/datasets/issues/5252/events
https://github.com/huggingface/datasets/pull/5252
1,451,765,838
PR_kwDODunzps5DCI1U
5,252
Support for decoding Image/Audio types in map when format type is not default one
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko" }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5252). All of your documentation changes will be reflected on that endpoint.", "Yes, if the image column is the first in the batch keys, it will decode the images because it reads the actual values. We could avoid this by checking the batch type, and if it's `LazyDict`, `num_examples` is equal to `len(batch.pa_table)`, which doesn't lead to decoding.", "Good idea. This can be done in a subsequent PR btw, since it's out of scope of the original goal of this PR", "Just fixed a small bug where it would show the pyarrow 10 warning about None -> empty lists conversions even with an Array2D with no nulls", "Fixed another bug when your map function returns a mix of LazyDict or regular dict and added some tests" ]
"2022-11-16T15:02:13Z"
"2022-12-13T17:01:54Z"
"2022-12-13T16:59:04Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/5252.diff", "html_url": "https://github.com/huggingface/datasets/pull/5252", "merged_at": "2022-12-13T16:59:04Z", "patch_url": "https://github.com/huggingface/datasets/pull/5252.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5252" }
Add support for decoding the `Image`/`Audio` types in `map` for the formats (Numpy, TF, Jax, PyTorch) other than the default one (Python). Additional improvements: * make `Dataset`'s "iter" API cleaner by removing `_iter` and replacing `_iter_batches` with `iter(batch_size)` (also implemented for `IterableDataset`) * iterate over arrow tables in `map` to avoid `_getitem` calls, which are much slower than `__iter__`/`iter(batch_size)`, when the `format_type` is not Python * fix `_iter_batches` (now named `iter`) when `drop_last_batch=True` and `pyarrow<=8.0.0` is installed * lazily extract and decode arrow data in the default format TODO: * [x] update the `iter` benchmark in the docs (the `BeamBuilder` cannot load the preprocessed datasets from our bucket, so wait for this to be fixed (cc @lhoestq)) Fix https://github.com/huggingface/datasets/issues/3992, fix https://github.com/huggingface/datasets/issues/3756
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 1, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/5252/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5252/timeline
null
null
true
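Since this record is the PR that turned batched iteration into a public `iter(batch_size)` API and added Image/Audio decoding in `map` for non-default formats, here is a small usage sketch; the toy column is made up for illustration:

```python
from datasets import Dataset

ds = Dataset.from_dict({"x": list(range(10))}).with_format("numpy")

# Batched iteration through the public API described in the PR above.
for batch in ds.iter(batch_size=4, drop_last_batch=False):
    print(type(batch["x"]), batch["x"])  # NumPy arrays of up to 4 elements each

# With a non-default format set, map receives formatted (here NumPy) batches;
# per the PR description, Image/Audio columns would be decoded in this case too.
ds = ds.map(lambda batch: {"y": batch["x"] * 2}, batched=True)
print(ds[:3])
```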
https://api.github.com/repos/huggingface/datasets/issues/5990
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5990/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5990/comments
https://api.github.com/repos/huggingface/datasets/issues/5990/events
https://github.com/huggingface/datasets/issues/5990
1,774,389,854
I_kwDODunzps5pwwpe
5,990
Pushing a large dataset on the hub consistently hangs
{ "avatar_url": "https://avatars.githubusercontent.com/u/10792502?v=4", "events_url": "https://api.github.com/users/AntreasAntoniou/events{/privacy}", "followers_url": "https://api.github.com/users/AntreasAntoniou/followers", "following_url": "https://api.github.com/users/AntreasAntoniou/following{/other_user}", "gists_url": "https://api.github.com/users/AntreasAntoniou/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/AntreasAntoniou", "id": 10792502, "login": "AntreasAntoniou", "node_id": "MDQ6VXNlcjEwNzkyNTAy", "organizations_url": "https://api.github.com/users/AntreasAntoniou/orgs", "received_events_url": "https://api.github.com/users/AntreasAntoniou/received_events", "repos_url": "https://api.github.com/users/AntreasAntoniou/repos", "site_admin": false, "starred_url": "https://api.github.com/users/AntreasAntoniou/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/AntreasAntoniou/subscriptions", "type": "User", "url": "https://api.github.com/users/AntreasAntoniou" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
open
false
null
[]
null
[ "Hi @AntreasAntoniou , sorry to know you are facing this issue. To help debugging it, could you tell me:\r\n- What is the total dataset size?\r\n- Is it always failing on the same shard or is the hanging problem happening randomly?\r\n- Were you able to save the dataset as parquet locally? This would help us determine if the problem comes from the upload or the file generation.\r\n\r\nI'm cc-ing @lhoestq who might have some insights from a `datasets` perspective.", "One trick that can also help is to check the traceback when you kill your python process: it will show where in the code it was hanging", "Right. So I did the trick @lhoestq suggested. Here is where things seem to hang\r\n\r\n```\r\nError while uploading 'data/train-00120-of-00195-466c2dbab2eb9989.parquet' to the Hub. \r\nPushing split train to the Hub. \r\nCreating parquet from Arrow format: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3/3 [00:03<00:00, 1.15s/ba]\r\nUpload 1 LFS files: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:52<00:00, 52.12s/it]\r\nCreating parquet from Arrow format: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3/3 [00:03<00:00, 1.08s/ba]\r\nUpload 1 LFS files: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:45<00:00, 45.54s/it]\r\nCreating parquet from Arrow format: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3/3 [00:03<00:00, 1.08s/ba]\r\nCreating parquet from Arrow format: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3/3 [00:03<00:00, 1.03s/ba^Upload 1 LFS files: 0%| | 0/1 [\r\n21:27:35<?, ?it/s] \r\nPushing dataset shards to the dataset hub: 63%|█████████████████████████████████████████████████████████████▎ | 122/195 [23:37:11<14:07:59, 696.98s/it]\r\n^CError in sys.excepthook: \r\nTraceback (most recent call last): \r\n File \"/opt/conda/envs/main/lib/python3.10/site-packages/rich/console.py\", line 1699, in print \r\n extend(render(renderable, render_options)) \r\n File \"/opt/conda/envs/main/lib/python3.10/site-packages/rich/console.py\", line 1335, in render \r\n yield from self.render(render_output, _options) \r\n File \"/opt/conda/envs/main/lib/python3.10/site-packages/rich/console.py\", line 1331, in render \r\n for render_output in iter_render: \r\n File \"/opt/conda/envs/main/lib/python3.10/site-packages/rich/constrain.py\", line 29, in __rich_console__ \r\n yield from console.render(self.renderable, child_options) \r\n File \"/opt/conda/envs/main/lib/python3.10/site-packages/rich/console.py\", line 1331, in render \r\n for render_output in iter_render: \r\n File \"/opt/conda/envs/main/lib/python3.10/site-packages/rich/panel.py\", line 220, in __rich_console__ \r\n lines = console.render_lines(renderable, child_options, style=style) \r\n File \"/opt/conda/envs/main/lib/python3.10/site-packages/rich/console.py\", line 1371, in render_lines \r\n lines = list( \r\n File \"/opt/conda/envs/main/lib/python3.10/site-packages/rich/segment.py\", line 292, in split_and_crop_lines \r\n for segment in segments: \r\n File \"/opt/conda/envs/main/lib/python3.10/site-packages/rich/console.py\", line 
1331, in render \r\n for render_output in iter_render: \r\n File \"/opt/conda/envs/main/lib/python3.10/site-packages/rich/padding.py\", line 97, in __rich_console__ \r\n lines = console.render_lines( \r\n File \"/opt/conda/envs/main/lib/python3.10/site-packages/rich/console.py\", line 1371, in render_lines \r\n lines = list( \r\n File \"/opt/conda/envs/main/lib/python3.10/site-packages/rich/segment.py\", line 292, in split_and_crop_lines \r\n for segment in segments: \r\n File \"/opt/conda/envs/main/lib/python3.10/site-packages/rich/console.py\", line 1335, in render \r\n yield from self.render(render_output, _options) \r\n File \"/opt/conda/envs/main/lib/python3.10/site-packages/rich/console.py\", line 1331, in render \r\n for render_output in iter_render: \r\n File \"/opt/conda/envs/main/lib/python3.10/site-packages/rich/syntax.py\", line 611, in __rich_console__ \r\n segments = Segments(self._get_syntax(console, options)) \r\n File \"/opt/conda/envs/main/lib/python3.10/site-packages/rich/segment.py\", line 668, in __init__ \r\n self.segments = list(segments) \r\n File \"/opt/conda/envs/main/lib/python3.10/site-packages/rich/syntax.py\", line 674, in _get_syntax \r\n lines: Union[List[Text], Lines] = text.split(\"\\n\", allow_blank=ends_on_nl) \r\n File \"/opt/conda/envs/main/lib/python3.10/site-packages/rich/text.py\", line 1042, in split \r\n lines = Lines( \r\n File \"/opt/conda/envs/main/lib/python3.10/site-packages/rich/containers.py\", line 70, in __init__ \r\n self._lines: List[\"Text\"] = list(lines) \r\n File \"/opt/conda/envs/main/lib/python3.10/site-packages/rich/text.py\", line 1043, in <genexpr> \r\n line for line in self.divide(flatten_spans()) if line.plain != separator \r\n File \"/opt/conda/envs/main/lib/python3.10/site-packages/rich/text.py\", line 385, in plain \r\n if len(self._text) != 1: \r\nKeyboardInterrupt \r\n \r\nOriginal exception was: \r\nTraceback (most recent call last): \r\n File \"/opt/conda/envs/main/lib/python3.10/site-packages/tqdm/contrib/concurrent.py\", line 51, in _executor_map \r\n return list(tqdm_class(ex.map(fn, *iterables, chunksize=chunksize), **kwargs)) \r\n File \"/opt/conda/envs/main/lib/python3.10/site-packages/tqdm/std.py\", line 1178, in __iter__ \r\n for obj in iterable: \r\n File \"/opt/conda/envs/main/lib/python3.10/concurrent/futures/_base.py\", line 621, in result_iterator \r\n yield _result_or_cancel(fs.pop()) \r\n File \"/opt/conda/envs/main/lib/python3.10/concurrent/futures/_base.py\", line 319, in _result_or_cancel \r\n return fut.result(timeout) \r\n File \"/opt/conda/envs/main/lib/python3.10/concurrent/futures/_base.py\", line 453, in result \r\n self._condition.wait(timeout) \r\n File \"/opt/conda/envs/main/lib/python3.10/threading.py\", line 320, in wait \r\n waiter.acquire() \r\nKeyboardInterrupt \r\n \r\nDuring handling of the above exception, another exception occurred: \r\n \r\nTraceback (most recent call last): \r\n File \"/TALI/tali/scripts/validate_dataset.py\", line 127, in <module> \r\n train_dataset.push_to_hub(repo_id=\"Antreas/TALI-base\", max_shard_size=\"5GB\") \r\n File \"/opt/conda/envs/main/lib/python3.10/site-packages/datasets/dataset_dict.py\", line 1583, in push_to_hub \r\n repo_id, split, uploaded_size, dataset_nbytes, _, _ = self[split]._push_parquet_shards_to_hub( \r\n File \"/opt/conda/envs/main/lib/python3.10/site-packages/datasets/arrow_dataset.py\", line 5275, in _push_parquet_shards_to_hub \r\n _retry( \r\n File \"/opt/conda/envs/main/lib/python3.10/site-packages/datasets/utils/file_utils.py\", 
line 282, in _retry \r\n return func(*func_args, **func_kwargs) \r\n File \"/opt/conda/envs/main/lib/python3.10/site-packages/huggingface_hub/utils/_validators.py\", line 118, in _inner_fn \r\n return fn(*args, **kwargs) \r\n File \"/opt/conda/envs/main/lib/python3.10/site-packages/huggingface_hub/hf_api.py\", line 826, in _inner \r\n return fn(self, *args, **kwargs) \r\n File \"/opt/conda/envs/main/lib/python3.10/site-packages/huggingface_hub/hf_api.py\", line 3205, in upload_file \r\n commit_info = self.create_commit( \r\n File \"/opt/conda/envs/main/lib/python3.10/site-packages/huggingface_hub/utils/_validators.py\", line 118, in _inner_fn \r\n return fn(*args, **kwargs) \r\n File \"/opt/conda/envs/main/lib/python3.10/site-packages/huggingface_hub/hf_api.py\", line 826, in _inner \r\n return fn(self, *args, **kwargs) \r\n File \"/opt/conda/envs/main/lib/python3.10/site-packages/huggingface_hub/hf_api.py\", line 2680, in create_commit \r\n upload_lfs_files( \r\n File \"/opt/conda/envs/main/lib/python3.10/site-packages/huggingface_hub/utils/_validators.py\", line 118, in _inner_fn \r\n return fn(*args, **kwargs) \r\n File \"/opt/conda/envs/main/lib/python3.10/site-packages/huggingface_hub/_commit_api.py\", line 353, in upload_lfs_files \r\n thread_map( \r\n File \"/opt/conda/envs/main/lib/python3.10/site-packages/tqdm/contrib/concurrent.py\", line 69, in thread_map \r\n return _executor_map(ThreadPoolExecutor, fn, *iterables, **tqdm_kwargs) \r\n File \"/opt/conda/envs/main/lib/python3.10/site-packages/tqdm/contrib/concurrent.py\", line 49, in _executor_map \r\n with PoolExecutor(max_workers=max_workers, initializer=tqdm_class.set_lock, \r\n File \"/opt/conda/envs/main/lib/python3.10/concurrent/futures/_base.py\", line 649, in __exit__ \r\n self.shutdown(wait=True) \r\n File \"/opt/conda/envs/main/lib/python3.10/concurrent/futures/thread.py\", line 235, in shutdown \r\n t.join() \r\n File \"/opt/conda/envs/main/lib/python3.10/threading.py\", line 1096, in join \r\n self._wait_for_tstate_lock() \r\n File \"/opt/conda/envs/main/lib/python3.10/threading.py\", line 1116, in _wait_for_tstate_lock \r\n if lock.acquire(block, timeout): \r\nKeyboardInterrupt \r\n```", "@Wauplin \r\n\r\n>What is the total dataset size?\r\n\r\nThere are three variants, and the random hanging happens on all three. The sizes are 2TB, 1TB, and 200GB. \r\n\r\n>Is it always failing on the same shard or is the hanging problem happening randomly?\r\n\r\nIt seems to be very much random, as restarting can help move past the previous hang, only to find a new one, or not. \r\n\r\n>Were you able to save the dataset as parquet locally? This would help us determine if the problem comes from the upload or the file generation.\r\n\r\nYes. The dataset seems to be locally stored as parquet. ", "Hmm it looks like an issue with TQDM lock. Maybe you can try updating TQDM ?", "I am using the latest version of tqdm\r\n\r\n```\r\n⬢ [Docker] ❯ pip install tqdm --upgrade\r\nRequirement already satisfied: tqdm in /opt/conda/envs/main/lib/python3.10/site-packages (4.65.0)\r\nWARNING: Running pip as the 'root' user can result in broken permissions and conflicting behaviour with the system package manager. 
It is recommended to use a virtual environment instead: https://pip.pypa.io/warnings/venv\r\n```", "I tried trying to catch the hanging issue in action again\r\n\r\n```\r\nPushing dataset shards to the dataset hub: 65%|█████████████████████████████████████████████████████████████████▊ | 127/195 [2:28:02<1:19:15, 69.94s/it] \r\nError while uploading 'data/train-00127-of-00195-3f8d036ade107c27.parquet' to the Hub. \r\nPushing split train to the Hub. \r\nPushing dataset shards to the dataset hub: 64%|████████████████████████████████████████████████████████████████▏ | 124/195 [2:06:10<1:12:14, 61.05s/it]C^[^C^C^C \r\n╭─────────────────────────────── Traceback (most recent call last) ────────────────────────────────╮ \r\n│ /TALI/tali/scripts/validate_dataset.py:127 in <module> │ \r\n│ │ \r\n│ 124 │ │ \r\n│ 125 │ while not succesful_competion: │ \r\n│ 126 │ │ try: │ \r\n│ ❱ 127 │ │ │ train_dataset.push_to_hub(repo_id=\"Antreas/TALI-base\", max_shard_size=\"5GB\") │ \r\n│ 128 │ │ │ succesful_competion = True │ \r\n│ 129 │ │ except Exception as e: │ \r\n│ 130 │ │ │ print(e) │ \r\n│ │ \r\n│ /opt/conda/envs/main/lib/python3.10/site-packages/datasets/dataset_dict.py:1583 in push_to_hub │ \r\n│ │ \r\n│ 1580 │ │ for split in self.keys(): │ \r\n│ 1581 │ │ │ logger.warning(f\"Pushing split {split} to the Hub.\") │ \r\n│ 1582 │ │ │ # The split=key needs to be removed before merging │ \r\n│ ❱ 1583 │ │ │ repo_id, split, uploaded_size, dataset_nbytes, _, _ = self[split]._push_parq │ \r\n│ 1584 │ │ │ │ repo_id, │ \r\n│ 1585 │ │ │ │ split=split, │ \r\n│ 1586 │ │ │ │ private=private, │ \r\n│ │ \r\n│ /opt/conda/envs/main/lib/python3.10/site-packages/datasets/arrow_dataset.py:5263 in │ \r\n│ _push_parquet_shards_to_hub │ \r\n│ │ \r\n│ 5260 │ │ │ \r\n│ 5261 │ │ uploaded_size = 0 │ \r\n│ 5262 │ │ shards_path_in_repo = [] │ \r\n│ ❱ 5263 │ │ for index, shard in logging.tqdm( │ \r\n│ 5264 │ │ │ enumerate(itertools.chain([first_shard], shards_iter)), │ \r\n│ 5265 │ │ │ desc=\"Pushing dataset shards to the dataset hub\", │ \r\n│ 5266 │ │ │ total=num_shards, │ \r\n│ │ \r\n│ /opt/conda/envs/main/lib/python3.10/site-packages/tqdm/std.py:1178 in __iter__ │ \r\n│ │ \r\n│ 1175 │ │ time = self._time │ \r\n│ 1176 │ │ │ \r\n│ 1177 │ │ try: │\r\n│ ❱ 1178 │ │ │ for obj in iterable: │\r\n│ 1179 │ │ │ │ yield obj │\r\n│ 1180 │ │ │ │ # Update and possibly print the progressbar. │\r\n│ 1181 │ │ │ │ # Note: does not call self.update(1) for speed optimisation. 
│\r\n│ │\r\n│ /opt/conda/envs/main/lib/python3.10/site-packages/datasets/arrow_dataset.py:5238 in │\r\n│ shards_with_embedded_external_files │\r\n│ │\r\n│ 5235 │ │ │ │ for shard in shards: │\r\n│ 5236 │ │ │ │ │ format = shard.format │\r\n│ 5237 │ │ │ │ │ shard = shard.with_format(\"arrow\") │\r\n│ ❱ 5238 │ │ │ │ │ shard = shard.map( │\r\n│ 5239 │ │ │ │ │ │ embed_table_storage, │\r\n│ 5240 │ │ │ │ │ │ batched=True, │\r\n│ 5241 │ │ │ │ │ │ batch_size=1000, │\r\n│ │\r\n│ /opt/conda/envs/main/lib/python3.10/site-packages/datasets/arrow_dataset.py:578 in wrapper │\r\n│ │\r\n│ 575 │ │ else: │\r\n│ 576 │ │ │ self: \"Dataset\" = kwargs.pop(\"self\") │\r\n│ 577 │ │ # apply actual function │\r\n│ ❱ 578 │ │ out: Union[\"Dataset\", \"DatasetDict\"] = func(self, *args, **kwargs) │ \r\n│ 579 │ │ datasets: List[\"Dataset\"] = list(out.values()) if isinstance(out, dict) else [ou │ \r\n│ 580 │ │ for dataset in datasets: │ \r\n│ 581 │ │ │ # Remove task templates if a column mapping of the template is no longer val │ \r\n│ │ \r\n│ /opt/conda/envs/main/lib/python3.10/site-packages/datasets/arrow_dataset.py:543 in wrapper │ \r\n│ │ \r\n│ 540 │ │ │ \"output_all_columns\": self._output_all_columns, │ \r\n│ 541 │ │ } │ \r\n│ 542 │ │ # apply actual function │ \r\n│ ❱ 543 │ │ out: Union[\"Dataset\", \"DatasetDict\"] = func(self, *args, **kwargs) │ \r\n│ 544 │ │ datasets: List[\"Dataset\"] = list(out.values()) if isinstance(out, dict) else [ou │ \r\n│ 545 │ │ # re-apply format to the output │ \r\n│ 546 │ │ for dataset in datasets: │ \r\n│ │ \r\n│ /opt/conda/envs/main/lib/python3.10/site-packages/datasets/arrow_dataset.py:3073 in map │ \r\n│ │ \r\n│ 3070 │ │ │ │ │ leave=False, │ \r\n│ 3071 │ │ │ │ │ desc=desc or \"Map\", │ \r\n│ 3072 │ │ │ │ ) as pbar: │ \r\n│ ❱ 3073 │ │ │ │ │ for rank, done, content in Dataset._map_single(**dataset_kwargs): │ \r\n│ 3074 │ │ │ │ │ │ if done: │ \r\n│ 3075 │ │ │ │ │ │ │ shards_done += 1 │ \r\n│ 3076 │ │ │ │ │ │ │ logger.debug(f\"Finished processing shard number {rank} of {n │ \r\n│ │ \r\n│ /opt/conda/envs/main/lib/python3.10/site-packages/datasets/arrow_dataset.py:3464 in _map_single │ \r\n│ │ \r\n│ 3461 │ │ │ │ │ │ │ │ buf_writer, writer, tmp_file = init_buffer_and_writer() │ \r\n│ 3462 │ │ │ │ │ │ │ │ stack.enter_context(writer) │ \r\n│ 3463 │ │ │ │ │ │ │ if isinstance(batch, pa.Table): │ \r\n│ ❱ 3464 │ │ │ │ │ │ │ │ writer.write_table(batch) │ \r\n│ 3465 │ │ │ │ │ │ │ else: │ \r\n│ 3466 │ │ │ │ │ │ │ │ writer.write_batch(batch) │ \r\n│ 3467 │ │ │ │ │ │ num_examples_progress_update += num_examples_in_batch │ \r\n│ │ \r\n│ /opt/conda/envs/main/lib/python3.10/site-packages/datasets/arrow_writer.py:567 in write_table │ \r\n│ │ \r\n│ 564 │ │ │ writer_batch_size = self.writer_batch_size │ \r\n│ 565 │ │ if self.pa_writer is None: │ \r\n│ 566 │ │ │ self._build_writer(inferred_schema=pa_table.schema) │ \r\n│ ❱ 567 │ │ pa_table = pa_table.combine_chunks() │ \r\n│ 568 │ │ pa_table = table_cast(pa_table, self._schema) │ \r\n│ 569 │ │ if self.embed_local_files: │ \r\n│ 570 │ │ │ pa_table = embed_table_storage(pa_table) │ \r\n╰──────────────────────────────────────────────────────────────────────────────────────────────────╯ \r\nKeyboardInterrupt \r\n```", "I'm on my phone so can't help that much. What I'd advice to do is to [save_to_disk](https://huggingface.co/docs/datasets/package_reference/main_classes#save_to_disk) if it's not already done and then upload the files/folder to the Hub separately. 
You can find what you need in the [upload guide](https://huggingface.co/docs/huggingface_hub/guides/upload). It might not help finding the exact issue for now but at least it can unblock you. ", "In your last stacktrace it interrupted while embedding external content - in case your dataset in made of images or audio files that live on your disk. Is it the case ?", "Yeah, the dataset has images, audio, video and text. ", "It's maybe related to https://github.com/apache/arrow/issues/34455: are you using ArrayND features ?\r\n\r\nAlso what's your `pyarrow` version ? Could you try updating to >= 12.0.1 ?", "I was using pyarrow == 12.0.0\r\n\r\nI am not explicitly using ArrayND features, unless the hub API automatically converts my files to such. ", "I have now updated to pyarrow == 12.0.1 and retrying", "You can also try to reduce the `max_shard_size` - Sometimes parquet has a hard time working with data bigger than 2GB", "So, updating the pyarrow seems to help. It can still throw errors here and there but I can retry when that happens. It's better than hanging. \r\n\r\nHowever, I am a bit confused about something. I have uploaded my datasets, but while earlier I could see all three sets, now I can only see 1. What's going on? \r\nhttps://huggingface.co/datasets/Antreas/TALI-base\r\n\r\nI have seen this happen before as well, so I deleted and reuploaded, but this dataset is way too large for me to do this. ", "It's a bug on our side, I'll update the dataset viewer ;)\r\n\r\nThanks for reporting !", "Apparently this happened because of bad modifications in the README.md split metadata.\r\n\r\nI fixed them in this PR: https://huggingface.co/datasets/Antreas/TALI-base/discussions/1", "@lhoestq It's a bit odd that when uploading a dataset, one set at a time \"train\", \"val\", \"test\", the push_to_hub function overwrites the readme and removes differently named sets from previous commits. i.e., you push \"val\", all is well. Then you push \"test\", and the \"val\" entry disappears from the readme, while the data remain intact. ", "Also, just found another related issue. One of the many that make things hang or fail when pushing to hub. 
\r\n\r\nIn the following code:\r\n\r\n```python\r\ntrain_generator = lambda: data_generator(\"train\", percentage=1.0)\r\n val_generator = lambda: data_generator(\"val\")\r\n test_generator = lambda: data_generator(\"test\")\r\n\r\n train_data = datasets.Dataset.from_generator(\r\n train_generator,\r\n num_proc=mp.cpu_count(),\r\n writer_batch_size=5000,\r\n cache_dir=tali_dataset_dir,\r\n )\r\n\r\n val_data = datasets.Dataset.from_generator(\r\n val_generator,\r\n writer_batch_size=5000,\r\n num_proc=mp.cpu_count(),\r\n cache_dir=tali_dataset_dir,\r\n )\r\n\r\n test_data = datasets.Dataset.from_generator(\r\n test_generator,\r\n writer_batch_size=5000,\r\n num_proc=mp.cpu_count(),\r\n cache_dir=tali_dataset_dir,\r\n )\r\n\r\n print(f\"Pushing TALI-large to hub\")\r\n\r\n dataset = datasets.DatasetDict(\r\n {\"train\": train_data, \"val\": val_data, \"test\": test_data}\r\n )\r\n succesful_competion = False\r\n\r\n while not succesful_competion:\r\n try:\r\n dataset.push_to_hub(repo_id=\"Antreas/TALI-large\", max_shard_size=\"2GB\")\r\n succesful_competion = True\r\n except Exception as e:\r\n print(e)\r\n ```\r\n \r\n \r\n Things keep failing in the push_to_repo step, at random places, with the following error:\r\n \r\n ```bash\r\n Pushing dataset shards to the dataset hub: 7%|██████████▋ | 67/950 [42:41<9:22:37, 38.23s/it]\r\nError while uploading 'data/train-00067-of-00950-a4d179ed5a593486.parquet' to the Hub.\r\nPushing split train to the Hub.\r\nCreating parquet from Arrow format: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2/2 [00:01<00:00, 1.81ba/s]\r\nUpload 1 LFS files: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:11<00:00, 11.20s/it]\r\nCreating parquet from Arrow format: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2/2 [00:00<00:00, 2.48ba/s]\r\nUpload 1 LFS files: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:15<00:00, 15.30s/it]\r\nCreating parquet from Arrow format: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2/2 [00:00<00:00, 2.39ba/s]\r\nUpload 1 LFS files: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:11<00:00, 11.52s/it]\r\nCreating parquet from Arrow format: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2/2 [00:00<00:00, 2.47ba/s]\r\nUpload 1 LFS files: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:10<00:00, 10.39s/it]\r\nCreating parquet from Arrow format: 
100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2/2 [00:00<00:00, 2.26ba/s]\r\nUpload 1 LFS files: 0%| | 0/1 [16:38<?, ?it/s]\r\nPushing dataset shards to the dataset hub: 7%|███████████▎ | 71/950 [44:37<9:12:28, 37.71s/it]\r\nError while uploading 'data/train-00071-of-00950-72bab6e5cb223aee.parquet' to the Hub.\r\nPushing split train to the Hub.\r\nCreating parquet from Arrow format: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2/2 [00:00<00:00, 2.18ba/s]\r\nUpload 1 LFS files: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:10<00:00, 10.94s/it]\r\nCreating parquet from Arrow format: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2/2 [00:00<00:00, 2.36ba/s]\r\nUpload 1 LFS files: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:10<00:00, 10.67s/it]\r\nCreating parquet from Arrow format: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2/2 [00:00<00:00, 2.57ba/s]\r\nUpload 1 LFS files: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:10<00:00, 10.16s/it]\r\nCreating parquet from Arrow format: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2/2 [00:00<00:00, 2.68ba/s]\r\nUpload 1 LFS files: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:09<00:00, 9.63s/it]\r\nCreating parquet from Arrow format: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2/2 [00:00<00:00, 2.36ba/s]\r\nUpload 1 LFS files: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:10<00:00, 10.67s/it]\r\nCreating parquet from Arrow format: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2/2 [00:00<00:00, 2.37ba/s]\r\nUpload 1 LFS files: 0%| | 0/1 [16:39<?, ?it/s]\r\nPushing dataset shards to the dataset hub: 8%|████████████ | 76/950 [46:21<8:53:08, 36.60s/it]\r\nError while uploading 'data/train-00076-of-00950-b90e4e3b433db179.parquet' to the Hub.\r\nPushing split train to the Hub.\r\nCreating parquet from Arrow format: 
100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2/2 [00:00<00:00, 2.21ba/s]\r\nUpload 1 LFS files: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:25<00:00, 25.40s/it]\r\nCreating parquet from Arrow format: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2/2 [00:01<00:00, 1.56ba/s]\r\nUpload 1 LFS files: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:10<00:00, 10.40s/it]\r\nCreating parquet from Arrow format: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2/2 [00:00<00:00, 2.49ba/s]\r\nUpload 1 LFS files: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:23<00:00, 23.53s/it]\r\nCreating parquet from Arrow format: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2/2 [00:00<00:00, 2.27ba/s]\r\nUpload 1 LFS files: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:10<00:00, 10.25s/it]\r\nCreating parquet from Arrow format: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2/2 [00:00<00:00, 2.42ba/s]\r\nUpload 1 LFS files: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:11<00:00, 11.03s/it]\r\nCreating parquet from Arrow format: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2/2 [00:00<00:00, 2.39ba/s]\r\nUpload 1 LFS files: 0%| | 0/1 [16:39<?, ?it/s]\r\nPushing dataset shards to the dataset hub: 9%|████████████▊ | 81/950 [48:30<8:40:22, 35.93s/it]\r\nError while uploading 'data/train-00081-of-00950-84b0450a1df093a9.parquet' to the Hub.\r\nPushing split train to the Hub.\r\nCreating parquet from Arrow format: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2/2 [00:00<00:00, 2.18ba/s]\r\nUpload 1 LFS files: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:11<00:00, 11.65s/it]\r\nCreating parquet from Arrow format: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2/2 [00:01<00:00, 
1.92ba/s]\r\nUpload 1 LFS files: 0%| | 0/1 [16:38<?, ?it/s]\r\nPushing dataset shards to the dataset hub: 9%|█████████████ | 82/950 [48:55<8:37:57, 35.80s/it]\r\nError while uploading 'data/train-00082-of-00950-0a1f52da35653e08.parquet' to the Hub.\r\nPushing split train to the Hub.\r\nCreating parquet from Arrow format: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2/2 [00:00<00:00, 2.31ba/s]\r\nUpload 1 LFS files: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:26<00:00, 26.29s/it]\r\nCreating parquet from Arrow format: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2/2 [00:00<00:00, 2.42ba/s]\r\nUpload 1 LFS files: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:10<00:00, 10.57s/it]\r\nCreating parquet from Arrow format: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2/2 [00:00<00:00, 2.64ba/s]\r\nUpload 1 LFS files: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:10<00:00, 10.35s/it]\r\nCreating parquet from Arrow format: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2/2 [00:00<00:00, 2.64ba/s]\r\nUpload 1 LFS files: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:11<00:00, 11.74s/it]\r\nCreating parquet from Arrow format: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2/2 [00:00<00:00, 2.31ba/s]\r\nUpload 1 LFS files: 0%| | 0/1 [16:40<?, ?it/s]\r\nPushing dataset shards to the dataset hub: 9%|█████████████▋ | 86/950 [50:48<8:30:25, 35.45s/it]\r\nError while uploading 'data/train-00086-of-00950-e1cc80dd17191b20.parquet' to the Hub.\r\n```\r\n\r\nI have a while loop that forces retries, but it seems that the progress itself is randomly getting lost as well. Any ideas on how to improve this? It has been blocking me for way too long. \r\n\r\nShould I build the parquet manually and then push manually as well? If I do things manually, how can I ensure my dataset works properly with \"stream=True\"? \r\n\r\nThank you for your help and time. ", "> @lhoestq It's a bit odd that when uploading a dataset, one set at a time \"train\", \"val\", \"test\", the push_to_hub function overwrites the readme and removes differently named sets from previous commits. i.e., you push \"val\", all is well. Then you push \"test\", and the \"val\" entry disappears from the readme, while the data remain intact.\r\n\r\nHmm this shouldn't happen. What code did you run exactly ? 
Using which version of `datasets` ?", "> I have a while loop that forces retries, but it seems that the progress itself is randomly getting lost as well. Any ideas on how to improve this? It has been blocking me for way too long.\r\n\r\nCould you also print the cause of the error (`e.__cause__`) ? Or show the full stack trace when the error happens ?\r\nThis would give more details about why it failed and would help investigate.", "> Should I build the parquet manually and then push manually as well? If I do things manually, how can I ensure my dataset works properly with \"stream=True\"?\r\n\r\nParquet is supported out of the box ^^\r\n\r\nIf you want to make sure it works as expected you can try locally first:\r\n```python\r\nds = load_dataset(\"path/to/local\", streaming=True)\r\n```", "@lhoestq @AntreasAntoniou I transferred this issue to the `datasets` repository as the questions and answers are more related to this repo. Hope it can help other users find the bug and fixes more easily (like updating [tqdm](https://github.com/huggingface/datasets/issues/5990#issuecomment-1607120204) and [pyarrow](https://github.com/huggingface/datasets/issues/5990#issuecomment-1607120278) or [setting a lower `max_shard_size`](https://github.com/huggingface/datasets/issues/5990#issuecomment-1607120328)).\r\n\r\n~For the initial \"pushing large dataset consistently hangs\"-issue, I still think it's best to try to `save_to_disk` first and then upload it manually/with a script (see [upload_folder](https://huggingface.co/docs/huggingface_hub/guides/upload#upload-a-folder)). It's not the most satisfying solution but at least it would confirm from where the problem comes from.~\r\n\r\n**EDIT:** removed suggestion about saving to disk first (see https://github.com/huggingface/datasets/issues/5990#issuecomment-1607186914).", "> @lhoestq @AntreasAntoniou I transferred this issue to the datasets repository as the questions and answers are more related to this repo. Hope it can help other users find the bug and fixes more easily (like updating https://github.com/huggingface/datasets/issues/5990#issuecomment-1607120204 and https://github.com/huggingface/datasets/issues/5990#issuecomment-1607120278 or https://github.com/huggingface/datasets/issues/5990#issuecomment-1607120328).\r\n\r\nthanks :)\r\n\r\n> For the initial \"pushing large dataset consistently hangs\"-issue, I still think it's best to try to save_to_disk first and then upload it manually/with a script (see [upload_folder](https://huggingface.co/docs/huggingface_hub/guides/upload#upload-a-folder)). It's not the most satisfying solution but at least it would confirm from where the problem comes from.\r\n\r\nAs I've already said in other discussions, I would not recommend pushing files saved with `save_to_disk` to the Hub but save to parquet shards and upload them instead. The Hub does not support datasets saved with `save_to_disk`, which is meant for disk only.", "> As I've already said in other discussions, I would not recommend pushing files saved with save_to_disk to the Hub but save to parquet shards and upload them instead. The Hub does not support datasets saved with save_to_disk, which is meant for disk only.\r\n\r\nWell noted, thanks. That part was not clear to me :)", "Sorry for not replying in a few days, I was on leave. 
:) \r\n\r\nSo, here are more information as to the error that causes some of the delay\r\n\r\n```bash\r\nPushing Antreas/TALI-tiny to hub\r\nAttempting to push to hub\r\nPushing split train to the Hub.\r\nCreating parquet from Arrow format: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 6/6 [00:24<00:00, 4.06s/ba]\r\nCreating parquet from Arrow format: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 6/6 [00:24<00:00, 4.15s/ba]\r\nCreating parquet from Arrow format: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 6/6 [00:26<00:00, 4.45s/ba]\r\n/opt/conda/envs/main/lib/python3.10/site-packages/huggingface_hub/lfs.py:310: UserWarning: hf_transfer is enabled but does not support uploading from bytes or BinaryIO, falling back to regular upload\r\n warnings.warn(\r\nCreating parquet from Arrow format: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 6/6 [00:25<00:00, 4.26s/ba]\r\nCreating parquet from Arrow format: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 6/6 [00:27<00:00, 4.58s/ba]\r\nCreating parquet from Arrow format: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 6/6 [00:24<00:00, 4.10s/ba]\r\nPushing dataset shards to the dataset hub: 22%|████████████████████████▎ | 5/23 [52:23<3:08:37, 628.74s/it]\r\nException: Error while uploading 'data/train-00005-of-00023-e224d901fd65e062.parquet' to the Hub., with stacktrace: <traceback object at 0x7f745458d0c0>, and type: <class 'RuntimeError'>, and \r\ncause: HTTPSConnectionPool(host='s3.us-east-1.amazonaws.com', port=443): Max retries exceeded with url: \r\n/lfs.huggingface.co/repos/7c/d3/7cd385d9324302dc13e3986331d72d9be6fa0174c63dcfe0e08cd474f7f1e8b7/3415166ae28c0beccbbc692f38742b8dea2c197f5c805321104e888d21d7eb90?X-Amz-Algorithm=AWS4-HMAC-SHA256\r\n&X-Amz-Content-Sha256=UNSIGNED-PAYLOAD&X-Amz-Credential=AKIA4N7VTDGO27GPWFUO%2F20230627%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20230627T003349Z&X-Amz-Expires=86400&X-Amz-Signature=5a12ff96f2\r\n91f644134170992a6628e5f3c4e7b2e7fc3e940b4378fe11ae5390&X-Amz-SignedHeaders=host&partNumber=1&uploadId=JSsK8r63XSF.VlKQx3Vf8OW4DEVp5YIIY7LPnuapNIegsxs5EHgM1p4u0.Nn6_wlPlQnvxm8HKMxZhczKE9KB74t0etB\r\noLcxqBIvsgey3uXBTZMAEGwU6y7CDUADiEIO&x-id=UploadPart (Caused by SSLError(SSLEOFError(8, 'EOF occurred in violation of protocol (_ssl.c:2426)')))\r\nPush failed, retrying\r\nAttempting to push to hub\r\nPushing split train to the Hub.\r\n```\r\n\r\nOne issue is that the uploading does not continue from the chunk it failed off. It often continues from a very old chunk. e.g. if it failed on chunk 192/250, it will continue from say 53/250, and this behaviour appears almost random. ", "Are you using a proxy of some sort ?", "I am using a kubernetes cluster built into a university VPN. 
", "So, other than the random connection drops here and there, any idea why the progress does not continue where it left off?\r\n\r\n```bash\r\nPushing split train to the Hub.\r\nCreating parquet from Arrow format: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 28/28 [00:02<00:00, 10.79ba/s]\r\nCreating parquet from Arrow format: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 28/28 [00:02<00:00, 13.65ba/s]\r\nCreating parquet from Arrow format: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 28/28 [00:02<00:00, 13.39ba/s]\r\nCreating parquet from Arrow format: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 28/28 [00:02<00:00, 13.04ba/s]\r\nCreating parquet from Arrow format: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 28/28 [00:02<00:00, 13.52ba/s]\r\nCreating parquet from Arrow format: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 28/28 [00:02<00:00, 12.28ba/s]\r\nPushing dataset shards to the dataset hub: 20%|██████████████████████ | 75/381 [1:34:39<6:26:11, 75.72s/it]\r\nException: Error while uploading 'data/train-00075-of-00381-1614bc251b778766.parquet' to the Hub., with stacktrace: <traceback object at 0x7fab6d9a4980>, and type: <class 'RuntimeError'>, and \r\ncause: HTTPSConnectionPool(host='s3.us-east-1.amazonaws.com', port=443): Max retries exceeded with url: \r\n/lfs.huggingface.co/repos/3b/31/3b311464573d8d63b137fcd5b40af1e7a5b1306843c88e80372d0117157504e5/ed8dae933fb79ae1ef5fb1f698f5125d3e1c02977ac69438631f152bb3bfdd1e?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-\r\nAmz-Content-Sha256=UNSIGNED-PAYLOAD&X-Amz-Credential=AKIA4N7VTDGO27GPWFUO%2F20230629%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20230629T053004Z&X-Amz-Expires=86400&X-Amz-Signature=da2b26270edfd6d0\r\nd069c015a5a432031107a8664c3f0917717e5e40c688183c&X-Amz-SignedHeaders=host&partNumber=1&uploadId=2erWGHTh3ICqBLU_QvHfnygZ2tkMWbL0rEqpJdYohCKHUHnfwMjvoBIg0TI_KSGn4rSKxUxOyqSIzFUFSRSzixZeLeneaXJOw.Qx8\r\nzLKSV5xV7HRQDj4RBesNve6cSoo&x-id=UploadPart (Caused by SSLError(SSLEOFError(8, 'EOF occurred in violation of protocol (_ssl.c:2426)')))\r\nPush failed, retrying\r\nAttempting to push to hub\r\nPushing split train to the Hub.\r\nCreating parquet from Arrow format: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 28/28 [00:02<00:00, 12.09ba/s]\r\nCreating parquet from Arrow format: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 28/28 [00:02<00:00, 11.51ba/s]\r\nCreating parquet from Arrow format: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 28/28 [00:02<00:00, 10.77ba/s]\r\nPushing dataset shards to the dataset hub: 20%|██████████████████████▋ | 77/381 [1:32:50<6:06:34, 72.35s/it]\r\nException: Error while uploading 'data/train-00077-of-00381-368b2327a9908aab.parquet' to the Hub., with stacktrace: <traceback object at 0x7fab45b27f80>, and type: <class 'RuntimeError'>, and \r\ncause: 
HTTPSConnectionPool(host='s3.us-east-1.amazonaws.com', port=443): Max retries exceeded with url: \r\n/lfs.huggingface.co/repos/3b/31/3b311464573d8d63b137fcd5b40af1e7a5b1306843c88e80372d0117157504e5/9462ff2c5e61283b53b091984a22de2f41a2f6e37b681171e2eca4a998f979cb?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-\r\nAmz-Content-Sha256=UNSIGNED-PAYLOAD&X-Amz-Credential=AKIA4N7VTDGO27GPWFUO%2F20230629%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20230629T070510Z&X-Amz-Expires=86400&X-Amz-Signature=9ab8487b93d443cd\r\n21f05476405855d46051a0771b4986bbb20f770ded21b1a4&X-Amz-SignedHeaders=host&partNumber=1&uploadId=UiHX1B.DcoAO2QmIHpWpCuNPwhXU_o1dsTkTGPqZt1P51o9k0yz.EsFD9eKpQMwgAST3jOatRG78I_JWRBeLBDYYVNp8r0TpIdeSg\r\neUg8uwPZOCPw9y5mWOw8MWJrnBo&x-id=UploadPart (Caused by SSLError(SSLEOFError(8, 'EOF occurred in violation of protocol (_ssl.c:2426)')))\r\nPush failed, retrying\r\nAttempting to push to hub\r\nPushing split train to the Hub.\r\nPushing dataset shards to the dataset hub: 8%|████████▋ | 29/381 [27:39<5:50:03, 59.67s/it]\r\nMap: 36%|████████████████████████████████████████████████████ | 1000/2764 [00:35<00:34, 51.63 examples/Map: 72%|████████████████████████████████████████████████████████████████████████████████████████████████████████▏ | 2000/2764 [00:40<00:15, 49.06 examples/Map: 72%|████████████████████████████████████████████████████████████████████████████████████████████████████████▏ | 2000/2764 [00:55<00:15, 49.06 examples/Map: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2764/2764 [00:56<00:00, 48.82 examples/Pushing dataset shards to the dataset hub: 8%|████████▉ | 30/381 [28:35<5:43:03, 58.64s/iPushing dataset shards to the dataset hub: 8%|█████████▎ | 31/381 [29:40<5:52:18, 60.40s/iPushing dataset shards to the dataset hub: 8%|█████████▌ | 32/381 [30:46<6:02:20, 62.29s/it] \r\nMap: 36%|███████████████████████████████████████████████████▎ \r\n```\r\n\r\nThis is actually the issue that wastes the most time for me, and I need it fixed. Please advice on how I can go about it.\r\n\r\nNotice how the progress goes from \r\n| 77/381 to 30/381", "If the any shard is missing on the Hub, it will re-upload it. It looks like the 30th shard was missing on the Hub in your case. \r\n\r\nIt also means that the other files up to the 77th that were successfully uploaded won't be uploaded again.\r\n\r\ncc @mariosasko who might know better" ]
"2023-06-10T14:46:47Z"
"2023-08-17T09:54:11Z"
null
NONE
null
null
null
### Describe the bug Once I have locally built a large dataset that I want to push to hub, I use the recommended approach of .push_to_hub to get the dataset on the hub, and after pushing a few shards, it consistently hangs. This has happened over 40 times over the past week, and despite my best efforts to try and catch this happening and kill a process and restart, it seems to be extremely time wasting -- so I came to you to report this and to seek help. I already tried installing hf_transfer, but it doesn't support Byte file uploads so I uninstalled it. ### Reproduction ```python import multiprocessing as mp import pathlib from math import ceil import datasets import numpy as np from tqdm.auto import tqdm from tali.data.data import select_subtitles_between_timestamps from tali.utils import load_json tali_dataset_dir = "/data/" if __name__ == "__main__": full_dataset = datasets.load_dataset( "Antreas/TALI", num_proc=mp.cpu_count(), cache_dir=tali_dataset_dir ) def data_generator(set_name, percentage: float = 1.0): dataset = full_dataset[set_name] for item in tqdm(dataset): video_list = item["youtube_content_video"] video_list = np.random.choice( video_list, int(ceil(len(video_list) * percentage)) ) if len(video_list) == 0: continue captions = item["youtube_subtitle_text"] captions = select_subtitles_between_timestamps( subtitle_dict=load_json( captions.replace( "/data/", tali_dataset_dir, ) ), starting_timestamp=0, ending_timestamp=100000000, ) for video_path in video_list: temp_path = video_path.replace("/data/", tali_dataset_dir) video_path_actual: pathlib.Path = pathlib.Path(temp_path) if video_path_actual.exists(): item["youtube_content_video"] = open(video_path_actual, "rb").read() item["youtube_subtitle_text"] = captions yield item train_generator = lambda: data_generator("train", percentage=0.1) val_generator = lambda: data_generator("val") test_generator = lambda: data_generator("test") train_data = datasets.Dataset.from_generator( train_generator, num_proc=mp.cpu_count(), writer_batch_size=5000, cache_dir=tali_dataset_dir, ) val_data = datasets.Dataset.from_generator( val_generator, writer_batch_size=5000, num_proc=mp.cpu_count(), cache_dir=tali_dataset_dir, ) test_data = datasets.Dataset.from_generator( test_generator, writer_batch_size=5000, num_proc=mp.cpu_count(), cache_dir=tali_dataset_dir, ) dataset = datasets.DatasetDict( { "train": train_data, "val": val_data, "test": test_data, } ) succesful_competion = False while not succesful_competion: try: dataset.push_to_hub(repo_id="Antreas/TALI-small", max_shard_size="5GB") succesful_competion = True except Exception as e: print(e) ``` ### Logs ```shell Pushing dataset shards to the dataset hub: 33%|██████████████████████████████████████▎ | 7/21 [24:33<49:06, 210.45s/it] Error while uploading 'data/val-00007-of-00021-6b216a984af1a4c8.parquet' to the Hub. Pushing split train to the Hub. Resuming upload of the dataset shards. Pushing dataset shards to the dataset hub: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 46/46 [42:10<00:00, 55.01s/it] Pushing split val to the Hub. Resuming upload of the dataset shards. 
Creating parquet from Arrow format: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3/3 [00:01<00:00, 1.55ba/s] Upload 1 LFS files: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:23<00:00, 23.51s/it] Creating parquet from Arrow format: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3/3 [00:02<00:00, 1.39ba/s] Upload 1 LFS files: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:30<00:00, 30.19s/it] Creating parquet from Arrow format: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3/3 [00:02<00:00, 1.28ba/s] Upload 1 LFS files: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:24<00:00, 24.08s/it] Creating parquet from Arrow format: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3/3 [00:02<00:00, 1.42ba/s] Upload 1 LFS files: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:23<00:00, 23.97s/it] Creating parquet from Arrow format: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3/3 [00:02<00:00, 1.49ba/s] Creating parquet from Arrow format: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3/3 [00:02<00:00, 1.54ba/s^ Upload 1 LFS files: 0%| | 0/1 [04:42<?, ?it/s] Pushing dataset shards to the dataset hub: 52%|████████████████████████████████████████████████████████████▏ | 11/21 [17:23<15:48, 94.82s/it] That's where it got stuck ``` ### System info ```shell - huggingface_hub version: 0.15.1 - Platform: Linux-5.4.0-147-generic-x86_64-with-glibc2.35 - Python version: 3.10.11 - Running in iPython ?: No - Running in notebook ?: No - Running in Google Colab ?: No - Token path ?: /root/.cache/huggingface/token - Has saved token ?: True - Who am I ?: Antreas - Configured git credential helpers: store - FastAI: N/A - Tensorflow: N/A - Torch: 2.1.0.dev20230606+cu121 - Jinja2: 3.1.2 - Graphviz: N/A - Pydot: N/A - Pillow: 9.5.0 - hf_transfer: N/A - gradio: N/A - numpy: 1.24.3 - ENDPOINT: https://huggingface.co - HUGGINGFACE_HUB_CACHE: /root/.cache/huggingface/hub - HUGGINGFACE_ASSETS_CACHE: /root/.cache/huggingface/assets - HF_TOKEN_PATH: /root/.cache/huggingface/token - HF_HUB_OFFLINE: False - HF_HUB_DISABLE_TELEMETRY: False - HF_HUB_DISABLE_PROGRESS_BARS: None - HF_HUB_DISABLE_SYMLINKS_WARNING: False - HF_HUB_DISABLE_EXPERIMENTAL_WARNING: False - HF_HUB_DISABLE_IMPLICIT_TOKEN: False - HF_HUB_ENABLE_HF_TRANSFER: False ```
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5990/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5990/timeline
null
null
false
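The report above already wraps `push_to_hub` in a bare `while` loop. Below is a minimal sketch of a bounded retry with exponential backoff; the repo id, shard size, and backoff schedule are illustrative assumptions rather than values from the issue, and the key point (noted in the comments) is that `push_to_hub` resumes from already-uploaded shards, so retrying the whole call does not re-upload finished files.

```python
# Hedged sketch: bounded retries with exponential backoff around push_to_hub.
# Repo id, shard size and backoff values are placeholders, not from the issue.
import time

from datasets import DatasetDict


def push_with_retries(dataset: DatasetDict, repo_id: str, max_retries: int = 5) -> None:
    for attempt in range(1, max_retries + 1):
        try:
            dataset.push_to_hub(repo_id=repo_id, max_shard_size="5GB")
            return
        except Exception as err:  # SSL/connection errors surface here
            wait = min(2 ** attempt, 120)
            print(f"Push attempt {attempt} failed ({err!r}); retrying in {wait}s")
            time.sleep(wait)
    raise RuntimeError(f"push_to_hub still failing after {max_retries} attempts")
```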
https://api.github.com/repos/huggingface/datasets/issues/5841
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5841/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5841/comments
https://api.github.com/repos/huggingface/datasets/issues/5841/events
https://github.com/huggingface/datasets/issues/5841
1,705,286,639
I_kwDODunzps5lpJvv
5,841
Absurdly slow on iteration
{ "avatar_url": "https://avatars.githubusercontent.com/u/41792945?v=4", "events_url": "https://api.github.com/users/fecet/events{/privacy}", "followers_url": "https://api.github.com/users/fecet/followers", "following_url": "https://api.github.com/users/fecet/following{/other_user}", "gists_url": "https://api.github.com/users/fecet/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/fecet", "id": 41792945, "login": "fecet", "node_id": "MDQ6VXNlcjQxNzkyOTQ1", "organizations_url": "https://api.github.com/users/fecet/orgs", "received_events_url": "https://api.github.com/users/fecet/received_events", "repos_url": "https://api.github.com/users/fecet/repos", "site_admin": false, "starred_url": "https://api.github.com/users/fecet/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/fecet/subscriptions", "type": "User", "url": "https://api.github.com/users/fecet" }
[]
closed
false
null
[]
null
[ "Hi ! You can try to use the [Image](https://huggingface.co/docs/datasets/v2.12.0/en/package_reference/main_classes#datasets.Image) type which [decodes images on-the-fly](https://huggingface.co/docs/datasets/v2.12.0/en/about_dataset_features#image-feature) into pytorch tensors :)\r\n\r\n```python\r\nds = Dataset.from_dict({\"tensor\":a}).with_format(\"torch\")\r\n%time sum(1 for _ in ds)\r\n# CPU times: user 5.04 s, sys: 96.5 ms, total: 5.14 s\r\n# Wall time: 5.14 s\r\n# 10000\r\n```\r\n\r\n```python\r\nfeatures = Features({\"tensor\": Image()})\r\nds = Dataset.from_dict({\"tensor\":a}, features=features).with_format(\"torch\")\r\n%time sum(1 for _ in ds)\r\n# CPU times: user 1.86 s, sys: 49 ms, total: 1.91 s\r\n# Wall time: 1.9 s\r\n# 10000\r\n```\r\n\r\n-> Speed x2.7\r\n\r\nAnd if you want to keep using arrays of integers, consider using the [Array2D](https://huggingface.co/docs/datasets/v2.12.0/en/package_reference/main_classes#datasets.Array2D) or [Array3D](https://huggingface.co/docs/datasets/v2.12.0/en/package_reference/main_classes#datasets.Array3D) types which are even faster (since it doesn't decode images):\r\n\r\n```python\r\nfeatures = Features({\"tensor\": Array2D(shape=(100, 224), dtype=\"float32\")})\r\nds = Dataset.from_dict({\"tensor\":a}, features=features).with_format(\"torch\")\r\n%time sum(1 for _ in ds)\r\n# CPU times: user 828 ms, sys: 68.4 ms, total: 896 ms\r\n# Wall time: 897 ms\r\n# 10000\r\n```\r\n\r\n-> Speed x5.7\r\n\r\nBatching also speeds up a lot\r\n\r\n```python\r\nfrom torch.utils.data import DataLoader\r\ndl = DataLoader(ds, batch_size=100)\r\n%time sum(1 for _ in dl)\r\n# CPU times: user 564 ms, sys: 83.5 ms, total: 648 ms\r\n# Wall time: 579 ms\r\n# 100\r\n```\r\n\r\n-> Speed x8.9\r\n\r\n```python\r\n%time sum(1 for _ in ds.iter(batch_size=100))\r\n# CPU times: user 119 ms, sys: 96.8 ms, total: 215 ms\r\n# Wall time: 117 ms\r\n# 100\r\n```\r\n\r\n-> Speed x46", "Anyway, regarding the speed difference between numpy and pytorch, I think the issue is that we first convert numpy sub-arrays to pytorch and then consolidate into one tensor, while we should to the opposite. Indeed converting a numpy array to pytorch has a fix cost that seems to cause a slow down. 
The current pipeline is\r\n\r\n```\r\narrow -> nested numpy arrays -> lists of torch tensors -> one torch tensor\r\n```\r\n\r\nand we should do\r\n\r\n```\r\narrow -> nested numpy arrays -> one numpy array -> one torch tensor\r\n```", "I have a similar issue: iterating over a dataset takes 5s without applying any transform, but takes ~30s after applying a transform.\r\nHere is the minimum code to reproduce the problem\r\n\r\n```python\r\nimport numpy as np\r\nfrom datasets import Dataset, DatasetDict, load_dataset, Array3D, Image, Features\r\nfrom torch.utils.data import DataLoader\r\nfrom tqdm import tqdm\r\nimport torchvision \r\nfrom torchvision.transforms import ToTensor, Normalize\r\n\r\n\r\n#################################\r\n# Without transform\r\n#################################\r\n \r\ntrain_dataset = load_dataset(\r\n 'cifar100',\r\n split='train',\r\n use_auth_token=True,\r\n)\r\n\r\ntrain_dataset.set_format(type=\"numpy\", columns=[\"img\", \"fine_label\"])\r\n\r\ntrain_loader= DataLoader(\r\n train_dataset,\r\n batch_size=100,\r\n pin_memory=False,\r\n shuffle=True,\r\n num_workers=8,\r\n)\r\n\r\nfor batch in tqdm(train_loader, desc=\"Loading data, no transform\"):\r\n pass\r\n\r\n\r\n#################################\r\n# With transform\r\n#################################\r\n\r\ntransform_func = torchvision.transforms.Compose([\r\n ToTensor(), \r\n Normalize(mean=[0.485, 0.456, 0.406], std= [0.229, 0.224, 0.225]),] \r\n)\r\n \r\ntrain_dataset = train_dataset.map(\r\n desc=f\"Preprocessing samples\",\r\n function=lambda x: {\"img\": transform_func(x[\"img\"])},\r\n)\r\n\r\ntrain_dataset.set_format(type=\"numpy\", columns=[\"img\", \"fine_label\"])\r\n\r\n\r\ntrain_loader= DataLoader(\r\n train_dataset,\r\n batch_size=100,\r\n pin_memory=False,\r\n shuffle=True,\r\n num_workers=8,\r\n)\r\n\r\n\r\nfor batch in tqdm(train_loader, desc=\"Loading data after transform\"):\r\n pass \r\n```\r\n\r\nI have also tried converting the Image column to an Array3D\r\n```python\r\nimg_shape = train_dataset[0][\"img\"].shape\r\n\r\nfeatures = train_dataset.features.copy()\r\nfeatures[\"x\"] = Array3D(shape=img_shape, dtype=\"float32\")\r\n\r\ntrain_dataset = train_dataset.map(\r\n desc=f\"Preprocessing samples\",\r\n function=lambda x: {\"x\": np.array(x[\"img\"], dtype=np.uint8)},\r\n features=features,\r\n)\r\ntrain_dataset.cast_column(\"x\", Array3D(shape=img_shape, dtype=\"float32\"))\r\ntrain_dataset.set_format(type=\"numpy\", columns=[\"x\", \"fine_label\"])\r\n```\r\nbut to no avail. Any clue?", "Thanks! I convert my dataset feature to Array3D and this speed became awesome!" ]
"2023-05-11T08:04:09Z"
"2023-05-15T15:38:13Z"
"2023-05-15T15:38:13Z"
NONE
null
null
null
### Describe the bug I am attempting to iterate through an image dataset, but I am encountering a significant slowdown in the iteration speed. In order to investigate this issue, I conducted the following experiment: ```python a=torch.randn(100,224) a=torch.stack([a] * 10000) a.shape # %% ds=Dataset.from_dict({"tensor":a}) for i in tqdm(ds.with_format("numpy")): pass for i in tqdm(ds.with_format("torch")): pass ``` I noticed that the dataset in numpy format performs significantly faster than the one in torch format. My hypothesis is that the dataset undergoes a transformation process of torch->python->numpy(torch) in the background, which might be causing the slowdown. Is there any way to expedite the process by bypassing such transformations? Furthermore, if I increase the size of a to an image shape, like: ```python a=torch.randn(3,224,224) ``` the iteration speed becomes absurdly slow, around 100 iterations per second, whereas the speed with numpy format is approximately 250 iterations per second. This level of speed would be unacceptable for large image datasets, as it could take several hours just to iterate through a single epoch. ### Steps to reproduce the bug ```python a=torch.randn(100,224) a=torch.stack([a] * 10000) a.shape # %% ds=Dataset.from_dict({"tensor":a}) for i in tqdm(ds.with_format("numpy")): pass for i in tqdm(ds.with_format("torch")): pass ``` ### Expected behavior iteration faster ### Environment info - `datasets` version: 2.11.0 - Platform: Linux-5.4.0-148-generic-x86_64-with-glibc2.10 - Python version: 3.8.16 - Huggingface_hub version: 0.13.4 - PyArrow version: 11.0.0 - Pandas version: 2.0.0
{ "+1": 2, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 2, "url": "https://api.github.com/repos/huggingface/datasets/issues/5841/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5841/timeline
null
completed
false
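The maintainer comments above suggest storing the tensors with an explicit `Array2D`/`Array3D` feature and iterating in batches instead of row by row. The sketch below condenses that advice; the array size and batch size are scaled down and purely illustrative.

```python
# Hedged sketch of the suggested fix: declare an Array2D feature and iterate
# in batches so the per-row numpy -> torch conversion cost is amortized.
import numpy as np
from datasets import Array2D, Dataset, Features

a = np.random.randn(1_000, 100, 224).astype("float32")  # smaller than the issue's 10k rows
features = Features({"tensor": Array2D(shape=(100, 224), dtype="float32")})
ds = Dataset.from_dict({"tensor": a}, features=features).with_format("torch")

for batch in ds.iter(batch_size=100):
    tensors = batch["tensor"]  # torch.Tensor of shape (100, 100, 224)
```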
https://api.github.com/repos/huggingface/datasets/issues/4650
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4650/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4650/comments
https://api.github.com/repos/huggingface/datasets/issues/4650/events
https://github.com/huggingface/datasets/issues/4650
1,296,680,037
I_kwDODunzps5NScRl
4,650
Add SPECTER dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/4755430?v=4", "events_url": "https://api.github.com/users/omarespejel/events{/privacy}", "followers_url": "https://api.github.com/users/omarespejel/followers", "following_url": "https://api.github.com/users/omarespejel/following{/other_user}", "gists_url": "https://api.github.com/users/omarespejel/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/omarespejel", "id": 4755430, "login": "omarespejel", "node_id": "MDQ6VXNlcjQ3NTU0MzA=", "organizations_url": "https://api.github.com/users/omarespejel/orgs", "received_events_url": "https://api.github.com/users/omarespejel/received_events", "repos_url": "https://api.github.com/users/omarespejel/repos", "site_admin": false, "starred_url": "https://api.github.com/users/omarespejel/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/omarespejel/subscriptions", "type": "User", "url": "https://api.github.com/users/omarespejel" }
[ { "color": "e99695", "default": false, "description": "Requesting to add a new dataset", "id": 2067376369, "name": "dataset request", "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request" } ]
open
false
null
[]
null
[ "uploaded dataset [here](https://huggingface.co/datasets/embedding-data/SPECTER)" ]
"2022-07-07T01:41:32Z"
"2022-07-14T02:07:49Z"
null
NONE
null
null
null
## Adding a Dataset - **Name:** *SPECTER* - **Description:** *SPECTER: Document-level Representation Learning using Citation-informed Transformers* - **Paper:** *https://doi.org/10.18653/v1/2020.acl-main.207* - **Data:** *https://huggingface.co/datasets/sentence-transformers/embedding-training-data/resolve/main/specter_train_triples.jsonl.gz* - **Motivation:** *Citation-informed triples for training and evaluating document-level representation (embedding) models*
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4650/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4650/timeline
null
null
false
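A follow-up comment notes that the triples were uploaded to the Hub as `embedding-data/SPECTER`, so loading them is a one-liner. The snippet below is a minimal sketch; the `train` split name is an assumption.

```python
# Hedged sketch: load the uploaded copy mentioned in the comment above.
from datasets import load_dataset

specter = load_dataset("embedding-data/SPECTER", split="train")
print(specter[0])
```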
https://api.github.com/repos/huggingface/datasets/issues/5237
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5237/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5237/comments
https://api.github.com/repos/huggingface/datasets/issues/5237/events
https://github.com/huggingface/datasets/pull/5237
1,448,202,491
PR_kwDODunzps5C2KGz
5,237
Encode path only for old versions of hfh
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
"2022-11-14T14:46:57Z"
"2022-11-14T17:38:18Z"
"2022-11-14T17:35:59Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/5237.diff", "html_url": "https://github.com/huggingface/datasets/pull/5237", "merged_at": "2022-11-14T17:35:59Z", "patch_url": "https://github.com/huggingface/datasets/pull/5237.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5237" }
The next version of `huggingface-hub` (0.11) already encodes the `path`, and we don't want to encode it twice.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5237/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5237/timeline
null
null
true
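The one-line description above boils down to a version gate: older `huggingface-hub` releases expect the caller to percent-encode the path, while 0.11+ encodes it internally. The sketch below illustrates the idea only; the function and constant names are made up and are not the actual diff.

```python
# Hedged illustration of a version-gated path encoding, not the real patch.
from urllib.parse import quote

import huggingface_hub
from packaging import version

HFH_ENCODES_PATH = version.parse(huggingface_hub.__version__) >= version.parse("0.11.0")


def prepare_path(path: str) -> str:
    # Encode ourselves only for old hfh versions, to avoid double-encoding on 0.11+.
    return path if HFH_ENCODES_PATH else quote(path)
```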
https://api.github.com/repos/huggingface/datasets/issues/2012
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2012/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2012/comments
https://api.github.com/repos/huggingface/datasets/issues/2012/events
https://github.com/huggingface/datasets/issues/2012
825,634,064
MDU6SXNzdWU4MjU2MzQwNjQ=
2,012
No upstream branch
{ "avatar_url": "https://avatars.githubusercontent.com/u/17948980?v=4", "events_url": "https://api.github.com/users/theo-m/events{/privacy}", "followers_url": "https://api.github.com/users/theo-m/followers", "following_url": "https://api.github.com/users/theo-m/following{/other_user}", "gists_url": "https://api.github.com/users/theo-m/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/theo-m", "id": 17948980, "login": "theo-m", "node_id": "MDQ6VXNlcjE3OTQ4OTgw", "organizations_url": "https://api.github.com/users/theo-m/orgs", "received_events_url": "https://api.github.com/users/theo-m/received_events", "repos_url": "https://api.github.com/users/theo-m/repos", "site_admin": false, "starred_url": "https://api.github.com/users/theo-m/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/theo-m/subscriptions", "type": "User", "url": "https://api.github.com/users/theo-m" }
[ { "color": "0075ca", "default": true, "description": "Improvements or additions to documentation", "id": 1935892861, "name": "documentation", "node_id": "MDU6TGFiZWwxOTM1ODkyODYx", "url": "https://api.github.com/repos/huggingface/datasets/labels/documentation" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" } ]
null
[ "What's the issue exactly ?\r\n\r\nGiven an `upstream` remote repository with url `https://github.com/huggingface/datasets.git`, you can totally rebase from `upstream/master`.\r\n\r\nIt's mentioned at the beginning how to add the `upstream` remote repository\r\n\r\nhttps://github.com/huggingface/datasets/blob/987df6b4e9e20fc0c92bc9df48137d170756fd7b/ADD_NEW_DATASET.md#L10-L14", "~~What difference is there with the default `origin` remote that is set when you clone the repo?~~ I've just understood that this applies to **forks** of the repo 🤡 " ]
"2021-03-09T09:48:55Z"
"2021-03-09T11:33:31Z"
"2021-03-09T11:33:31Z"
CONTRIBUTOR
null
null
null
Feels like the documentation on adding a new dataset is outdated? https://github.com/huggingface/datasets/blob/987df6b4e9e20fc0c92bc9df48137d170756fd7b/ADD_NEW_DATASET.md#L49-L54 There is no `upstream` branch on the remote.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2012/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2012/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/785
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/785/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/785/comments
https://api.github.com/repos/huggingface/datasets/issues/785/events
https://github.com/huggingface/datasets/pull/785
733,719,419
MDExOlB1bGxSZXF1ZXN0NTEzNDMyNTM1
785
feat(aslg_pc12): add dev and test data splits
{ "avatar_url": "https://avatars.githubusercontent.com/u/5757359?v=4", "events_url": "https://api.github.com/users/AmitMY/events{/privacy}", "followers_url": "https://api.github.com/users/AmitMY/followers", "following_url": "https://api.github.com/users/AmitMY/following{/other_user}", "gists_url": "https://api.github.com/users/AmitMY/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/AmitMY", "id": 5757359, "login": "AmitMY", "node_id": "MDQ6VXNlcjU3NTczNTk=", "organizations_url": "https://api.github.com/users/AmitMY/orgs", "received_events_url": "https://api.github.com/users/AmitMY/received_events", "repos_url": "https://api.github.com/users/AmitMY/repos", "site_admin": false, "starred_url": "https://api.github.com/users/AmitMY/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/AmitMY/subscriptions", "type": "User", "url": "https://api.github.com/users/AmitMY" }
[]
closed
false
null
[]
null
[ "Hi ! I'm not sure we should make this split decision arbitrarily on our side. Users can split it afterwards to whatever they want using `dataset.train_test_split` for example.\r\nMoreover it looks like there's already papers that use this dataset and propose their own splits ([here](http://xanthippi.ceid.upatras.gr/HealthSign/resources/Publications/sitis_paper_25_10.pdf) 80-20) \r\nWhat do you think ?", "I was not aware of the `train_test_split` method, thanks!\r\nSoe ven though it contributes to reproducibility, no need to do this split then." ]
"2020-10-31T13:25:38Z"
"2020-11-10T15:29:30Z"
"2020-11-10T15:29:30Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/785.diff", "html_url": "https://github.com/huggingface/datasets/pull/785", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/785.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/785" }
For reproducibility's sake, it's best if there are defined dev and test splits. The original paper author did not define splits for the entire dataset, nor for the sample loaded via this library, so I decided to define: - 5/7 for train - 1/7 for dev - 1/7 for test
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/785/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/785/timeline
null
null
true
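The review above points to `train_test_split` as the way for users to derive their own dev/test splits instead of baking them into the loader. A minimal sketch of reproducing the proposed 5/7 : 1/7 : 1/7 split locally is shown below; the seed is arbitrary.

```python
# Hedged sketch: derive dev/test splits from the single published "train" split.
from datasets import DatasetDict, load_dataset

full = load_dataset("aslg_pc12", split="train")
tmp = full.train_test_split(test_size=2 / 7, seed=42)             # hold out 2/7
held_out = tmp["test"].train_test_split(test_size=0.5, seed=42)   # split the held-out part in half
splits = DatasetDict(
    {"train": tmp["train"], "validation": held_out["train"], "test": held_out["test"]}
)
```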
https://api.github.com/repos/huggingface/datasets/issues/4325
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4325/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4325/comments
https://api.github.com/repos/huggingface/datasets/issues/4325/events
https://github.com/huggingface/datasets/issues/4325
1,233,812,191
I_kwDODunzps5Jinrf
4,325
Dataset Viewer issue for strombergnlp/offenseval_2020, strombergnlp/polstance
{ "avatar_url": "https://avatars.githubusercontent.com/u/121934?v=4", "events_url": "https://api.github.com/users/leondz/events{/privacy}", "followers_url": "https://api.github.com/users/leondz/followers", "following_url": "https://api.github.com/users/leondz/following{/other_user}", "gists_url": "https://api.github.com/users/leondz/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/leondz", "id": 121934, "login": "leondz", "node_id": "MDQ6VXNlcjEyMTkzNA==", "organizations_url": "https://api.github.com/users/leondz/orgs", "received_events_url": "https://api.github.com/users/leondz/received_events", "repos_url": "https://api.github.com/users/leondz/repos", "site_admin": false, "starred_url": "https://api.github.com/users/leondz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/leondz/subscriptions", "type": "User", "url": "https://api.github.com/users/leondz" }
[ { "color": "E5583E", "default": false, "description": "Related to the dataset viewer on huggingface.co", "id": 3470211881, "name": "dataset-viewer", "node_id": "LA_kwDODunzps7O1zsp", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4", "events_url": "https://api.github.com/users/severo/events{/privacy}", "followers_url": "https://api.github.com/users/severo/followers", "following_url": "https://api.github.com/users/severo/following{/other_user}", "gists_url": "https://api.github.com/users/severo/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/severo", "id": 1676121, "login": "severo", "node_id": "MDQ6VXNlcjE2NzYxMjE=", "organizations_url": "https://api.github.com/users/severo/orgs", "received_events_url": "https://api.github.com/users/severo/received_events", "repos_url": "https://api.github.com/users/severo/repos", "site_admin": false, "starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/severo/subscriptions", "type": "User", "url": "https://api.github.com/users/severo" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4", "events_url": "https://api.github.com/users/severo/events{/privacy}", "followers_url": "https://api.github.com/users/severo/followers", "following_url": "https://api.github.com/users/severo/following{/other_user}", "gists_url": "https://api.github.com/users/severo/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/severo", "id": 1676121, "login": "severo", "node_id": "MDQ6VXNlcjE2NzYxMjE=", "organizations_url": "https://api.github.com/users/severo/orgs", "received_events_url": "https://api.github.com/users/severo/received_events", "repos_url": "https://api.github.com/users/severo/repos", "site_admin": false, "starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/severo/subscriptions", "type": "User", "url": "https://api.github.com/users/severo" } ]
null
[ "Not sure if it's related... I was going to raise an issue for https://huggingface.co/datasets/domenicrosati/TruthfulQA which also has the same issue... https://huggingface.co/datasets/domenicrosati/TruthfulQA/viewer/domenicrosati--TruthfulQA/train \r\n\r\n", "Yes, it's related. The backend behind the dataset viewer is currently under too much load, and these datasets are still in the jobs queue. We're actively working on this issue, and we expect to fix the issue permanently soon. Thanks for your patience 🙏  ", "Thanks @severo and no worries! - a suggestion for a UI usability thing maybe is to indicate that the dataset processing is in the job queue (rather than no data?)", "Thanks, these are working great now (including @domenicrosati 's, afaics!)" ]
"2022-05-12T10:59:08Z"
"2022-05-13T10:57:15Z"
"2022-05-13T10:57:02Z"
CONTRIBUTOR
null
null
null
### Link https://huggingface.co/datasets/strombergnlp/offenseval_2020/viewer/ar/train ### Description The viewer isn't running for these two datasets. I left it overnight because a wait sometimes helps things get loaded, and the error messages have all gone, but the datasets are still turning up blank in viewer. Maybe it needs a bit more time. * https://huggingface.co/datasets/strombergnlp/polstance/viewer/PolStance/train * https://huggingface.co/datasets/strombergnlp/offenseval_2020/viewer/ar/train While offenseval_2020 is gated w. prompt, the other gated previews I have run fine in Viewer, e.g. https://huggingface.co/datasets/strombergnlp/shaj , so I'm a bit stumped! ### Owner Yes
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4325/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4325/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/266
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/266/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/266/comments
https://api.github.com/repos/huggingface/datasets/issues/266/events
https://github.com/huggingface/datasets/pull/266
637,156,392
MDExOlB1bGxSZXF1ZXN0NDMzMTk1NDgw
266
Add sort, shuffle, test_train_split and select methods
{ "avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4", "events_url": "https://api.github.com/users/thomwolf/events{/privacy}", "followers_url": "https://api.github.com/users/thomwolf/followers", "following_url": "https://api.github.com/users/thomwolf/following{/other_user}", "gists_url": "https://api.github.com/users/thomwolf/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/thomwolf", "id": 7353373, "login": "thomwolf", "node_id": "MDQ6VXNlcjczNTMzNzM=", "organizations_url": "https://api.github.com/users/thomwolf/orgs", "received_events_url": "https://api.github.com/users/thomwolf/received_events", "repos_url": "https://api.github.com/users/thomwolf/repos", "site_admin": false, "starred_url": "https://api.github.com/users/thomwolf/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/thomwolf/subscriptions", "type": "User", "url": "https://api.github.com/users/thomwolf" }
[]
closed
false
null
[]
null
[ "Nice !\r\n\r\nAlso it looks like we can have a train_test_split method for free:\r\n```python\r\ntrain_indices, test_indices = train_test_split(range(len(dataset)))\r\ntrain = dataset.sort(indices=train_indices)\r\ntest = dataset.sort(indices=test_indices)\r\n```\r\n\r\nand a shuffling method for free:\r\n```python\r\nshuffled_indices = shuffle(range(len(dataset)))\r\nshuffled_dataset = dataset.sort(indices=shuffled_indices)\r\n```\r\n\r\nMaybe we can have a specific API for train_test_split and shuffle. They are two features asked quite often (see #147, #166)", "Ok, I think this one is ready to merge.\r\n\r\n@patrickvonplaten @jplu @mariamabarham @joeddav @n1t0 @julien-c you may want to give it a look, it adds a bunch of methods to reorder/split/select rows in a dataset:\r\n- `dataset.select(indices)`: Create a new dataset with rows selected following the list/array of indices (which can have a different size than the dataset and contain duplicated indices, the only constrain is that all the integers in the list must be smaller than the dataset size, otherwise we're indexing outside the dataset...)\r\n- `dataset.sort(column_name)`: sort a dataset according to a column (has to be a column with a numpy compatible type)\r\n- `dataset.shuffle(seed)`: shuffle a dataset rows\r\n- `dataset.train_test_split(test_size, train_size)`: Return a dictionary with two random train and test subsets (`train` and `test` ``Dataset`` splits)\r\n\r\nAll these methods are **not** in-place which means they return new ``Dataset``, which is the default behavior in the library.", "> Might be a solution to put 0.25 and 0.75 as default values for respectively `test_size` and `train_size`. WDYT?\r\n\r\nAccording to sklearn documentation, it is indeed set to 0.25 and 0.75 if both `test_size` and `train_size` are None.\r\nLet me add it.", "I think we're good to go now :) @joeddav @thomwolf @jplu " ]
"2020-06-11T16:22:20Z"
"2020-06-18T16:23:25Z"
"2020-06-18T16:23:24Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/266.diff", "html_url": "https://github.com/huggingface/datasets/pull/266", "merged_at": "2020-06-18T16:23:23Z", "patch_url": "https://github.com/huggingface/datasets/pull/266.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/266" }
Add a bunch of methods to reorder/split/select rows in a dataset: - `dataset.select(indices)`: Create a new dataset with rows selected following the list/array of indices (which can have a different size than the dataset and contain duplicated indices, the only constraint is that all the integers in the list must be smaller than the dataset size, otherwise we're indexing outside the dataset...) - `dataset.sort(column_name)`: sort a dataset according to a column (has to be a column with a numpy-compatible type) - `dataset.shuffle(seed)`: shuffle the dataset's rows - `dataset.train_test_split(test_size, train_size)`: Return a dictionary with two random train and test subsets (`train` and `test` ``Dataset`` splits) All these methods are **not** in-place, which means they return new ``Dataset`` objects. This is the default behavior in the library. Fix #147 #166 #259
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/266/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/266/timeline
null
null
true
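A short usage sketch of the four methods described in this pull request, on a toy in-memory dataset; the column names and values are made up for illustration.

```python
# Hedged usage sketch of select / sort / shuffle / train_test_split.
from datasets import Dataset

ds = Dataset.from_dict({"text": ["a", "b", "c", "d"], "score": [3, 1, 4, 2]})

subset = ds.select([0, 2, 2])                 # duplicated indices are allowed
by_score = ds.sort("score")                   # ascending by the "score" column
shuffled = ds.shuffle(seed=42)                # deterministic shuffle
splits = ds.train_test_split(test_size=0.25)  # returns {"train": ..., "test": ...}
print(splits["train"].num_rows, splits["test"].num_rows)
```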
https://api.github.com/repos/huggingface/datasets/issues/1305
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1305/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1305/comments
https://api.github.com/repos/huggingface/datasets/issues/1305/events
https://github.com/huggingface/datasets/pull/1305
759,446,665
MDExOlB1bGxSZXF1ZXN0NTM0NDUxNzEx
1,305
[README] Added Windows command to enable slow tests
{ "avatar_url": "https://avatars.githubusercontent.com/u/26709476?v=4", "events_url": "https://api.github.com/users/TevenLeScao/events{/privacy}", "followers_url": "https://api.github.com/users/TevenLeScao/followers", "following_url": "https://api.github.com/users/TevenLeScao/following{/other_user}", "gists_url": "https://api.github.com/users/TevenLeScao/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/TevenLeScao", "id": 26709476, "login": "TevenLeScao", "node_id": "MDQ6VXNlcjI2NzA5NDc2", "organizations_url": "https://api.github.com/users/TevenLeScao/orgs", "received_events_url": "https://api.github.com/users/TevenLeScao/received_events", "repos_url": "https://api.github.com/users/TevenLeScao/repos", "site_admin": false, "starred_url": "https://api.github.com/users/TevenLeScao/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/TevenLeScao/subscriptions", "type": "User", "url": "https://api.github.com/users/TevenLeScao" }
[]
closed
false
null
[]
null
[]
"2020-12-08T13:29:04Z"
"2020-12-08T13:56:33Z"
"2020-12-08T13:56:32Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/1305.diff", "html_url": "https://github.com/huggingface/datasets/pull/1305", "merged_at": "2020-12-08T13:56:32Z", "patch_url": "https://github.com/huggingface/datasets/pull/1305.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1305" }
The Windows command to run slow tests has caused issues, so this adds a functional Windows command.
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/1305/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1305/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/4839
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4839/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4839/comments
https://api.github.com/repos/huggingface/datasets/issues/4839/events
https://github.com/huggingface/datasets/issues/4839
1,337,206,377
I_kwDODunzps5PtCZp
4,839
ImageFolder dataset builder does not read the validation data set if it is named "val"
{ "avatar_url": "https://avatars.githubusercontent.com/u/98386959?v=4", "events_url": "https://api.github.com/users/akt42/events{/privacy}", "followers_url": "https://api.github.com/users/akt42/followers", "following_url": "https://api.github.com/users/akt42/following{/other_user}", "gists_url": "https://api.github.com/users/akt42/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/akt42", "id": 98386959, "login": "akt42", "node_id": "U_kgDOBd1EDw", "organizations_url": "https://api.github.com/users/akt42/orgs", "received_events_url": "https://api.github.com/users/akt42/received_events", "repos_url": "https://api.github.com/users/akt42/repos", "site_admin": false, "starred_url": "https://api.github.com/users/akt42/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/akt42/subscriptions", "type": "User", "url": "https://api.github.com/users/akt42" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/98386959?v=4", "events_url": "https://api.github.com/users/akt42/events{/privacy}", "followers_url": "https://api.github.com/users/akt42/followers", "following_url": "https://api.github.com/users/akt42/following{/other_user}", "gists_url": "https://api.github.com/users/akt42/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/akt42", "id": 98386959, "login": "akt42", "node_id": "U_kgDOBd1EDw", "organizations_url": "https://api.github.com/users/akt42/orgs", "received_events_url": "https://api.github.com/users/akt42/received_events", "repos_url": "https://api.github.com/users/akt42/repos", "site_admin": false, "starred_url": "https://api.github.com/users/akt42/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/akt42/subscriptions", "type": "User", "url": "https://api.github.com/users/akt42" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/98386959?v=4", "events_url": "https://api.github.com/users/akt42/events{/privacy}", "followers_url": "https://api.github.com/users/akt42/followers", "following_url": "https://api.github.com/users/akt42/following{/other_user}", "gists_url": "https://api.github.com/users/akt42/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/akt42", "id": 98386959, "login": "akt42", "node_id": "U_kgDOBd1EDw", "organizations_url": "https://api.github.com/users/akt42/orgs", "received_events_url": "https://api.github.com/users/akt42/received_events", "repos_url": "https://api.github.com/users/akt42/repos", "site_admin": false, "starred_url": "https://api.github.com/users/akt42/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/akt42/subscriptions", "type": "User", "url": "https://api.github.com/users/akt42" } ]
null
[ "#take" ]
"2022-08-12T13:26:00Z"
"2022-08-30T10:14:55Z"
"2022-08-30T10:14:55Z"
CONTRIBUTOR
null
null
null
**Is your feature request related to a problem? Please describe.** Currently, the `'imagefolder'` data set builder in [`load_dataset()`](https://github.com/huggingface/datasets/blob/2.4.0/src/datasets/load.py#L1541] ) only [supports](https://github.com/huggingface/datasets/blob/6c609a322da994de149b2c938f19439bca99408e/src/datasets/data_files.py#L31) the following names as the validation data set directory name: `["validation", "valid", "dev"]`. When the validation directory is named as `'val'`, the Data set will not have a validation split. I expected this to be a trivial task but ended up spending a lot of time before knowing that only the above names are supported. Here's a minimal example of `val` not being recognized: ```python import os import numpy as np import cv2 from datasets import load_dataset # creating a dummy data set with the following structure: # ROOT # | -- train # | ---- class_1 # | ---- class_2 # | -- val # | ---- class_1 # | ---- class_2 ROOT = "data" for which in ["train", "val"]: for class_name in ["class_1", "class_2"]: dir_name = os.path.join(ROOT, which, class_name) if not os.path.exists(dir_name): os.makedirs(dir_name) for i in range(10): cv2.imwrite( os.path.join(dir_name, f"{i}.png"), np.random.random((224, 224)) ) # trying to create a data set dataset = load_dataset( "imagefolder", data_dir=ROOT ) >> dataset DatasetDict({ train: Dataset({ features: ['image', 'label'], num_rows: 20 }) }) # ^ note how the dataset only has a 'train' subset ``` **Describe the solution you'd like** The suggestion is to include `"val"` to [that list ](https://github.com/huggingface/datasets/blob/6c609a322da994de149b2c938f19439bca99408e/src/datasets/data_files.py#L31) as that's a commonly used phrase to name the validation directory. Also, In the documentation, explicitly mention that only such directory names are supported as train/val/test directories to avoid confusion. **Describe alternatives you've considered** In the documentation, explicitly mention that only such directory names are supported as train/val/test directories without adding `val` to the above list. **Additional context** A question asked in the forum: [ Loading an imagenet-style image dataset with train/val directories](https://discuss.huggingface.co/t/loading-an-imagenet-style-image-dataset-with-train-val-directories/21554)
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/4839/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4839/timeline
null
completed
false
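Until `val` is added to the recognized directory names, a workaround consistent with the issue's reproduction script is to pass explicit `data_files` patterns so the `val` directory is mapped to the `validation` split. This is a hedged sketch; the `data` root and glob patterns mirror the example above and may need adjusting.

```python
# Hedged sketch of a workaround: map the "val" directory to the validation
# split explicitly instead of relying on directory-name inference.
from datasets import load_dataset

dataset = load_dataset(
    "imagefolder",
    data_files={
        "train": "data/train/**",
        "validation": "data/val/**",
    },
)
print(dataset)
```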
https://api.github.com/repos/huggingface/datasets/issues/257
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/257/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/257/comments
https://api.github.com/repos/huggingface/datasets/issues/257/events
https://github.com/huggingface/datasets/issues/257
635,620,979
MDU6SXNzdWU2MzU2MjA5Nzk=
257
Tokenizer pickling issue fix not landed in `nlp` yet?
{ "avatar_url": "https://avatars.githubusercontent.com/u/8027676?v=4", "events_url": "https://api.github.com/users/sarahwie/events{/privacy}", "followers_url": "https://api.github.com/users/sarahwie/followers", "following_url": "https://api.github.com/users/sarahwie/following{/other_user}", "gists_url": "https://api.github.com/users/sarahwie/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/sarahwie", "id": 8027676, "login": "sarahwie", "node_id": "MDQ6VXNlcjgwMjc2NzY=", "organizations_url": "https://api.github.com/users/sarahwie/orgs", "received_events_url": "https://api.github.com/users/sarahwie/received_events", "repos_url": "https://api.github.com/users/sarahwie/repos", "site_admin": false, "starred_url": "https://api.github.com/users/sarahwie/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sarahwie/subscriptions", "type": "User", "url": "https://api.github.com/users/sarahwie" }
[]
closed
false
null
[]
null
[ "Yes, the new release of tokenizers solves this and should be out soon.\r\nIn the meantime, you can install it with `pip install tokenizers==0.8.0-dev2`", "If others run into this issue, a quick fix is to use python 3.6 instead of 3.7+. Serialization differences between the 3rd party `dataclasses` package for 3.6 and the built in `dataclasses` in 3.7+ cause the issue.\r\n\r\nProbably a dumb fix, but it works for me." ]
"2020-06-09T17:12:34Z"
"2020-06-10T21:45:32Z"
"2020-06-09T17:26:53Z"
NONE
null
null
null
Unless I recreate an arrow_dataset from my loaded nlp dataset myself (which I think does not use the cache by default), I get the following error when applying the map function: ``` dataset = nlp.load_dataset('cos_e') tokenizer = GPT2TokenizerFast.from_pretrained('gpt2', cache_dir=cache_dir) for split in dataset.keys(): dataset[split].map(lambda x: some_function(x, tokenizer)) ``` ``` 06/09/2020 10:09:19 - INFO - nlp.builder - Constructing Dataset for split train[:10], from /home/sarahw/.cache/huggingface/datasets/cos_e/default/0.0.1 Traceback (most recent call last): File "generation/input_to_label_and_rationale.py", line 390, in <module> main() File "generation/input_to_label_and_rationale.py", line 263, in main dataset[split] = dataset[split].map(lambda x: input_to_explanation_plus_label(x, tokenizer, max_length, datasource=data_args.task_name, wt5=(model_class=='t5'), expl_only=model_args.rationale_only), batched=False) File "/home/sarahw/miniconda3/envs/project_huggingface/lib/python3.8/site-packages/nlp/arrow_dataset.py", line 522, in map cache_file_name = self._get_cache_file_path(function, cache_kwargs) File "/home/sarahw/miniconda3/envs/project_huggingface/lib/python3.8/site-packages/nlp/arrow_dataset.py", line 381, in _get_cache_file_path function_bytes = dumps(function) File "/home/sarahw/miniconda3/envs/project_huggingface/lib/python3.8/site-packages/nlp/utils/py_utils.py", line 257, in dumps dump(obj, file) File "/home/sarahw/miniconda3/envs/project_huggingface/lib/python3.8/site-packages/nlp/utils/py_utils.py", line 250, in dump Pickler(file).dump(obj) File "/home/sarahw/miniconda3/envs/project_huggingface/lib/python3.8/site-packages/dill/_dill.py", line 445, in dump StockPickler.dump(self, obj) File "/home/sarahw/miniconda3/envs/project_huggingface/lib/python3.8/pickle.py", line 485, in dump self.save(obj) File "/home/sarahw/miniconda3/envs/project_huggingface/lib/python3.8/pickle.py", line 558, in save f(self, obj) # Call unbound method with explicit self File "/home/sarahw/miniconda3/envs/project_huggingface/lib/python3.8/site-packages/dill/_dill.py", line 1410, in save_function pickler.save_reduce(_create_function, (obj.__code__, File "/home/sarahw/miniconda3/envs/project_huggingface/lib/python3.8/pickle.py", line 690, in save_reduce save(args) File "/home/sarahw/miniconda3/envs/project_huggingface/lib/python3.8/pickle.py", line 558, in save f(self, obj) # Call unbound method with explicit self File "/home/sarahw/miniconda3/envs/project_huggingface/lib/python3.8/pickle.py", line 899, in save_tuple save(element) File "/home/sarahw/miniconda3/envs/project_huggingface/lib/python3.8/pickle.py", line 558, in save f(self, obj) # Call unbound method with explicit self File "/home/sarahw/miniconda3/envs/project_huggingface/lib/python3.8/pickle.py", line 899, in save_tuple save(element) File "/home/sarahw/miniconda3/envs/project_huggingface/lib/python3.8/pickle.py", line 558, in save f(self, obj) # Call unbound method with explicit self File "/home/sarahw/miniconda3/envs/project_huggingface/lib/python3.8/site-packages/dill/_dill.py", line 1147, in save_cell pickler.save_reduce(_create_cell, (f,), obj=obj) File "/home/sarahw/miniconda3/envs/project_huggingface/lib/python3.8/pickle.py", line 690, in save_reduce save(args) File "/home/sarahw/miniconda3/envs/project_huggingface/lib/python3.8/pickle.py", line 558, in save f(self, obj) # Call unbound method with explicit self File "/home/sarahw/miniconda3/envs/project_huggingface/lib/python3.8/pickle.py", line 884, in save_tuple 
save(element) File "/home/sarahw/miniconda3/envs/project_huggingface/lib/python3.8/pickle.py", line 601, in save self.save_reduce(obj=obj, *rv) File "/home/sarahw/miniconda3/envs/project_huggingface/lib/python3.8/pickle.py", line 715, in save_reduce save(state) File "/home/sarahw/miniconda3/envs/project_huggingface/lib/python3.8/pickle.py", line 558, in save f(self, obj) # Call unbound method with explicit self File "/home/sarahw/miniconda3/envs/project_huggingface/lib/python3.8/site-packages/dill/_dill.py", line 912, in save_module_dict StockPickler.save_dict(pickler, obj) File "/home/sarahw/miniconda3/envs/project_huggingface/lib/python3.8/pickle.py", line 969, in save_dict self._batch_setitems(obj.items()) File "/home/sarahw/miniconda3/envs/project_huggingface/lib/python3.8/pickle.py", line 995, in _batch_setitems save(v) File "/home/sarahw/miniconda3/envs/project_huggingface/lib/python3.8/pickle.py", line 601, in save self.save_reduce(obj=obj, *rv) File "/home/sarahw/miniconda3/envs/project_huggingface/lib/python3.8/pickle.py", line 715, in save_reduce save(state) File "/home/sarahw/miniconda3/envs/project_huggingface/lib/python3.8/pickle.py", line 558, in save f(self, obj) # Call unbound method with explicit self File "/home/sarahw/miniconda3/envs/project_huggingface/lib/python3.8/site-packages/dill/_dill.py", line 912, in save_module_dict StockPickler.save_dict(pickler, obj) File "/home/sarahw/miniconda3/envs/project_huggingface/lib/python3.8/pickle.py", line 969, in save_dict self._batch_setitems(obj.items()) File "/home/sarahw/miniconda3/envs/project_huggingface/lib/python3.8/pickle.py", line 995, in _batch_setitems save(v) File "/home/sarahw/miniconda3/envs/project_huggingface/lib/python3.8/pickle.py", line 576, in save rv = reduce(self.proto) TypeError: cannot pickle 'Tokenizer' object ``` Fix seems to be in the tokenizers [`0.8.0.dev1 pre-release`](https://github.com/huggingface/tokenizers/issues/87), which I can't install with any package managers.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/257/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/257/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/2598
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2598/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2598/comments
https://api.github.com/repos/huggingface/datasets/issues/2598/events
https://github.com/huggingface/datasets/issues/2598
937,930,632
MDU6SXNzdWU5Mzc5MzA2MzI=
2,598
Unable to download omp dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/25797960?v=4", "events_url": "https://api.github.com/users/erikadistefano/events{/privacy}", "followers_url": "https://api.github.com/users/erikadistefano/followers", "following_url": "https://api.github.com/users/erikadistefano/following{/other_user}", "gists_url": "https://api.github.com/users/erikadistefano/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/erikadistefano", "id": 25797960, "login": "erikadistefano", "node_id": "MDQ6VXNlcjI1Nzk3OTYw", "organizations_url": "https://api.github.com/users/erikadistefano/orgs", "received_events_url": "https://api.github.com/users/erikadistefano/received_events", "repos_url": "https://api.github.com/users/erikadistefano/repos", "site_admin": false, "starred_url": "https://api.github.com/users/erikadistefano/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/erikadistefano/subscriptions", "type": "User", "url": "https://api.github.com/users/erikadistefano" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" } ]
null
[ "Hi @erikadistefano , thanks for reporting the issue.\r\n\r\nI have created a Pull Request that should fix it. \r\n\r\nOnce merged into master, feel free to update your installed `datasets` library (either by installing it from our GitHub master branch or waiting until our next release) to be able to load omp dataset." ]
"2021-07-06T14:00:52Z"
"2021-07-07T12:56:35Z"
"2021-07-07T12:56:35Z"
NONE
null
null
null
## Describe the bug The omp dataset cannot be downloaded because of a DuplicatedKeysError ## Steps to reproduce the bug from datasets import load_dataset omp = load_dataset('omp', 'posts_labeled') print(omp) ## Expected results This code should download the omp dataset and print the dictionary ## Actual results Downloading and preparing dataset omp/posts_labeled (download: 1.27 MiB, generated: 13.31 MiB, post-processed: Unknown size, total: 14.58 MiB) to /home/erika_distefano/.cache/huggingface/datasets/omp/posts_labeled/1.1.0/2fe5b067be3bff1d4588d5b0cbb9b5b22ae1b9d5b026a8ff572cd389f862735b... 0 examples [00:00, ? examples/s]2021-07-06 09:43:55.868815: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudart.so.11.0 Traceback (most recent call last): File "/home/erika_distefano/.local/lib/python3.6/site-packages/datasets/builder.py", line 990, in _prepare_split writer.write(example, key) File "/home/erika_distefano/.local/lib/python3.6/site-packages/datasets/arrow_writer.py", line 338, in write self.check_duplicate_keys() File "/home/erika_distefano/.local/lib/python3.6/site-packages/datasets/arrow_writer.py", line 349, in check_duplicate_keys raise DuplicatedKeysError(key) datasets.keyhash.DuplicatedKeysError: FAILURE TO GENERATE DATASET ! Found duplicate Key: 3326 Keys should be unique and deterministic in nature During handling of the above exception, another exception occurred: Traceback (most recent call last): File "hf_datasets.py", line 32, in <module> omp = load_dataset('omp', 'posts_labeled') File "/home/erika_distefano/.local/lib/python3.6/site-packages/datasets/load.py", line 748, in load_dataset use_auth_token=use_auth_token, File "/home/erika_distefano/.local/lib/python3.6/site-packages/datasets/builder.py", line 575, in download_and_prepare dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs File "/home/erika_distefano/.local/lib/python3.6/site-packages/datasets/builder.py", line 652, in _download_and_prepare self._prepare_split(split_generator, **prepare_split_kwargs) File "/home/erika_distefano/.local/lib/python3.6/site-packages/datasets/builder.py", line 992, in _prepare_split num_examples, num_bytes = writer.finalize() File "/home/erika_distefano/.local/lib/python3.6/site-packages/datasets/arrow_writer.py", line 409, in finalize self.check_duplicate_keys() File "/home/erika_distefano/.local/lib/python3.6/site-packages/datasets/arrow_writer.py", line 349, in check_duplicate_keys raise DuplicatedKeysError(key) datasets.keyhash.DuplicatedKeysError: FAILURE TO GENERATE DATASET ! Found duplicate Key: 3326 Keys should be unique and deterministic in nature ## Environment info - `datasets` version: 1.8.0 - Platform: Ubuntu 18.04.4 LTS - Python version: 3.6.9 - PyArrow version: 3.0.0
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2598/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2598/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/1434
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1434/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1434/comments
https://api.github.com/repos/huggingface/datasets/issues/1434/events
https://github.com/huggingface/datasets/pull/1434
760,821,474
MDExOlB1bGxSZXF1ZXN0NTM1NTg3NjEx
1,434
add_sofc_materials_articles
{ "avatar_url": "https://avatars.githubusercontent.com/u/7950786?v=4", "events_url": "https://api.github.com/users/ZacharySBrown/events{/privacy}", "followers_url": "https://api.github.com/users/ZacharySBrown/followers", "following_url": "https://api.github.com/users/ZacharySBrown/following{/other_user}", "gists_url": "https://api.github.com/users/ZacharySBrown/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/ZacharySBrown", "id": 7950786, "login": "ZacharySBrown", "node_id": "MDQ6VXNlcjc5NTA3ODY=", "organizations_url": "https://api.github.com/users/ZacharySBrown/orgs", "received_events_url": "https://api.github.com/users/ZacharySBrown/received_events", "repos_url": "https://api.github.com/users/ZacharySBrown/repos", "site_admin": false, "starred_url": "https://api.github.com/users/ZacharySBrown/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ZacharySBrown/subscriptions", "type": "User", "url": "https://api.github.com/users/ZacharySBrown" }
[]
closed
false
null
[]
null
[ "Hey @lhoestq , thanks for the feedback on this! I updated the `_generate_examples` with some comments on the process, and reduced the `dummy_data.zip` down quite a bit as well. \r\n\r\nFor the dummy data, I reduced the text to only three sentences, and aligned the corresponding entity/token/sentence annotations to that (reduced accordingly). The frames file is a strange combined format for the annotations and I found if I reduced that that would break the parser no matter what I did, so I left that as is. The difference between a reduced frames and non-reduced frames file in the compressed dummy data was only about ~4kb, so hopefully leaving this as is will be ok!" ]
"2020-12-10T02:15:02Z"
"2020-12-17T09:59:54Z"
"2020-12-17T09:59:54Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/1434.diff", "html_url": "https://github.com/huggingface/datasets/pull/1434", "merged_at": "2020-12-17T09:59:54Z", "patch_url": "https://github.com/huggingface/datasets/pull/1434.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1434" }
Adding the [SOFC-Exp Corpus](https://arxiv.org/abs/2006.03039).
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1434/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1434/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/3562
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3562/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3562/comments
https://api.github.com/repos/huggingface/datasets/issues/3562/events
https://github.com/huggingface/datasets/pull/3562
1,098,341,351
PR_kwDODunzps4wwa44
3,562
Allow multiple task templates of the same type
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko" }
[]
closed
false
null
[]
null
[]
"2022-01-10T20:32:07Z"
"2022-01-11T14:16:47Z"
"2022-01-11T14:16:47Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/3562.diff", "html_url": "https://github.com/huggingface/datasets/pull/3562", "merged_at": "2022-01-11T14:16:46Z", "patch_url": "https://github.com/huggingface/datasets/pull/3562.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3562" }
Add support for multiple task templates of the same type. Fixes (partially) #2520. CC: @lewtun
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3562/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3562/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/4492
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4492/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4492/comments
https://api.github.com/repos/huggingface/datasets/issues/4492/events
https://github.com/huggingface/datasets/pull/4492
1,271,112,497
PR_kwDODunzps45pktu
4,492
Pin the revision in imagenet download links
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
"2022-06-14T17:15:17Z"
"2022-06-14T17:35:13Z"
"2022-06-14T17:25:45Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/4492.diff", "html_url": "https://github.com/huggingface/datasets/pull/4492", "merged_at": "2022-06-14T17:25:45Z", "patch_url": "https://github.com/huggingface/datasets/pull/4492.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4492" }
Use the commit sha in the data file URLs of the imagenet-1k download script, in case we want to restructure the data files in the future. For example, we may split it into many more shards for better parallelism. cc @mariosasko
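As a rough illustration of the idea described here (not the actual script), Hub data-file URLs can be pinned to a fixed commit via the `resolve/<revision>/<path>` URL scheme; the repository id, sha, and shard layout below are hypothetical placeholders:

```python
# Sketch of building shard URLs pinned to a commit sha instead of "main"; the repo id,
# sha, and file names are placeholders, not the imagenet-1k values from this PR.
_REPO_URL = "https://huggingface.co/datasets/some-org/some-dataset"
_PINNED_SHA = "0123456789abcdef0123456789abcdef01234567"

def shard_urls(num_shards: int = 10):
    # resolve/<revision>/<path> serves each file as it existed at that revision,
    # so later re-sharding of the repo cannot break existing download scripts
    return [
        f"{_REPO_URL}/resolve/{_PINNED_SHA}/data/train-{i:05d}-of-{num_shards:05d}.parquet"
        for i in range(num_shards)
    ]

print(shard_urls()[0])
```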
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4492/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4492/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/1119
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1119/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1119/comments
https://api.github.com/repos/huggingface/datasets/issues/1119/events
https://github.com/huggingface/datasets/pull/1119
757,156,781
MDExOlB1bGxSZXF1ZXN0NTMyNTc5ODA5
1,119
Add Google Great Code Dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/1183441?v=4", "events_url": "https://api.github.com/users/abhishekkrthakur/events{/privacy}", "followers_url": "https://api.github.com/users/abhishekkrthakur/followers", "following_url": "https://api.github.com/users/abhishekkrthakur/following{/other_user}", "gists_url": "https://api.github.com/users/abhishekkrthakur/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/abhishekkrthakur", "id": 1183441, "login": "abhishekkrthakur", "node_id": "MDQ6VXNlcjExODM0NDE=", "organizations_url": "https://api.github.com/users/abhishekkrthakur/orgs", "received_events_url": "https://api.github.com/users/abhishekkrthakur/received_events", "repos_url": "https://api.github.com/users/abhishekkrthakur/repos", "site_admin": false, "starred_url": "https://api.github.com/users/abhishekkrthakur/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/abhishekkrthakur/subscriptions", "type": "User", "url": "https://api.github.com/users/abhishekkrthakur" }
[]
closed
false
null
[]
null
[]
"2020-12-04T14:46:28Z"
"2020-12-06T17:33:14Z"
"2020-12-06T17:33:13Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/1119.diff", "html_url": "https://github.com/huggingface/datasets/pull/1119", "merged_at": "2020-12-06T17:33:13Z", "patch_url": "https://github.com/huggingface/datasets/pull/1119.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1119" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1119/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1119/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/1108
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1108/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1108/comments
https://api.github.com/repos/huggingface/datasets/issues/1108/events
https://github.com/huggingface/datasets/pull/1108
757,054,732
MDExOlB1bGxSZXF1ZXN0NTMyNDk0MjY4
1,108
Add Sepedi NER corpus
{ "avatar_url": "https://avatars.githubusercontent.com/u/7923902?v=4", "events_url": "https://api.github.com/users/yvonnegitau/events{/privacy}", "followers_url": "https://api.github.com/users/yvonnegitau/followers", "following_url": "https://api.github.com/users/yvonnegitau/following{/other_user}", "gists_url": "https://api.github.com/users/yvonnegitau/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/yvonnegitau", "id": 7923902, "login": "yvonnegitau", "node_id": "MDQ6VXNlcjc5MjM5MDI=", "organizations_url": "https://api.github.com/users/yvonnegitau/orgs", "received_events_url": "https://api.github.com/users/yvonnegitau/received_events", "repos_url": "https://api.github.com/users/yvonnegitau/repos", "site_admin": false, "starred_url": "https://api.github.com/users/yvonnegitau/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yvonnegitau/subscriptions", "type": "User", "url": "https://api.github.com/users/yvonnegitau" }
[]
closed
false
null
[]
null
[]
"2020-12-04T12:11:24Z"
"2020-12-04T14:39:00Z"
"2020-12-04T14:39:00Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/1108.diff", "html_url": "https://github.com/huggingface/datasets/pull/1108", "merged_at": "2020-12-04T14:39:00Z", "patch_url": "https://github.com/huggingface/datasets/pull/1108.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1108" }
Finally a clean PR for Sepedi
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1108/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1108/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/705
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/705/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/705/comments
https://api.github.com/repos/huggingface/datasets/issues/705/events
https://github.com/huggingface/datasets/issues/705
713,709,100
MDU6SXNzdWU3MTM3MDkxMDA=
705
TypeError: '<' not supported between instances of 'NamedSplit' and 'NamedSplit'
{ "avatar_url": "https://avatars.githubusercontent.com/u/12713359?v=4", "events_url": "https://api.github.com/users/pvcastro/events{/privacy}", "followers_url": "https://api.github.com/users/pvcastro/followers", "following_url": "https://api.github.com/users/pvcastro/following{/other_user}", "gists_url": "https://api.github.com/users/pvcastro/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/pvcastro", "id": 12713359, "login": "pvcastro", "node_id": "MDQ6VXNlcjEyNzEzMzU5", "organizations_url": "https://api.github.com/users/pvcastro/orgs", "received_events_url": "https://api.github.com/users/pvcastro/received_events", "repos_url": "https://api.github.com/users/pvcastro/repos", "site_admin": false, "starred_url": "https://api.github.com/users/pvcastro/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/pvcastro/subscriptions", "type": "User", "url": "https://api.github.com/users/pvcastro" }
[]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" } ]
null
[ "Hi !\r\nThanks for reporting :) \r\nIndeed this is an issue on the `datasets` side.\r\nI'm creating a PR", "Thanks @lhoestq !" ]
"2020-10-02T15:27:55Z"
"2020-10-05T08:14:59Z"
"2020-10-05T08:14:59Z"
NONE
null
null
null
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 3.3.1 (installed from master) - `datasets` version: 1.0.2 (installed as a dependency from transformers) - Platform: Linux-4.15.0-118-generic-x86_64-with-debian-stretch-sid - Python version: 3.7.9 I'm testing my own text classification dataset using [this example](https://github.com/huggingface/transformers/tree/master/examples/text-classification#run-generic-text-classification-script-in-tensorflow) from transformers. The dataset is split into train / dev / test, and in csv format, containing just a text and a label columns, using comma as sep. Here's a sample: ``` text,label "Registra-se a presença do acadêmico <name> . <REL_SEP> Ao me deparar com a descrição de dois autores no polo ativo da ação junto ao PJe , margem esquerda foi informado pela procuradora do reclamante que se trata de uma reclamação trabalhista individual . <REL_SEP> Diante disso , face a ausência injustificada do autor <name> , determina-se o ARQUIVAMENTO do presente processo , com relação a este , nos termos do [[ art . 844 da CLT ]] . <REL_SEP> CUSTAS AUTOR - DISPENSADO <REL_SEP> Custas pelo autor no importe de R $326,82 , calculadas sobre R $16.341,03 , dispensadas na forma da lei , em virtude da concessão dos benefícios da Justiça Gratuita , ora deferida . <REL_SEP> Cientes os presentes . <REL_SEP> Audiência encerrada às 8h42min . <REL_SEP> <name> <REL_SEP> Juíza do Trabalho <REL_SEP> Ata redigida por << <name> >> , Secretário de Audiência .",NO_RELATION ``` However, @Santosh-Gupta reported in #7351 that he had the exact same problem using the ChemProt dataset. His colab notebook is referenced in the following section. ## To reproduce Steps to reproduce the behavior: 1. Created a new conda environment using conda env -n transformers python=3.7 2. Cloned transformers master, `cd` into it and installed using pip install --editable . -r examples/requirements.txt 3. Installed tensorflow with `pip install tensorflow` 3. Ran `run_tf_text_classification.py` with the following parameters: ``` --train_file <DATASET_PATH>/train.csv \ --dev_file <DATASET_PATH>/dev.csv \ --test_file <DATASET_PATH>/test.csv \ --label_column_id 1 \ --model_name_or_path neuralmind/bert-base-portuguese-cased \ --output_dir <OUTPUT_PATH> \ --num_train_epochs 4 \ --per_device_train_batch_size 4 \ --per_device_eval_batch_size 4 \ --do_train \ --do_eval \ --do_predict \ --logging_steps 1000 \ --evaluate_during_training \ --save_steps 1000 \ --overwrite_output_dir \ --overwrite_cache ``` I have also copied [@Santosh-Gupta 's colab notebook](https://colab.research.google.com/drive/11APei6GjphCZbH5wD9yVlfGvpIkh8pwr?usp=sharing) as a reference. <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. 
See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> Here is the stack trace: ``` 2020-10-02 07:33:41.622011: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudart.so.10.1 /media/discoD/repositorios/transformers_pedro/src/transformers/training_args.py:333: FutureWarning: The `evaluate_during_training` argument is deprecated in favor of `evaluation_strategy` (which has more options) FutureWarning, 2020-10-02 07:33:43.471648: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcuda.so.1 2020-10-02 07:33:43.471791: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2020-10-02 07:33:43.472664: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1716] Found device 0 with properties: pciBusID: 0000:01:00.0 name: GeForce GTX 1070 computeCapability: 6.1 coreClock: 1.7085GHz coreCount: 15 deviceMemorySize: 7.92GiB deviceMemoryBandwidth: 238.66GiB/s 2020-10-02 07:33:43.472684: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudart.so.10.1 2020-10-02 07:33:43.472765: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcublas.so.10 2020-10-02 07:33:43.472809: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcufft.so.10 2020-10-02 07:33:43.472848: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcurand.so.10 2020-10-02 07:33:43.474209: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcusolver.so.10 2020-10-02 07:33:43.474276: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcusparse.so.10 2020-10-02 07:33:43.561219: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudnn.so.7 2020-10-02 07:33:43.561397: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2020-10-02 07:33:43.562345: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2020-10-02 07:33:43.563219: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1858] Adding visible gpu devices: 0 2020-10-02 07:33:43.563595: I tensorflow/core/platform/cpu_feature_guard.cc:142] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN)to use the following CPU instructions in performance-critical operations: AVX2 FMA To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags. 2020-10-02 07:33:43.570091: I tensorflow/core/platform/profile_utils/cpu_utils.cc:104] CPU Frequency: 3591830000 Hz 2020-10-02 07:33:43.570494: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x560842432400 initialized for platform Host (this does not guarantee that XLA will be used). 
Devices: 2020-10-02 07:33:43.570511: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (0): Host, Default Version 2020-10-02 07:33:43.570702: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2020-10-02 07:33:43.571599: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1716] Found device 0 with properties: pciBusID: 0000:01:00.0 name: GeForce GTX 1070 computeCapability: 6.1 coreClock: 1.7085GHz coreCount: 15 deviceMemorySize: 7.92GiB deviceMemoryBandwidth: 238.66GiB/s 2020-10-02 07:33:43.571633: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudart.so.10.1 2020-10-02 07:33:43.571645: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcublas.so.10 2020-10-02 07:33:43.571654: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcufft.so.10 2020-10-02 07:33:43.571664: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcurand.so.10 2020-10-02 07:33:43.571691: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcusolver.so.10 2020-10-02 07:33:43.571704: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcusparse.so.10 2020-10-02 07:33:43.571718: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudnn.so.7 2020-10-02 07:33:43.571770: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2020-10-02 07:33:43.572641: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2020-10-02 07:33:43.573475: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1858] Adding visible gpu devices: 0 2020-10-02 07:33:47.139227: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1257] Device interconnect StreamExecutor with strength 1 edge matrix: 2020-10-02 07:33:47.139265: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1263] 0 2020-10-02 07:33:47.139272: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1276] 0: N 2020-10-02 07:33:47.140323: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2020-10-02 07:33:47.141248: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2020-10-02 07:33:47.142085: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2020-10-02 07:33:47.142854: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1402] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 5371 MB memory) -> physical GPU (device: 0, name: GeForce GTX 1070, pci bus id: 0000:01:00.0, compute capability: 6.1) 2020-10-02 07:33:47.146317: I tensorflow/compiler/xla/service/service.cc:168] XLA 
service 0x5608b95dc5c0 initialized for platform CUDA (this does not guarantee that XLA will be used). Devices: 2020-10-02 07:33:47.146336: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (0): GeForce GTX 1070, Compute Capability 6.1 10/02/2020 07:33:47 - INFO - __main__ - n_replicas: 1, distributed training: False, 16-bits training: False 10/02/2020 07:33:47 - INFO - __main__ - Training/evaluation parameters TFTrainingArguments(output_dir='/media/discoD/models/datalawyer/pedidos/transformers_tf', overwrite_output_dir=True, do_train=True, do_eval=True, do_predict=True, evaluate_during_training=True, evaluation_strategy=<EvaluationStrategy.STEPS: 'steps'>, prediction_loss_only=False, per_device_train_batch_size=4, per_device_eval_batch_size=4, per_gpu_train_batch_size=None, per_gpu_eval_batch_size=None, gradient_accumulation_steps=1, learning_rate=5e-05, weight_decay=0.0, adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-08, max_grad_norm=1.0, num_train_epochs=4.0, max_steps=-1, warmup_steps=0, logging_dir='runs/Oct02_07-33-43_user-XPS-8700', logging_first_step=False, logging_steps=1000, save_steps=1000, save_total_limit=None, no_cuda=False, seed=42, fp16=False, fp16_opt_level='O1', local_rank=-1, tpu_num_cores=None, tpu_metrics_debug=False, debug=False, dataloader_drop_last=False, eval_steps=1000, dataloader_num_workers=0, past_index=-1, run_name='/media/discoD/models/datalawyer/pedidos/transformers_tf', disable_tqdm=False, remove_unused_columns=True, label_names=None, load_best_model_at_end=False, metric_for_best_model=None, greater_is_better=False, tpu_name=None, xla=False) 10/02/2020 07:33:53 - INFO - filelock - Lock 140407857405776 acquired on /home/user/.cache/huggingface/datasets/e0f1e9ed46db1e2429189f06b479cbd4075c0976104c1aacf8f77d9a53d2ad87.03756fef6da334f50a7ff73608e21b5018229944ca250416ce7352e25d84a552.py.lock 10/02/2020 07:33:53 - INFO - filelock - Lock 140407857405776 released on /home/user/.cache/huggingface/datasets/e0f1e9ed46db1e2429189f06b479cbd4075c0976104c1aacf8f77d9a53d2ad87.03756fef6da334f50a7ff73608e21b5018229944ca250416ce7352e25d84a552.py.lock Using custom data configuration default Traceback (most recent call last): File "run_tf_text_classification.py", line 283, in <module> main() File "run_tf_text_classification.py", line 222, in main max_seq_length=data_args.max_seq_length, File "run_tf_text_classification.py", line 43, in get_tfds ds = datasets.load_dataset("csv", data_files=files) File "/media/discoD/anaconda3/envs/transformers/lib/python3.7/site-packages/datasets/load.py", line 604, in load_dataset **config_kwargs, File "/media/discoD/anaconda3/envs/transformers/lib/python3.7/site-packages/datasets/builder.py", line 158, in __init__ **config_kwargs, File "/media/discoD/anaconda3/envs/transformers/lib/python3.7/site-packages/datasets/builder.py", line 269, in _create_builder_config for key in sorted(data_files.keys()): TypeError: '<' not supported between instances of 'NamedSplit' and 'NamedSplit' ``` ## Expected behavior Should be able to run the text-classification example as described in [https://github.com/huggingface/transformers/tree/master/examples/text-classification#run-generic-text-classification-script-in-tensorflow](https://github.com/huggingface/transformers/tree/master/examples/text-classification#run-generic-text-classification-script-in-tensorflow) Originally opened this issue at transformers' repository: [https://github.com/huggingface/transformers/issues/7535](https://github.com/huggingface/transformers/issues/7535). 
@jplu instructed me to open here, since according to [this](https://github.com/huggingface/transformers/issues/7535#issuecomment-702778885) evidence, the problem is from datasets. Thanks!
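When calling `load_dataset` directly, the sorting error above can be avoided by using plain string split names as the `data_files` keys instead of `datasets.Split` objects (the actual fix went into `datasets` itself); a sketch with placeholder file paths:

```python
# Workaround sketch: string keys ("train"/"validation"/"test") can be sorted by
# _create_builder_config, unlike NamedSplit objects. File paths are placeholders.
from datasets import load_dataset

ds = load_dataset(
    "csv",
    data_files={"train": "train.csv", "validation": "dev.csv", "test": "test.csv"},
)
print(ds)
```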
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/705/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/705/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/6441
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6441/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6441/comments
https://api.github.com/repos/huggingface/datasets/issues/6441/events
https://github.com/huggingface/datasets/issues/6441
2,004,985,857
I_kwDODunzps53gagB
6,441
Trouble Loading a Gated Dataset For User with Granted Permission
{ "avatar_url": "https://avatars.githubusercontent.com/u/124715309?v=4", "events_url": "https://api.github.com/users/e-trop/events{/privacy}", "followers_url": "https://api.github.com/users/e-trop/followers", "following_url": "https://api.github.com/users/e-trop/following{/other_user}", "gists_url": "https://api.github.com/users/e-trop/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/e-trop", "id": 124715309, "login": "e-trop", "node_id": "U_kgDOB28BLQ", "organizations_url": "https://api.github.com/users/e-trop/orgs", "received_events_url": "https://api.github.com/users/e-trop/received_events", "repos_url": "https://api.github.com/users/e-trop/repos", "site_admin": false, "starred_url": "https://api.github.com/users/e-trop/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/e-trop/subscriptions", "type": "User", "url": "https://api.github.com/users/e-trop" }
[]
closed
false
null
[]
null
[ "> Also when they try to click the url link for the dataset they get a 404 error.\r\n\r\nThis seems to be a Hub error then (cc @SBrandeis)", "Could you report this to https://discuss.huggingface.co/c/hub/23, providing the URL of the dataset, or at least if the dataset is public or private?", "Thanks for the reply! I've created an issue on the hub's board here: https://discuss.huggingface.co/t/trouble-loading-a-gated-dataset-for-user-with-granted-permission/65565" ]
"2023-11-21T19:24:36Z"
"2023-12-13T08:27:16Z"
"2023-12-13T08:27:16Z"
NONE
null
null
null
### Describe the bug I have granted several users permission to access a gated Hugging Face dataset. The users accepted the invite, but when trying to load the dataset with their access token they get `FileNotFoundError: Couldn't find a dataset script at .....` . Also, when they click the URL for the dataset they get a 404 error. ### Steps to reproduce the bug 1. Grant access to the gated dataset for specific users 2. Users accept the invitation 3. Users log in to the Hugging Face Hub using the CLI login 4. Users run load_dataset ### Expected behavior The dataset loads normally for users who were granted access to the gated dataset. ### Environment info datasets==2.15.0
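For reference, the usual way to load a gated dataset once the gate has been accepted is to authenticate explicitly; a minimal sketch with a placeholder repository id and token value:

```python
# Sketch only: "some-org/gated-dataset" and the token are placeholders.
# Either log in once with `huggingface-cli login`, or pass the token directly
# (recent datasets versions accept token=; older ones use use_auth_token=).
from datasets import load_dataset

ds = load_dataset("some-org/gated-dataset", token="hf_xxx")
print(ds)
```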
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6441/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6441/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/1607
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1607/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1607/comments
https://api.github.com/repos/huggingface/datasets/issues/1607/events
https://github.com/huggingface/datasets/pull/1607
771,325,852
MDExOlB1bGxSZXF1ZXN0NTQyODg5OTky
1,607
modified tweets hate speech detection
{ "avatar_url": "https://avatars.githubusercontent.com/u/44197177?v=4", "events_url": "https://api.github.com/users/darshan-gandhi/events{/privacy}", "followers_url": "https://api.github.com/users/darshan-gandhi/followers", "following_url": "https://api.github.com/users/darshan-gandhi/following{/other_user}", "gists_url": "https://api.github.com/users/darshan-gandhi/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/darshan-gandhi", "id": 44197177, "login": "darshan-gandhi", "node_id": "MDQ6VXNlcjQ0MTk3MTc3", "organizations_url": "https://api.github.com/users/darshan-gandhi/orgs", "received_events_url": "https://api.github.com/users/darshan-gandhi/received_events", "repos_url": "https://api.github.com/users/darshan-gandhi/repos", "site_admin": false, "starred_url": "https://api.github.com/users/darshan-gandhi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/darshan-gandhi/subscriptions", "type": "User", "url": "https://api.github.com/users/darshan-gandhi" }
[]
closed
false
null
[]
null
[]
"2020-12-19T07:13:40Z"
"2020-12-21T16:08:48Z"
"2020-12-21T16:08:48Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/1607.diff", "html_url": "https://github.com/huggingface/datasets/pull/1607", "merged_at": "2020-12-21T16:08:48Z", "patch_url": "https://github.com/huggingface/datasets/pull/1607.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1607" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1607/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1607/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/3788
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3788/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3788/comments
https://api.github.com/repos/huggingface/datasets/issues/3788/events
https://github.com/huggingface/datasets/issues/3788
1,150,375,720
I_kwDODunzps5EkVco
3,788
Only-data dataset loaded unexpectedly as validation split
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
open
false
null
[]
null
[ "I see two options:\r\n1. drop the \"dev\" keyword since it can be considered too generic\r\n2. improve the pattern to something more reasonable, e.g. asking for a separator before and after \"dev\"\r\n```python\r\n[\"*[ ._-]dev[ ._-]*\", \"dev[ ._-]*\"]\r\n```\r\n\r\nI think 2. is nice. If we agree on this one we can even decide to require the separation for the other split keywords \"train\", \"test\" etc.", "Yes, I had something like that on mind: \"dev\" not being part of a word.\r\n```\r\n\"[^a-zA-Z]dev[^a-zA-Z]\"", "Is there a reason why we want that regex? It feels like something that'll still be an issue for some weird case. \"my_dataset_dev\" doesn't match your regex, \"my_dataset_validation\" doesn't either ... Why not always \"train\" unless specified?", "The regex is needed as part of our effort to make datasets configurable without code. In particular we define some generic dataset repository structures that users can follow\r\n\r\n> ```\r\n> \"[^a-zA-Z]*dev[^a-zA-Z]*\"\r\n> ```\r\n\r\nunfortunately our glob doesn't support \"^\": \r\n\r\nhttps://github.com/fsspec/filesystem_spec/blob/3e739db7e53f5b408319dcc9d11e92bc1f938902/fsspec/spec.py#L465-L479", "> \"my_dataset_dev\" doesn't match your regex, \"my_dataset_validation\" doesn't either ... Why not always \"train\" unless specified?\r\n\r\nAnd `my_dataset_dev.foo` would match the pattern, and we also have the same pattern but for the \"validation\" keyword so `my_dataset_validation.foo` would work too", "> The regex is needed as part of our effort to make datasets configurable without code\r\n\r\nThis feels like coding with the filename ^^'", "This is still much easier than having to write a full dataset script right ? :p" ]
"2022-02-25T12:11:39Z"
"2022-02-28T11:22:22Z"
null
MEMBER
null
null
null
## Describe the bug As reported by @thomasw21 and @lhoestq, a dataset containing only a data file whose name matches the pattern `*dev*` will be returned as a VALIDATION split, even if this is not the desired behavior, e.g. a file named `datosdevision.jsonl.gz`.
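While the pattern is being discussed, the split inference can be sidestepped by declaring the data files explicitly; a sketch reusing the filename from the report (the choice of the `json` loader and the `train` assignment are assumptions):

```python
# Explicit data_files bypass the *dev* filename pattern, so the file below is
# loaded as a train split rather than validation.
from datasets import load_dataset

ds = load_dataset("json", data_files={"train": "datosdevision.jsonl.gz"})
print(ds)
```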
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3788/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3788/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/310
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/310/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/310/comments
https://api.github.com/repos/huggingface/datasets/issues/310/events
https://github.com/huggingface/datasets/pull/310
644,806,720
MDExOlB1bGxSZXF1ZXN0NDM5MzY1MDg5
310
add wikisql
{ "avatar_url": "https://avatars.githubusercontent.com/u/13795113?v=4", "events_url": "https://api.github.com/users/ghomasHudson/events{/privacy}", "followers_url": "https://api.github.com/users/ghomasHudson/followers", "following_url": "https://api.github.com/users/ghomasHudson/following{/other_user}", "gists_url": "https://api.github.com/users/ghomasHudson/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/ghomasHudson", "id": 13795113, "login": "ghomasHudson", "node_id": "MDQ6VXNlcjEzNzk1MTEz", "organizations_url": "https://api.github.com/users/ghomasHudson/orgs", "received_events_url": "https://api.github.com/users/ghomasHudson/received_events", "repos_url": "https://api.github.com/users/ghomasHudson/repos", "site_admin": false, "starred_url": "https://api.github.com/users/ghomasHudson/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ghomasHudson/subscriptions", "type": "User", "url": "https://api.github.com/users/ghomasHudson" }
[]
closed
false
null
[]
null
[ "That's great work @ghomasHudson !" ]
"2020-06-24T18:00:35Z"
"2020-06-25T12:32:25Z"
"2020-06-25T12:32:25Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/310.diff", "html_url": "https://github.com/huggingface/datasets/pull/310", "merged_at": "2020-06-25T12:32:25Z", "patch_url": "https://github.com/huggingface/datasets/pull/310.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/310" }
Adding the [WikiSQL](https://github.com/salesforce/WikiSQL) dataset. Interesting things to note: - I have copied the function (`_convert_to_human_readable`) which converts the SQL query to a human-readable string, since this is what most people will want when actually using this dataset for NLP applications. - `conds` was originally a tuple but is converted to a dictionary to support differing types. It would be nice to add the logical_form metrics too at some point.
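For illustration, once merged the human-readable query mentioned above should be reachable roughly as in the sketch below; the `sql` / `human_readable` field names follow the PR description and are not guaranteed to match the released version:

```python
# Sketch of inspecting the human-readable SQL string produced by _convert_to_human_readable.
from datasets import load_dataset

wikisql = load_dataset("wikisql", split="train")
print(wikisql[0]["sql"]["human_readable"])
```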
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/310/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/310/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/1554
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1554/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1554/comments
https://api.github.com/repos/huggingface/datasets/issues/1554/events
https://github.com/huggingface/datasets/pull/1554
765,675,148
MDExOlB1bGxSZXF1ZXN0NTM5MDMwNDU2
1,554
Opus CAPES added
{ "avatar_url": "https://avatars.githubusercontent.com/u/22396042?v=4", "events_url": "https://api.github.com/users/rkc007/events{/privacy}", "followers_url": "https://api.github.com/users/rkc007/followers", "following_url": "https://api.github.com/users/rkc007/following{/other_user}", "gists_url": "https://api.github.com/users/rkc007/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/rkc007", "id": 22396042, "login": "rkc007", "node_id": "MDQ6VXNlcjIyMzk2MDQy", "organizations_url": "https://api.github.com/users/rkc007/orgs", "received_events_url": "https://api.github.com/users/rkc007/received_events", "repos_url": "https://api.github.com/users/rkc007/repos", "site_admin": false, "starred_url": "https://api.github.com/users/rkc007/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rkc007/subscriptions", "type": "User", "url": "https://api.github.com/users/rkc007" }
[]
closed
false
null
[]
null
[ "@lhoestq I saw some common changes you made on the other PR's (Similar Opus Datasets). I fixed those changes here. Can you please review it once ? \r\nThanks.", "Hi @rkc007 , thanks for the contribution.\r\nUnfortunately, the CAPES dataset has already been added here: #1307\r\nI'm closing the PR ", "@lhoestq FYI" ]
"2020-12-13T22:11:34Z"
"2020-12-18T09:54:57Z"
"2020-12-18T08:46:59Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/1554.diff", "html_url": "https://github.com/huggingface/datasets/pull/1554", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/1554.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1554" }
Dataset : http://opus.nlpl.eu/CAPES.php
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1554/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1554/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/364
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/364/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/364/comments
https://api.github.com/repos/huggingface/datasets/issues/364/events
https://github.com/huggingface/datasets/pull/364
653,821,597
MDExOlB1bGxSZXF1ZXN0NDQ2NjY0NzM5
364
add MS MARCO dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/38249783?v=4", "events_url": "https://api.github.com/users/mariamabarham/events{/privacy}", "followers_url": "https://api.github.com/users/mariamabarham/followers", "following_url": "https://api.github.com/users/mariamabarham/following{/other_user}", "gists_url": "https://api.github.com/users/mariamabarham/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariamabarham", "id": 38249783, "login": "mariamabarham", "node_id": "MDQ6VXNlcjM4MjQ5Nzgz", "organizations_url": "https://api.github.com/users/mariamabarham/orgs", "received_events_url": "https://api.github.com/users/mariamabarham/received_events", "repos_url": "https://api.github.com/users/mariamabarham/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariamabarham/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariamabarham/subscriptions", "type": "User", "url": "https://api.github.com/users/mariamabarham" }
[]
closed
false
null
[]
null
[ "The dummy data for v2.1 is missing as far as I can see. I think running the dummy data command should work correctly here. ", "Also, it might be that the structure of the dummy data is wrong - looking at `generate_examples` the structure does not look too easy.", "The fact that the dummy data for v2.1 is missing shouldn't make the test fails I think. But as you mention the dummy data structure of v1.1 is wrong. I tried to rename files but it does not solve the issue.", "Is MS mARCO added to nlp library?I am not able to view it?", "> Is MS mARCO added to nlp library?I am not able to view it?\r\n\r\nHi @parthplc ,the PR is not merged yet. The dummy data structure is still failing. Maybe @patrickvonplaten can help with it.", "Dataset is fixed and should be ready for use. @mariamabarham @lhoestq feel free to merge whenever!", "> Dataset is fixed and should be ready for use. @mariamabarham @lhoestq feel free to merge whenever!\r\n\r\nthanks" ]
"2020-07-09T07:11:19Z"
"2020-08-06T06:15:49Z"
"2020-08-06T06:15:48Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/364.diff", "html_url": "https://github.com/huggingface/datasets/pull/364", "merged_at": "2020-08-06T06:15:48Z", "patch_url": "https://github.com/huggingface/datasets/pull/364.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/364" }
This PR adds the MS MARCO dataset as requested in issue #336. MS MARCO has multiple tasks, including: - Passage and Document Retrieval - Keyphrase Extraction - QA and NLG This PR only adds the 2 versions of the QA and NLG task dataset, which were released with the original paper here: https://arxiv.org/pdf/1611.09268.pdf Tests are failing because of the dummy data. I tried to fix it without success. Can you please have a look at it? @patrickvonplaten, @lhoestq
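Once this is merged, loading one of the two versions should look roughly like the sketch below; the config names `v1.1` and `v2.1` follow the PR description and are assumptions here:

```python
# Sketch, assuming the configs are exposed as "v1.1" and "v2.1" as described above.
from datasets import load_dataset

ms_marco = load_dataset("ms_marco", "v1.1", split="train")
print(ms_marco[0].keys())
```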
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/364/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/364/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/659
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/659/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/659/comments
https://api.github.com/repos/huggingface/datasets/issues/659/events
https://github.com/huggingface/datasets/pull/659
706,231,506
MDExOlB1bGxSZXF1ZXN0NDkwODE4NTY1
659
Keep new columns in transmit format
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
[]
"2020-09-22T09:47:23Z"
"2020-09-22T10:07:22Z"
"2020-09-22T10:07:20Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/659.diff", "html_url": "https://github.com/huggingface/datasets/pull/659", "merged_at": "2020-09-22T10:07:20Z", "patch_url": "https://github.com/huggingface/datasets/pull/659.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/659" }
When a dataset is formatted with a list of columns that `__getitem__` should return, calling `map` to add new columns doesn't add the new columns to this list. This caused `KeyError` issues in #620. I changed the logic so that new columns are added to the list of columns that `__getitem__` should return.
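A minimal sketch of the behaviour this change fixes, with illustrative column names (not taken from any real dataset):

```python
# Before this change, the newly created column would be dropped by __getitem__ because
# the format only listed "label"; with it, new columns join the format's column list.
from datasets import Dataset

ds = Dataset.from_dict({"text": ["foo", "bar"], "label": [0, 1]})
ds.set_format(columns=["label"])                         # __getitem__ returns only "label"
ds = ds.map(lambda ex: {"label_str": str(ex["label"])})  # adds a new column
print(ds[0])                                             # includes both "label" and "label_str" after the fix
```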
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/659/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/659/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/1985
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1985/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1985/comments
https://api.github.com/repos/huggingface/datasets/issues/1985/events
https://github.com/huggingface/datasets/pull/1985
822,170,651
MDExOlB1bGxSZXF1ZXN0NTg0ODM4NjIw
1,985
Optimize int precision
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
closed
false
null
[]
null
[ "@lhoestq, are the tests OK? Some other cases I missed? Do you agree with this approach?", "I just tested this and it works like a charm :) \r\n\r\nHowever tokenizing and then setting the format to \"torch\" to feed the tokens into a model doesn't seem to work anymore, since the pytorch tensors have the int32/int8 precisions instead of int64 that is required as model inputs.\r\n\r\nFor example:\r\n\r\n```python\r\nimport torch\r\nfrom datasets import Dataset\r\nfrom transformers import BertModel, BertTokenizer\r\n\r\ntorch.set_grad_enabled(False)\r\n\r\ntokenizer = BertTokenizer.from_pretrained(\"bert-base-uncased\")\r\nmodel = BertModel.from_pretrained(\"bert-base-uncased\")\r\n\r\ndataset = Dataset.from_dict({\"text\": [\"hello there !\"]})\r\ndataset = dataset.map(tokenizer, input_columns=\"text\", remove_columns=dataset.column_names)\r\ndataset = dataset.with_format(\"torch\")\r\n\r\nprint(dataset.features)\r\n# {'attention_mask': Sequence(feature=Value(dtype='int8', id=None), length=-1, id=None),\r\n# 'input_ids': Sequence(feature=Value(dtype='int64', id=None), length=-1, id=None), # this should be int32 though\r\n# 'token_type_ids': Sequence(feature=Value(dtype='int8', id=None), length=-1, id=None)}\r\n\r\nmodel(**dataset[:1])\r\n# RuntimeError: Expected tensor for argument #1 'indices' to have scalar type Long; but got torch.CharTensor instead (while checking arguments for embedding)\r\n\r\ndataset = dataset.with_format(\"torch\", dtype=torch.int64)\r\n\r\nmodel(**dataset[:1])\r\n# works as expected\r\n```\r\n\r\nPinging @sgugger here to make sure we take the right decision here.\r\n\r\nDo we want the \"torch\" format to always return int64 ? Or does it have to keep the precision defined by the `dataset.features` \r\n and therefore we would need to specify \"torch\" with `dtype=torch.int64` ?", "From a user perspective, I think it's fine if the \"torch\" format converts all ints types to `torch.int64` by default since it's what the model will need almost all the time. I don't see a case where you would want to keep the low precision at the top of my head, and one can always write a custom transform for an edge case.", "Sounds good to me !\r\nFor consistency maybe we should make the float precision fixed as well (float32, I guess)", "Yes, that would be the one used by default.", "Do we have the same requirements for TensorFlow?", "Yes I we should do the same for tensorflow as well since tf models would have the same issue\r\n\r\nThanks for adding this :)", "@lhoestq I think this PR is ready... :)" ]
"2021-03-04T14:12:23Z"
"2021-03-22T12:04:40Z"
"2021-03-16T09:44:00Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/1985.diff", "html_url": "https://github.com/huggingface/datasets/pull/1985", "merged_at": "2021-03-16T09:44:00Z", "patch_url": "https://github.com/huggingface/datasets/pull/1985.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1985" }
Optimize int precision to reduce dataset file size. Close #1973, close #1825, close #861.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 3, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 3, "url": "https://api.github.com/repos/huggingface/datasets/issues/1985/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1985/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/6394
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6394/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6394/comments
https://api.github.com/repos/huggingface/datasets/issues/6394/events
https://github.com/huggingface/datasets/issues/6394
1,985,947,116
I_kwDODunzps52XyXs
6,394
TorchFormatter images (H, W, C) instead of (C, H, W) format
{ "avatar_url": "https://avatars.githubusercontent.com/u/37351874?v=4", "events_url": "https://api.github.com/users/Modexus/events{/privacy}", "followers_url": "https://api.github.com/users/Modexus/followers", "following_url": "https://api.github.com/users/Modexus/following{/other_user}", "gists_url": "https://api.github.com/users/Modexus/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Modexus", "id": 37351874, "login": "Modexus", "node_id": "MDQ6VXNlcjM3MzUxODc0", "organizations_url": "https://api.github.com/users/Modexus/orgs", "received_events_url": "https://api.github.com/users/Modexus/received_events", "repos_url": "https://api.github.com/users/Modexus/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Modexus/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Modexus/subscriptions", "type": "User", "url": "https://api.github.com/users/Modexus" }
[]
open
false
null
[]
null
[ "Here's a PR for that. https://github.com/huggingface/datasets/pull/6402\r\n\r\nIt's not backward compatible, unfortunately. " ]
"2023-11-09T16:02:15Z"
"2023-11-11T19:41:03Z"
null
NONE
null
null
null
### Describe the bug Using .set_format("torch") leads to images having shape (H, W, C), the same as in numpy. However, pytorch normally uses (C, H, W) format. Maybe I'm missing something but this makes the format a lot less useful as I then have to permute it anyways. If not using the format it is possible to directly use torchvision transforms but any non-transformed value will not be a tensor. Is there a reason for this choice? ### Steps to reproduce the bug ```python from datasets import Dataset, Features, Audio, Image images = ["path/to/image.png"] * 10 features = Features({"image": Image()}) ds = Dataset.from_dict({"image": images}, features=features) ds = ds.with_format("torch") ds[0]["image"].shape ``` ```python torch.Size([512, 512, 4]) ``` ### Expected behavior ```python from datasets import Dataset, Features, Audio, Image images = ["path/to/image.png"] * 10 features = Features({"image": Image()}) ds = Dataset.from_dict({"image": images}, features=features) ds = ds.with_format("torch") ds[0]["image"].shape ``` ```python torch.Size([4, 512, 512]) ``` ### Environment info - `datasets` version: 2.14.6 - Platform: Linux-6.5.9-100.fc37.x86_64-x86_64-with-glibc2.31 - Python version: 3.11.6 - Huggingface_hub version: 0.18.0 - PyArrow version: 14.0.1 - Pandas version: 2.1.2
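In the meantime, one way to get channel-first tensors without changing the formatter is to permute after indexing; a self-contained sketch that writes a tiny throwaway image so it can run as-is:

```python
# Sketch: build a tiny PNG on the fly, load it through the Image feature with the
# torch format, and convert (H, W, C) -> (C, H, W) by hand.
import numpy as np
from PIL import Image as PILImage
from datasets import Dataset, Features, Image

PILImage.fromarray(np.zeros((8, 8, 3), dtype=np.uint8)).save("tiny.png")

ds = Dataset.from_dict({"image": ["tiny.png"]}, features=Features({"image": Image()}))
ds = ds.with_format("torch")

img = ds[0]["image"]               # shape (8, 8, 3) today, as described above
print(img.permute(2, 0, 1).shape)  # (3, 8, 8) for torchvision / PyTorch models
```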
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6394/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6394/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6486
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6486/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6486/comments
https://api.github.com/repos/huggingface/datasets/issues/6486/events
https://github.com/huggingface/datasets/pull/6486
2,035,206,206
PR_kwDODunzps5hqCSc
6,486
Fix docs phrasing about supported formats when sharing a dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6486). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005042 / 0.011353 (-0.006311) | 0.003452 / 0.011008 (-0.007557) | 0.061845 / 0.038508 (0.023337) | 0.052042 / 0.023109 (0.028933) | 0.241791 / 0.275898 (-0.034107) | 0.264639 / 0.323480 (-0.058841) | 0.003940 / 0.007986 (-0.004045) | 0.002768 / 0.004328 (-0.001560) | 0.047851 / 0.004250 (0.043600) | 0.037599 / 0.037052 (0.000547) | 0.251462 / 0.258489 (-0.007028) | 0.274737 / 0.293841 (-0.019104) | 0.027723 / 0.128546 (-0.100823) | 0.010510 / 0.075646 (-0.065137) | 0.205581 / 0.419271 (-0.213691) | 0.035504 / 0.043533 (-0.008029) | 0.242380 / 0.255139 (-0.012759) | 0.259791 / 0.283200 (-0.023409) | 0.017752 / 0.141683 (-0.123931) | 1.089289 / 1.452155 (-0.362865) | 1.161958 / 1.492716 (-0.330759) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.094288 / 0.018006 (0.076282) | 0.303253 / 0.000490 (0.302763) | 0.000216 / 0.000200 (0.000016) | 0.000042 / 0.000054 (-0.000012) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018496 / 0.037411 (-0.018915) | 0.060411 / 0.014526 (0.045885) | 0.074294 / 0.176557 (-0.102262) | 0.122934 / 0.737135 (-0.614201) | 0.074710 / 0.296338 (-0.221629) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.286394 / 0.215209 (0.071185) | 2.806145 / 2.077655 (0.728490) | 1.497071 / 1.504120 (-0.007049) | 1.362254 / 1.541195 (-0.178940) | 1.389642 / 1.468490 (-0.078848) | 0.554503 / 4.584777 (-4.030274) | 2.348029 / 3.745712 (-1.397684) | 2.780862 / 5.269862 (-2.489000) | 1.728058 / 4.565676 (-2.837619) | 0.062617 / 0.424275 (-0.361658) | 0.004901 / 0.007607 (-0.002707) | 0.346267 / 0.226044 (0.120223) | 3.363744 / 2.268929 (1.094815) | 1.826994 / 55.444624 (-53.617630) | 1.560656 / 6.876477 (-5.315820) | 1.561083 / 2.142072 (-0.580990) | 0.643395 / 4.805227 (-4.161832) | 0.116206 / 6.500664 (-6.384458) | 0.042008 / 0.075469 (-0.033461) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.953416 / 1.841788 (-0.888371) | 11.461665 / 8.074308 (3.387357) | 10.623865 / 10.191392 (0.432473) | 0.128071 / 0.680424 (-0.552353) | 0.014277 / 0.534201 (-0.519924) | 0.288810 / 0.579283 (-0.290474) | 0.267575 / 0.434364 (-0.166788) | 0.327422 / 0.540337 (-0.212916) | 0.435151 / 1.386936 (-0.951785) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005242 / 0.011353 (-0.006111) | 0.003515 / 0.011008 (-0.007493) | 0.048483 / 0.038508 (0.009975) | 0.051684 / 0.023109 (0.028575) | 0.276564 / 0.275898 (0.000666) | 0.297582 / 0.323480 (-0.025898) | 0.004117 / 0.007986 (-0.003869) | 0.002610 / 0.004328 (-0.001719) | 0.047811 / 0.004250 (0.043561) | 0.040622 / 0.037052 (0.003569) | 0.280265 / 0.258489 (0.021776) | 0.311719 / 0.293841 (0.017878) | 0.028811 / 0.128546 (-0.099735) | 0.010600 / 0.075646 (-0.065047) | 0.056660 / 0.419271 (-0.362611) | 0.032638 / 0.043533 (-0.010894) | 0.276434 / 0.255139 (0.021295) | 0.299095 / 0.283200 (0.015896) | 0.018483 / 0.141683 (-0.123200) | 1.156382 / 1.452155 (-0.295773) | 1.252205 / 1.492716 (-0.240511) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| 
new / old (diff) | 0.097868 / 0.018006 (0.079862) | 0.309438 / 0.000490 (0.308948) | 0.000229 / 0.000200 (0.000029) | 0.000043 / 0.000054 (-0.000011) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.021838 / 0.037411 (-0.015573) | 0.068358 / 0.014526 (0.053832) | 0.080432 / 0.176557 (-0.096125) | 0.119788 / 0.737135 (-0.617348) | 0.081742 / 0.296338 (-0.214597) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.301239 / 0.215209 (0.086030) | 2.962242 / 2.077655 (0.884587) | 1.693918 / 1.504120 (0.189798) | 1.573663 / 1.541195 (0.032468) | 1.583125 / 1.468490 (0.114635) | 0.557267 / 4.584777 (-4.027510) | 2.440048 / 3.745712 (-1.305664) | 2.727572 / 5.269862 (-2.542290) | 1.713557 / 4.565676 (-2.852120) | 0.062526 / 0.424275 (-0.361749) | 0.004982 / 0.007607 (-0.002625) | 0.353850 / 0.226044 (0.127806) | 3.530887 / 2.268929 (1.261958) | 2.047864 / 55.444624 (-53.396761) | 1.770776 / 6.876477 (-5.105701) | 1.757621 / 2.142072 (-0.384451) | 0.633847 / 4.805227 (-4.171381) | 0.114055 / 6.500664 (-6.386609) | 0.040078 / 0.075469 (-0.035391) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.983721 / 1.841788 (-0.858066) | 11.896537 / 8.074308 (3.822229) | 10.529883 / 10.191392 (0.338491) | 0.129593 / 0.680424 (-0.550831) | 0.016213 / 0.534201 (-0.517988) | 0.289623 / 0.579283 (-0.289660) | 0.280073 / 0.434364 (-0.154291) | 0.327446 / 0.540337 (-0.212892) | 0.574847 / 1.386936 (-0.812089) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#2684a98fe38e0c87bb11e050586004108e32b79d \"CML watermark\")\n" ]
"2023-12-11T09:21:22Z"
"2023-12-13T14:21:29Z"
"2023-12-13T14:15:21Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/6486.diff", "html_url": "https://github.com/huggingface/datasets/pull/6486", "merged_at": "2023-12-13T14:15:21Z", "patch_url": "https://github.com/huggingface/datasets/pull/6486.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6486" }
Fix docs phrasing.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6486/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6486/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/4124
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4124/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4124/comments
https://api.github.com/repos/huggingface/datasets/issues/4124/events
https://github.com/huggingface/datasets/issues/4124
1,196,469,842
I_kwDODunzps5HUK5S
4,124
Image decoding often fails when transforming Image datasets
{ "avatar_url": "https://avatars.githubusercontent.com/u/17025191?v=4", "events_url": "https://api.github.com/users/RafayAK/events{/privacy}", "followers_url": "https://api.github.com/users/RafayAK/followers", "following_url": "https://api.github.com/users/RafayAK/following{/other_user}", "gists_url": "https://api.github.com/users/RafayAK/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/RafayAK", "id": 17025191, "login": "RafayAK", "node_id": "MDQ6VXNlcjE3MDI1MTkx", "organizations_url": "https://api.github.com/users/RafayAK/orgs", "received_events_url": "https://api.github.com/users/RafayAK/received_events", "repos_url": "https://api.github.com/users/RafayAK/repos", "site_admin": false, "starred_url": "https://api.github.com/users/RafayAK/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/RafayAK/subscriptions", "type": "User", "url": "https://api.github.com/users/RafayAK" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" } ]
null
[ "A quick hack I have found is that we can call the image first before running the transforms and it makes sure the image is decoded before being passed on.\r\n\r\nFor this I just needed to add `example['img'] = example['img']` to the top of my `generate_flipped_data` function, defined above, so that image decode in invoked.\r\n\r\nAfter this minor change this function works:\r\n```python\r\ndef generate_flipped_data(example, p=0.5):\r\n \"\"\"\r\n A Dataset mapping functions that transforms some of the image up-side-down.\r\n If the probability value (p) is 0.5 approximately half the images will be flipped upside-down\r\n Args:\r\n example: An example from the dataset containing a Python dictionary with \"img\" and \"is_flipped\" key-value pair\r\n p: probability of flipping the image up-side-down, Default 0.5\r\n\r\n Returns:\r\n example: A Dataset object\r\n\r\n \"\"\"\r\n example['img'] = example['img'] # <<< This is the only change\r\n if rng.random() > p: # the flip the image and set is_flipped column to 1\r\n example['img'] = example['img'].transpose(\r\n 1) # ImageOps.flip(example['img']) #example['img'].transpose(Image.FLIP_TOP_BOTTOM)\r\n example['is_flipped'] = 1\r\n\r\n return example\r\n```", "Hi @RafayAK, thanks for reporting.\r\n\r\nCurrent implementation of the Image feature performs the decoding only if the \"img\" field is accessed by the mapped function.\r\n\r\nIn your original `generate_flipped_data` function:\r\n- it only accesses the \"img\" field (and thus performs decoding) if `rng.random() > p`;\r\n- on the other hand, for the cases where `rng.random() <= p`, the \"img\" field is not accessed and thus no decoding is performed for those examples\r\n\r\nBy adding the code line `example['img'] = example['img']`, you make sure the \"img\" field is accessed in all cases, and the decoding is done for all examples.\r\n\r\nAlso note that there is a little bug in your implementation: `p` is not the probability of flipping, but the probability of not-flipping; the larger is `p`, the smaller is the probability of flipping.\r\n\r\nSome refactoring (fixing also `p`):\r\n```python\r\ndef generate_flipped_data(example, p=0.5):\r\n \"\"\"\r\n A Dataset mapping functions that transforms some of the image up-side-down.\r\n If the probability value (p) is 0.5 approximately half the images will be flipped upside-down.\r\n\r\n Args:\r\n example: An example from the dataset containing a Python dictionary with \"img\" and \"is_flipped\" key-value pair\r\n p: probability of flipping the image up-side-down, Default 0.5\r\n\r\n Returns:\r\n example: A Dataset object\r\n\r\n \"\"\"\r\n do_flip = rng.random() < p # Note the \"<\" sign here instead of \">\"\r\n example['img'] = example['img'].transpose(1) if do_flip else example['img'] # Note \"img\" is always accessed\r\n example['is_flipped'] = 1 if do_flip else 0\r\n return example", "@albertvillanova Thanks for letting me know this is intended behavior. The docs are severely lacking on this, if I hadn't posted this here I would have never found out how I'm actually supposed to modify images in a Dataset object.", "@albertvillanova Secondly if you check the error message it shows that around 1999 images were successfully created, I'm pretty sure some of them were also flipped during the process. Back to my main contention, sometimes the decoding takes place other times it fails. \r\n\r\nI suppose to run `map` on any dataset all the examples should be invoked even if on some of them we end up doing nothing, is that right?", "Hi @RafayAK! 
I've opened a PR with the fix, which adds a fallback to reattempt casting to PyArrow format with a more robust (but more expensive) procedure if the first attempt fails. Feel free to test it by installing `datasets` from the PR branch with the following command:\r\n```\r\npip install git+https://github.com/huggingface/datasets.git@fix-4124\r\n```", "@mariosasko I'll try this right away and report back.", "@mariosasko Thanks a lot for looking into this, now the `map` function at least behaves as one would expect a function to behave. \r\n\r\nLooking forward to exploring Hugging Face more and even contributing 😃.\r\n\r\n```bash\r\n $ conda list | grep datasets\r\ndatasets 2.0.1.dev0 pypi_0 pypi\r\n\r\n```\r\n\r\n```python\r\ndef preprocess_data(dataset):\r\n \"\"\"\r\n Helper funtion to pre-process HuggingFace Cifar-100 Dataset to remove fine_label and coarse_label columns and\r\n add is_flipped column\r\n Args:\r\n dataset: HuggingFace CIFAR-100 Dataset Object\r\n\r\n Returns:\r\n new_dataset: A Dataset object with \"img\" and \"is_flipped\" columns only\r\n\r\n \"\"\"\r\n # remove fine_label and coarse_label columns\r\n new_dataset = dataset.remove_columns(['fine_label', 'coarse_label'])\r\n # add the column for is_flipped\r\n new_dataset = new_dataset.add_column(name=\"is_flipped\", column=np.zeros((len(new_dataset)), dtype=np.uint8))\r\n\r\n return new_dataset\r\n\r\n\r\ndef generate_flipped_data(example, p=0.5):\r\n \"\"\"\r\n A Dataset mapping functions that transforms some of the image up-side-down.\r\n If the probability value (p) is 0.5 approximately half the images will be flipped upside-down\r\n Args:\r\n example: An example from the dataset containing a Python dictionary with \"img\" and \"is_flipped\" key-value pair\r\n p: probability of flipping the image up-side-down, Default 0.5\r\n\r\n Returns:\r\n example: A Dataset object\r\n\r\n \"\"\"\r\n # example['img'] = example['img']\r\n if rng.random() > p: # the flip the image and set is_flipped column to 1\r\n example['img'] = example['img'].transpose(\r\n 1) # ImageOps.flip(example['img']) #example['img'].transpose(Image.FLIP_TOP_BOTTOM)\r\n example['is_flipped'] = 1\r\n\r\n return example\r\n\r\nmy_test = preprocess_data(test_dataset)\r\nmy_test = my_test.map(generate_flipped_data)\r\n```\r\n\r\nThe output now show the function was applied successfully:\r\n``` bash\r\n/home/rafay/anaconda3/envs/pytorch_new/bin/python /home/rafay/Documents/you_only_live_once/upside_down_detector/create_dataset.py\r\nDownloading builder script: 5.61kB [00:00, 3.16MB/s] \r\nDownloading metadata: 4.21kB [00:00, 2.56MB/s] \r\nReusing dataset cifar100 (/home/rafay/.cache/huggingface/datasets/cifar100/cifar100/1.0.0/f365c8b725c23e8f0f8d725c3641234d9331cd2f62919d1381d1baa5b3ba3142)\r\nReusing dataset cifar100 (/home/rafay/.cache/huggingface/datasets/cifar100/cifar100/1.0.0/f365c8b725c23e8f0f8d725c3641234d9331cd2f62919d1381d1baa5b3ba3142)\r\n100%|██████████| 10000/10000 [00:01<00:00, 5149.15ex/s]\r\n```\r\n" ]
"2022-04-07T19:17:25Z"
"2022-04-13T14:01:16Z"
"2022-04-13T14:01:16Z"
NONE
null
null
null
## Describe the bug When transforming/modifying images in an image dataset using the `map` function the PIL images often fail to decode in time for the image transforms, causing errors. Using a debugger it is easy to see what the problem is, the Image decode invocation does not take place and the resulting image passed around is still raw bytes: ``` [{'bytes': b'\x89PNG\r\n\x1a\n\x00\x00\x00\rIHDR\x00\x00\x00 \x00\x00\x00 \x08\x02\x00\x00\x00\xfc\x18\xed\xa3\x00\x00\x08\x02IDATx\x9cEVIs[\xc7\x11\xeemf\xde\x82\x8d\x80\x08\x89"\xb5V\\\xb6\x94(\xe5\x9f\x90\xca5\x7f$\xa7T\xe5\x9f&9\xd9\x8a\\.\xdb\xa4$J\xa4\x00\x02x\xc0{\xb3t\xe7\x00\xca\x99\xd3\\f\xba\xba\xbf\xa5?|\xfa\xf4\xa2\xeb\xba\xedv\xa3f^\xf8\xd5\x0bY\xb6\x10\xb3\xaaDq\xcd\x83\x87\xdf5\xf3gZ\x1a\x04\x0f\xa0fp\xfa\xe0\xd4\x07?\x9dN\xc4\xb1\x99\xfd\xf2\xcb/\x97\x97\x97H\xa2\xaaf\x16\x82\xaf\xeb\xca{\xbf\xd9l.\xdf\x7f\xfa\xcb_\xff&\x88\x08\x00\x80H\xc0\x80@.;\x0f\x8c@#v\xe3\xe5\xfc\xd1\x9f\xee6q\xbf\xdf\xa6\x14\'\x93\xf1\xc3\xe5\xe3\xd1x\x14c\x8c1\xa5\x1c\x9dsM\xd3\xb4\xed\x08\x89SJ)\xa5\xedv\xbb^\xafNO\x97D\x84Hf .... ``` ## Steps to reproduce the bug ```python from datasets import load_dataset, Dataset import numpy as np # seeded NumPy random number generator for reprodducinble results. rng = np.random.default_rng(seed=0) test_dataset = load_dataset('cifar100', split="test") def preprocess_data(dataset): """ Helper function to pre-process HuggingFace Cifar-100 Dataset to remove fine_label and coarse_label columns and add is_flipped column Args: dataset: HuggingFace CIFAR-100 Dataset Object Returns: new_dataset: A Dataset object with "img" and "is_flipped" columns only """ # remove fine_label and coarse_label columns new_dataset = dataset.remove_columns(['fine_label', 'coarse_label']) # add the column for is_flipped new_dataset = new_dataset.add_column(name="is_flipped", column=np.zeros((len(new_dataset)), dtype=np.uint8)) return new_dataset def generate_flipped_data(example, p=0.5): """ A Dataset mapping function that transforms some of the images up-side-down. If the probability value (p) is 0.5 approximately half the images will be flipped upside-down Args: example: An example from the dataset containing a Python dictionary with "img" and "is_flipped" key-value pair p: the probability of flipping the image up-side-down, Default 0.5 Returns: example: A Dataset object """ # example['img'] = example['img'] if rng.random() > p: # the flip the image and set is_flipped column to 1 example['img'] = example['img'].transpose( 1) # ImageOps.flip(example['img']) #example['img'].transpose(Image.FLIP_TOP_BOTTOM) example['is_flipped'] = 1 return example my_test = preprocess_data(test_dataset) my_test = my_test.map(generate_flipped_data) ``` ## Expected results The dataset should be transformed without problems. 
## Actual results ``` /home/rafay/anaconda3/envs/pytorch_new/bin/python /home/rafay/Documents/you_only_live_once/upside_down_detector/create_dataset.py Reusing dataset cifar100 (/home/rafay/.cache/huggingface/datasets/cifar100/cifar100/1.0.0/f365c8b725c23e8f0f8d725c3641234d9331cd2f62919d1381d1baa5b3ba3142) Reusing dataset cifar100 (/home/rafay/.cache/huggingface/datasets/cifar100/cifar100/1.0.0/f365c8b725c23e8f0f8d725c3641234d9331cd2f62919d1381d1baa5b3ba3142) 20%|█▉ | 1999/10000 [00:00<00:01, 5560.44ex/s] Traceback (most recent call last): File "/home/rafay/anaconda3/envs/pytorch_new/lib/python3.10/site-packages/datasets/arrow_dataset.py", line 2326, in _map_single writer.write(example) File "/home/rafay/anaconda3/envs/pytorch_new/lib/python3.10/site-packages/datasets/arrow_writer.py", line 441, in write self.write_examples_on_file() File "/home/rafay/anaconda3/envs/pytorch_new/lib/python3.10/site-packages/datasets/arrow_writer.py", line 399, in write_examples_on_file self.write_batch(batch_examples=batch_examples) File "/home/rafay/anaconda3/envs/pytorch_new/lib/python3.10/site-packages/datasets/arrow_writer.py", line 492, in write_batch arrays.append(pa.array(typed_sequence)) File "pyarrow/array.pxi", line 230, in pyarrow.lib.array File "pyarrow/array.pxi", line 110, in pyarrow.lib._handle_arrow_array_protocol File "/home/rafay/anaconda3/envs/pytorch_new/lib/python3.10/site-packages/datasets/arrow_writer.py", line 185, in __arrow_array__ out = pa.array(cast_to_python_objects(data, only_1d_for_numpy=True)) File "pyarrow/array.pxi", line 316, in pyarrow.lib.array File "pyarrow/array.pxi", line 39, in pyarrow.lib._sequence_to_array File "pyarrow/error.pxi", line 143, in pyarrow.lib.pyarrow_internal_check_status File "pyarrow/error.pxi", line 99, in pyarrow.lib.check_status pyarrow.lib.ArrowInvalid: Could not convert <PIL.Image.Image image mode=RGB size=32x32 at 0x7F56AEE61DE0> with type Image: did not recognize Python value type when inferring an Arrow data type During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/home/rafay/Documents/you_only_live_once/upside_down_detector/create_dataset.py", line 55, in <module> my_test = my_test.map(generate_flipped_data) File "/home/rafay/anaconda3/envs/pytorch_new/lib/python3.10/site-packages/datasets/arrow_dataset.py", line 1953, in map return self._map_single( File "/home/rafay/anaconda3/envs/pytorch_new/lib/python3.10/site-packages/datasets/arrow_dataset.py", line 519, in wrapper out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs) File "/home/rafay/anaconda3/envs/pytorch_new/lib/python3.10/site-packages/datasets/arrow_dataset.py", line 486, in wrapper out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs) File "/home/rafay/anaconda3/envs/pytorch_new/lib/python3.10/site-packages/datasets/fingerprint.py", line 458, in wrapper out = func(self, *args, **kwargs) File "/home/rafay/anaconda3/envs/pytorch_new/lib/python3.10/site-packages/datasets/arrow_dataset.py", line 2360, in _map_single writer.finalize() File "/home/rafay/anaconda3/envs/pytorch_new/lib/python3.10/site-packages/datasets/arrow_writer.py", line 522, in finalize self.write_examples_on_file() File "/home/rafay/anaconda3/envs/pytorch_new/lib/python3.10/site-packages/datasets/arrow_writer.py", line 399, in write_examples_on_file self.write_batch(batch_examples=batch_examples) File "/home/rafay/anaconda3/envs/pytorch_new/lib/python3.10/site-packages/datasets/arrow_writer.py", line 492, in write_batch 
arrays.append(pa.array(typed_sequence)) File "pyarrow/array.pxi", line 230, in pyarrow.lib.array File "pyarrow/array.pxi", line 110, in pyarrow.lib._handle_arrow_array_protocol File "/home/rafay/anaconda3/envs/pytorch_new/lib/python3.10/site-packages/datasets/arrow_writer.py", line 185, in __arrow_array__ out = pa.array(cast_to_python_objects(data, only_1d_for_numpy=True)) File "pyarrow/array.pxi", line 316, in pyarrow.lib.array File "pyarrow/array.pxi", line 39, in pyarrow.lib._sequence_to_array File "pyarrow/error.pxi", line 143, in pyarrow.lib.pyarrow_internal_check_status File "pyarrow/error.pxi", line 99, in pyarrow.lib.check_status pyarrow.lib.ArrowInvalid: Could not convert <PIL.Image.Image image mode=RGB size=32x32 at 0x7F56AEE61DE0> with type Image: did not recognize Python value type when inferring an Arrow data type Process finished with exit code 1 ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 2.0.0 - Platform: Linux(Fedora 35) - Python version: 3.10 - PyArrow version: 7.0.0
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4124/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4124/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/5322
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5322/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5322/comments
https://api.github.com/repos/huggingface/datasets/issues/5322/events
https://github.com/huggingface/datasets/pull/5322
1,471,502,162
PR_kwDODunzps5EEeQP
5,322
Raise error for `.tar` archives in the same way as for `.tar.gz` and `.tgz` in `_get_extraction_protocol`
{ "avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4", "events_url": "https://api.github.com/users/polinaeterna/events{/privacy}", "followers_url": "https://api.github.com/users/polinaeterna/followers", "following_url": "https://api.github.com/users/polinaeterna/following{/other_user}", "gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/polinaeterna", "id": 16348744, "login": "polinaeterna", "node_id": "MDQ6VXNlcjE2MzQ4NzQ0", "organizations_url": "https://api.github.com/users/polinaeterna/orgs", "received_events_url": "https://api.github.com/users/polinaeterna/received_events", "repos_url": "https://api.github.com/users/polinaeterna/repos", "site_admin": false, "starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions", "type": "User", "url": "https://api.github.com/users/polinaeterna" }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
"2022-12-01T15:19:28Z"
"2022-12-14T16:37:16Z"
"2022-12-14T16:33:30Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/5322.diff", "html_url": "https://github.com/huggingface/datasets/pull/5322", "merged_at": "2022-12-14T16:33:30Z", "patch_url": "https://github.com/huggingface/datasets/pull/5322.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5322" }
Currently `download_and_extract` doesn't throw an error when it is used with `.tar` files in streaming mode, because `_get_extraction_protocol` doesn't raise for them (as it does for `.tar.gz` and `.tgz`). Instead, `_get_extraction_protocol` returns a formatted URL as if a tar protocol were supported, but it isn't. As a result, dataset scripts attempt to load the `.tar` files and fail during example generation (after `download_and_extract` has already run). So this PR raises an error for `.tar` files too.
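A rough sketch of the kind of guard described above (the function body is illustrative, not the exact code merged in the PR):

```python
# Illustrative sketch only: treat bare ".tar" like ".tar.gz"/".tgz" and fail fast
# in streaming mode instead of silently returning a chained URL.
def _get_extraction_protocol_sketch(urlpath: str):
    path = urlpath.split("::")[0]
    if path.endswith((".tar.gz", ".tgz", ".tar")):
        raise NotImplementedError(
            f"Extraction protocol for TAR archives like '{urlpath}' is not implemented "
            "in streaming mode. Please use `dl_manager.iter_archive` instead."
        )
    # other extensions would map to fsspec compression / chained protocols here
    return None
```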
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5322/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5322/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/5747
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5747/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5747/comments
https://api.github.com/repos/huggingface/datasets/issues/5747/events
https://github.com/huggingface/datasets/pull/5747
1,667,270,412
PR_kwDODunzps5ORtBF
5,747
[WIP] Add Dataset.to_spark
{ "avatar_url": "https://avatars.githubusercontent.com/u/106995444?v=4", "events_url": "https://api.github.com/users/maddiedawson/events{/privacy}", "followers_url": "https://api.github.com/users/maddiedawson/followers", "following_url": "https://api.github.com/users/maddiedawson/following{/other_user}", "gists_url": "https://api.github.com/users/maddiedawson/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/maddiedawson", "id": 106995444, "login": "maddiedawson", "node_id": "U_kgDOBmCe9A", "organizations_url": "https://api.github.com/users/maddiedawson/orgs", "received_events_url": "https://api.github.com/users/maddiedawson/received_events", "repos_url": "https://api.github.com/users/maddiedawson/repos", "site_admin": false, "starred_url": "https://api.github.com/users/maddiedawson/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/maddiedawson/subscriptions", "type": "User", "url": "https://api.github.com/users/maddiedawson" }
[]
open
false
null
[]
null
[]
"2023-04-13T23:20:03Z"
"2023-05-05T12:31:10Z"
null
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/5747.diff", "html_url": "https://github.com/huggingface/datasets/pull/5747", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/5747.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5747" }
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5747/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5747/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/5746
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5746/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5746/comments
https://api.github.com/repos/huggingface/datasets/issues/5746/events
https://github.com/huggingface/datasets/pull/5746
1,667,102,459
PR_kwDODunzps5ORIUU
5,746
Fix link in docs
{ "avatar_url": "https://avatars.githubusercontent.com/u/7485661?v=4", "events_url": "https://api.github.com/users/bbbxyz/events{/privacy}", "followers_url": "https://api.github.com/users/bbbxyz/followers", "following_url": "https://api.github.com/users/bbbxyz/following{/other_user}", "gists_url": "https://api.github.com/users/bbbxyz/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/bbbxyz", "id": 7485661, "login": "bbbxyz", "node_id": "MDQ6VXNlcjc0ODU2NjE=", "organizations_url": "https://api.github.com/users/bbbxyz/orgs", "received_events_url": "https://api.github.com/users/bbbxyz/received_events", "repos_url": "https://api.github.com/users/bbbxyz/repos", "site_admin": false, "starred_url": "https://api.github.com/users/bbbxyz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/bbbxyz/subscriptions", "type": "User", "url": "https://api.github.com/users/bbbxyz" }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006461 / 0.011353 (-0.004892) | 0.004671 / 0.011008 (-0.006337) | 0.097329 / 0.038508 (0.058821) | 0.028380 / 0.023109 (0.005270) | 0.369892 / 0.275898 (0.093994) | 0.398244 / 0.323480 (0.074764) | 0.004795 / 0.007986 (-0.003190) | 0.004866 / 0.004328 (0.000538) | 0.075060 / 0.004250 (0.070809) | 0.035678 / 0.037052 (-0.001374) | 0.372197 / 0.258489 (0.113708) | 0.407509 / 0.293841 (0.113668) | 0.031557 / 0.128546 (-0.096989) | 0.011608 / 0.075646 (-0.064038) | 0.325467 / 0.419271 (-0.093805) | 0.042590 / 0.043533 (-0.000943) | 0.373738 / 0.255139 (0.118599) | 0.395793 / 0.283200 (0.112593) | 0.082335 / 0.141683 (-0.059348) | 1.471582 / 1.452155 (0.019427) | 1.535834 / 1.492716 (0.043117) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.192432 / 0.018006 (0.174426) | 0.404423 / 0.000490 (0.403933) | 0.003252 / 0.000200 (0.003052) | 0.000073 / 0.000054 (0.000018) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025312 / 0.037411 (-0.012099) | 0.099964 / 0.014526 (0.085438) | 0.108779 / 0.176557 (-0.067777) | 0.170438 / 0.737135 (-0.566697) | 0.110116 / 0.296338 (-0.186223) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.420402 / 0.215209 (0.205193) | 4.179142 / 2.077655 (2.101487) | 
1.858114 / 1.504120 (0.353994) | 1.674452 / 1.541195 (0.133257) | 1.697839 / 1.468490 (0.229349) | 0.694707 / 4.584777 (-3.890070) | 3.394321 / 3.745712 (-0.351391) | 1.918437 / 5.269862 (-3.351425) | 1.277954 / 4.565676 (-3.287723) | 0.082357 / 0.424275 (-0.341918) | 0.012206 / 0.007607 (0.004598) | 0.522093 / 0.226044 (0.296049) | 5.239604 / 2.268929 (2.970675) | 2.347764 / 55.444624 (-53.096860) | 1.996864 / 6.876477 (-4.879613) | 2.050820 / 2.142072 (-0.091253) | 0.806110 / 4.805227 (-3.999118) | 0.151061 / 6.500664 (-6.349603) | 0.066438 / 0.075469 (-0.009031) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.211233 / 1.841788 (-0.630554) | 14.054422 / 8.074308 (5.980114) | 14.110141 / 10.191392 (3.918749) | 0.129962 / 0.680424 (-0.550462) | 0.017271 / 0.534201 (-0.516930) | 0.386410 / 0.579283 (-0.192873) | 0.392648 / 0.434364 (-0.041716) | 0.444940 / 0.540337 (-0.095398) | 0.533535 / 1.386936 (-0.853401) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006865 / 0.011353 (-0.004488) | 0.004662 / 0.011008 (-0.006346) | 0.077837 / 0.038508 (0.039329) | 0.028258 / 0.023109 (0.005149) | 0.346136 / 0.275898 (0.070238) | 0.380414 / 0.323480 (0.056934) | 0.005039 / 0.007986 (-0.002947) | 0.004967 / 0.004328 (0.000638) | 0.077774 / 0.004250 (0.073523) | 0.037504 / 0.037052 (0.000452) | 0.341550 / 0.258489 (0.083061) | 0.382494 / 0.293841 (0.088653) | 0.031881 / 0.128546 (-0.096665) | 0.011746 / 0.075646 (-0.063901) | 0.087087 / 0.419271 (-0.332185) | 0.043108 / 0.043533 (-0.000425) | 0.344103 / 0.255139 (0.088964) | 0.366613 / 0.283200 (0.083413) | 0.090399 / 0.141683 (-0.051284) | 1.492675 / 1.452155 (0.040520) | 1.588666 / 1.492716 (0.095950) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.191859 / 0.018006 (0.173853) | 0.412514 / 0.000490 (0.412025) | 0.001953 / 0.000200 (0.001753) | 0.000084 / 0.000054 (0.000030) |\n\n### Benchmark: 
benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025159 / 0.037411 (-0.012252) | 0.100125 / 0.014526 (0.085599) | 0.106000 / 0.176557 (-0.070556) | 0.160710 / 0.737135 (-0.576425) | 0.110449 / 0.296338 (-0.185889) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.436636 / 0.215209 (0.221427) | 4.364597 / 2.077655 (2.286942) | 2.077492 / 1.504120 (0.573372) | 1.868248 / 1.541195 (0.327053) | 1.911218 / 1.468490 (0.442728) | 0.700306 / 4.584777 (-3.884471) | 3.385428 / 3.745712 (-0.360284) | 2.965384 / 5.269862 (-2.304478) | 1.522093 / 4.565676 (-3.043583) | 0.082805 / 0.424275 (-0.341470) | 0.012432 / 0.007607 (0.004825) | 0.538478 / 0.226044 (0.312433) | 5.383207 / 2.268929 (3.114278) | 2.525177 / 55.444624 (-52.919447) | 2.179632 / 6.876477 (-4.696845) | 2.280768 / 2.142072 (0.138695) | 0.805869 / 4.805227 (-3.999358) | 0.152716 / 6.500664 (-6.347948) | 0.067848 / 0.075469 (-0.007621) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.318899 / 1.841788 (-0.522889) | 14.416310 / 8.074308 (6.342002) | 14.172804 / 10.191392 (3.981412) | 0.141729 / 0.680424 (-0.538695) | 0.016785 / 0.534201 (-0.517416) | 0.378626 / 0.579283 (-0.200657) | 0.387153 / 0.434364 (-0.047211) | 0.439950 / 0.540337 (-0.100388) | 0.523958 / 1.386936 (-0.862978) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#7c3a9b057c476c40d157bd7a5d57f49066239df0 \"CML watermark\")\n" ]
"2023-04-13T20:45:19Z"
"2023-04-14T13:15:38Z"
"2023-04-14T13:08:42Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/5746.diff", "html_url": "https://github.com/huggingface/datasets/pull/5746", "merged_at": "2023-04-14T13:08:42Z", "patch_url": "https://github.com/huggingface/datasets/pull/5746.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5746" }
Fixes a broken link in the use_with_pytorch docs
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5746/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5746/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/4020
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4020/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4020/comments
https://api.github.com/repos/huggingface/datasets/issues/4020/events
https://github.com/huggingface/datasets/pull/4020
1,180,636,754
PR_kwDODunzps41Am4R
4,020
Replace amazon_polarity data URL
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
"2022-03-25T10:50:57Z"
"2022-03-25T15:02:36Z"
"2022-03-25T14:57:41Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/4020.diff", "html_url": "https://github.com/huggingface/datasets/pull/4020", "merged_at": "2022-03-25T14:57:41Z", "patch_url": "https://github.com/huggingface/datasets/pull/4020.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4020" }
I replaced the Google Drive URL of the dataset with the FastAI one, since we've had some issues with Google Drive.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4020/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4020/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/5006
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5006/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5006/comments
https://api.github.com/repos/huggingface/datasets/issues/5006/events
https://github.com/huggingface/datasets/pull/5006
1,380,968,395
PR_kwDODunzps4_Wm8z
5,006
Revert input_columns change
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "Merging this one and I'll check if it fixes the `transformers` CI before doing a patch release" ]
"2022-09-21T13:49:20Z"
"2022-09-21T14:14:33Z"
"2022-09-21T14:11:57Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/5006.diff", "html_url": "https://github.com/huggingface/datasets/pull/5006", "merged_at": "2022-09-21T14:11:57Z", "patch_url": "https://github.com/huggingface/datasets/pull/5006.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5006" }
Revert https://github.com/huggingface/datasets/pull/4971 Fix https://github.com/huggingface/datasets/issues/5005
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5006/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5006/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/4497
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4497/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4497/comments
https://api.github.com/repos/huggingface/datasets/issues/4497/events
https://github.com/huggingface/datasets/pull/4497
1,271,964,338
PR_kwDODunzps45sYns
4,497
Re-add download_manager module in utils
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "Thanks for the fix.\r\n\r\nI'm wondering how this fixes backward compatibility...\r\n\r\nExecuting this code:\r\n```python\r\nfrom datasets.utils.download_manager import DownloadMode\r\n```\r\nwe will have\r\n```python\r\nDownloadMode = None\r\n```\r\n\r\nIf afterwards we use something like:\r\n```python\r\nif download_mode == DownloadMode.FORCE_REDOWNLOAD\r\n```\r\nthat will raise an exception.", "It works fine on my side:\r\n```python\r\n>>> from datasets.utils.download_manager import DownloadMode\r\n>>> DownloadMode is not None\r\nTrue\r\n```", "As reported in https://github.com/huggingface/evaluate/pull/143\r\n```python\r\nfrom datasets.utils import DownloadConfig\r\n```\r\nis also missing, I'm re-adding it", "Took the liberty of merging this one, to do a patch release soon. If we think of a better approach we can improve it later" ]
"2022-06-15T09:44:33Z"
"2022-06-15T10:33:28Z"
"2022-06-15T10:23:44Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/4497.diff", "html_url": "https://github.com/huggingface/datasets/pull/4497", "merged_at": "2022-06-15T10:23:44Z", "patch_url": "https://github.com/huggingface/datasets/pull/4497.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4497" }
https://github.com/huggingface/datasets/pull/4384 moved `datasets.utils.download_manager` to `datasets.download.download_manager`. This breaks `evaluate`, which imports `DownloadMode` from `datasets.utils.download_manager`. This PR re-adds `datasets.utils.download_manager` without circular imports. We could also show a deprecation message when the old module is accessed, but I think we can do that in a subsequent PR and just focus on a patch release for now.
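A minimal sketch of what such a backward-compatibility shim can look like (illustrative; the actual module re-added by the PR may differ, especially in how it avoids circular imports):

```python
# datasets/utils/download_manager.py -- deprecated location kept for backward compatibility.
# Illustrative sketch: re-export the moved names so old imports keep working.
from datasets.download import (  # noqa: F401
    DownloadConfig,
    DownloadManager,
    DownloadMode,
)
```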
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4497/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4497/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/4577
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4577/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4577/comments
https://api.github.com/repos/huggingface/datasets/issues/4577/events
https://github.com/huggingface/datasets/pull/4577
1,285,703,775
PR_kwDODunzps46aTWL
4,577
Add authentication tip to `load_dataset`
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko" }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
"2022-06-27T12:05:34Z"
"2022-07-04T13:13:15Z"
"2022-07-04T13:01:30Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/4577.diff", "html_url": "https://github.com/huggingface/datasets/pull/4577", "merged_at": "2022-07-04T13:01:30Z", "patch_url": "https://github.com/huggingface/datasets/pull/4577.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4577" }
Add an authentication tip similar to the one in transformers' `PreTrainedModel.from_pretrained` to `load_dataset`/`load_dataset_builder`.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4577/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4577/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/2999
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2999/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2999/comments
https://api.github.com/repos/huggingface/datasets/issues/2999/events
https://github.com/huggingface/datasets/pull/2999
1,013,536,933
PR_kwDODunzps4skgCm
2,999
Set trivia_qa writer batch size
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
[]
"2021-10-01T16:23:26Z"
"2021-10-01T16:34:55Z"
"2021-10-01T16:34:55Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/2999.diff", "html_url": "https://github.com/huggingface/datasets/pull/2999", "merged_at": "2021-10-01T16:34:55Z", "patch_url": "https://github.com/huggingface/datasets/pull/2999.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2999" }
Save some RAM when generating trivia_qa
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2999/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2999/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/2148
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2148/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2148/comments
https://api.github.com/repos/huggingface/datasets/issues/2148/events
https://github.com/huggingface/datasets/issues/2148
844,700,910
MDU6SXNzdWU4NDQ3MDA5MTA=
2,148
Add configurable options to `seqeval` metric
{ "avatar_url": "https://avatars.githubusercontent.com/u/44571847?v=4", "events_url": "https://api.github.com/users/marrodion/events{/privacy}", "followers_url": "https://api.github.com/users/marrodion/followers", "following_url": "https://api.github.com/users/marrodion/following{/other_user}", "gists_url": "https://api.github.com/users/marrodion/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/marrodion", "id": 44571847, "login": "marrodion", "node_id": "MDQ6VXNlcjQ0NTcxODQ3", "organizations_url": "https://api.github.com/users/marrodion/orgs", "received_events_url": "https://api.github.com/users/marrodion/received_events", "repos_url": "https://api.github.com/users/marrodion/repos", "site_admin": false, "starred_url": "https://api.github.com/users/marrodion/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/marrodion/subscriptions", "type": "User", "url": "https://api.github.com/users/marrodion" }
[]
closed
false
null
[]
null
[ "Hi @marrodion. \r\n\r\nThanks for pointing this out. It would be great to incorporate this metric-specific enhancement.\r\n\r\nAnother possibility would be to require the user to input the scheme as a string `mode=\"strict\", scheme=\"IOB2\"` and then dynamically import the corresponding module using Python `importlib`:\r\n```python\r\nif scheme:\r\n scheme = importlib.import_module(f\"seqeval.scheme.{scheme}\")\r\n```\r\n\r\nFeel free to create a Pull Request to make this contribution." ]
"2021-03-30T15:04:06Z"
"2021-04-15T13:49:46Z"
"2021-04-15T13:49:46Z"
CONTRIBUTOR
null
null
null
Right now `load_metric("seqeval")` only works in the default mode of evaluation (equivalent to conll evaluation). However, seqeval library [supports](https://github.com/chakki-works/seqeval#support-features) different evaluation schemes (IOB1, IOB2, etc.), which can be plugged in just by supporting additional kwargs in `Seqeval._compute` https://github.com/huggingface/datasets/blob/85cf7ff920c90ca2e12bedca12b36d2a043c3da2/metrics/seqeval/seqeval.py#L109 Things that would be relevant are, for example, supporting `mode="strict", scheme=IOB2` to count only full entity match as a true positive and omit partial matches. The only problem I see is that the spirit of `metrics` seems to not require additional imports from user. `seqeval` only supports schemes as objects, without any string aliases. It can be solved naively with mapping like `{"IOB2": seqeval.scheme.IOB2}`. Or just left as is and require user to explicitly import scheme from `seqeval` if he wants to configure it past the default implementation. If that makes sense, I am happy to implement the change.
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 1, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 2, "url": "https://api.github.com/repos/huggingface/datasets/issues/2148/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2148/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/853
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/853/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/853/comments
https://api.github.com/repos/huggingface/datasets/issues/853/events
https://github.com/huggingface/datasets/issues/853
743,426,583
MDU6SXNzdWU3NDM0MjY1ODM=
853
concatenate_datasets support axis=0 or 1 ?
{ "avatar_url": "https://avatars.githubusercontent.com/u/12437751?v=4", "events_url": "https://api.github.com/users/renqingcolin/events{/privacy}", "followers_url": "https://api.github.com/users/renqingcolin/followers", "following_url": "https://api.github.com/users/renqingcolin/following{/other_user}", "gists_url": "https://api.github.com/users/renqingcolin/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/renqingcolin", "id": 12437751, "login": "renqingcolin", "node_id": "MDQ6VXNlcjEyNDM3NzUx", "organizations_url": "https://api.github.com/users/renqingcolin/orgs", "received_events_url": "https://api.github.com/users/renqingcolin/received_events", "repos_url": "https://api.github.com/users/renqingcolin/repos", "site_admin": false, "starred_url": "https://api.github.com/users/renqingcolin/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/renqingcolin/subscriptions", "type": "User", "url": "https://api.github.com/users/renqingcolin" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" }, { "color": "008672", "default": true, "description": "Extra attention is needed", "id": 1935892884, "name": "help wanted", "node_id": "MDU6TGFiZWwxOTM1ODkyODg0", "url": "https://api.github.com/repos/huggingface/datasets/labels/help%20wanted" }, { "color": "d876e3", "default": true, "description": "Further information is requested", "id": 1935892912, "name": "question", "node_id": "MDU6TGFiZWwxOTM1ODkyOTEy", "url": "https://api.github.com/repos/huggingface/datasets/labels/question" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" } ]
null
[ "Unfortunately `concatenate_datasets` only supports concatenating the rows, while what you want to achieve is concatenate the columns.\r\nCurrently to add more columns to a dataset, one must use `map`.\r\nWhat you can do is somehting like this:\r\n```python\r\n# suppose you have datasets d1, d2, d3\r\ndef add_columns(example, index):\r\n example.update(d2[index])\r\n example.update(d3[index])\r\n return example\r\n\r\nfull_dataset = d1.map(add_columns, with_indices=True)\r\n```", "Closing this one, feel free to re-open if you have other questions about this issue", "That's not really difficult to add, though, no?\r\nI think it can be done without copy.\r\nMaybe let's add it to the roadmap?", "Actually it's doable but requires to update the `Dataset._data_files` schema to support this.\r\nI'm re-opening this since we may want to add this in the future", "Hi @lhoestq, I would love to help and add this feature if still needed. My plan is to add an axis variable in the `concatenate_datasets` function in `arrow_dataset.py` and when that is set to 1 concatenate columns instead of rows. ", "Hi ! I would love to see this feature implemented as well :) Thank you for proposing your help !\r\n\r\nHere is a few things about the current implementation:\r\n- A dataset object is a wrapper of one `pyarrow.Table` that contains the data\r\n- Pyarrow offers an API that allows to transform Table objects. For example there are functions like `concat_tables`, `Table.rename_columns`, `Table.add_column` etc.\r\n\r\nTherefore adding columns from another dataset is possible thanks to the pyarrow API and in particular `Table.add_column` :) \r\n\r\nHowever this breaks some features we have regarding pickle. A dataset object can be pickled and unpickled without loading all the data in memory. It is useful for multiprocessing for example. Pickling a dataset object is possible thanks to the `Dataset._data_files` which defines the list of arrow files that will be used to form the final Table (basically all the data from each files are concatenated on axis 0).\r\n\r\nTherefore to be able to add columns to a Dataset and still be able to work with it in a multiprocessing setup, we need to extend this last aspect to be able to reconstruct a Table object from multiple arrow files that are combined in both axis 0 and 1. Currently this reconstruction mechanism only supports axis 0.\r\n\r\nI'm sure we can figure something out that enables users to add columns from another dataset while keeping the multiprocessing support.", "@lhoestq, we have two Pull Requests to implement:\r\n- Dataset.add_item: #1870\r\n- Dataset.add_column: #2145\r\nwhich add a single row or column, repectively.\r\n\r\nThe request here is to implement the concatenation of *multiple* rows/columns. Am I right?\r\n\r\nWe should agree on the API:\r\n- `concatenate_datasets` with `axis`?\r\n- other Dataset method name?", "For the API, I like `concatenate_datasets` with `axis` personally :)\r\nFrom a list of `Dataset` objects, it would concatenate them to a new `Dataset` object backed by a `ConcatenationTable`, that is the concatenation of the tables of each input dataset. 
The concatenation is either on axis=0 (append rows) or on axis=1 (append columns).\r\n\r\nRegarding what we need to implement:\r\nThe axis=0 is already supported and is the current behavior of `concatenate_datasets`.\r\nAlso `add_item` is not needed to implement axis=1 (though it's an awesome addition to this library).\r\n\r\nTo implement axis=1, we either need `add_column` or a `ConcatenationTable` constructor to concatenate tables horizontally.\r\nI have a preference for using a `ConcatenationTable` constructor because this way we can end up with a `ConcatenationTable` with only 1 additional block per table, while `add_column` would add 1 block per new column.\r\n\r\nMaybe we can simply have an equivalent of `ConcatenationTable.from_tables` but for axis=1 ?\r\n`axis` could also be an argument of `ConcatenationTable.from_tables`", "@lhoestq I think I guessed your suggestions in advance... 😉 #2151", "Cool ! Sorry I missed this one ^^\r\nI'm taking a look ;)" ]
"2020-11-16T02:46:23Z"
"2021-04-19T16:07:18Z"
"2021-04-19T16:07:18Z"
NONE
null
null
null
I want to achieve the following result ![image](https://user-images.githubusercontent.com/12437751/99207426-f0c8db80-27f8-11eb-820a-4d9f7287b742.png)
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/853/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/853/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/4712
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4712/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4712/comments
https://api.github.com/repos/huggingface/datasets/issues/4712/events
https://github.com/huggingface/datasets/pull/4712
1,309,177,302
PR_kwDODunzps47ohdr
4,712
Highlight non-commercial license in amazon_reviews_multi dataset card
{ "avatar_url": "https://avatars.githubusercontent.com/u/108879611?v=4", "events_url": "https://api.github.com/users/sbroadhurst-hf/events{/privacy}", "followers_url": "https://api.github.com/users/sbroadhurst-hf/followers", "following_url": "https://api.github.com/users/sbroadhurst-hf/following{/other_user}", "gists_url": "https://api.github.com/users/sbroadhurst-hf/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/sbroadhurst-hf", "id": 108879611, "login": "sbroadhurst-hf", "node_id": "U_kgDOBn1e-w", "organizations_url": "https://api.github.com/users/sbroadhurst-hf/orgs", "received_events_url": "https://api.github.com/users/sbroadhurst-hf/received_events", "repos_url": "https://api.github.com/users/sbroadhurst-hf/repos", "site_admin": false, "starred_url": "https://api.github.com/users/sbroadhurst-hf/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sbroadhurst-hf/subscriptions", "type": "User", "url": "https://api.github.com/users/sbroadhurst-hf" }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
"2022-07-19T08:36:20Z"
"2022-07-27T16:09:40Z"
"2022-07-27T15:57:41Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/4712.diff", "html_url": "https://github.com/huggingface/datasets/pull/4712", "merged_at": "2022-07-27T15:57:41Z", "patch_url": "https://github.com/huggingface/datasets/pull/4712.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4712" }
Highlight that the licence granted by Amazon only covers non-commercial research use.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4712/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4712/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/479
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/479/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/479/comments
https://api.github.com/repos/huggingface/datasets/issues/479/events
https://github.com/huggingface/datasets/pull/479
673,905,407
MDExOlB1bGxSZXF1ZXN0NDYzNjkxMjA0
479
add METEOR metric
{ "avatar_url": "https://avatars.githubusercontent.com/u/24683907?v=4", "events_url": "https://api.github.com/users/vegarab/events{/privacy}", "followers_url": "https://api.github.com/users/vegarab/followers", "following_url": "https://api.github.com/users/vegarab/following{/other_user}", "gists_url": "https://api.github.com/users/vegarab/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/vegarab", "id": 24683907, "login": "vegarab", "node_id": "MDQ6VXNlcjI0NjgzOTA3", "organizations_url": "https://api.github.com/users/vegarab/orgs", "received_events_url": "https://api.github.com/users/vegarab/received_events", "repos_url": "https://api.github.com/users/vegarab/repos", "site_admin": false, "starred_url": "https://api.github.com/users/vegarab/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/vegarab/subscriptions", "type": "User", "url": "https://api.github.com/users/vegarab" }
[]
closed
false
null
[]
null
[ "Really nice !\r\nThanks for adding this one.\r\n\r\nI noticed that there are some '-' that are left in the description in the middle of some workds. It migh come from copy-pasting the pdf paper. ex: `im-provement`. Could you fix that please ?", "@lhoestq \r\nLinebreaks have been removed! Note that there are still a few compound words that are hyphenated intentionally. ", "I think you just need to rebase from master to fix the CI :)", "Yes I made the mistake of simply merging master into this branch. A rebase seems to be neater :) Although all the commits ended up being added twice. I assume you just squash them into a single one on merge anyways?", "Yes indeed they'll be squashed" ]
"2020-08-05T23:13:00Z"
"2020-08-19T13:39:09Z"
"2020-08-19T13:39:09Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/479.diff", "html_url": "https://github.com/huggingface/datasets/pull/479", "merged_at": "2020-08-19T13:39:09Z", "patch_url": "https://github.com/huggingface/datasets/pull/479.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/479" }
Added the METEOR metric. Can be used like this: ```python import nlp meteor = nlp.load_metric('metrics/meteor') meteor.compute(["some string", "some string"], ["some string", "some similar string"]) # {'meteor': 0.6411637931034483} meteor.add("some string", "some string") meteor.add('some string", "some similar string") meteor.compute() # {'meteor': 0.6411637931034483} ``` Uses [NLTK's implementation](https://www.nltk.org/api/nltk.translate.html#module-nltk.translate.meteor_score), [(source)](https://github.com/nltk/nltk/blob/develop/nltk/translate/meteor_score.py)
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/479/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/479/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/3756
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3756/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3756/comments
https://api.github.com/repos/huggingface/datasets/issues/3756/events
https://github.com/huggingface/datasets/issues/3756
1,143,273,825
I_kwDODunzps5EJPlh
3,756
Images get decoded when using `map()` with `input_columns` argument on a dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/1430243?v=4", "events_url": "https://api.github.com/users/kklemon/events{/privacy}", "followers_url": "https://api.github.com/users/kklemon/followers", "following_url": "https://api.github.com/users/kklemon/following{/other_user}", "gists_url": "https://api.github.com/users/kklemon/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/kklemon", "id": 1430243, "login": "kklemon", "node_id": "MDQ6VXNlcjE0MzAyNDM=", "organizations_url": "https://api.github.com/users/kklemon/orgs", "received_events_url": "https://api.github.com/users/kklemon/received_events", "repos_url": "https://api.github.com/users/kklemon/repos", "site_admin": false, "starred_url": "https://api.github.com/users/kklemon/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/kklemon/subscriptions", "type": "User", "url": "https://api.github.com/users/kklemon" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko" } ]
null
[ "Hi! If I'm not mistaken, this behavior is intentional, but I agree it could be more intuitive.\r\n\r\n@albertvillanova Do you remember why you decided not to decode columns in the `Audio` feature PR when `input_columns` is not `None`? IMO we should decode those columns, and we don't even have to use lazy structures here because the user explicitly requires them in the map transform. \r\n\r\ncc @lhoestq for visibility", "I think I excluded to decorate the function when `input_columns` were passed as a quick fix for some non-passing tests: \r\n- https://github.com/huggingface/datasets/pull/2324/commits/9d7c3e8fa53e23ec636859b4407eeec904b1b3f9\r\n\r\nThat PR was quite complex and I decided to focus on the main feature requests, leaving refinements for subsequent PRs.\r\n\r\nNote that when `input_columns` are passed, the signature of the function is effectively changed, while the decorated function expects an item (whether an example or a batch) as first arg (which is not the case when passing `input_columns`.\r\n\r\nI agree we should consider supporting the case when `input_columns` are passed." ]
"2022-02-18T15:35:38Z"
"2022-12-13T16:59:06Z"
"2022-12-13T16:59:06Z"
NONE
null
null
null
## Describe the bug The `datasets.features.Image` feature class decodes image data by default. Expectedly, when indexing a dataset or using the `map()` method, images are returned as PIL Image instances. However, when calling `map()` and setting a specific data column with the `input_columns` argument, the image data is passed as raw byte representation to the mapping function. ## Steps to reproduce the bug ```python from datasets import load_dataset from torchvision import transforms from PIL.Image import Image dataset = load_dataset('mnist', split='train') def transform_all_columns(example): # example['image'] is encoded as PIL Image assert isinstance(example['image'], Image) return example def transform_image_column(image): # image is decoded here and represented as raw bytes assert isinstance(image, Image) return image # single-sample dataset for debugging purposes dev = dataset.select([0]) dev.map(transform_all_columns) dev.map(transform_image_column, input_columns='image') ``` ## Expected results Image data should be passed in decoded form, i.e. as PIL Image objects to the mapping function unless the `decode` attribute on the image feature is set to `False`. ## Actual results The mapping function receives images as raw byte data. ## Environment info - `datasets` version: 1.18.3 - Platform: Linux-5.11.0-49-generic-x86_64-with-glibc2.32 - Python version: 3.8.0b4 - PyArrow version: 7.0.0
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3756/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3756/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/3138
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3138/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3138/comments
https://api.github.com/repos/huggingface/datasets/issues/3138/events
https://github.com/huggingface/datasets/issues/3138
1,033,379,997
I_kwDODunzps49mCCd
3,138
More fine-grained taxonomy of error types
{ "avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4", "events_url": "https://api.github.com/users/severo/events{/privacy}", "followers_url": "https://api.github.com/users/severo/followers", "following_url": "https://api.github.com/users/severo/following{/other_user}", "gists_url": "https://api.github.com/users/severo/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/severo", "id": 1676121, "login": "severo", "node_id": "MDQ6VXNlcjE2NzYxMjE=", "organizations_url": "https://api.github.com/users/severo/orgs", "received_events_url": "https://api.github.com/users/severo/received_events", "repos_url": "https://api.github.com/users/severo/repos", "site_admin": false, "starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/severo/subscriptions", "type": "User", "url": "https://api.github.com/users/severo" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" }, { "color": "E5583E", "default": false, "description": "Related to the dataset viewer on huggingface.co", "id": 3470211881, "name": "dataset-viewer", "node_id": "LA_kwDODunzps7O1zsp", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer" } ]
open
false
null
[]
null
[ "related: #4995\r\n" ]
"2021-10-22T09:35:29Z"
"2022-09-20T13:04:42Z"
null
CONTRIBUTOR
null
null
null
**Is your feature request related to a problem? Please describe.** Exceptions like `FileNotFoundError` can be raised by different parts of the code, and it's hard to detect which one did **Describe the solution you'd like** Give a specific exception type for every group of similar errors **Describe alternatives you've considered** Rely on the error message, using regex
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3138/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3138/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3749
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3749/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3749/comments
https://api.github.com/repos/huggingface/datasets/issues/3749/events
https://github.com/huggingface/datasets/pull/3749
1,142,156,678
PR_kwDODunzps4zCKqg
3,749
Add tqdm arguments
{ "avatar_url": "https://avatars.githubusercontent.com/u/28087825?v=4", "events_url": "https://api.github.com/users/penguinwang96825/events{/privacy}", "followers_url": "https://api.github.com/users/penguinwang96825/followers", "following_url": "https://api.github.com/users/penguinwang96825/following{/other_user}", "gists_url": "https://api.github.com/users/penguinwang96825/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/penguinwang96825", "id": 28087825, "login": "penguinwang96825", "node_id": "MDQ6VXNlcjI4MDg3ODI1", "organizations_url": "https://api.github.com/users/penguinwang96825/orgs", "received_events_url": "https://api.github.com/users/penguinwang96825/received_events", "repos_url": "https://api.github.com/users/penguinwang96825/repos", "site_admin": false, "starred_url": "https://api.github.com/users/penguinwang96825/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/penguinwang96825/subscriptions", "type": "User", "url": "https://api.github.com/users/penguinwang96825" }
[]
closed
false
null
[]
null
[ "Hi ! Thanks this will be very useful :)\r\n\r\nIt looks like there are some changes in the github diff that are not related to your contribution, can you try fixing this by merging `master` into your PR, or create a new PR from an updated version of `master` ?", "I have already solved the conflict on this latest version. This is my first time sending PR, if there's anything I need to adjust just let me know~", "Thanks, most changes are gone :)\r\nIt still seems to include changes though - do you mind try creating a new branch from upstream/master and create a new PR please ?", "Yeah sure, I'll try to send a new PR today!", "Please forward to [#3850](https://github.com/huggingface/datasets/pull/3850)", "Thanks ! Closing this one in favor of https://github.com/huggingface/datasets/pull/3850/files" ]
"2022-02-18T01:34:46Z"
"2022-03-08T09:38:48Z"
"2022-03-08T09:38:48Z"
NONE
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/3749.diff", "html_url": "https://github.com/huggingface/datasets/pull/3749", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/3749.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3749" }
In this PR, tqdm arguments can be passed to the map() function and such, in order to be more flexible.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3749/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3749/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/3134
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3134/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3134/comments
https://api.github.com/repos/huggingface/datasets/issues/3134/events
https://github.com/huggingface/datasets/issues/3134
1,033,251,755
I_kwDODunzps49liur
3,134
Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/1.11.0/metrics/rouge/rouge.py
{ "avatar_url": "https://avatars.githubusercontent.com/u/26405281?v=4", "events_url": "https://api.github.com/users/yananchen1989/events{/privacy}", "followers_url": "https://api.github.com/users/yananchen1989/followers", "following_url": "https://api.github.com/users/yananchen1989/following{/other_user}", "gists_url": "https://api.github.com/users/yananchen1989/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/yananchen1989", "id": 26405281, "login": "yananchen1989", "node_id": "MDQ6VXNlcjI2NDA1Mjgx", "organizations_url": "https://api.github.com/users/yananchen1989/orgs", "received_events_url": "https://api.github.com/users/yananchen1989/received_events", "repos_url": "https://api.github.com/users/yananchen1989/repos", "site_admin": false, "starred_url": "https://api.github.com/users/yananchen1989/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yananchen1989/subscriptions", "type": "User", "url": "https://api.github.com/users/yananchen1989" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
null
[]
null
[ "Hi,\r\n\r\nDid you try to run the code multiple times (GitHub URLs can be down sometimes for various reasons)? I can access `https://raw.githubusercontent.com/huggingface/datasets/1.11.0/metrics/rouge/rouge.py`, so this code is working without an error on my side. \r\n\r\nAdditionally, can you please run the `datasets-cli env` command because it seems to me that you are using the `datasets` version different from `1.12.1`?", "Same issue when running `metric = datasets.load_metric(\"accuracy\")`.\r\nError info is:\r\n```\r\nmetric = datasets.load_metric(\"accuracy\")\r\nTraceback (most recent call last):\r\n\r\n File \"<ipython-input-2-d25db38b26c5>\", line 1, in <module>\r\n metric = datasets.load_metric(\"accuracy\")\r\n\r\n File \"D:\\anaconda3\\lib\\site-packages\\datasets\\load.py\", line 610, in load_metric\r\n module_path, _ = prepare_module(\r\n\r\n File \"D:\\anaconda3\\lib\\site-packages\\datasets\\load.py\", line 330, in prepare_module\r\n local_path = cached_path(file_path, download_config=download_config)\r\n\r\n File \"D:\\anaconda3\\lib\\site-packages\\datasets\\utils\\file_utils.py\", line 288, in cached_path\r\n output_path = get_from_cache(\r\n\r\n File \"D:\\anaconda3\\lib\\site-packages\\datasets\\utils\\file_utils.py\", line 605, in get_from_cache\r\n raise ConnectionError(\"Couldn't reach {}\".format(url))\r\nConnectionError: Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/1.11.0/metrics/accuracy/accuracy.py\r\n```\r\n\r\n\r\n My `datasets-cli env` result is as follows:\r\n- `datasets` version: 1.11.0\r\n- Platform: Windows-10-10.0.19041-SP0\r\n- Python version: 3.8.8\r\n- PyArrow version: 6.0.0\r\n\r\n@yananchen1989 did you find a way to solve this?", "It seems to be able to solve this issue by adding the equivalent `accuracy.py` locally. \r\nchange `metric = datasets.load_metric(\"accuracy\")` to `metric = datasets.load_metric(path = \"./accuracy.py\")`.\r\nCopy `accuracy.py` from browser at [accuracy.py](https://raw.githubusercontent.com/huggingface/datasets/1.11.0/metrics/accuracy/accuracy.py)", "> It seems to be able to solve this issue by adding the equivalent `accuracy.py` locally. change `metric = datasets.load_metric(\"accuracy\")` to `metric = datasets.load_metric(path = \"./accuracy.py\")`. Copy `accuracy.py` from browser at [accuracy.py](https://raw.githubusercontent.com/huggingface/datasets/1.11.0/metrics/accuracy/accuracy.py)\r\n\r\nThis is really a good way" ]
"2021-10-22T07:07:52Z"
"2023-09-14T01:19:45Z"
"2022-01-19T14:02:31Z"
NONE
null
null
null
datasets version: 1.12.1 `metric = datasets.load_metric('rouge')` The error: > ConnectionError Traceback (most recent call last) > <ipython-input-3-dd10a0c5212f> in <module> > ----> 1 metric = datasets.load_metric('rouge') > > /usr/local/lib/python3.6/dist-packages/datasets/load.py in load_metric(path, config_name, process_id, num_process, cache_dir, experiment_id, keep_in_memory, download_config, download_mode, script_version, **metric_init_kwargs) > 613 download_config=download_config, > 614 download_mode=download_mode, > --> 615 dataset=False, > 616 ) > 617 metric_cls = import_main_class(module_path, dataset=False) > > /usr/local/lib/python3.6/dist-packages/datasets/load.py in prepare_module(path, script_version, download_config, download_mode, dataset, force_local_path, dynamic_modules_path, return_resolved_file_path, **download_kwargs) > 328 file_path = hf_github_url(path=path, name=name, dataset=dataset, version=script_version) > 329 try: > --> 330 local_path = cached_path(file_path, download_config=download_config) > 331 except FileNotFoundError: > 332 if script_version is not None: > > /usr/local/lib/python3.6/dist-packages/datasets/utils/file_utils.py in cached_path(url_or_filename, download_config, **download_kwargs) > 296 use_etag=download_config.use_etag, > 297 max_retries=download_config.max_retries, > --> 298 use_auth_token=download_config.use_auth_token, > 299 ) > 300 elif os.path.exists(url_or_filename): > > /usr/local/lib/python3.6/dist-packages/datasets/utils/file_utils.py in get_from_cache(url, cache_dir, force_download, proxies, etag_timeout, resume_download, user_agent, local_files_only, use_etag, max_retries, use_auth_token) > 603 raise FileNotFoundError("Couldn't find file at {}".format(url)) > 604 _raise_if_offline_mode_is_enabled(f"Tried to reach {url}") > --> 605 raise ConnectionError("Couldn't reach {}".format(url)) > 606 > 607 # Try a second time > > ConnectionError: Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/1.11.0/metrics/rouge/rouge.py Is there any remedy to solve the connection issue ?
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3134/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3134/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/2287
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2287/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2287/comments
https://api.github.com/repos/huggingface/datasets/issues/2287/events
https://github.com/huggingface/datasets/pull/2287
871,063,374
MDExOlB1bGxSZXF1ZXN0NjI2MTQ0MTQ3
2,287
Avoid copying table's record batches
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko" }
[]
closed
false
null
[]
null
[ "Thanks for fixing it. I actually included a similar fix in #2291 along with some updates in tests\r\nI'm closing this one in favor of #2291 if you don't mind.\r\n\r\nThanks again !" ]
"2021-04-29T14:15:01Z"
"2021-04-29T16:34:23Z"
"2021-04-29T16:34:22Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/2287.diff", "html_url": "https://github.com/huggingface/datasets/pull/2287", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/2287.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2287" }
Fixes #2276
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2287/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2287/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/275
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/275/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/275/comments
https://api.github.com/repos/huggingface/datasets/issues/275/events
https://github.com/huggingface/datasets/issues/275
639,439,052
MDU6SXNzdWU2Mzk0MzkwNTI=
275
NonMatchingChecksumError when loading pubmed dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/48441753?v=4", "events_url": "https://api.github.com/users/DavideStenner/events{/privacy}", "followers_url": "https://api.github.com/users/DavideStenner/followers", "following_url": "https://api.github.com/users/DavideStenner/following{/other_user}", "gists_url": "https://api.github.com/users/DavideStenner/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/DavideStenner", "id": 48441753, "login": "DavideStenner", "node_id": "MDQ6VXNlcjQ4NDQxNzUz", "organizations_url": "https://api.github.com/users/DavideStenner/orgs", "received_events_url": "https://api.github.com/users/DavideStenner/received_events", "repos_url": "https://api.github.com/users/DavideStenner/repos", "site_admin": false, "starred_url": "https://api.github.com/users/DavideStenner/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/DavideStenner/subscriptions", "type": "User", "url": "https://api.github.com/users/DavideStenner" }
[ { "color": "2edb81", "default": false, "description": "A bug in a dataset script provided in the library", "id": 2067388877, "name": "dataset bug", "node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug" } ]
closed
false
null
[]
null
[ "For some reason the files are not available for unauthenticated users right now (like the download service of this package). Instead of downloading the right files, it downloads the html of the error.\r\nAccording to the error it should be back again in 24h.\r\n\r\n![image](https://user-images.githubusercontent.com/42851186/84751599-096c6580-afbd-11ea-97f3-ee4aef791711.png)\r\n" ]
"2020-06-16T07:31:51Z"
"2020-06-19T07:37:07Z"
"2020-06-19T07:37:07Z"
NONE
null
null
null
I get this error when i run `nlp.load_dataset('scientific_papers', 'pubmed', split = 'train[:50%]')`. The error is: ``` --------------------------------------------------------------------------- NonMatchingChecksumError Traceback (most recent call last) <ipython-input-2-7742dea167d0> in <module>() ----> 1 df = nlp.load_dataset('scientific_papers', 'pubmed', split = 'train[:50%]') 2 df = pd.DataFrame(df) 3 gc.collect() 3 frames /usr/local/lib/python3.6/dist-packages/nlp/load.py in load_dataset(path, name, version, data_dir, data_files, split, cache_dir, download_config, download_mode, ignore_verifications, save_infos, **config_kwargs) 518 download_mode=download_mode, 519 ignore_verifications=ignore_verifications, --> 520 save_infos=save_infos, 521 ) 522 /usr/local/lib/python3.6/dist-packages/nlp/builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, save_infos, try_from_hf_gcs, dl_manager, **download_and_prepare_kwargs) 431 verify_infos = not save_infos and not ignore_verifications 432 self._download_and_prepare( --> 433 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs 434 ) 435 # Sync info /usr/local/lib/python3.6/dist-packages/nlp/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs) 468 # Checksums verification 469 if verify_infos: --> 470 verify_checksums(self.info.download_checksums, dl_manager.get_recorded_sizes_checksums()) 471 for split_generator in split_generators: 472 if str(split_generator.split_info.name).lower() == "all": /usr/local/lib/python3.6/dist-packages/nlp/utils/info_utils.py in verify_checksums(expected_checksums, recorded_checksums) 34 bad_urls = [url for url in expected_checksums if expected_checksums[url] != recorded_checksums[url]] 35 if len(bad_urls) > 0: ---> 36 raise NonMatchingChecksumError(str(bad_urls)) 37 logger.info("All the checksums matched successfully.") 38 NonMatchingChecksumError: ['https://drive.google.com/uc?id=1b3rmCSIoh6VhD4HKWjI4HOW-cSwcwbeC&export=download', 'https://drive.google.com/uc?id=1lvsqvsFi3W-pE1SqNZI0s8NR9rC1tsja&export=download'] ``` I'm currently working on google colab. That is quite strange because yesterday it was fine.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/275/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/275/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/2658
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2658/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2658/comments
https://api.github.com/repos/huggingface/datasets/issues/2658/events
https://github.com/huggingface/datasets/issues/2658
946,139,532
MDU6SXNzdWU5NDYxMzk1MzI=
2,658
Can't pass `sep=None` to load_dataset("csv", ...) to infer the separator via pandas.read_csv
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" } ]
null
[]
"2021-07-16T10:05:44Z"
"2021-07-16T12:46:06Z"
"2021-07-16T12:46:06Z"
MEMBER
null
null
null
When doing `load_dataset("csv", sep=None)`, the `sep` passed to `pd.read_csv` is still the default `sep=","` instead, which makes it impossible to make the csv loader infer the separator. Related to https://github.com/huggingface/datasets/pull/2656 cc @SBrandeis
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/2658/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2658/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/4531
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4531/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4531/comments
https://api.github.com/repos/huggingface/datasets/issues/4531/events
https://github.com/huggingface/datasets/issues/4531
1,277,054,172
I_kwDODunzps5MHkzc
4,531
Dataset Viewer issue for CSV datasets
{ "avatar_url": "https://avatars.githubusercontent.com/u/53175384?v=4", "events_url": "https://api.github.com/users/merveenoyan/events{/privacy}", "followers_url": "https://api.github.com/users/merveenoyan/followers", "following_url": "https://api.github.com/users/merveenoyan/following{/other_user}", "gists_url": "https://api.github.com/users/merveenoyan/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/merveenoyan", "id": 53175384, "login": "merveenoyan", "node_id": "MDQ6VXNlcjUzMTc1Mzg0", "organizations_url": "https://api.github.com/users/merveenoyan/orgs", "received_events_url": "https://api.github.com/users/merveenoyan/received_events", "repos_url": "https://api.github.com/users/merveenoyan/repos", "site_admin": false, "starred_url": "https://api.github.com/users/merveenoyan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/merveenoyan/subscriptions", "type": "User", "url": "https://api.github.com/users/merveenoyan" }
[ { "color": "E5583E", "default": false, "description": "Related to the dataset viewer on huggingface.co", "id": 3470211881, "name": "dataset-viewer", "node_id": "LA_kwDODunzps7O1zsp", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4", "events_url": "https://api.github.com/users/severo/events{/privacy}", "followers_url": "https://api.github.com/users/severo/followers", "following_url": "https://api.github.com/users/severo/following{/other_user}", "gists_url": "https://api.github.com/users/severo/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/severo", "id": 1676121, "login": "severo", "node_id": "MDQ6VXNlcjE2NzYxMjE=", "organizations_url": "https://api.github.com/users/severo/orgs", "received_events_url": "https://api.github.com/users/severo/received_events", "repos_url": "https://api.github.com/users/severo/repos", "site_admin": false, "starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/severo/subscriptions", "type": "User", "url": "https://api.github.com/users/severo" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4", "events_url": "https://api.github.com/users/severo/events{/privacy}", "followers_url": "https://api.github.com/users/severo/followers", "following_url": "https://api.github.com/users/severo/following{/other_user}", "gists_url": "https://api.github.com/users/severo/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/severo", "id": 1676121, "login": "severo", "node_id": "MDQ6VXNlcjE2NzYxMjE=", "organizations_url": "https://api.github.com/users/severo/orgs", "received_events_url": "https://api.github.com/users/severo/received_events", "repos_url": "https://api.github.com/users/severo/repos", "site_admin": false, "starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/severo/subscriptions", "type": "User", "url": "https://api.github.com/users/severo" } ]
null
[ "this should now be fixed", "Confirmed, it's fixed now. Thanks for reporting, and thanks @coyotte508 for fixing it\r\n\r\n<img width=\"1123\" alt=\"Capture d’écran 2022-06-21 à 10 28 05\" src=\"https://user-images.githubusercontent.com/1676121/174753833-1b453a5a-6a90-4717-bca1-1b5fc6b75e4a.png\">\r\n" ]
"2022-06-20T14:56:24Z"
"2022-06-21T08:28:46Z"
"2022-06-21T08:28:27Z"
CONTRIBUTOR
null
null
null
### Link https://huggingface.co/datasets/scikit-learn/breast-cancer-wisconsin ### Description I'm populating CSV datasets [here](https://huggingface.co/scikit-learn) but the viewer is not enabled and it looks for a dataset loading script, the datasets aren't on queue as well. You can replicate the problem by simply uploading any CSV dataset. ### Owner Yes
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4531/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4531/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/5991
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5991/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5991/comments
https://api.github.com/repos/huggingface/datasets/issues/5991/events
https://github.com/huggingface/datasets/issues/5991
1,774,456,518
I_kwDODunzps5pxA7G
5,991
`map` with any joblib backend
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
open
false
null
[]
null
[]
"2023-06-26T10:33:42Z"
"2023-06-26T10:33:42Z"
null
MEMBER
null
null
null
We recently enabled the (experimental) parallel backend switch for data download and extraction, but not for `map` yet. Right now we're using our `iflatmap_unordered` implementation for multiprocessing, which uses a shared Queue to gather progress updates from the subprocesses and show a progress bar in the main process. If we had a Queue implementation that worked on any joblib backend, for example by leveraging the filesystem shared among workers, we could have `iflatmap_unordered` for joblib and therefore a `map` with any joblib backend, with a progress bar! Note that the Queue doesn't need to be very optimized, since we can choose a low frequency for progress updates (like 1 update per second).
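A minimal sketch of the filesystem-based Queue idea described above, with all names hypothetical (this is not part of `datasets`): workers drop small JSON files into a directory shared across the joblib backend, and the main process drains it at a low frequency to update the progress bar.

```python
import json
import uuid
from pathlib import Path


class FileSystemProgressQueue:
    """Toy queue: put() drops a JSON file in a shared directory, drain() reads and deletes them."""

    def __init__(self, shared_dir: str):
        self.dir = Path(shared_dir)
        self.dir.mkdir(parents=True, exist_ok=True)

    def put(self, update: dict) -> None:
        # Write to a temporary name first, then rename, so readers never see partial files.
        name = uuid.uuid4().hex
        tmp = self.dir / f"{name}.tmp"
        tmp.write_text(json.dumps(update))
        tmp.rename(self.dir / f"{name}.json")

    def drain(self) -> list:
        updates = []
        for path in sorted(self.dir.glob("*.json")):
            updates.append(json.loads(path.read_text()))
            path.unlink()
        return updates


# Main process (sketch): poll once per second to drive the progress bar.
# queue = FileSystemProgressQueue("/shared/tmp/map_progress")  # path is hypothetical
# while not done:                      # `done` and `pbar` are placeholders
#     for update in queue.drain():
#         pbar.update(update["num_examples"])
#     time.sleep(1)
```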
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5991/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5991/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/4556
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4556/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4556/comments
https://api.github.com/repos/huggingface/datasets/issues/4556/events
https://github.com/huggingface/datasets/issues/4556
1,283,462,881
I_kwDODunzps5MgBbh
4,556
Dataset Viewer issue for conll2003
{ "avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4", "events_url": "https://api.github.com/users/lewtun/events{/privacy}", "followers_url": "https://api.github.com/users/lewtun/followers", "following_url": "https://api.github.com/users/lewtun/following{/other_user}", "gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lewtun", "id": 26859204, "login": "lewtun", "node_id": "MDQ6VXNlcjI2ODU5MjA0", "organizations_url": "https://api.github.com/users/lewtun/orgs", "received_events_url": "https://api.github.com/users/lewtun/received_events", "repos_url": "https://api.github.com/users/lewtun/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lewtun/subscriptions", "type": "User", "url": "https://api.github.com/users/lewtun" }
[ { "color": "E5583E", "default": false, "description": "Related to the dataset viewer on huggingface.co", "id": 3470211881, "name": "dataset-viewer", "node_id": "LA_kwDODunzps7O1zsp", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4", "events_url": "https://api.github.com/users/severo/events{/privacy}", "followers_url": "https://api.github.com/users/severo/followers", "following_url": "https://api.github.com/users/severo/following{/other_user}", "gists_url": "https://api.github.com/users/severo/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/severo", "id": 1676121, "login": "severo", "node_id": "MDQ6VXNlcjE2NzYxMjE=", "organizations_url": "https://api.github.com/users/severo/orgs", "received_events_url": "https://api.github.com/users/severo/received_events", "repos_url": "https://api.github.com/users/severo/repos", "site_admin": false, "starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/severo/subscriptions", "type": "User", "url": "https://api.github.com/users/severo" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4", "events_url": "https://api.github.com/users/severo/events{/privacy}", "followers_url": "https://api.github.com/users/severo/followers", "following_url": "https://api.github.com/users/severo/following{/other_user}", "gists_url": "https://api.github.com/users/severo/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/severo", "id": 1676121, "login": "severo", "node_id": "MDQ6VXNlcjE2NzYxMjE=", "organizations_url": "https://api.github.com/users/severo/orgs", "received_events_url": "https://api.github.com/users/severo/received_events", "repos_url": "https://api.github.com/users/severo/repos", "site_admin": false, "starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/severo/subscriptions", "type": "User", "url": "https://api.github.com/users/severo" } ]
null
[ "Fixed, thanks." ]
"2022-06-24T08:55:18Z"
"2022-06-24T09:50:39Z"
"2022-06-24T09:50:39Z"
MEMBER
null
null
null
### Link https://huggingface.co/datasets/conll2003/viewer/conll2003/test ### Description Seems like a cache problem with this config / split: ``` Server error Status code: 400 Exception: FileNotFoundError Message: [Errno 2] No such file or directory: '/cache/modules/datasets_modules/datasets/conll2003/__init__.py' ``` ### Owner No
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4556/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4556/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/2659
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2659/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2659/comments
https://api.github.com/repos/huggingface/datasets/issues/2659/events
https://github.com/huggingface/datasets/pull/2659
946,155,407
MDExOlB1bGxSZXF1ZXN0NjkxMzcwNzU3
2,659
Allow dataset config kwargs to be None
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
[]
"2021-07-16T10:25:38Z"
"2021-07-16T12:46:07Z"
"2021-07-16T12:46:07Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/2659.diff", "html_url": "https://github.com/huggingface/datasets/pull/2659", "merged_at": "2021-07-16T12:46:06Z", "patch_url": "https://github.com/huggingface/datasets/pull/2659.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2659" }
Close https://github.com/huggingface/datasets/issues/2658 The dataset config kwargs that were set to None were simply ignored. This was an issue when None has a specific meaning for certain parameters of certain builders, like the `sep` parameter of the "csv" builder, where it allows inferring the separator. cc @SBrandeis
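For context, a hedged usage example of what this change enables (the file name is hypothetical, and the separator inference itself is done by pandas, which the csv builder wraps):

```python
from datasets import load_dataset

# Before this PR, sep=None was silently dropped; with the fix it is forwarded to
# pandas.read_csv, which can then try to infer the separator from the file.
ds = load_dataset("csv", data_files={"train": "my_data.txt"}, sep=None)
```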
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 1, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/2659/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2659/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/2980
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2980/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2980/comments
https://api.github.com/repos/huggingface/datasets/issues/2980/events
https://github.com/huggingface/datasets/issues/2980
1,009,873,482
I_kwDODunzps48MXJK
2,980
OpenSLR 25: ASR data for Amharic, Swahili and Wolof
{ "avatar_url": "https://avatars.githubusercontent.com/u/4109253?v=4", "events_url": "https://api.github.com/users/cdleong/events{/privacy}", "followers_url": "https://api.github.com/users/cdleong/followers", "following_url": "https://api.github.com/users/cdleong/following{/other_user}", "gists_url": "https://api.github.com/users/cdleong/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/cdleong", "id": 4109253, "login": "cdleong", "node_id": "MDQ6VXNlcjQxMDkyNTM=", "organizations_url": "https://api.github.com/users/cdleong/orgs", "received_events_url": "https://api.github.com/users/cdleong/received_events", "repos_url": "https://api.github.com/users/cdleong/repos", "site_admin": false, "starred_url": "https://api.github.com/users/cdleong/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/cdleong/subscriptions", "type": "User", "url": "https://api.github.com/users/cdleong" }
[ { "color": "e99695", "default": false, "description": "Requesting to add a new dataset", "id": 2067376369, "name": "dataset request", "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request" } ]
open
false
null
[]
null
[ "Whoever handles this just needs to: \r\n\r\n- [ ] fork the HuggingFace Datasets repo\r\n- [ ] update the [existing dataset script](https://github.com/huggingface/datasets/blob/master/datasets/openslr/openslr.py) to add SLR25. Lots of copypasting from other sections of the script should make that easy. \r\nAmharic URL: https://www.openslr.org/resources/25/data_readspeech_am.tar.bz2. \r\nSwahili URL: https://www.openslr.org/resources/25/data_broadcastnews_sw.tar.bz2, \r\nWolof URL: https://www.openslr.org/resources/25/data_readspeech_wo.tar.bz2\r\n- [ ] update the [data card](https://github.com/huggingface/datasets/blob/master/datasets/openslr/README.md) to include information about SLR25. There's lots of other examples to draw from. \r\n- [ ] add the appropriate language tags to the data card as well. https://www.w3.org/International/questions/qa-choosing-language-tags, or just use `sw`, `am`, and `wo` for consistency. \r\n- [ ] make a pull request to merge your changes back into HuggingFace's repo", "... also the example in \"use in datasets library\" should be updated. It currently says \r\n![image](https://user-images.githubusercontent.com/4109253/135115980-8583a44a-cae6-4121-b699-00667020849f.png)\r\nBut you actually have to specify a subset, e.g. \r\n```python\r\ndataset = load_dataset(\"openslr\", \"SLR32\")\r\n```", "![image](https://user-images.githubusercontent.com/4109253/135116070-82d4e732-b7b3-4c5b-bd4e-a40d8ccabb0e.png)\r\n\r\n" ]
"2021-09-28T15:04:36Z"
"2021-09-29T17:25:14Z"
null
CONTRIBUTOR
null
null
null
## Adding a Dataset - **Name:** *SLR25* - **Description:** *Subset 25 from OpenSLR. Other subsets have been added to https://huggingface.co/datasets/openslr; subset 25 covers Amharic, Swahili and Wolof data.* - **Paper:** *https://www.openslr.org/25/ has citations for each of the three subsets.* - **Data:** *Currently the three links to the .tar.bz2 files can be found at https://www.openslr.org/25/* - **Motivation:** *Increase ASR data for underrepresented African languages. Also, other subsets of OpenSLR speech recognition data have already been uploaded, so this would be easy.* https://github.com/huggingface/datasets/blob/master/datasets/openslr/openslr.py has already been created for various other OpenSLR subsets, so this should be relatively straightforward to do.
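If/when SLR25 is added to the existing `openslr` script, loading it would presumably mirror the other subsets; a sketch of the expected call (it will not work until the subset is actually merged):

```python
from datasets import load_dataset

# "SLR25" follows the config naming used by the existing subsets (e.g. "SLR32").
slr25 = load_dataset("openslr", "SLR25")
print(slr25["train"][0])
```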
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2980/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2980/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/1418
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1418/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1418/comments
https://api.github.com/repos/huggingface/datasets/issues/1418/events
https://github.com/huggingface/datasets/pull/1418
760,672,320
MDExOlB1bGxSZXF1ZXN0NTM1NDY0NzQ4
1,418
Add arabic dialects
{ "avatar_url": "https://avatars.githubusercontent.com/u/26722925?v=4", "events_url": "https://api.github.com/users/mcmillanmajora/events{/privacy}", "followers_url": "https://api.github.com/users/mcmillanmajora/followers", "following_url": "https://api.github.com/users/mcmillanmajora/following{/other_user}", "gists_url": "https://api.github.com/users/mcmillanmajora/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mcmillanmajora", "id": 26722925, "login": "mcmillanmajora", "node_id": "MDQ6VXNlcjI2NzIyOTI1", "organizations_url": "https://api.github.com/users/mcmillanmajora/orgs", "received_events_url": "https://api.github.com/users/mcmillanmajora/received_events", "repos_url": "https://api.github.com/users/mcmillanmajora/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mcmillanmajora/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mcmillanmajora/subscriptions", "type": "User", "url": "https://api.github.com/users/mcmillanmajora" }
[]
closed
false
null
[]
null
[ "merging since the CI is fixed on master" ]
"2020-12-09T21:06:07Z"
"2020-12-17T09:40:56Z"
"2020-12-17T09:40:56Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/1418.diff", "html_url": "https://github.com/huggingface/datasets/pull/1418", "merged_at": "2020-12-17T09:40:56Z", "patch_url": "https://github.com/huggingface/datasets/pull/1418.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1418" }
Data loading script and dataset card for Dialectal Arabic Resources dataset. Fixed git issues from PR #976
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/1418/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1418/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/2026
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2026/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2026/comments
https://api.github.com/repos/huggingface/datasets/issues/2026/events
https://github.com/huggingface/datasets/issues/2026
828,194,467
MDU6SXNzdWU4MjgxOTQ0Njc=
2,026
KeyError on using map after renaming a column
{ "avatar_url": "https://avatars.githubusercontent.com/u/29076344?v=4", "events_url": "https://api.github.com/users/gchhablani/events{/privacy}", "followers_url": "https://api.github.com/users/gchhablani/followers", "following_url": "https://api.github.com/users/gchhablani/following{/other_user}", "gists_url": "https://api.github.com/users/gchhablani/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/gchhablani", "id": 29076344, "login": "gchhablani", "node_id": "MDQ6VXNlcjI5MDc2MzQ0", "organizations_url": "https://api.github.com/users/gchhablani/orgs", "received_events_url": "https://api.github.com/users/gchhablani/received_events", "repos_url": "https://api.github.com/users/gchhablani/repos", "site_admin": false, "starred_url": "https://api.github.com/users/gchhablani/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gchhablani/subscriptions", "type": "User", "url": "https://api.github.com/users/gchhablani" }
[]
closed
false
null
[]
null
[ "Hi,\r\n\r\nActually, the error occurs due to these two lines:\r\n```python\r\nraw_dataset.set_format('torch',columns=['img','label'])\r\nraw_dataset = raw_dataset.rename_column('img','image')\r\n```\r\n`Dataset.rename_column` doesn't update the `_format_columns` attribute, previously defined by `Dataset.set_format`, with a new column name which is why this new column is missing in the output.", "Hi @mariosasko,\n\nThanks for opening a PR on this :)\nWhy does the old name also disappear?", "I just merged a @mariosasko 's PR that fixes this issue.\r\nIf it happens again, feel free to re-open :)" ]
"2021-03-10T18:54:17Z"
"2021-03-11T14:39:34Z"
"2021-03-11T14:38:40Z"
CONTRIBUTOR
null
null
null
Hi, I'm trying to use `cifar10` dataset. I want to rename the `img` feature to `image` in order to make it consistent with `mnist`, which I'm also planning to use. By doing this, I was trying to avoid modifying `prepare_train_features` function. Here is what I try: ```python transform = Compose([ToPILImage(),ToTensor(),Normalize([0.0,0.0,0.0],[1.0,1.0,1.0])]) def prepare_features(examples): images = [] labels = [] print(examples) for example_idx, example in enumerate(examples["image"]): if transform is not None: images.append(transform(examples["image"][example_idx].permute(2,0,1))) else: images.append(examples["image"][example_idx].permute(2,0,1)) labels.append(examples["label"][example_idx]) output = {"label":labels, "image":images} return output raw_dataset = load_dataset('cifar10') raw_dataset.set_format('torch',columns=['img','label']) raw_dataset = raw_dataset.rename_column('img','image') features = datasets.Features({ "image": datasets.Array3D(shape=(3,32,32),dtype="float32"), "label": datasets.features.ClassLabel(names=[ "airplane", "automobile", "bird", "cat", "deer", "dog", "frog", "horse", "ship", "truck", ]), }) train_dataset = raw_dataset.map(prepare_features, features = features,batched=True, batch_size=10000) ``` The error: ```python --------------------------------------------------------------------------- KeyError Traceback (most recent call last) <ipython-input-54-bf29672c53ee> in <module>() 14 ]), 15 }) ---> 16 train_dataset = raw_dataset.map(prepare_features, features = features,batched=True, batch_size=10000) 2 frames /usr/local/lib/python3.7/dist-packages/datasets/arrow_dataset.py in map(self, function, with_indices, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, num_proc, suffix_template, new_fingerprint) 1287 test_inputs = self[:2] if batched else self[0] 1288 test_indices = [0, 1] if batched else 0 -> 1289 update_data = does_function_return_dict(test_inputs, test_indices) 1290 logger.info("Testing finished, running the mapping function on the dataset") 1291 /usr/local/lib/python3.7/dist-packages/datasets/arrow_dataset.py in does_function_return_dict(inputs, indices) 1258 fn_args = [inputs] if input_columns is None else [inputs[col] for col in input_columns] 1259 processed_inputs = ( -> 1260 function(*fn_args, indices, **fn_kwargs) if with_indices else function(*fn_args, **fn_kwargs) 1261 ) 1262 does_return_dict = isinstance(processed_inputs, Mapping) <ipython-input-52-b4dccbafb70d> in prepare_features(examples) 3 labels = [] 4 print(examples) ----> 5 for example_idx, example in enumerate(examples["image"]): 6 if transform is not None: 7 images.append(transform(examples["image"][example_idx].permute(2,0,1))) KeyError: 'image' ``` The print statement inside returns this: ```python {'label': tensor([6, 9])} ``` Apparently, both `img` and `image` do not exist after renaming. Note that this code works fine with `img` everywhere. Notebook: https://colab.research.google.com/drive/1SzESAlz3BnVYrgQeJ838vbMp1OsukiA2?usp=sharing
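As explained in the comments above, `rename_column` did not update the `_format_columns` attribute previously set by `set_format`, so on affected versions a workaround is to rename first and (re)apply the format afterwards; a minimal sketch:

```python
from datasets import load_dataset

raw_dataset = load_dataset("cifar10")

# Workaround for affected versions: apply set_format *after* renaming,
# so the format refers to the new column name.
raw_dataset = raw_dataset.rename_column("img", "image")
raw_dataset.set_format("torch", columns=["image", "label"])
```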
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2026/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2026/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/5757
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5757/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5757/comments
https://api.github.com/repos/huggingface/datasets/issues/5757/events
https://github.com/huggingface/datasets/issues/5757
1,669,910,503
I_kwDODunzps5jiM_n
5,757
Tilde (~) is not supported
{ "avatar_url": "https://avatars.githubusercontent.com/u/2437102?v=4", "events_url": "https://api.github.com/users/eli-osherovich/events{/privacy}", "followers_url": "https://api.github.com/users/eli-osherovich/followers", "following_url": "https://api.github.com/users/eli-osherovich/following{/other_user}", "gists_url": "https://api.github.com/users/eli-osherovich/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/eli-osherovich", "id": 2437102, "login": "eli-osherovich", "node_id": "MDQ6VXNlcjI0MzcxMDI=", "organizations_url": "https://api.github.com/users/eli-osherovich/orgs", "received_events_url": "https://api.github.com/users/eli-osherovich/received_events", "repos_url": "https://api.github.com/users/eli-osherovich/repos", "site_admin": false, "starred_url": "https://api.github.com/users/eli-osherovich/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/eli-osherovich/subscriptions", "type": "User", "url": "https://api.github.com/users/eli-osherovich" }
[]
closed
false
null
[]
null
[]
"2023-04-16T11:48:10Z"
"2023-04-20T15:30:51Z"
"2023-04-20T15:30:51Z"
CONTRIBUTOR
null
null
null
### Describe the bug It seems that `~` is not expanded correctly in local paths. Whenever I try to use it, I get an exception. ### Steps to reproduce the bug ```python load_dataset("imagefolder", data_dir="~/data/my_dataset") ``` This generates the following error: ``` EmptyDatasetError: The directory at /path/to/cwd/~/data/datasets/clementine_tagged_per_cam doesn't contain any data files ``` ### Expected behavior Load the dataset. ### Environment info datasets==2.11.0
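On the affected version, a workaround is to expand the tilde manually before passing the path; a minimal sketch:

```python
import os

from datasets import load_dataset

# Expand ~ to the user's home directory ourselves instead of relying on datasets to do it.
data_dir = os.path.expanduser("~/data/my_dataset")
ds = load_dataset("imagefolder", data_dir=data_dir)
```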
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5757/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5757/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/209
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/209/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/209/comments
https://api.github.com/repos/huggingface/datasets/issues/209/events
https://github.com/huggingface/datasets/pull/209
626,405,849
MDExOlB1bGxSZXF1ZXN0NDI0NDAwOTc4
209
Add a Google Drive exception for small files
{ "avatar_url": "https://avatars.githubusercontent.com/u/25703835?v=4", "events_url": "https://api.github.com/users/airKlizz/events{/privacy}", "followers_url": "https://api.github.com/users/airKlizz/followers", "following_url": "https://api.github.com/users/airKlizz/following{/other_user}", "gists_url": "https://api.github.com/users/airKlizz/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/airKlizz", "id": 25703835, "login": "airKlizz", "node_id": "MDQ6VXNlcjI1NzAzODM1", "organizations_url": "https://api.github.com/users/airKlizz/orgs", "received_events_url": "https://api.github.com/users/airKlizz/received_events", "repos_url": "https://api.github.com/users/airKlizz/repos", "site_admin": false, "starred_url": "https://api.github.com/users/airKlizz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/airKlizz/subscriptions", "type": "User", "url": "https://api.github.com/users/airKlizz" }
[]
closed
false
null
[]
null
[ "Can you run the style formatting tools to pass the code quality test?\r\n\r\nYou can find all the details in CONTRIBUTING.md: https://github.com/huggingface/nlp/blob/master/CONTRIBUTING.md#how-to-contribute-to-nlp", "Nice ! ", "``make style`` done! Thanks for the approvals." ]
"2020-05-28T10:40:17Z"
"2020-05-28T15:15:04Z"
"2020-05-28T15:15:04Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/209.diff", "html_url": "https://github.com/huggingface/datasets/pull/209", "merged_at": "2020-05-28T15:15:04Z", "patch_url": "https://github.com/huggingface/datasets/pull/209.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/209" }
I tried to use the ``nlp`` library to load personal datasets. I mainly copy-pasted the code for the ``multi-news`` dataset because my files are stored on Google Drive. One of my datasets is small (< 25 MB), so it can be verified by Drive without asking the user for authorization. This makes the download start directly. Currently ``nlp`` raises an error: ``ConnectionError: Couldn't reach https://drive.google.com/uc?export=download&id=1DGnbUY9zwiThTdgUvVTSAvSVHoloCgun`` while the url is working. So I just added a new exception, as you have already done for ``firebasestorage.googleapis.com``: ``` elif (response.status_code == 400 and "firebasestorage.googleapis.com" in url) or (response.status_code == 405 and "drive.google.com" in url) ``` I made an example of the error that you can run on [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1ae_JJ9uvUt-9GBh0uGZhjbF5aXkl-BPv?usp=sharing) I avoid the error by adding an exception, but there may be a proper way to do it. Many thanks :hugs: Best,
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/209/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/209/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/6509
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6509/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6509/comments
https://api.github.com/repos/huggingface/datasets/issues/6509/events
https://github.com/huggingface/datasets/pull/6509
2,046,720,869
PR_kwDODunzps5iREyE
6,509
Better cast error when generating dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
open
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6509). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "I created `DatatasetGenerationCastError` in `exceptions.py` that inherits from `DatasetGenerationError` (for backward compatibility) that inherits from `DatasetsError`.\r\n\r\nI also added a help message at the end of the error:\r\n\r\n```\r\nPlease either edit the data files to have matching columns, or separate them into different configurations (see docs at https://hf.co/docs/hub/datasets-manual-configuration#multiple-configurations)\r\n```", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.004991 / 0.011353 (-0.006361) | 0.003362 / 0.011008 (-0.007646) | 0.062093 / 0.038508 (0.023585) | 0.051533 / 0.023109 (0.028424) | 0.247508 / 0.275898 (-0.028390) | 0.275593 / 0.323480 (-0.047886) | 0.003828 / 0.007986 (-0.004158) | 0.002573 / 0.004328 (-0.001755) | 0.047727 / 0.004250 (0.043477) | 0.037029 / 0.037052 (-0.000023) | 0.250359 / 0.258489 (-0.008130) | 0.282640 / 0.293841 (-0.011201) | 0.027853 / 0.128546 (-0.100693) | 0.010247 / 0.075646 (-0.065400) | 0.206826 / 0.419271 (-0.212445) | 0.035837 / 0.043533 (-0.007695) | 0.251795 / 0.255139 (-0.003344) | 0.275654 / 0.283200 (-0.007545) | 0.017722 / 0.141683 (-0.123960) | 1.120287 / 1.452155 (-0.331868) | 1.203087 / 1.492716 (-0.289630) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.092320 / 0.018006 (0.074314) | 0.300079 / 0.000490 (0.299589) | 0.000211 / 0.000200 (0.000011) | 0.000044 / 0.000054 (-0.000011) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018193 / 0.037411 (-0.019218) | 0.061310 / 0.014526 (0.046784) | 0.072433 / 0.176557 (-0.104124) | 0.119092 / 0.737135 (-0.618043) | 0.074044 / 0.296338 (-0.222294) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | 
read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.297184 / 0.215209 (0.081975) | 2.805197 / 2.077655 (0.727543) | 1.521326 / 1.504120 (0.017206) | 1.374321 / 1.541195 (-0.166874) | 1.388767 / 1.468490 (-0.079723) | 0.571865 / 4.584777 (-4.012912) | 2.385213 / 3.745712 (-1.360499) | 2.726840 / 5.269862 (-2.543021) | 1.725352 / 4.565676 (-2.840325) | 0.063012 / 0.424275 (-0.361263) | 0.004911 / 0.007607 (-0.002697) | 0.336430 / 0.226044 (0.110385) | 3.390616 / 2.268929 (1.121688) | 1.846398 / 55.444624 (-53.598227) | 1.576797 / 6.876477 (-5.299680) | 1.579445 / 2.142072 (-0.562627) | 0.652515 / 4.805227 (-4.152712) | 0.118393 / 6.500664 (-6.382271) | 0.042155 / 0.075469 (-0.033314) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.942269 / 1.841788 (-0.899518) | 11.318258 / 8.074308 (3.243950) | 10.299948 / 10.191392 (0.108556) | 0.136088 / 0.680424 (-0.544336) | 0.013682 / 0.534201 (-0.520519) | 0.287549 / 0.579283 (-0.291734) | 0.258346 / 0.434364 (-0.176018) | 0.337146 / 0.540337 (-0.203191) | 0.443922 / 1.386936 (-0.943014) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005302 / 0.011353 (-0.006051) | 0.003234 / 0.011008 (-0.007774) | 0.049159 / 0.038508 (0.010651) | 0.050459 / 0.023109 (0.027350) | 0.273718 / 0.275898 (-0.002180) | 0.296997 / 0.323480 (-0.026483) | 0.003948 / 0.007986 (-0.004038) | 0.002590 / 0.004328 (-0.001739) | 0.048129 / 0.004250 (0.043879) | 0.039369 / 0.037052 (0.002317) | 0.276469 / 0.258489 (0.017980) | 0.306359 / 0.293841 (0.012519) | 0.028864 / 0.128546 (-0.099682) | 0.010253 / 0.075646 
(-0.065394) | 0.058264 / 0.419271 (-0.361008) | 0.032451 / 0.043533 (-0.011082) | 0.277336 / 0.255139 (0.022197) | 0.296137 / 0.283200 (0.012937) | 0.018094 / 0.141683 (-0.123589) | 1.119539 / 1.452155 (-0.332615) | 1.163116 / 1.492716 (-0.329600) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.092578 / 0.018006 (0.074572) | 0.300756 / 0.000490 (0.300267) | 0.000222 / 0.000200 (0.000022) | 0.000044 / 0.000054 (-0.000010) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022333 / 0.037411 (-0.015078) | 0.076632 / 0.014526 (0.062107) | 0.087829 / 0.176557 (-0.088727) | 0.127686 / 0.737135 (-0.609449) | 0.091314 / 0.296338 (-0.205024) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.297499 / 0.215209 (0.082290) | 2.889775 / 2.077655 (0.812120) | 1.598976 / 1.504120 (0.094856) | 1.478805 / 1.541195 (-0.062389) | 1.481818 / 1.468490 (0.013328) | 0.557972 / 4.584777 (-4.026804) | 2.453248 / 3.745712 (-1.292464) | 2.771823 / 5.269862 (-2.498039) | 1.721527 / 4.565676 (-2.844150) | 0.062786 / 0.424275 (-0.361489) | 0.005298 / 0.007607 (-0.002309) | 0.346660 / 0.226044 (0.120615) | 3.412262 / 2.268929 (1.143334) | 1.940240 / 55.444624 (-53.504384) | 1.654015 / 6.876477 (-5.222461) | 1.652039 / 2.142072 (-0.490034) | 0.636870 / 4.805227 (-4.168357) | 0.116213 / 6.500664 (-6.384451) | 0.040937 / 0.075469 (-0.034532) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.001605 / 1.841788 (-0.840183) | 11.986592 / 8.074308 (3.912284) | 10.231288 / 10.191392 (0.039896) | 0.130242 / 0.680424 (-0.550182) | 0.015764 / 0.534201 (-0.518437) | 0.289257 / 0.579283 (-0.290026) | 0.275996 / 0.434364 (-0.158368) | 0.323089 / 0.540337 (-0.217248) | 0.556383 / 1.386936 (-0.830553) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#773324159ad4afd7931588a710839b76670ddf87 \"CML watermark\")\n" ]
"2023-12-18T13:57:24Z"
"2023-12-18T17:17:54Z"
null
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/6509.diff", "html_url": "https://github.com/huggingface/datasets/pull/6509", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/6509.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6509" }
I want to improve the error message for datasets like https://huggingface.co/datasets/m-a-p/COIG-CQIA Cc @albertvillanova @severo is this new error ok ? Or should I use a dedicated error class ? New: ```python Traceback (most recent call last): File "/Users/quentinlhoest/hf/datasets/src/datasets/builder.py", line 1920, in _prepare_split_single writer.write_table(table) File "/Users/quentinlhoest/hf/datasets/src/datasets/arrow_writer.py", line 574, in write_table pa_table = table_cast(pa_table, self._schema) File "/Users/quentinlhoest/hf/datasets/src/datasets/table.py", line 2322, in table_cast return cast_table_to_schema(table, schema) File "/Users/quentinlhoest/hf/datasets/src/datasets/table.py", line 2276, in cast_table_to_schema raise CastError( datasets.table.CastError: Couldn't cast instruction: string other: string index: string domain: list<item: string> child 0, item: string output: string task_type: struct<major: list<item: string>, minor: list<item: string>> child 0, major: list<item: string> child 0, item: string child 1, minor: list<item: string> child 0, item: string task_name_in_eng: string input: string to {'answer_from': Value(dtype='string', id=None), 'instruction': Value(dtype='string', id=None), 'human_verified': Value(dtype='bool', id=None), 'domain': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), 'output': Value(dtype='string', id=None), 'task_type': {'major': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), 'minor': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None)}, 'copyright': Value(dtype='string', id=None), 'input': Value(dtype='string', id=None)} because column names don't match During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/Users/quentinlhoest/hf/datasets/playground/ttest.py", line 74, in <module> load_dataset("m-a-p/COIG-CQIA") File "/Users/quentinlhoest/hf/datasets/src/datasets/load.py", line 2529, in load_dataset builder_instance.download_and_prepare( File "/Users/quentinlhoest/hf/datasets/src/datasets/builder.py", line 936, in download_and_prepare self._download_and_prepare( File "/Users/quentinlhoest/hf/datasets/src/datasets/builder.py", line 1031, in _download_and_prepare self._prepare_split(split_generator, **prepare_split_kwargs) File "/Users/quentinlhoest/hf/datasets/src/datasets/builder.py", line 1791, in _prepare_split for job_id, done, content in self._prepare_split_single( File "/Users/quentinlhoest/hf/datasets/src/datasets/builder.py", line 1922, in _prepare_split_single raise DatasetGenerationCastError.from_cast_error( datasets.exceptions.DatasetGenerationCastError: An error occurred while generating the dataset All the data files must have the same columns, but at some point there are 3 new columns (other, index, task_name_in_eng) and 3 missing columns (answer_from, copyright, human_verified). 
This happened while the json dataset builder was generating data using hf://datasets/m-a-p/COIG-CQIA/coig_pc/coig_pc_core_sample.json (at revision b7b7ecf290f6515036c7c04bd8537228ac2eb474) Please either edit the data files to have matching columns, or separate them into different configurations (see docs at https://hf.co/docs/hub/datasets-manual-configuration#multiple-configurations) ``` Previously: ```python Traceback (most recent call last): File "/Users/quentinlhoest/hf/datasets/src/datasets/builder.py", line 1931, in _prepare_split_single writer.write_table(table) File "/Users/quentinlhoest/hf/datasets/src/datasets/arrow_writer.py", line 574, in write_table pa_table = table_cast(pa_table, self._schema) File "/Users/quentinlhoest/hf/datasets/src/datasets/table.py", line 2295, in table_cast return cast_table_to_schema(table, schema) File "/Users/quentinlhoest/hf/datasets/src/datasets/table.py", line 2253, in cast_table_to_schema raise ValueError(f"Couldn't cast\n{table.schema}\nto\n{features}\nbecause column names don't match") ValueError: Couldn't cast task_type: struct<major: list<item: string>, minor: list<item: string>> child 0, major: list<item: string> child 0, item: string child 1, minor: list<item: string> child 0, item: string other: string instruction: string task_name_in_eng: string domain: list<item: string> child 0, item: string index: string output: string input: string to {'human_verified': Value(dtype='bool', id=None), 'task_type': {'major': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), 'minor': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None)}, 'answer_from': Value(dtype='string', id=None), 'copyright': Value(dtype='string', id=None), 'instruction': Value(dtype='string', id=None), 'domain': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), 'output': Value(dtype='string', id=None), 'input': Value(dtype='string', id=None)} because column names don't match The above exception was the direct cause of the following exception: Traceback (most recent call last): File "/Users/quentinlhoest/hf/datasets/playground/ttest.py", line 74, in <module> load_dataset("m-a-p/COIG-CQIA") File "/Users/quentinlhoest/hf/datasets/src/datasets/load.py", line 2529, in load_dataset builder_instance.download_and_prepare( File "/Users/quentinlhoest/hf/datasets/src/datasets/builder.py", line 949, in download_and_prepare self._download_and_prepare( File "/Users/quentinlhoest/hf/datasets/src/datasets/builder.py", line 1044, in _download_and_prepare self._prepare_split(split_generator, **prepare_split_kwargs) File "/Users/quentinlhoest/hf/datasets/src/datasets/builder.py", line 1804, in _prepare_split for job_id, done, content in self._prepare_split_single( File "/Users/quentinlhoest/hf/datasets/src/datasets/builder.py", line 1949, in _prepare_split_single raise DatasetGenerationError("An error occurred while generating the dataset") from e datasets.builder.DatasetGenerationError: An error occurred while generating the dataset ```
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6509/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6509/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/3908
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3908/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3908/comments
https://api.github.com/repos/huggingface/datasets/issues/3908/events
https://github.com/huggingface/datasets/pull/3908
1,168,576,963
PR_kwDODunzps40Z_9F
3,908
Update README.md for SQuAD v2 metric
{ "avatar_url": "https://avatars.githubusercontent.com/u/14205986?v=4", "events_url": "https://api.github.com/users/sashavor/events{/privacy}", "followers_url": "https://api.github.com/users/sashavor/followers", "following_url": "https://api.github.com/users/sashavor/following{/other_user}", "gists_url": "https://api.github.com/users/sashavor/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/sashavor", "id": 14205986, "login": "sashavor", "node_id": "MDQ6VXNlcjE0MjA1OTg2", "organizations_url": "https://api.github.com/users/sashavor/orgs", "received_events_url": "https://api.github.com/users/sashavor/received_events", "repos_url": "https://api.github.com/users/sashavor/repos", "site_admin": false, "starred_url": "https://api.github.com/users/sashavor/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sashavor/subscriptions", "type": "User", "url": "https://api.github.com/users/sashavor" }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3908). All of your documentation changes will be reflected on that endpoint." ]
"2022-03-14T15:53:10Z"
"2022-03-15T17:04:11Z"
"2022-03-15T17:04:11Z"
NONE
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/3908.diff", "html_url": "https://github.com/huggingface/datasets/pull/3908", "merged_at": "2022-03-15T17:04:10Z", "patch_url": "https://github.com/huggingface/datasets/pull/3908.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3908" }
Putting "Values from popular papers" as a subsection of "Output values"
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3908/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3908/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/313
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/313/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/313/comments
https://api.github.com/repos/huggingface/datasets/issues/313/events
https://github.com/huggingface/datasets/pull/313
645,390,088
MDExOlB1bGxSZXF1ZXN0NDM5ODc4MDg5
313
Add MWSC
{ "avatar_url": "https://avatars.githubusercontent.com/u/13795113?v=4", "events_url": "https://api.github.com/users/ghomasHudson/events{/privacy}", "followers_url": "https://api.github.com/users/ghomasHudson/followers", "following_url": "https://api.github.com/users/ghomasHudson/following{/other_user}", "gists_url": "https://api.github.com/users/ghomasHudson/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/ghomasHudson", "id": 13795113, "login": "ghomasHudson", "node_id": "MDQ6VXNlcjEzNzk1MTEz", "organizations_url": "https://api.github.com/users/ghomasHudson/orgs", "received_events_url": "https://api.github.com/users/ghomasHudson/received_events", "repos_url": "https://api.github.com/users/ghomasHudson/repos", "site_admin": false, "starred_url": "https://api.github.com/users/ghomasHudson/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ghomasHudson/subscriptions", "type": "User", "url": "https://api.github.com/users/ghomasHudson" }
[]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/patrickvonplaten", "id": 23423619, "login": "patrickvonplaten", "node_id": "MDQ6VXNlcjIzNDIzNjE5", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "site_admin": false, "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "type": "User", "url": "https://api.github.com/users/patrickvonplaten" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/patrickvonplaten", "id": 23423619, "login": "patrickvonplaten", "node_id": "MDQ6VXNlcjIzNDIzNjE5", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "site_admin": false, "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "type": "User", "url": "https://api.github.com/users/patrickvonplaten" }, { "avatar_url": "https://avatars.githubusercontent.com/u/38249783?v=4", "events_url": "https://api.github.com/users/mariamabarham/events{/privacy}", "followers_url": "https://api.github.com/users/mariamabarham/followers", "following_url": "https://api.github.com/users/mariamabarham/following{/other_user}", "gists_url": "https://api.github.com/users/mariamabarham/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariamabarham", "id": 38249783, "login": "mariamabarham", "node_id": "MDQ6VXNlcjM4MjQ5Nzgz", "organizations_url": "https://api.github.com/users/mariamabarham/orgs", "received_events_url": "https://api.github.com/users/mariamabarham/received_events", "repos_url": "https://api.github.com/users/mariamabarham/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariamabarham/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariamabarham/subscriptions", "type": "User", "url": "https://api.github.com/users/mariamabarham" }, { "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" } ]
null
[ "Looks good to me" ]
"2020-06-25T09:22:02Z"
"2020-06-30T08:28:11Z"
"2020-06-30T08:28:11Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/313.diff", "html_url": "https://github.com/huggingface/datasets/pull/313", "merged_at": "2020-06-30T08:28:10Z", "patch_url": "https://github.com/huggingface/datasets/pull/313.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/313" }
Adding the [Modified Winograd Schema Challenge](https://github.com/salesforce/decaNLP/blob/master/local_data/schema.txt) dataset, which formed part of the [decaNLP](http://decanlp.com/) benchmark. Not sure how much use people would find for it outside of the benchmark, but it is general purpose. Code is heavily borrowed from the [decaNLP repo](https://github.com/salesforce/decaNLP/blob/1e9605f246b9e05199b28bde2a2093bc49feeeaa/text/torchtext/datasets/generic.py#L773-L877). There are a few (possibly overly opinionated) design choices I made: - I used the train/test/dev split [buried in the decaNLP code](https://github.com/salesforce/decaNLP/blob/1e9605f246b9e05199b28bde2a2093bc49feeeaa/text/torchtext/datasets/generic.py#L852-L855) - I split out each example into the 2 alternatives. Originally the data uses the format: ``` The city councilmen refused the demonstrators a permit because they [feared/advocated] violence. Who [feared/advocated] violence? councilmen/demonstrators ``` I split into the 2 variants: ``` The city councilmen refused the demonstrators a permit because they feared violence. Who feared violence? councilmen/demonstrators The city councilmen refused the demonstrators a permit because they advocated violence. Who advocated violence? councilmen/demonstrators ``` I can't see any use for having the options combined into a single example (splitting them is [the way decaNLP processes them](https://github.com/salesforce/decaNLP/blob/1e9605f246b9e05199b28bde2a2093bc49feeeaa/text/torchtext/datasets/generic.py#L846-L850)). You can't train on both versions with them combined, and splitting the examples later would be a pain to do. I think [winogrande.py](https://github.com/huggingface/nlp/blob/master/datasets/winogrande/winogrande.py) presents the data in this way? - I've not used the decaNLP framing (appending the options to the question e.g. `Who feared violence? -- councilmen or demonstrators?`) but left it more generic by adding the options as a new key: `"options":["councilmen","demonstrators"]` This should be an easy thing to change using `map` if needed by a specific application. The dataset is working as-is, but if anyone has any thoughts/preferences on the design decisions here I'm definitely open to different choices.
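For illustration, recovering the decaNLP framing from the generic `options` field could look roughly like the snippet below (the `question` field name and the `mwsc` dataset name are assumptions based on this PR, not guaranteed):

```python
from datasets import load_dataset


def add_options_to_question(example):
    # "Who feared violence?" + ["councilmen", "demonstrators"]
    # -> "Who feared violence? -- councilmen or demonstrators?"
    example["question"] = f'{example["question"]} -- {" or ".join(example["options"])}?'
    return example


mwsc = load_dataset("mwsc")  # dataset name assumed once this PR is merged
mwsc = mwsc.map(add_options_to_question)
```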
{ "+1": 3, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 3, "url": "https://api.github.com/repos/huggingface/datasets/issues/313/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/313/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/4490
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4490/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4490/comments
https://api.github.com/repos/huggingface/datasets/issues/4490/events
https://github.com/huggingface/datasets/issues/4490
1,270,719,074
I_kwDODunzps5LvaJi
4,490
Use `torch.nested_tensor` for arrays of varying length in torch formatter
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
open
false
null
[]
null
[ "What's the current behavior?", "Currently, we return a list of Torch tensors if their shapes don't match. If they do, we consolidate them into a single Torch tensor." ]
"2022-06-14T12:19:40Z"
"2023-07-07T13:02:58Z"
null
CONTRIBUTOR
null
null
null
Use `torch.nested_tensor` for arrays of varying length in `TorchFormatter`. The PyTorch API of nested tensors is in the prototype stage, so wait for it to become more mature.
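For reference, a sketch of the prototype API as it exists in recent PyTorch releases (the module path and behavior may still change while nested tensors are experimental):

```python
import torch

# Two sequences of different lengths that a single stacked tensor cannot hold without padding.
a = torch.tensor([1, 2, 3])
b = torch.tensor([4, 5])

# Prototype API: pack them into one nested tensor instead of returning a Python list.
nt = torch.nested.nested_tensor([a, b])
print(nt.is_nested)  # True
```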
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/4490/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4490/timeline
null
null
false
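The record above proposes returning a single nested tensor instead of a Python list when row lengths differ. A rough sketch of the idea using the prototype `torch.nested` API available in recent PyTorch releases; this is not the actual `TorchFormatter` code, just an illustration of the packing step.

```python
import torch

# Two variable-length rows, as the torch formatter might receive them today.
rows = [torch.tensor([1, 2, 3]), torch.tensor([4, 5])]

# Current behaviour per the comments: shapes differ, so a plain list is returned.
# Proposed behaviour: pack the rows into one nested tensor instead.
nested = torch.nested.nested_tensor(rows)

print(nested.is_nested)                    # True
print([t.shape for t in nested.unbind()])  # [torch.Size([3]), torch.Size([2])]
```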
https://api.github.com/repos/huggingface/datasets/issues/1164
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1164/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1164/comments
https://api.github.com/repos/huggingface/datasets/issues/1164/events
https://github.com/huggingface/datasets/pull/1164
757,716,575
MDExOlB1bGxSZXF1ZXN0NTMzMDQyMjA1
1,164
Add DaNe dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/28562991?v=4", "events_url": "https://api.github.com/users/ophelielacroix/events{/privacy}", "followers_url": "https://api.github.com/users/ophelielacroix/followers", "following_url": "https://api.github.com/users/ophelielacroix/following{/other_user}", "gists_url": "https://api.github.com/users/ophelielacroix/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/ophelielacroix", "id": 28562991, "login": "ophelielacroix", "node_id": "MDQ6VXNlcjI4NTYyOTkx", "organizations_url": "https://api.github.com/users/ophelielacroix/orgs", "received_events_url": "https://api.github.com/users/ophelielacroix/received_events", "repos_url": "https://api.github.com/users/ophelielacroix/repos", "site_admin": false, "starred_url": "https://api.github.com/users/ophelielacroix/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ophelielacroix/subscriptions", "type": "User", "url": "https://api.github.com/users/ophelielacroix" }
[]
closed
false
null
[]
null
[ "Thanks, this looks great!\r\n\r\nFor the code quality test, it looks like `flake8` is throwing the error, so you can tun `flake8 datasets` locally and fix the errors it points out until it passes" ]
"2020-12-05T16:36:50Z"
"2020-12-08T12:50:18Z"
"2020-12-08T12:49:55Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/1164.diff", "html_url": "https://github.com/huggingface/datasets/pull/1164", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/1164.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1164" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1164/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1164/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/2020
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2020/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2020/comments
https://api.github.com/repos/huggingface/datasets/issues/2020/events
https://github.com/huggingface/datasets/pull/2020
826,961,126
MDExOlB1bGxSZXF1ZXN0NTg4OTE3MjYx
2,020
Remove unnecessary docstart check in conll-like datasets
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko" }
[]
closed
false
null
[]
null
[]
"2021-03-10T02:20:16Z"
"2021-03-11T13:33:37Z"
"2021-03-11T13:33:37Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/2020.diff", "html_url": "https://github.com/huggingface/datasets/pull/2020", "merged_at": "2021-03-11T13:33:37Z", "patch_url": "https://github.com/huggingface/datasets/pull/2020.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2020" }
Related to this PR: #1998. Additionally, this PR adds the docstart note to the conll2002 dataset card ([link](https://raw.githubusercontent.com/teropa/nlp/master/resources/corpora/conll2002/ned.train) to the raw data with `DOCSTART` lines).
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2020/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2020/timeline
null
null
true
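For context on the PR above, a generic sketch of how CoNLL-style files are usually parsed, with `-DOCSTART-` marker lines skipped. This is an illustration of the file format, not the removed check or the datasets loading script itself.

```python
def read_conll(path):
    """Yield (tokens, tags) sentence pairs from a CoNLL-style file."""
    tokens, tags = [], []
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            # -DOCSTART- markers and blank lines both end the current sentence.
            if not line or line.startswith("-DOCSTART-"):
                if tokens:
                    yield tokens, tags
                    tokens, tags = [], []
                continue
            fields = line.split()
            tokens.append(fields[0])   # surface token
            tags.append(fields[-1])    # last column, e.g. the NER tag
    if tokens:
        yield tokens, tags
```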
https://api.github.com/repos/huggingface/datasets/issues/5309
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5309/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5309/comments
https://api.github.com/repos/huggingface/datasets/issues/5309/events
https://github.com/huggingface/datasets/pull/5309
1,466,758,987
PR_kwDODunzps5D0g1y
5,309
Close stream in `ArrowWriter.finalize` before inference error
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko" }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
"2022-11-28T16:59:39Z"
"2022-12-07T12:55:20Z"
"2022-12-07T12:52:15Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/5309.diff", "html_url": "https://github.com/huggingface/datasets/pull/5309", "merged_at": "2022-12-07T12:52:15Z", "patch_url": "https://github.com/huggingface/datasets/pull/5309.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5309" }
Ensure the file stream is closed in `ArrowWriter.finalize` before raising the `SchemaInferenceError` to avoid the `PermissionError` on Windows in `incomplete_dir`'s `shutil.rmtree`.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5309/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5309/timeline
null
null
true
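The general shape of the fix described above, closing the stream before the error propagates so that a later `shutil.rmtree` on Windows does not hit a `PermissionError`. A sketch only: the exception name follows the PR description, while the rest of the function is invented for illustration.

```python
class SchemaInferenceError(Exception):
    pass

def finalize(stream, schema):
    try:
        if schema is None:
            raise SchemaInferenceError("Please pass `features` or at least one example")
        # ... write the footer / metadata using the inferred schema ...
    finally:
        # Close the handle even on error; an open handle on Windows keeps a lock on
        # the file and makes shutil.rmtree on the containing directory fail.
        stream.close()
```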
https://api.github.com/repos/huggingface/datasets/issues/6328
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6328/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6328/comments
https://api.github.com/repos/huggingface/datasets/issues/6328/events
https://github.com/huggingface/datasets/issues/6328
1,955,857,904
I_kwDODunzps50lAXw
6,328
Text-to-speech networks first convert the given text into an intermediate representation
{ "avatar_url": "https://avatars.githubusercontent.com/u/147399213?v=4", "events_url": "https://api.github.com/users/shabnam706/events{/privacy}", "followers_url": "https://api.github.com/users/shabnam706/followers", "following_url": "https://api.github.com/users/shabnam706/following{/other_user}", "gists_url": "https://api.github.com/users/shabnam706/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/shabnam706", "id": 147399213, "login": "shabnam706", "node_id": "U_kgDOCMkiLQ", "organizations_url": "https://api.github.com/users/shabnam706/orgs", "received_events_url": "https://api.github.com/users/shabnam706/received_events", "repos_url": "https://api.github.com/users/shabnam706/repos", "site_admin": false, "starred_url": "https://api.github.com/users/shabnam706/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/shabnam706/subscriptions", "type": "User", "url": "https://api.github.com/users/shabnam706" }
[]
closed
false
null
[]
null
[ "شبکه های متن به گفتار ابتدا متن داده شده را به بازنمایی میانی" ]
"2023-10-22T11:07:21Z"
"2023-10-23T09:22:38Z"
"2023-10-23T09:22:38Z"
NONE
null
null
null
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6328/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6328/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/3158
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3158/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3158/comments
https://api.github.com/repos/huggingface/datasets/issues/3158/events
https://github.com/huggingface/datasets/pull/3158
1,035,158,070
PR_kwDODunzps4toGpe
3,158
Fix string encoding for Value type
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
[ "That was fast! \r\n" ]
"2021-10-25T13:44:13Z"
"2021-10-25T14:12:06Z"
"2021-10-25T14:12:05Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/3158.diff", "html_url": "https://github.com/huggingface/datasets/pull/3158", "merged_at": "2021-10-25T14:12:05Z", "patch_url": "https://github.com/huggingface/datasets/pull/3158.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3158" }
Some metrics have `string` features, but currently this fails if users pass integers instead. Indeed, the feature encoding that handles the conversion of the user's objects to the right Python type is missing a case for `string`, while it already works as expected for integers, floats and booleans. Here is example code that didn't work previously, but works with this fix: ```python import datasets # Note that 'id' is an integer while the SQuAD metric uses strings predictions = [{'prediction_text': '1976', 'id': 5}] references = [{'answers': {'answer_start': [97], 'text': ['1976']}, 'id': 5}] squad_metric = datasets.load_metric("squad") squad_metric.add_batch(predictions=predictions, references=references) results = squad_metric.compute() # {'exact_match': 100.0, 'f1': 100.0} ``` cc @sgugger @philschmid
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 2, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 2, "url": "https://api.github.com/repos/huggingface/datasets/issues/3158/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3158/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/3012
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3012/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3012/comments
https://api.github.com/repos/huggingface/datasets/issues/3012/events
https://github.com/huggingface/datasets/pull/3012
1,014,958,931
PR_kwDODunzps4soRTu
3,012
Replace item with float in metrics
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
closed
false
null
[]
null
[]
"2021-10-04T09:45:28Z"
"2021-10-04T11:30:34Z"
"2021-10-04T11:30:33Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/3012.diff", "html_url": "https://github.com/huggingface/datasets/pull/3012", "merged_at": "2021-10-04T11:30:33Z", "patch_url": "https://github.com/huggingface/datasets/pull/3012.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3012" }
As pointed out by @mariosasko in #3001, calling `float()` instead of `.item()` is faster. Moreover, it might avoid potential issues if any of the third-party functions eventually returns a `float` instead of an `np.float64`. Related to #3001.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3012/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3012/timeline
null
null
true
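A tiny illustration of the change described above: both `.item()` and `float()` turn a NumPy scalar into a Python float, but only `float()` also tolerates a third-party function that already returns a plain `float`.

```python
import numpy as np

score = np.float64(0.87)

print(score.item())  # 0.87 — works for NumPy scalars / 0-d arrays only
print(float(score))  # 0.87 — works here too
print(float(0.87))   # 0.87 — still fine if the metric already returned a plain float
# (0.87).item() would raise AttributeError: 'float' object has no attribute 'item'
```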
https://api.github.com/repos/huggingface/datasets/issues/397
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/397/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/397/comments
https://api.github.com/repos/huggingface/datasets/issues/397/events
https://github.com/huggingface/datasets/pull/397
657,510,856
MDExOlB1bGxSZXF1ZXN0NDQ5NjE1MDA4
397
Add contiguous sharding
{ "avatar_url": "https://avatars.githubusercontent.com/u/4564897?v=4", "events_url": "https://api.github.com/users/jarednielsen/events{/privacy}", "followers_url": "https://api.github.com/users/jarednielsen/followers", "following_url": "https://api.github.com/users/jarednielsen/following{/other_user}", "gists_url": "https://api.github.com/users/jarednielsen/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/jarednielsen", "id": 4564897, "login": "jarednielsen", "node_id": "MDQ6VXNlcjQ1NjQ4OTc=", "organizations_url": "https://api.github.com/users/jarednielsen/orgs", "received_events_url": "https://api.github.com/users/jarednielsen/received_events", "repos_url": "https://api.github.com/users/jarednielsen/repos", "site_admin": false, "starred_url": "https://api.github.com/users/jarednielsen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jarednielsen/subscriptions", "type": "User", "url": "https://api.github.com/users/jarednielsen" }
[]
closed
false
null
[]
null
[]
"2020-07-15T17:02:58Z"
"2020-07-17T16:59:31Z"
"2020-07-17T16:59:31Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/397.diff", "html_url": "https://github.com/huggingface/datasets/pull/397", "merged_at": "2020-07-17T16:59:30Z", "patch_url": "https://github.com/huggingface/datasets/pull/397.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/397" }
This makes dset.shard() play nice with nlp.concatenate_datasets(). When I originally wrote the shard() method, I was thinking about a distributed training scenario, but https://github.com/huggingface/nlp/pull/389 also uses it for splitting the dataset for distributed preprocessing. Usage: ``` nlp.concatenate_datasets([dset.shard(n, i, contiguous=True) for i in range(n)]) ```
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/397/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/397/timeline
null
null
true
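A sketch of the two index layouts behind the `contiguous=True` option described above (round-robin by default versus contiguous blocks). The helper below is illustrative and not the library's internal implementation.

```python
def shard_indices(num_rows, num_shards, index, contiguous=False):
    """Return the row indices that shard `index` of `num_shards` would receive."""
    if contiguous:
        # Contiguous blocks: shard 0 gets rows [0, k), shard 1 gets [k, 2k), ...
        div, mod = divmod(num_rows, num_shards)
        start = div * index + min(index, mod)
        end = start + div + (1 if index < mod else 0)
        return list(range(start, end))
    # Default round-robin: shard i gets rows i, i + num_shards, i + 2 * num_shards, ...
    return list(range(index, num_rows, num_shards))

# Concatenating contiguous shards reproduces the original row order:
order = sum((shard_indices(10, 3, i, contiguous=True) for i in range(3)), [])
print(order)  # [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
```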
https://api.github.com/repos/huggingface/datasets/issues/95
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/95/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/95/comments
https://api.github.com/repos/huggingface/datasets/issues/95/events
https://github.com/huggingface/datasets/pull/95
617,703,037
MDExOlB1bGxSZXF1ZXN0NDE3NTY5NzA4
95
Replace checksums files by Dataset infos json
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
[ "Great! LGTM :-) ", "> Ok, really clean!\r\n> I like the logic (not a huge fan of using `_asdict_inner` but it makes sense).\r\n> I think it's a nice improvement!\r\n> \r\n> How should we update the files in the repo? Run a big job on a server or on somebody's computer who has most of the datasets already downloaded?\r\n\r\nMaybe we can split the updates among us...IMO most datasets run very quickly. \r\nI think I've downloaded 50 datasets and 80% are loaded in <5min, 15% in <1h and then `wmt` which is still downloading (since 12h). \r\nI deleted my cache because the `wmt` downloads require quite a lot of space, so I only have parts of the `wmt` datasets on my computer. \r\n\r\n@mariamabarham I guess you have downloaded most of the datasets no? " ]
"2020-05-13T19:36:16Z"
"2020-05-14T08:58:43Z"
"2020-05-14T08:58:42Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/95.diff", "html_url": "https://github.com/huggingface/datasets/pull/95", "merged_at": "2020-05-14T08:58:42Z", "patch_url": "https://github.com/huggingface/datasets/pull/95.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/95" }
### Better verifications when loading a dataset I replaced the `urls_checksums` directory that used to contain `checksums.txt` and `cached_sizes.txt` with a single file, `dataset_infos.json`. It's just a dict `config_name` -> `DatasetInfo`. It simplifies and improves how verifications of checksums and split sizes are done, as they're all stored in `DatasetInfo` (one per config). Also, already having access to `DatasetInfo` makes it possible to check disk space before running `download_and_prepare` for a given config. The dataset infos JSON file is human-readable; you can take a look at the squad one that I generated in this PR. ### Renaming In line with these changes, I did some renaming: `save_checksums` -> `save_infos` `ignore_checksums` -> `ignore_verifications` For example, when you are creating a dataset you have to run ```nlp-cli test path/to/my/dataset --save_infos --all_configs``` instead of ```nlp-cli test path/to/my/dataset --save_checksums --all_configs``` ### And now, the fun part We'll have to rerun `nlp-cli test ... --save_infos --all_configs` for all the datasets ----------------- feedback appreciated!
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/95/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/95/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/3333
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3333/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3333/comments
https://api.github.com/repos/huggingface/datasets/issues/3333/events
https://github.com/huggingface/datasets/issues/3333
1,065,346,919
I_kwDODunzps4_f-dn
3,333
load JSON files, get the errors
{ "avatar_url": "https://avatars.githubusercontent.com/u/38966558?v=4", "events_url": "https://api.github.com/users/PatricYan/events{/privacy}", "followers_url": "https://api.github.com/users/PatricYan/followers", "following_url": "https://api.github.com/users/PatricYan/following{/other_user}", "gists_url": "https://api.github.com/users/PatricYan/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/PatricYan", "id": 38966558, "login": "PatricYan", "node_id": "MDQ6VXNlcjM4OTY2NTU4", "organizations_url": "https://api.github.com/users/PatricYan/orgs", "received_events_url": "https://api.github.com/users/PatricYan/received_events", "repos_url": "https://api.github.com/users/PatricYan/repos", "site_admin": false, "starred_url": "https://api.github.com/users/PatricYan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/PatricYan/subscriptions", "type": "User", "url": "https://api.github.com/users/PatricYan" }
[]
closed
false
null
[]
null
[ "Hi ! The message you're getting is not an error. It simply says that your JSON dataset is being prepared to a location in `/root/.cache/huggingface/datasets`", "> \r\n\r\nbut I want to load local JSON file by command\r\n`python3 run.py --do_train --task qa --dataset squad-retrain-data/train-v2.0.json --output_dir ./re_trained_model/`\r\n\r\n**squad-retrain-data/train-v2.0.json** is the local JSON file, how to load it and map it to a special structure?", "You can load it with `dataset = datasets.load_dataset('json', data_files=args.dataset)` as you said.\r\nThen if you need to apply additional processing to map it to a special structure, you can use rename columns or use `dataset.map`. For more information, you can check the documentation here: https://huggingface.co/docs/datasets/process.html\r\n\r\nAlso feel free to share your `run.py` code so we can take a look", "```\r\n# Dataset selection\r\n if args.dataset.endswith('.json') or args.dataset.endswith('.jsonl'):\r\n dataset_id = None\r\n # Load from local json/jsonl file\r\n dataset = datasets.load_dataset('json', data_files=args.dataset)\r\n # By default, the \"json\" dataset loader places all examples in the train split,\r\n # so if we want to use a jsonl file for evaluation we need to get the \"train\" split\r\n # from the loaded dataset\r\n eval_split = 'train'\r\n else:\r\n default_datasets = {'qa': ('squad',), 'nli': ('snli',)}\r\n dataset_id = tuple(args.dataset.split(':')) if args.dataset is not None else \\\r\n default_datasets[args.task]\r\n # MNLI has two validation splits (one with matched domains and one with mismatched domains). Most datasets just have one \"validation\" split\r\n eval_split = 'validation_matched' if dataset_id == ('glue', 'mnli') else 'validation'\r\n # Load the raw data\r\n dataset = datasets.load_dataset(*dataset_id)\r\n```\r\n\r\nI want to load JSON squad dataset instead `dataset = datasets.load_dataset('squad')` to retrain the model. \r\n", "If your JSON has the same format as the SQuAD dataset, then you need to pass `field=\"data\"` to `load_dataset`, since the SQuAD format is one big JSON object in which the \"data\" field contains the list of questions and answers.\r\n```python\r\ndataset = datasets.load_dataset('json', data_files=args.dataset, field=\"data\")\r\n```\r\n\r\nLet me know if that helps :)\r\n\r\n", "Yes, code works. but the format is not as expected.\r\n```\r\ndataset = datasets.load_dataset('json', data_files=args.dataset, field=\"data\")\r\n```\r\n```\r\npython3 run.py --do_train --task qa --dataset squad --output_dir ./re_trained_model/\r\n```\r\n************ train_dataset: Dataset({\r\n features: ['id', 'title', 'context', 'question', 'answers'],\r\n num_rows: 87599\r\n})\r\n\r\n\r\n```\r\npython3 run.py --do_train --task qa --dataset squad-retrain-data/train-v2.0.json --output_dir ./re_trained_model/\r\n```\r\n************ train_dataset: Dataset({\r\n features: ['title', 'paragraphs'],\r\n num_rows: 442\r\n})\r\n\r\nI want the JSON to have the same format as before features. https://github.com/huggingface/datasets/blob/master/datasets/squad_v2/squad_v2.py is the script dealing with **squad** but how can I apply it by using JSON? ", "Ok I see, you have the paragraphs so you just need to process them to extract the questions and answers. 
I think you can process the SQuAD-like data this way:\r\n```python\r\ndef process_squad(articles):\r\n out = {\r\n \"title\": [],\r\n \"context\": [],\r\n \"question\": [],\r\n \"id\": [],\r\n \"answers\": [],\r\n }\r\n for title, paragraphs in zip(articles[\"title\"], articles[\"paragraphs\"]):\r\n for paragraph in paragraphs:\r\n for qa in paragraph[\"qas\"]:\r\n out[\"title\"].append(title)\r\n out[\"context\"].append(paragraph[\"context\"])\r\n out[\"question\"].append(qa[\"question\"])\r\n out[\"id\"].append(qa[\"id\"])\r\n out[\"answers\"].append({\r\n \"answer_start\": [answer[\"answer_start\"] for answer in qa[\"answers\"]],\r\n \"text\": [answer[\"text\"] for answer in qa[\"answers\"]],\r\n })\r\n return out\r\n\r\ndataset = dataset.map(process_squad, batched=True, remove_columns=[\"paragraphs\"])\r\n```\r\n\r\nI adapted the code from [squad.py](https://github.com/huggingface/datasets/blob/master/datasets/squad/squad.py). The code takes as input a batch of articles (title + paragraphs) and gets all the questions and answers from the JSON structure.\r\n\r\nThe output is a dataset with `features: ['answers', 'context', 'id', 'question', 'title']`\r\n\r\nLet me know if that helps !\r\n", "Yes, this works. But how to get the training output during training the squad by **Trainer** \r\nfor example https://github.com/huggingface/transformers/blob/master/examples/pytorch/question-answering/trainer_qa.py \r\nI want the training inputs, labels, outputs for every epoch and step to produce the training dynamic graph", "I think you may need to implement your own Trainer, from the `QuestionAnsweringTrainer` for example.\r\nThis way you can have the flexibility of saving all the inputs/output used at each step", "does there have any function to be overwritten to do this?", "> does there have any function to be overwritten to do this?\r\n\r\nok, I overwrote the compute_loss, thank you.", "Hi, I add one field **example_id**, but I can't see it in the **comput_loss** function, how can I do this? 
below is the information of inputs\r\n\r\n```\r\n*********************** inputs: {'attention_mask': tensor([[1, 1, 1, ..., 0, 0, 0],\r\n [1, 1, 1, ..., 0, 0, 0],\r\n [1, 1, 1, ..., 0, 0, 0],\r\n ...,\r\n [1, 1, 1, ..., 0, 0, 0],\r\n [1, 1, 1, ..., 0, 0, 0],\r\n [1, 1, 1, ..., 0, 0, 0]], device='cuda:0'), 'end_positions': tensor([ 25, 97, 93, 44, 25, 112, 109, 134], device='cuda:0'), 'input_ids': tensor([[ 101, 2054, 2390, ..., 0, 0, 0],\r\n [ 101, 2054, 2515, ..., 0, 0, 0],\r\n [ 101, 2054, 2106, ..., 0, 0, 0],\r\n ...,\r\n [ 101, 2339, 2001, ..., 0, 0, 0],\r\n [ 101, 2054, 2515, ..., 0, 0, 0],\r\n [ 101, 2054, 2003, ..., 0, 0, 0]], device='cuda:0'), 'start_positions': tensor([ 20, 90, 89, 41, 25, 96, 106, 132], device='cuda:0'), 'token_type_ids': tensor([[0, 0, 0, ..., 0, 0, 0],\r\n [0, 0, 0, ..., 0, 0, 0],\r\n [0, 0, 0, ..., 0, 0, 0],\r\n ...,\r\n [0, 0, 0, ..., 0, 0, 0],\r\n [0, 0, 0, ..., 0, 0, 0],\r\n [0, 0, 0, ..., 0, 0, 0]], device='cuda:0')} \r\n```\r\n\r\n```\r\n# This function preprocesses a question answering dataset, tokenizing the question and context text\r\n# and finding the right offsets for the answer spans in the tokenized context (to use as labels).\r\n# Adapted from https://github.com/huggingface/transformers/blob/master/examples/pytorch/question-answering/run_qa.py\r\ndef prepare_train_dataset_qa(examples, tokenizer, max_seq_length=None):\r\n questions = [q.lstrip() for q in examples[\"question\"]]\r\n max_seq_length = tokenizer.model_max_length\r\n # tokenize both questions and the corresponding context\r\n # if the context length is longer than max_length, we split it to several\r\n # chunks of max_length\r\n tokenized_examples = tokenizer(\r\n questions,\r\n examples[\"context\"],\r\n truncation=\"only_second\",\r\n max_length=max_seq_length,\r\n stride=min(max_seq_length // 2, 128),\r\n return_overflowing_tokens=True,\r\n return_offsets_mapping=True,\r\n padding=\"max_length\"\r\n )\r\n\r\n # Since one example might give us several features if it has a long context,\r\n # we need a map from a feature to its corresponding example.\r\n sample_mapping = tokenized_examples.pop(\"overflow_to_sample_mapping\")\r\n # The offset mappings will give us a map from token to character position\r\n # in the original context. 
This will help us compute the start_positions\r\n # and end_positions to get the final answer string.\r\n offset_mapping = tokenized_examples.pop(\"offset_mapping\")\r\n\r\n tokenized_examples[\"start_positions\"] = []\r\n tokenized_examples[\"end_positions\"] = []\r\n\r\n tokenized_examples[\"example_id\"] = []\r\n\r\n for i, offsets in enumerate(offset_mapping):\r\n input_ids = tokenized_examples[\"input_ids\"][i]\r\n # We will label features not containing the answer the index of the CLS token.\r\n cls_index = input_ids.index(tokenizer.cls_token_id)\r\n sequence_ids = tokenized_examples.sequence_ids(i)\r\n # from the feature idx to sample idx\r\n sample_index = sample_mapping[i]\r\n # get the answer for a feature\r\n answers = examples[\"answers\"][sample_index]\r\n\r\n tokenized_examples[\"example_id\"].append(examples[\"id\"][sample_index])\r\n\r\n if len(answers[\"answer_start\"]) == 0:\r\n tokenized_examples[\"start_positions\"].append(cls_index)\r\n tokenized_examples[\"end_positions\"].append(cls_index)\r\n else:\r\n # Start/end character index of the answer in the text.\r\n start_char = answers[\"answer_start\"][0]\r\n end_char = start_char + len(answers[\"text\"][0])\r\n\r\n # Start token index of the current span in the text.\r\n token_start_index = 0\r\n while sequence_ids[token_start_index] != 1:\r\n token_start_index += 1\r\n\r\n # End token index of the current span in the text.\r\n token_end_index = len(input_ids) - 1\r\n while sequence_ids[token_end_index] != 1:\r\n token_end_index -= 1\r\n\r\n # Detect if the answer is out of the span (in which case this feature is labeled with the CLS index).\r\n if not (offsets[token_start_index][0] <= start_char and\r\n offsets[token_end_index][1] >= end_char):\r\n tokenized_examples[\"start_positions\"].append(cls_index)\r\n tokenized_examples[\"end_positions\"].append(cls_index)\r\n else:\r\n # Otherwise move the token_start_index and token_end_index to the two ends of the answer.\r\n # Note: we could go after the last offset if the answer is the last word (edge case).\r\n while token_start_index < len(offsets) and \\\r\n offsets[token_start_index][0] <= start_char:\r\n token_start_index += 1\r\n tokenized_examples[\"start_positions\"].append(\r\n token_start_index - 1)\r\n while offsets[token_end_index][1] >= end_char:\r\n token_end_index -= 1\r\n tokenized_examples[\"end_positions\"].append(token_end_index + 1)\r\n\r\n return tokenized_examples\r\n```" ]
"2021-11-28T14:29:58Z"
"2021-12-01T09:34:31Z"
"2021-12-01T03:57:48Z"
NONE
null
null
null
Hi, has this bug been fixed? When I load JSON files, I get the same errors with the command `!python3 run.py --do_train --task qa --dataset squad-retrain-data/train-v2.0.json --output_dir ./re_trained_model/` I changed the dataset loading to JSON by referring to https://huggingface.co/docs/datasets/loading.html `dataset = datasets.load_dataset('json', data_files=args.dataset)` Errors: `Downloading and preparing dataset json/default (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /root/.cache/huggingface/datasets/json/default-c1e124ad488911b8/0.0.0/45636811569ec4a6630521c18235dfbbab83b7ab572e3393c5ba68ccabe98264... ` _Originally posted by @yanllearnn in https://github.com/huggingface/datasets/issues/730#issuecomment-981095050_
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3333/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3333/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/5731
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5731/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5731/comments
https://api.github.com/repos/huggingface/datasets/issues/5731/events
https://github.com/huggingface/datasets/pull/5731
1,662,012,913
PR_kwDODunzps5N_7Un
5,731
Temporarily pin fsspec
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009735 / 0.011353 (-0.001618) | 0.010410 / 0.011008 (-0.000598) | 0.134986 / 0.038508 (0.096478) | 0.038392 / 0.023109 (0.015283) | 0.414451 / 0.275898 (0.138553) | 0.447775 / 0.323480 (0.124295) | 0.007223 / 0.007986 (-0.000763) | 0.006373 / 0.004328 (0.002045) | 0.102631 / 0.004250 (0.098381) | 0.048516 / 0.037052 (0.011464) | 0.410179 / 0.258489 (0.151690) | 0.467773 / 0.293841 (0.173932) | 0.053163 / 0.128546 (-0.075384) | 0.019801 / 0.075646 (-0.055845) | 0.452708 / 0.419271 (0.033436) | 0.068691 / 0.043533 (0.025159) | 0.405482 / 0.255139 (0.150343) | 0.457669 / 0.283200 (0.174470) | 0.113464 / 0.141683 (-0.028219) | 1.918143 / 1.452155 (0.465988) | 2.033123 / 1.492716 (0.540407) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.274564 / 0.018006 (0.256557) | 0.608855 / 0.000490 (0.608366) | 0.006266 / 0.000200 (0.006066) | 0.000105 / 0.000054 (0.000050) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.033704 / 0.037411 (-0.003708) | 0.130982 / 0.014526 (0.116456) | 0.143862 / 0.176557 (-0.032694) | 0.212622 / 0.737135 (-0.524513) | 0.148899 / 0.296338 (-0.147439) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.670968 / 0.215209 (0.455759) | 6.602911 / 2.077655 (4.525256) | 2.644290 
/ 1.504120 (1.140171) | 2.268593 / 1.541195 (0.727399) | 2.325393 / 1.468490 (0.856903) | 1.388156 / 4.584777 (-3.196621) | 5.958569 / 3.745712 (2.212857) | 3.310756 / 5.269862 (-1.959106) | 2.390953 / 4.565676 (-2.174724) | 0.147416 / 0.424275 (-0.276859) | 0.015201 / 0.007607 (0.007594) | 0.794109 / 0.226044 (0.568064) | 7.984855 / 2.268929 (5.715926) | 3.382275 / 55.444624 (-52.062349) | 2.676102 / 6.876477 (-4.200375) | 2.846743 / 2.142072 (0.704671) | 1.467523 / 4.805227 (-3.337704) | 0.283184 / 6.500664 (-6.217480) | 0.088655 / 0.075469 (0.013186) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.632765 / 1.841788 (-0.209022) | 19.102473 / 8.074308 (11.028165) | 25.632535 / 10.191392 (15.441143) | 0.255628 / 0.680424 (-0.424795) | 0.034655 / 0.534201 (-0.499546) | 0.564593 / 0.579283 (-0.014690) | 0.668339 / 0.434364 (0.233975) | 0.648414 / 0.540337 (0.108076) | 0.766735 / 1.386936 (-0.620201) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009658 / 0.011353 (-0.001695) | 0.006690 / 0.011008 (-0.004318) | 0.099151 / 0.038508 (0.060643) | 0.037092 / 0.023109 (0.013983) | 0.470354 / 0.275898 (0.194456) | 0.525863 / 0.323480 (0.202383) | 0.007593 / 0.007986 (-0.000393) | 0.006637 / 0.004328 (0.002308) | 0.098782 / 0.004250 (0.094532) | 0.058524 / 0.037052 (0.021471) | 0.502569 / 0.258489 (0.244080) | 0.526410 / 0.293841 (0.232569) | 0.059486 / 0.128546 (-0.069060) | 0.019742 / 0.075646 (-0.055904) | 0.119715 / 0.419271 (-0.299556) | 0.065269 / 0.043533 (0.021736) | 0.483327 / 0.255139 (0.228188) | 0.506148 / 0.283200 (0.222948) | 0.123178 / 0.141683 (-0.018505) | 1.916624 / 1.452155 (0.464470) | 2.051410 / 1.492716 (0.558694) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.286481 / 0.018006 (0.268475) | 0.597300 / 0.000490 (0.596810) | 0.008906 / 0.000200 (0.008706) | 0.000128 / 0.000054 (0.000074) |\n\n### Benchmark: 
benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031406 / 0.037411 (-0.006005) | 0.146748 / 0.014526 (0.132222) | 0.152898 / 0.176557 (-0.023658) | 0.212535 / 0.737135 (-0.524600) | 0.155577 / 0.296338 (-0.140761) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.660989 / 0.215209 (0.445780) | 6.688530 / 2.077655 (4.610875) | 3.039278 / 1.504120 (1.535159) | 2.660357 / 1.541195 (1.119162) | 2.696912 / 1.468490 (1.228422) | 1.259760 / 4.584777 (-3.325017) | 5.922452 / 3.745712 (2.176740) | 5.304200 / 5.269862 (0.034338) | 2.823928 / 4.565676 (-1.741748) | 0.148118 / 0.424275 (-0.276157) | 0.015575 / 0.007607 (0.007968) | 0.794404 / 0.226044 (0.568360) | 8.233651 / 2.268929 (5.964722) | 3.777482 / 55.444624 (-51.667142) | 3.064924 / 6.876477 (-3.811552) | 3.117803 / 2.142072 (0.975731) | 1.479559 / 4.805227 (-3.325668) | 0.254070 / 6.500664 (-6.246594) | 0.086806 / 0.075469 (0.011337) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.735515 / 1.841788 (-0.106273) | 18.934157 / 8.074308 (10.859848) | 22.645248 / 10.191392 (12.453856) | 0.227073 / 0.680424 (-0.453351) | 0.030650 / 0.534201 (-0.503551) | 0.594619 / 0.579283 (0.015336) | 0.653304 / 0.434364 (0.218940) | 0.707484 / 0.540337 (0.167147) | 0.823327 / 1.386936 (-0.563610) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#273392966e434286f4f5ba2ad596730bff11056d \"CML watermark\")\n" ]
"2023-04-11T08:33:15Z"
"2023-04-11T08:57:45Z"
"2023-04-11T08:47:55Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/5731.diff", "html_url": "https://github.com/huggingface/datasets/pull/5731", "merged_at": "2023-04-11T08:47:55Z", "patch_url": "https://github.com/huggingface/datasets/pull/5731.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5731" }
Fix #5730.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5731/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5731/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/5137
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5137/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5137/comments
https://api.github.com/repos/huggingface/datasets/issues/5137/events
https://github.com/huggingface/datasets/issues/5137
1,414,642,723
I_kwDODunzps5UUbwj
5,137
Align task tags in dataset metadata
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[ { "color": "0e8a16", "default": false, "description": "Contribution to a dataset script", "id": 4564477500, "name": "dataset contribution", "node_id": "LA_kwDODunzps8AAAABEBBmPA", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20contribution" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" } ]
null
[ "I removed all the invalid task_ids in datasts without namespace, based on the <s>(internal)</s> types.ts", "(Types.ts is not internal it's public)", "I have opened PRs to fix the task_ids in all datasets within a namespace as well.\r\n\r\nWorking on task_categories...", "For future reference: this fix had some complications\r\n\r\nWhen trying to open a PR to fix the task tags, an exception was thrown if:\r\n- the metadata contained \"languages\" or \"licenses\" (instead of \"language\" or \"license\")\r\n- the metadata contained a non-valid language: `en-US` (instead of `en`), `no` (instead of `'no'`),...\r\n- the metadata contained a non-valid license\r\n- either `task_categories` or `task_ids` was not an array (a dict for each config)\r\n- the metadata contained non-valid tag names\r\n\r\nErrors:\r\n```\r\nValueError: - Error: \"languages\" is deprecated. Use \"language\" instead.\r\n```\r\n```\r\nValueError: - Error: \"licenses\" is deprecated. Use \"license\" instead.\r\n```\r\n```\r\nValueError: - Error: \"language[17]\" must only contain lowercase characters\r\n```\r\n```\r\nValueError: - Error: \"language[0]\" with value \"cz, de, it\" is not valid. It must be an ISO 639-1, 639-2 or 639-3 code (two/three letters), or a special value like \"code\", \"multilingual\". If you want to use BCP-47 identifiers, you can specify them in language_bcp47.\r\n```\r\n```\r\nValueError: - Error: \"task_ids\" must be an array\r\n```", "All Hub datasets are done.", "great job! did you have feedback from Hub users/i.E. repo authors?", "Yes, @julien-c. These are some of the feedbacks:\r\n- Most people just thank for the fix: [cahya/librivox-indonesia](https://huggingface.co/datasets/cahya/librivox-indonesia/discussions/1#6357cd8a292a050ebd705f84), [TurkuNLP/xlsum-fi](https://huggingface.co/datasets/TurkuNLP/xlsum-fi/discussions/1#6357828aa1f8ad1c31bcbe46), [coastalcph/fairlex](https://huggingface.co/datasets/coastalcph/fairlex/discussions/4#6351a527a8e595171ab1aef2)\r\n- Why are we changing their task names? 
[joelito/lextreme](https://huggingface.co/datasets/joelito/lextreme/discussions/1#6351b576fe367c0d9b12041b)\r\n - I take note of this for the next bulk operation; besides the PR title, we should also add a description to explain the reason for the change and also maybe putting a link to some pertinent GH Issue page\r\n- Some of them ask where to find the list of the supported task values is: [dennlinger/klexikon](https://huggingface.co/datasets/dennlinger/klexikon/discussions/3#6356b3ea80f8cb3ab777ac5c), [lmqg/qg_jaquad](https://huggingface.co/datasets/lmqg/qg_jaquad/discussions/1#635262467e4cc3135fd09f58)\r\n - Currently, the list is here: https://github.com/huggingface/hub-docs/blob/main/js/src/lib/interfaces/Types.ts#L85\r\n - Maybe we could made them more easily accessible\r\n- Some people do not agree about current \"hierarchy\":\r\n - text-scoring: [emrecan/nli_tr_for_simcse](https://huggingface.co/datasets/emrecan/nli_tr_for_simcse/discussions/1#6357c1b128792d8cdd51e9f9) (but referring to [emrecan/nli_tr_for_simcse](https://huggingface.co/datasets/emrecan/nli_tr_for_simcse/discussions/2/files))\r\n - Before \"text-scoring\" was a task_category, with task_ids [\"semantic-similarity-scoring\", \"sentiment-scoring\"]\r\n - Now all three are task_ids [\"text-scoring\", \"semantic-similarity-scoring\", \"sentiment-scoring\"] under the task_category \"text-classification\"\r\n - People complain that their scoring tasks are not classification task\r\n - binary-classification: why don't we have binary-classification? We have multi-class-classification, multi-label-classification and sentiment-classification, but not binary-classification\r\n - symbolic-regression: [yoshitomo-matsubara/srsd-feynman_hard](https://huggingface.co/datasets/yoshitomo-matsubara/srsd-feynman_hard/discussions/2#63614194c12a09b8a31457cc), [yoshitomo-matsubara/srsd-feynman_medium](https://huggingface.co/datasets/yoshitomo-matsubara/srsd-feynman_medium/discussions/2#6361418aeee0d27f04379e43), [yoshitomo-matsubara/srsd-feynman_easy](https://huggingface.co/datasets/yoshitomo-matsubara/srsd-feynman_easy/discussions/2#6361416e00905b1ffb8d0112)\r\n - Why don't we have symbolic-regression task?\r\n\r\nNOTE: I'm editing this comment to add more feedback", "As someone with feedback on the updates (which I highly appreciate seeing included here :D), a few comments from a \"user perspective\": \r\n\r\n* I think the general confusion for me was also surrounding the hierarchy; it doesn't really become super clear (even when using the tagger space) that one is a subset of the other, especially since it seems to be still possible to include fine-grained tasks without the \"parent category\"?\r\n* The datasets explorer still shows tags that are no longer valid (e.g., super specific ones such as `summarization-other-paper-abstract-generation`, but also ones that should be `task_categories`, such as `summarization`). I'm assuming this will be fixed soon, but until then it can confuse people who don't understand why they suddenly can't use seemingly still valid tags anymore.\r\n* As I mentioned to @albertvillanova, having a dedicated page in the docs with explanations (especially wrt the difference between `task_categories` and `task_ids`) would be super helpful. 
However, I think it would have been sufficient to just include some description in the dataset PRs where you can link to the Github/other discussion on the topic :) That way, I can check myself what changes are expected to happen.\r\n\r\nThanks again for the streamlining process, I personally learned a fair bit about the tagging structure in the meantime!\r\nBest,\r\nDennis", "Thanks to you both for your feedback! super useful! cc'ing @osanseviero too 🙂\r\n\r\n> The datasets explorer still shows tags that are no longer valid\r\n\r\nwait which explorer is that? is it https://huggingface.co/datasets/viewer/ ?\r\n", "Sorry, this one: https://huggingface.co/datasets \r\nAnd then selecting the \"Fine-Grained Tasks\".", "good feedback! we'll improve this", "Super useful feedback, thanks a lot!", "- Some people do not agree about current \"hierarchy\":\r\n - symbolic-regression: [yoshitomo-matsubara/srsd-feynman_hard](https://huggingface.co/datasets/yoshitomo-matsubara/srsd-feynman_hard/discussions/2#63614194c12a09b8a31457cc), [yoshitomo-matsubara/srsd-feynman_medium](https://huggingface.co/datasets/yoshitomo-matsubara/srsd-feynman_medium/discussions/2#6361418aeee0d27f04379e43), [yoshitomo-matsubara/srsd-feynman_easy](https://huggingface.co/datasets/yoshitomo-matsubara/srsd-feynman_easy/discussions/2#6361416e00905b1ffb8d0112)\r\n - Why don't we have symbolic-regression task?", "@albertvillanova \r\nThank you for sharing our voice here!\r\n\r\nYes, we want `symbolic-regression` to be listed as a task. This task has been attracting attention from the machine learning/deep learning community, and unfortunately existing symbolic regression datasets are de-centralized in the community (hosted at individual platforms like author website, github, etc).\r\nIt would be great for the community if Hugging Face can support the task." ]
"2022-10-19T09:41:42Z"
"2022-11-10T05:25:58Z"
"2022-10-25T06:17:00Z"
MEMBER
null
null
null
## Describe Once we have agreed on a common naming for task tags for all open source projects, we should align on them. ## Steps - [x] Align task tags in canonical datasets - [x] task_categories: 4 datasets - [x] task_ids (by @lhoestq) - [x] Open PRs in community datasets - [x] task_categories: 451 datasets - [x] task_ids: 556 datasets
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5137/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5137/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/6218
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6218/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6218/comments
https://api.github.com/repos/huggingface/datasets/issues/6218/events
https://github.com/huggingface/datasets/pull/6218
1,883,734,000
PR_kwDODunzps5Zqw3Y
6,218
Rename old push_to_hub configs to "default" in dataset_infos
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
[ "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006529 / 0.011353 (-0.004823) | 0.004010 / 0.011008 (-0.006998) | 0.086258 / 0.038508 (0.047750) | 0.073775 / 0.023109 (0.050666) | 0.307573 / 0.275898 (0.031675) | 0.337091 / 0.323480 (0.013611) | 0.004251 / 0.007986 (-0.003735) | 0.003886 / 0.004328 (-0.000443) | 0.068238 / 0.004250 (0.063987) | 0.057000 / 0.037052 (0.019948) | 0.321751 / 0.258489 (0.063262) | 0.359227 / 0.293841 (0.065386) | 0.030841 / 0.128546 (-0.097705) | 0.008569 / 0.075646 (-0.067078) | 0.299523 / 0.419271 (-0.119748) | 0.052563 / 0.043533 (0.009030) | 0.312806 / 0.255139 (0.057667) | 0.342273 / 0.283200 (0.059074) | 0.025725 / 0.141683 (-0.115958) | 1.479263 / 1.452155 (0.027108) | 1.554975 / 1.492716 (0.062259) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.316328 / 0.018006 (0.298322) | 0.598993 / 0.000490 (0.598503) | 0.004548 / 0.000200 (0.004348) | 0.000080 / 0.000054 (0.000026) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.027399 / 0.037411 (-0.010013) | 0.081683 / 0.014526 (0.067157) | 0.096968 / 0.176557 (-0.079589) | 0.151559 / 0.737135 (-0.585576) | 0.096558 / 0.296338 (-0.199781) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.383117 / 0.215209 (0.167908) | 3.818634 / 2.077655 (1.740979) | 1.878112 / 1.504120 (0.373992) | 1.729031 / 1.541195 (0.187836) | 1.770259 / 1.468490 
(0.301769) | 0.484061 / 4.584777 (-4.100716) | 3.596998 / 3.745712 (-0.148715) | 3.246846 / 5.269862 (-2.023016) | 2.019481 / 4.565676 (-2.546195) | 0.057279 / 0.424275 (-0.366996) | 0.007455 / 0.007607 (-0.000152) | 0.465002 / 0.226044 (0.238958) | 4.644669 / 2.268929 (2.375741) | 2.346415 / 55.444624 (-53.098209) | 2.039686 / 6.876477 (-4.836791) | 2.172822 / 2.142072 (0.030750) | 0.582925 / 4.805227 (-4.222302) | 0.134246 / 6.500664 (-6.366418) | 0.060093 / 0.075469 (-0.015376) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.249033 / 1.841788 (-0.592755) | 19.585949 / 8.074308 (11.511641) | 14.100681 / 10.191392 (3.909289) | 0.147138 / 0.680424 (-0.533286) | 0.018307 / 0.534201 (-0.515894) | 0.397939 / 0.579283 (-0.181344) | 0.413916 / 0.434364 (-0.020448) | 0.465688 / 0.540337 (-0.074650) | 0.642140 / 1.386936 (-0.744797) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006627 / 0.011353 (-0.004726) | 0.004173 / 0.011008 (-0.006835) | 0.063850 / 0.038508 (0.025342) | 0.074733 / 0.023109 (0.051623) | 0.398111 / 0.275898 (0.122213) | 0.426344 / 0.323480 (0.102864) | 0.006261 / 0.007986 (-0.001725) | 0.003507 / 0.004328 (-0.000822) | 0.064511 / 0.004250 (0.060260) | 0.056508 / 0.037052 (0.019456) | 0.401750 / 0.258489 (0.143261) | 0.437081 / 0.293841 (0.143240) | 0.031815 / 0.128546 (-0.096732) | 0.008703 / 0.075646 (-0.066943) | 0.071411 / 0.419271 (-0.347861) | 0.048153 / 0.043533 (0.004620) | 0.399221 / 0.255139 (0.144082) | 0.429312 / 0.283200 (0.146112) | 0.022157 / 0.141683 (-0.119526) | 1.485656 / 1.452155 (0.033502) | 1.550967 / 1.492716 (0.058250) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.330575 / 0.018006 (0.312569) | 0.525553 / 0.000490 (0.525064) | 0.004574 / 0.000200 (0.004374) | 0.000093 / 0.000054 (0.000038) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031871 / 0.037411 (-0.005541) | 0.091819 / 0.014526 (0.077293) | 0.105542 / 0.176557 (-0.071015) | 0.158210 / 0.737135 (-0.578926) | 0.107167 / 0.296338 (-0.189172) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.430226 / 0.215209 (0.215017) | 4.293456 / 2.077655 (2.215801) | 2.289538 / 1.504120 (0.785418) | 2.122255 / 1.541195 (0.581060) | 2.181840 / 1.468490 (0.713350) | 0.498529 / 4.584777 (-4.086248) | 3.686636 / 3.745712 (-0.059077) | 3.287279 / 5.269862 (-1.982582) | 2.068397 / 4.565676 (-2.497280) | 0.058775 / 0.424275 (-0.365500) | 0.007583 / 0.007607 (-0.000024) | 0.507165 / 0.226044 (0.281121) | 5.072330 / 2.268929 (2.803401) | 2.796396 / 55.444624 (-52.648228) | 2.409946 / 6.876477 (-4.466531) | 2.657322 / 2.142072 (0.515250) | 0.597744 / 4.805227 (-4.207483) | 0.133803 / 6.500664 (-6.366861) | 0.060231 / 0.075469 (-0.015238) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.333130 / 1.841788 (-0.508658) | 20.545936 / 8.074308 (12.471627) | 14.875020 / 10.191392 (4.683628) | 0.168873 / 0.680424 (-0.511551) | 0.020316 / 0.534201 (-0.513885) | 0.397203 / 0.579283 (-0.182080) | 0.412412 / 0.434364 (-0.021952) | 0.479952 / 0.540337 (-0.060385) | 0.657155 / 1.386936 (-0.729781) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#13fbee4ca8742460e9baab86a89d9100a294df3e \"CML watermark\")\n", "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | 
write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007885 / 0.011353 (-0.003468) | 0.005221 / 0.011008 (-0.005787) | 0.099457 / 0.038508 (0.060949) | 0.085867 / 0.023109 (0.062758) | 0.359922 / 0.275898 (0.084024) | 0.406479 / 0.323480 (0.082999) | 0.005001 / 0.007986 (-0.002985) | 0.003678 / 0.004328 (-0.000650) | 0.075647 / 0.004250 (0.071396) | 0.064318 / 0.037052 (0.027265) | 0.372180 / 0.258489 (0.113691) | 0.419206 / 0.293841 (0.125365) | 0.040438 / 0.128546 (-0.088108) | 0.010008 / 0.075646 (-0.065638) | 0.340911 / 0.419271 (-0.078360) | 0.063326 / 0.043533 (0.019793) | 0.359015 / 0.255139 (0.103876) | 0.408601 / 0.283200 (0.125402) | 0.029828 / 0.141683 (-0.111855) | 1.767822 / 1.452155 (0.315667) | 1.829079 / 1.492716 (0.336363) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.234455 / 0.018006 (0.216449) | 0.507786 / 0.000490 (0.507297) | 0.004009 / 0.000200 (0.003809) | 0.000101 / 0.000054 (0.000046) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.033374 / 0.037411 (-0.004038) | 0.100817 / 0.014526 (0.086291) | 0.113415 / 0.176557 (-0.063141) | 0.180368 / 0.737135 (-0.556768) | 0.115446 / 0.296338 (-0.180893) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.488976 / 0.215209 (0.273767) | 4.911354 / 2.077655 (2.833699) | 2.623525 / 1.504120 (1.119405) | 2.424400 / 1.541195 (0.883206) | 2.497580 / 1.468490 (1.029089) | 0.561106 / 4.584777 (-4.023671) | 4.265649 / 3.745712 (0.519937) | 3.830267 / 5.269862 (-1.439595) | 2.404727 / 4.565676 (-2.160949) | 0.067303 / 0.424275 (-0.356972) | 0.009177 / 0.007607 (0.001570) | 0.588433 / 0.226044 (0.362388) | 5.871573 / 2.268929 (3.602645) | 3.087845 / 55.444624 (-52.356779) | 2.765381 / 6.876477 (-4.111096) | 3.007863 / 2.142072 (0.865791) | 0.687327 / 4.805227 (-4.117901) | 0.157687 / 6.500664 (-6.342977) | 0.071291 / 0.075469 (-0.004178) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.510931 / 1.841788 (-0.330857) | 22.129590 / 8.074308 (14.055282) | 16.780479 / 10.191392 (6.589087) | 0.168297 / 0.680424 (-0.512127) | 0.021294 / 0.534201 (-0.512907) | 0.464535 / 0.579283 (-0.114748) | 0.480041 / 0.434364 (0.045677) | 0.549185 / 0.540337 (0.008848) | 0.739438 / 
1.386936 (-0.647498) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007834 / 0.011353 (-0.003518) | 0.004576 / 0.011008 (-0.006432) | 0.073331 / 0.038508 (0.034823) | 0.084688 / 0.023109 (0.061579) | 0.486367 / 0.275898 (0.210469) | 0.523127 / 0.323480 (0.199647) | 0.006278 / 0.007986 (-0.001708) | 0.003792 / 0.004328 (-0.000537) | 0.075416 / 0.004250 (0.071166) | 0.064053 / 0.037052 (0.027001) | 0.491908 / 0.258489 (0.233419) | 0.529177 / 0.293841 (0.235336) | 0.038483 / 0.128546 (-0.090063) | 0.009560 / 0.075646 (-0.066087) | 0.083431 / 0.419271 (-0.335841) | 0.057114 / 0.043533 (0.013581) | 0.486316 / 0.255139 (0.231177) | 0.512384 / 0.283200 (0.229185) | 0.028452 / 0.141683 (-0.113231) | 1.788886 / 1.452155 (0.336731) | 1.893834 / 1.492716 (0.401118) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.343018 / 0.018006 (0.325011) | 0.513673 / 0.000490 (0.513183) | 0.056778 / 0.000200 (0.056578) | 0.001799 / 0.000054 (0.001745) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.038530 / 0.037411 (0.001119) | 0.109286 / 0.014526 (0.094760) | 0.122812 / 0.176557 (-0.053745) | 0.187780 / 0.737135 (-0.549355) | 0.124083 / 0.296338 (-0.172255) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.509839 / 0.215209 (0.294630) | 5.085840 / 2.077655 (3.008186) | 2.746695 / 1.504120 (1.242575) | 2.542283 / 1.541195 (1.001088) | 2.650243 / 1.468490 (1.181753) | 
0.592801 / 4.584777 (-3.991976) | 4.316721 / 3.745712 (0.571009) | 3.811672 / 5.269862 (-1.458189) | 2.433982 / 4.565676 (-2.131695) | 0.066861 / 0.424275 (-0.357414) | 0.008633 / 0.007607 (0.001026) | 0.590482 / 0.226044 (0.364437) | 5.923484 / 2.268929 (3.654556) | 3.282293 / 55.444624 (-52.162332) | 2.882716 / 6.876477 (-3.993761) | 3.139581 / 2.142072 (0.997509) | 0.690702 / 4.805227 (-4.114525) | 0.156781 / 6.500664 (-6.343883) | 0.071487 / 0.075469 (-0.003982) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.604557 / 1.841788 (-0.237231) | 24.000026 / 8.074308 (15.925718) | 17.548685 / 10.191392 (7.357293) | 0.174883 / 0.680424 (-0.505541) | 0.023812 / 0.534201 (-0.510389) | 0.473522 / 0.579283 (-0.105761) | 0.494683 / 0.434364 (0.060319) | 0.593352 / 0.540337 (0.053015) | 0.771852 / 1.386936 (-0.615084) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#b61c96a806fa97800bc8a66607fb0c78a5d04146 \"CML watermark\")\n", "thanks! i wonder if we should also fix (change config name) all the old `dataset_infos.json` on the Hub?", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006388 / 0.011353 (-0.004965) | 0.003876 / 0.011008 (-0.007132) | 0.083960 / 0.038508 (0.045452) | 0.068328 / 0.023109 (0.045219) | 0.337958 / 0.275898 (0.062060) | 0.370783 / 0.323480 (0.047303) | 0.003925 / 0.007986 (-0.004060) | 0.004221 / 0.004328 (-0.000107) | 0.064198 / 0.004250 (0.059947) | 0.052681 / 0.037052 (0.015629) | 0.348890 / 0.258489 (0.090401) | 0.389038 / 0.293841 (0.095197) | 0.031133 / 0.128546 (-0.097413) | 0.008566 / 0.075646 (-0.067080) | 0.288169 / 0.419271 (-0.131102) | 0.053290 / 0.043533 (0.009757) | 0.344654 / 0.255139 (0.089515) | 0.381287 / 0.283200 (0.098087) | 0.022350 / 0.141683 (-0.119333) | 1.459933 / 1.452155 (0.007778) | 1.543097 / 1.492716 (0.050380) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 
0.212592 / 0.018006 (0.194586) | 0.461863 / 0.000490 (0.461373) | 0.003468 / 0.000200 (0.003268) | 0.000084 / 0.000054 (0.000029) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.026849 / 0.037411 (-0.010563) | 0.081059 / 0.014526 (0.066533) | 0.093986 / 0.176557 (-0.082571) | 0.150328 / 0.737135 (-0.586807) | 0.094253 / 0.296338 (-0.202085) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.382198 / 0.215209 (0.166989) | 3.813878 / 2.077655 (1.736224) | 1.855686 / 1.504120 (0.351566) | 1.672995 / 1.541195 (0.131800) | 1.697705 / 1.468490 (0.229215) | 0.479920 / 4.584777 (-4.104857) | 3.608305 / 3.745712 (-0.137407) | 3.216712 / 5.269862 (-2.053149) | 1.984781 / 4.565676 (-2.580896) | 0.056801 / 0.424275 (-0.367475) | 0.007499 / 0.007607 (-0.000108) | 0.454155 / 0.226044 (0.228110) | 4.531147 / 2.268929 (2.262218) | 2.296149 / 55.444624 (-53.148475) | 1.968701 / 6.876477 (-4.907775) | 2.144286 / 2.142072 (0.002213) | 0.599254 / 4.805227 (-4.205973) | 0.138150 / 6.500664 (-6.362514) | 0.060118 / 0.075469 (-0.015351) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.282486 / 1.841788 (-0.559301) | 19.127792 / 8.074308 (11.053483) | 14.116521 / 10.191392 (3.925129) | 0.163792 / 0.680424 (-0.516632) | 0.018116 / 0.534201 (-0.516085) | 0.390789 / 0.579283 (-0.188494) | 0.409241 / 0.434364 (-0.025123) | 0.457824 / 0.540337 (-0.082513) | 0.624390 / 1.386936 (-0.762546) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | 
write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006637 / 0.011353 (-0.004716) | 0.003932 / 0.011008 (-0.007076) | 0.063456 / 0.038508 (0.024948) | 0.070062 / 0.023109 (0.046953) | 0.410570 / 0.275898 (0.134672) | 0.436700 / 0.323480 (0.113220) | 0.005324 / 0.007986 (-0.002662) | 0.003263 / 0.004328 (-0.001065) | 0.063590 / 0.004250 (0.059340) | 0.054823 / 0.037052 (0.017770) | 0.408720 / 0.258489 (0.150231) | 0.441493 / 0.293841 (0.147652) | 0.031655 / 0.128546 (-0.096891) | 0.008421 / 0.075646 (-0.067225) | 0.070657 / 0.419271 (-0.348614) | 0.047370 / 0.043533 (0.003837) | 0.408217 / 0.255139 (0.153078) | 0.422178 / 0.283200 (0.138978) | 0.022282 / 0.141683 (-0.119401) | 1.511417 / 1.452155 (0.059262) | 1.570337 / 1.492716 (0.077620) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.224334 / 0.018006 (0.206327) | 0.447589 / 0.000490 (0.447099) | 0.004227 / 0.000200 (0.004027) | 0.000099 / 0.000054 (0.000045) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030797 / 0.037411 (-0.006615) | 0.091276 / 0.014526 (0.076750) | 0.102665 / 0.176557 (-0.073892) | 0.155423 / 0.737135 (-0.581712) | 0.103779 / 0.296338 (-0.192560) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.434509 / 0.215209 (0.219300) | 4.328910 / 2.077655 (2.251255) | 2.311424 / 1.504120 (0.807304) | 2.138380 / 1.541195 (0.597185) | 2.196293 / 1.468490 (0.727803) | 0.482123 / 4.584777 (-4.102654) | 3.597870 / 3.745712 (-0.147842) | 3.222426 / 5.269862 (-2.047435) | 1.994467 / 4.565676 (-2.571210) | 0.057517 / 0.424275 (-0.366758) | 0.007336 / 0.007607 (-0.000271) | 0.504968 / 0.226044 (0.278923) | 5.047940 / 2.268929 (2.779012) | 2.824014 / 55.444624 (-52.620610) | 2.457762 / 6.876477 (-4.418714) | 2.606970 / 2.142072 (0.464897) | 0.580758 / 4.805227 (-4.224469) | 0.132584 / 6.500664 (-6.368080) | 0.059258 / 0.075469 (-0.016211) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.354386 / 1.841788 (-0.487402) | 19.738147 / 8.074308 (11.663839) | 14.858001 / 10.191392 (4.666609) | 0.166074 / 0.680424 (-0.514350) | 0.020181 / 0.534201 (-0.514020) | 0.398333 / 0.579283 (-0.180950) | 0.406969 / 0.434364 (-0.027395) | 0.474515 / 0.540337 (-0.065822) | 0.649571 / 
1.386936 (-0.737365) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#b3ac3b3a9c5f40a29fae71504574cfdeebefe349 \"CML watermark\")\n", "I would say we should delete all `dataset_infos.json` on the Hub...", "@albertvillanova @lhoestq @mariosasko should we really stop supporting it and delete from everywhere?\r\n(bc if not, I've found a bug in updating `dataset_infos.json` with `.push_to_hub` and I'd open a PR to fix it)", "We can only delete them for the datasets without namespace and open PRs for the others, so we need to keep supporting them for now" ]
"2023-09-06T10:40:05Z"
"2023-09-07T08:31:29Z"
"2023-09-06T11:23:56Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/6218.diff", "html_url": "https://github.com/huggingface/datasets/pull/6218", "merged_at": "2023-09-06T11:23:56Z", "patch_url": "https://github.com/huggingface/datasets/pull/6218.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6218" }
Fix ```python from datasets import load_dataset_builder b = load_dataset_builder("lambdalabs/pokemon-blip-captions", "default") print(b.info) ``` which should return ``` DatasetInfo( features={'image': Image(decode=True, id=None), 'text': Value(dtype='string', id=None)}, dataset_name='pokemon-blip-captions', config_name='default', version=0.0.0, splits={'train': SplitInfo(name='train', num_bytes=119417410.0, num_examples=833, shard_lengths=None, dataset_name='pokemon-blip-captions')}, download_checksums=None, download_size=99672355, dataset_size=119417410.0, size_in_bytes=219089765.0, ... ) ``` instead of an empty dataset info. The dataset has a dataset_infos.json file with the deprecated config name "lambdalabs--pokemon-blip-captions". We switched those config names to "default" in 2.14, so the builder.info should take this into account.
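A minimal sketch of the renaming idea, using a hypothetical helper (this is not the actual patch): old `push_to_hub` configs were stored under a `"<namespace>--<repo_name>"` key in `dataset_infos.json`, and they should now be read back as `"default"`.

```python
def normalize_legacy_config_names(dataset_infos: dict, repo_id: str) -> dict:
    """Hypothetical helper: map the legacy '<namespace>--<repo_name>' config key to 'default'."""
    legacy_name = repo_id.replace("/", "--")  # e.g. "lambdalabs--pokemon-blip-captions"
    return {
        ("default" if config_name == legacy_name else config_name): info
        for config_name, info in dataset_infos.items()
    }
```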
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6218/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6218/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/961
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/961/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/961/comments
https://api.github.com/repos/huggingface/datasets/issues/961/events
https://github.com/huggingface/datasets/issues/961
754,434,398
MDU6SXNzdWU3NTQ0MzQzOTg=
961
sample multiple datasets
{ "avatar_url": "https://avatars.githubusercontent.com/u/6278280?v=4", "events_url": "https://api.github.com/users/rabeehk/events{/privacy}", "followers_url": "https://api.github.com/users/rabeehk/followers", "following_url": "https://api.github.com/users/rabeehk/following{/other_user}", "gists_url": "https://api.github.com/users/rabeehk/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/rabeehk", "id": 6278280, "login": "rabeehk", "node_id": "MDQ6VXNlcjYyNzgyODA=", "organizations_url": "https://api.github.com/users/rabeehk/orgs", "received_events_url": "https://api.github.com/users/rabeehk/received_events", "repos_url": "https://api.github.com/users/rabeehk/repos", "site_admin": false, "starred_url": "https://api.github.com/users/rabeehk/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rabeehk/subscriptions", "type": "User", "url": "https://api.github.com/users/rabeehk" }
[]
closed
false
null
[]
null
[ "here I share my dataloader currently for multiple tasks: https://gist.github.com/rabeehkarimimahabadi/39f9444a4fb6f53dcc4fca5d73bf8195 \r\n\r\nI need to train my model distributedly with this dataloader, \"MultiTasksataloader\", currently this does not work in distributed fasion,\r\nto save on memory I tried to use iterative datasets, could you have a look in this dataloader and tell me if this is indeed the case? not sure how to make datasets being iterative to not load them in memory, then I remove the sampler for dataloader, and shard the data per core, could you tell me please how I should implement this case in datasets library? and how do you find my implementation in terms of correctness? thanks \r\n", "Hi @rabeehkarimimahabadi any luck with updating the multi-task data loader to work with distributed training?", "Hi @pushkalkatara yes I solved it back then, here please find my implementation https://github.com/rabeehk/hyperformer/blob/main/hyperformer/data/multitask_sampler.py ", "Thanks @rabeehk for sharing. \r\n\r\nThe sampler basically returns a list of integers to sample from each task's dataset. I was wondering how to use it with two `torch.Dataset` of different tasks. Also, do I need to shard across processes while creating an Iterable Dataset?\r\n", "We now have `interleave_datasets` in the API that allows you to cycle/sample with probabilities (with various stopping strategies) through a list of datasets. However, more specific behavior should be implemented manually." ]
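A minimal sketch of the `interleave_datasets` approach mentioned in the last comment above, assuming two toy datasets in place of the user's tasks; the 2:1 weighting is expressed through `probabilities`.

```python
from datasets import Dataset, interleave_datasets

# Toy stand-ins for "dataset1" and "dataset2".
dataset1 = Dataset.from_dict({"text": ["a", "b", "c", "d"], "source": ["d1"] * 4})
dataset2 = Dataset.from_dict({"text": ["x", "y"], "source": ["d2"] * 2})

# Sample dataset1 roughly twice as often as dataset2; keep going until every dataset is exhausted.
mixed = interleave_datasets(
    [dataset1, dataset2],
    probabilities=[2 / 3, 1 / 3],
    seed=42,
    stopping_strategy="all_exhausted",
)
print(mixed["source"])
```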
"2020-12-01T14:20:02Z"
"2023-07-20T14:08:57Z"
"2023-07-20T14:08:57Z"
CONTRIBUTOR
null
null
null
Hi, I am dealing with multiple datasets and need a dataloader over them such that, in each batch, the samples come from only one of the datasets. My main question is: - I need a way to sample the datasets with some weights, let's say 2x dataset1 and 1x dataset2; could you point me to how I can do this? Sub-questions: - I want to concatenate the sampled datasets and define one dataloader over them, and I need a way to make sure each batch comes from a single dataset in every iteration; could you advise how I can do this? - I use iterable-type datasets, but I still need a way to shuffle them, since skipping shuffling hurts accuracy. Thanks for the help.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/961/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/961/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/646
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/646/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/646/comments
https://api.github.com/repos/huggingface/datasets/issues/646/events
https://github.com/huggingface/datasets/pull/646
704,607,371
MDExOlB1bGxSZXF1ZXN0NDg5NTAyMTM3
646
Fix docs typos
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko" }
[]
closed
false
null
[]
null
[]
"2020-09-18T19:32:27Z"
"2020-09-21T16:30:54Z"
"2020-09-21T16:14:12Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/646.diff", "html_url": "https://github.com/huggingface/datasets/pull/646", "merged_at": "2020-09-21T16:14:12Z", "patch_url": "https://github.com/huggingface/datasets/pull/646.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/646" }
This PR fixes a few typos in the docs and the error in the code snippet in the set_format section of docs/source/torch_tensorflow.rst. `torch.utils.data.DataLoader` expects padded batches, so it throws an error because it cannot stack the unpadded tensors. If we follow the Quick tour from the docs, where the `truncation=True, padding='max_length'` arguments are added to the tokenizer before passing data to the DataLoader, we can easily fix the issue.
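Roughly the pattern the fixed snippet follows (the dataset and model names here are placeholders, not necessarily the ones used in the docs):

```python
import torch
from datasets import load_dataset
from transformers import AutoTokenizer

dataset = load_dataset("glue", "mrpc", split="train")
tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")

# Padding every example to the same length lets the default collate_fn stack the tensors.
dataset = dataset.map(
    lambda e: tokenizer(e["sentence1"], truncation=True, padding="max_length"),
    batched=True,
)
dataset.set_format(type="torch", columns=["input_ids", "attention_mask", "label"])
dataloader = torch.utils.data.DataLoader(dataset, batch_size=32)
batch = next(iter(dataloader))  # batch["input_ids"] now has shape [32, max_length]
```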
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/646/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/646/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/6142
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6142/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6142/comments
https://api.github.com/repos/huggingface/datasets/issues/6142/events
https://github.com/huggingface/datasets/issues/6142
1,846,205,216
I_kwDODunzps5uCtsg
6,142
the-stack-dedup fails to generate
{ "avatar_url": "https://avatars.githubusercontent.com/u/45830328?v=4", "events_url": "https://api.github.com/users/michaelroyzen/events{/privacy}", "followers_url": "https://api.github.com/users/michaelroyzen/followers", "following_url": "https://api.github.com/users/michaelroyzen/following{/other_user}", "gists_url": "https://api.github.com/users/michaelroyzen/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/michaelroyzen", "id": 45830328, "login": "michaelroyzen", "node_id": "MDQ6VXNlcjQ1ODMwMzI4", "organizations_url": "https://api.github.com/users/michaelroyzen/orgs", "received_events_url": "https://api.github.com/users/michaelroyzen/received_events", "repos_url": "https://api.github.com/users/michaelroyzen/repos", "site_admin": false, "starred_url": "https://api.github.com/users/michaelroyzen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/michaelroyzen/subscriptions", "type": "User", "url": "https://api.github.com/users/michaelroyzen" }
[]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" } ]
null
[ "@severo ", "It seems that some parquet files have additional columns.\r\n\r\nI ran a scan and found that two files have the additional `__id__` column:\r\n\r\n1. `hf://datasets/bigcode/the-stack-dedup/data/numpy/data-00000-of-00001.parquet`\r\n2. `hf://datasets/bigcode/the-stack-dedup/data/omgrofl/data-00000-of-00001.parquet`\r\n\r\nWe should open a PR to fix those two files", "I opened https://huggingface.co/datasets/bigcode/the-stack-dedup/discussions/31", "The files have been fixed ! I'm closing this one but feel free to re-open if you still have the issue" ]
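A sketch of the kind of schema scan described above, assuming `pyarrow` and a recent `huggingface_hub` with `HfFileSystem` (the gated repo also requires an access token):

```python
import pyarrow.parquet as pq
from huggingface_hub import HfFileSystem

fs = HfFileSystem()  # pass token="hf_..." if needed for the gated repo
reference_columns = None
for path in fs.glob("datasets/bigcode/the-stack-dedup/data/*/*.parquet"):
    # Read only the parquet footer to get the schema of each shard.
    with fs.open(path) as f:
        columns = set(pq.ParquetFile(f).schema_arrow.names)
    reference_columns = reference_columns or columns
    extra = columns - reference_columns
    if extra:
        print(path, "has unexpected columns:", extra)
```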
"2023-08-11T05:10:49Z"
"2023-08-17T09:26:13Z"
"2023-08-17T09:26:13Z"
NONE
null
null
null
### Describe the bug I'm getting an error generating the-stack-dedup with datasets 2.13.1, and with 2.14.4 nothing happens. ### Steps to reproduce the bug My code: ``` import os import datasets as ds MY_CACHE_DIR = "/home/ubuntu/the-stack-dedup-local" MY_TOKEN="my-token" the_stack_ds = ds.load_dataset("bigcode/the-stack-dedup", split="train", download_mode="reuse_cache_if_exists", cache_dir=MY_CACHE_DIR, use_auth_token=MY_TOKEN, num_proc=64) ``` The exception: ``` Generating train split: 233248251 examples [54:31, 57280.00 examples/s] multiprocess.pool.RemoteTraceback: """ Traceback (most recent call last): File "/home/ubuntu/.local/lib/python3.10/site-packages/datasets/builder.py", line 1879, in _prepare_split_single for _, table in generator: File "/home/ubuntu/.local/lib/python3.10/site-packages/datasets/packaged_modules/parquet/parquet.py", line 82, in _generate_tables yield f"{file_idx}_{batch_idx}", self._cast_table(pa_table) File "/home/ubuntu/.local/lib/python3.10/site-packages/datasets/packaged_modules/parquet/parquet.py", line 61, in _cast_table pa_table = table_cast(pa_table, self.info.features.arrow_schema) File "/home/ubuntu/.local/lib/python3.10/site-packages/datasets/table.py", line 2324, in table_cast return cast_table_to_schema(table, schema) File "/home/ubuntu/.local/lib/python3.10/site-packages/datasets/table.py", line 2282, in cast_table_to_schema raise ValueError(f"Couldn't cast\n{table.schema}\nto\n{features}\nbecause column names don't match") ValueError: Couldn't cast hexsha: string size: int64 ext: string lang: string max_stars_repo_path: string max_stars_repo_name: string max_stars_repo_head_hexsha: string max_stars_repo_licenses: list<item: string> child 0, item: string max_stars_count: int64 max_stars_repo_stars_event_min_datetime: string max_stars_repo_stars_event_max_datetime: string max_issues_repo_path: string max_issues_repo_name: string max_issues_repo_head_hexsha: string max_issues_repo_licenses: list<item: string> child 0, item: string max_issues_count: int64 max_issues_repo_issues_event_min_datetime: string max_issues_repo_issues_event_max_datetime: string max_forks_repo_path: string max_forks_repo_name: string max_forks_repo_head_hexsha: string max_forks_repo_licenses: list<item: string> child 0, item: string max_forks_count: int64 max_forks_repo_forks_event_min_datetime: string max_forks_repo_forks_event_max_datetime: string content: string avg_line_length: double max_line_length: int64 alphanum_fraction: double __id__: int64 -- schema metadata -- huggingface: '{"info": {"features": {"hexsha": {"dtype": "string", "_type' + 1979 to {'hexsha': Value(dtype='string', id=None), 'size': Value(dtype='int64', id=None), 'ext': Value(dtype='string', id=None), 'lang': Value(dtype='string', id=None), 'max_stars_repo_path': Value(dtype='string', id=None), 'max_stars_repo_name': Value(dtype='string', id=None), 'max_stars_repo_head_hexsha': Value(dtype='string', id=None), 'max_stars_repo_licenses': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), 'max_stars_count': Value(dtype='int64', id=None), 'max_stars_repo_stars_event_min_datetime': Value(dtype='string', id=None), 'max_stars_repo_stars_event_max_datetime': Value(dtype='string', id=None), 'max_issues_repo_path': Value(dtype='string', id=None), 'max_issues_repo_name': Value(dtype='string', id=None), 'max_issues_repo_head_hexsha': Value(dtype='string', id=None), 'max_issues_repo_licenses': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), 'max_issues_count': Value(dtype='int64', id=None), 'max_issues_repo_issues_event_min_datetime': Value(dtype='string', id=None), 'max_issues_repo_issues_event_max_datetime': Value(dtype='string', id=None), 'max_forks_repo_path': Value(dtype='string', id=None), 'max_forks_repo_name': Value(dtype='string', id=None), 'max_forks_repo_head_hexsha': Value(dtype='string', id=None), 'max_forks_repo_licenses': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), 'max_forks_count': Value(dtype='int64', id=None), 'max_forks_repo_forks_event_min_datetime': Value(dtype='string', id=None), 'max_forks_repo_forks_event_max_datetime': Value(dtype='string', id=None), 'content': Value(dtype='string', id=None), 'avg_line_length': Value(dtype='float64', id=None), 'max_line_length': Value(dtype='int64', id=None), 'alphanum_fraction': Value(dtype='float64', id=None)} because column names don't match The above exception was the direct cause of the following exception: Traceback (most recent call last): File "/home/ubuntu/.local/lib/python3.10/site-packages/multiprocess/pool.py", line 125, in worker result = (True, func(*args, **kwds)) File "/home/ubuntu/.local/lib/python3.10/site-packages/datasets/utils/py_utils.py", line 1328, in _write_generator_to_queue for i, result in enumerate(func(**kwargs)): File "/home/ubuntu/.local/lib/python3.10/site-packages/datasets/builder.py", line 1912, in _prepare_split_single raise DatasetGenerationError("An error occurred while generating the dataset") from e datasets.builder.DatasetGenerationError: An error occurred while generating the dataset """ The above exception was the direct cause of the following exception: Traceback (most recent call last): File "/home/ubuntu/download_the_stack.py", line 7, in <module> the_stack_ds = ds.load_dataset("bigcode/the-stack-dedup", split="train", download_mode="reuse_cache_if_exists", cache_dir=MY_CACHE_DIR, use_auth_token=MY_TOKEN, num_proc=64) File "/home/ubuntu/.local/lib/python3.10/site-packages/datasets/load.py", line 1809, in load_dataset builder_instance.download_and_prepare( File "/home/ubuntu/.local/lib/python3.10/site-packages/datasets/builder.py", line 909, in download_and_prepare self._download_and_prepare( File "/home/ubuntu/.local/lib/python3.10/site-packages/datasets/builder.py", line 1004, in _download_and_prepare self._prepare_split(split_generator, **prepare_split_kwargs) File "/home/ubuntu/.local/lib/python3.10/site-packages/datasets/builder.py", line 1796, in _prepare_split for job_id, done, content in iflatmap_unordered( File "/home/ubuntu/.local/lib/python3.10/site-packages/datasets/utils/py_utils.py", line 1354, in iflatmap_unordered [async_result.get(timeout=0.05) for async_result in async_results] File "/home/ubuntu/.local/lib/python3.10/site-packages/datasets/utils/py_utils.py", line 1354, in <listcomp> [async_result.get(timeout=0.05) for async_result in async_results] File "/home/ubuntu/.local/lib/python3.10/site-packages/multiprocess/pool.py", line 774, in get raise self._value datasets.builder.DatasetGenerationError: An error occurred while generating the dataset ``` ### Expected behavior The dataset downloads properly. @lhoestq @loub ### Environment info Datasets 2.13.1, large VM with 2TB RAM, Ubuntu 20.04
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6142/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6142/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/4026
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4026/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4026/comments
https://api.github.com/repos/huggingface/datasets/issues/4026/events
https://github.com/huggingface/datasets/pull/4026
1,180,968,774
PR_kwDODunzps41Btcm
4,026
Support streaming xtreme dataset for bucc18 config
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
"2022-03-25T16:00:40Z"
"2022-03-25T16:26:50Z"
"2022-03-25T16:21:52Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/4026.diff", "html_url": "https://github.com/huggingface/datasets/pull/4026", "merged_at": "2022-03-25T16:21:52Z", "patch_url": "https://github.com/huggingface/datasets/pull/4026.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4026" }
Support streaming xtreme dataset for bucc18 config.
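A minimal usage sketch once streaming is supported; the config name "bucc18.de" and the "validation" split are illustrative assumptions about this dataset's naming, not taken from the PR itself.

```python
from datasets import load_dataset

# Stream one of the bucc18 configs without downloading the full archive first.
ds = load_dataset("xtreme", "bucc18.de", split="validation", streaming=True)
print(next(iter(ds)))
```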
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4026/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4026/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/4082
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4082/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4082/comments
https://api.github.com/repos/huggingface/datasets/issues/4082/events
https://github.com/huggingface/datasets/pull/4082
1,189,965,845
PR_kwDODunzps41f3fb
4,082
Add chrF(++) Metric Card
{ "avatar_url": "https://avatars.githubusercontent.com/u/27527747?v=4", "events_url": "https://api.github.com/users/emibaylor/events{/privacy}", "followers_url": "https://api.github.com/users/emibaylor/followers", "following_url": "https://api.github.com/users/emibaylor/following{/other_user}", "gists_url": "https://api.github.com/users/emibaylor/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/emibaylor", "id": 27527747, "login": "emibaylor", "node_id": "MDQ6VXNlcjI3NTI3NzQ3", "organizations_url": "https://api.github.com/users/emibaylor/orgs", "received_events_url": "https://api.github.com/users/emibaylor/received_events", "repos_url": "https://api.github.com/users/emibaylor/repos", "site_admin": false, "starred_url": "https://api.github.com/users/emibaylor/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/emibaylor/subscriptions", "type": "User", "url": "https://api.github.com/users/emibaylor" }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
"2022-04-01T15:32:12Z"
"2022-04-12T20:43:55Z"
"2022-04-12T20:38:06Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/4082.diff", "html_url": "https://github.com/huggingface/datasets/pull/4082", "merged_at": "2022-04-12T20:38:06Z", "patch_url": "https://github.com/huggingface/datasets/pull/4082.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4082" }
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4082/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4082/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/2135
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2135/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2135/comments
https://api.github.com/repos/huggingface/datasets/issues/2135/events
https://github.com/huggingface/datasets/issues/2135
843,246,344
MDU6SXNzdWU4NDMyNDYzNDQ=
2,135
en language data from MLQA dataset is missing
{ "avatar_url": "https://avatars.githubusercontent.com/u/6278280?v=4", "events_url": "https://api.github.com/users/rabeehk/events{/privacy}", "followers_url": "https://api.github.com/users/rabeehk/followers", "following_url": "https://api.github.com/users/rabeehk/following{/other_user}", "gists_url": "https://api.github.com/users/rabeehk/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/rabeehk", "id": 6278280, "login": "rabeehk", "node_id": "MDQ6VXNlcjYyNzgyODA=", "organizations_url": "https://api.github.com/users/rabeehk/orgs", "received_events_url": "https://api.github.com/users/rabeehk/received_events", "repos_url": "https://api.github.com/users/rabeehk/repos", "site_admin": false, "starred_url": "https://api.github.com/users/rabeehk/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rabeehk/subscriptions", "type": "User", "url": "https://api.github.com/users/rabeehk" }
[]
closed
false
null
[]
null
[ "Hi ! Indeed only the languages of the `translate-train` data are included...\r\nI can't find a link to download the english train set on https://github.com/facebookresearch/MLQA though, do you know where we can download it ?", "Hi @lhoestq \r\nthank you very much for coming back to me, now I see, you are right, in the link you sent I see split of {split}-context-{context_language}-question-{question_language}.json with context_language=question_language=en, TFDS most probably has extracted english ones from these files as en language files, but translate-train/test do not have en indeed. thanks a lot for the great explanations", "I close the ticket, since I do not see any en existing, they have trained on \"SQuAD V1.1\" instead. Thanks. " ]
"2021-03-29T10:47:50Z"
"2021-03-30T10:20:23Z"
"2021-03-30T10:20:23Z"
CONTRIBUTOR
null
null
null
Hi, I need the mlqa-translate-train.en dataset, but it is missing from the MLQA dataset. Could you have a look, please? @lhoestq Thank you for your help fixing this issue.
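For reference, a hedged example of loading the English data that does exist via the cross-lingual configs discussed in the comments above; the exact config name "mlqa.en.en" is an assumption about the Hub naming scheme.

```python
from datasets import load_dataset

# English context + English questions; no translate-train.en config exists.
ds = load_dataset("mlqa", "mlqa.en.en", split="test")
print(ds[0]["question"])
```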
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2135/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2135/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/4753
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4753/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4753/comments
https://api.github.com/repos/huggingface/datasets/issues/4753/events
https://github.com/huggingface/datasets/pull/4753
1,319,571,745
PR_kwDODunzps48Ll8G
4,753
Add `language_bcp47` tag
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
"2022-07-27T13:31:16Z"
"2022-07-27T14:50:03Z"
"2022-07-27T14:37:56Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/4753.diff", "html_url": "https://github.com/huggingface/datasets/pull/4753", "merged_at": "2022-07-27T14:37:56Z", "patch_url": "https://github.com/huggingface/datasets/pull/4753.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4753" }
Following (internal) https://github.com/huggingface/moon-landing/pull/3509, we need to move the BCP-47 tags to `language_bcp47` and keep the `language` tag for ISO 639-1/-2/-3 codes. In particular, I made sure that all the tags in `languages` are no longer than 3 characters. I moved the rest to `language_bcp47` and fixed some of them. After this PR is merged, I think we can simplify the language validation in the DatasetMetadata class (and keep it bare-bones, just for the tagging app). PS: the CI is failing because of missing content in dataset cards, which is unrelated to this PR.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4753/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4753/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/1106
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1106/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1106/comments
https://api.github.com/repos/huggingface/datasets/issues/1106/events
https://github.com/huggingface/datasets/pull/1106
757,027,158
MDExOlB1bGxSZXF1ZXN0NTMyNDcwOTM3
1,106
Add Urdu fake news
{ "avatar_url": "https://avatars.githubusercontent.com/u/44389205?v=4", "events_url": "https://api.github.com/users/chaitnayabasava/events{/privacy}", "followers_url": "https://api.github.com/users/chaitnayabasava/followers", "following_url": "https://api.github.com/users/chaitnayabasava/following{/other_user}", "gists_url": "https://api.github.com/users/chaitnayabasava/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/chaitnayabasava", "id": 44389205, "login": "chaitnayabasava", "node_id": "MDQ6VXNlcjQ0Mzg5MjA1", "organizations_url": "https://api.github.com/users/chaitnayabasava/orgs", "received_events_url": "https://api.github.com/users/chaitnayabasava/received_events", "repos_url": "https://api.github.com/users/chaitnayabasava/repos", "site_admin": false, "starred_url": "https://api.github.com/users/chaitnayabasava/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/chaitnayabasava/subscriptions", "type": "User", "url": "https://api.github.com/users/chaitnayabasava" }
[]
closed
false
null
[]
null
[]
"2020-12-04T11:24:14Z"
"2020-12-04T14:21:12Z"
"2020-12-04T14:21:12Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/1106.diff", "html_url": "https://github.com/huggingface/datasets/pull/1106", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/1106.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1106" }
Added Urdu fake news dataset. More information about the dataset can be found <a href="https://github.com/MaazAmjad/Datasets-for-Urdu-news">here</a>.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1106/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1106/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/1895
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1895/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1895/comments
https://api.github.com/repos/huggingface/datasets/issues/1895/events
https://github.com/huggingface/datasets/issues/1895
809,630,271
MDU6SXNzdWU4MDk2MzAyNzE=
1,895
Bug Report: timestamp[ns] not recognized
{ "avatar_url": "https://avatars.githubusercontent.com/u/7731709?v=4", "events_url": "https://api.github.com/users/justin-yan/events{/privacy}", "followers_url": "https://api.github.com/users/justin-yan/followers", "following_url": "https://api.github.com/users/justin-yan/following{/other_user}", "gists_url": "https://api.github.com/users/justin-yan/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/justin-yan", "id": 7731709, "login": "justin-yan", "node_id": "MDQ6VXNlcjc3MzE3MDk=", "organizations_url": "https://api.github.com/users/justin-yan/orgs", "received_events_url": "https://api.github.com/users/justin-yan/received_events", "repos_url": "https://api.github.com/users/justin-yan/repos", "site_admin": false, "starred_url": "https://api.github.com/users/justin-yan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/justin-yan/subscriptions", "type": "User", "url": "https://api.github.com/users/justin-yan" }
[]
closed
false
null
[]
null
[ "Thanks for reporting !\r\n\r\nYou're right, `string_to_arrow` should be able to take `\"timestamp[ns]\"` as input and return the right pyarrow timestamp type.\r\nFeel free to suggest a fix for `string_to_arrow` and open a PR if you want to contribute ! This would be very appreciated :)\r\n\r\nTo give you more context:\r\n\r\nAs you may know we define the features types of a dataset using the `Features` object in combination with feature types like `Value`. For example\r\n```python\r\nfeatures = Features({\r\n \"age\": Value(\"int32\")\r\n})\r\n```\r\nHowever under the hood we are actually using pyarrow to store the data, and so we have a mapping between the feature types of `datasets` and the types of pyarrow.\r\n\r\nFor example, the `Value` feature types are created from a pyarrow type with `Value(str(pa_type))`.\r\nHowever it looks like the conversion back to a pyarrow type doesn't work with `\"timestamp[ns]\"`.\r\nThis is the `string_to_arrow` function you highlighted that does this conversion, so we should fix that.\r\n\r\n", "Thanks for the clarification @lhoestq !\r\n\r\nThis may be a little bit of a stupid question, but I wanted to clarify one more thing before I took a stab at this:\r\n\r\nWhen the features get inferred, I believe they already have a pyarrow schema (https://github.com/huggingface/datasets/blob/master/src/datasets/arrow_dataset.py#L234).\r\n\r\nWe then convert it to a string (https://github.com/huggingface/datasets/blob/master/src/datasets/features.py#L778) only to convert it back into the arrow type (https://github.com/huggingface/datasets/blob/master/src/datasets/features.py#L143, and https://github.com/huggingface/datasets/blob/master/src/datasets/features.py#L35). Is there a reason for this round-trip?\r\n\r\nI'll open a PR later to add `timestamp` support to `string_to_arrow`, but I'd be curious to understand since it feels like there may be some opportunities to simplify!", "The objective in terms of design is to make it easy to create Features in a pythonic way. So for example we use a string to define a Value type.\r\nThat's why when inferring the Features from an arrow schema we have to find the right string definitions for Value types. I guess we could also have a constructor `Value.from_arrow_type` to avoid recreating the arrow type, but this could create silent errors if the pyarrow type doesn't have a valid mapping with the string definition. The \"round-trip\" is used to enforce that the ground truth is the string definition, not the pyarrow type, and also as a sanity check.\r\n\r\nLet me know if that makes sense ", "OK I think I understand now:\r\n\r\nFeatures are datasets' internal representation of a schema type, distinct from pyarrow's schema.\r\nValue() corresponds to pyarrow's \"primitive\" types (e.g. `int` or `string`, but not things like `list` or `dict`).\r\n`get_nested_type()` (https://github.com/huggingface/datasets/blob/master/src/datasets/features.py#L698) and `generate_from_arrow_type()` (https://github.com/huggingface/datasets/blob/master/src/datasets/features.py#L778) *should* be inverses of each other, and similarly, for the primitive values, `string_to_arrow()` and `Value.__call__` (https://github.com/huggingface/datasets/blob/master/src/datasets/features.py#L146) should be inverses of each other?\r\n\r\nThanks for taking the time to answer - I just wanted to make sure I understood before opening a PR so I'm not disrupting anything about how the codebase is expected to work!", "Yes you're totally right :)" ]
"2021-02-16T20:38:04Z"
"2021-02-19T18:27:11Z"
"2021-02-19T18:27:11Z"
CONTRIBUTOR
null
null
null
Repro: ``` from datasets import Dataset import pandas as pd import pyarrow df = pd.DataFrame(pd.date_range("2018-01-01", periods=3, freq="H")) pyarrow.Table.from_pandas(df) Dataset.from_pandas(df) # Throws ValueError: Neither timestamp[ns] nor timestamp[ns]_ seems to be a pyarrow data type. ``` The factory function seems to be just "timestamp": https://arrow.apache.org/docs/python/generated/pyarrow.timestamp.html#pyarrow.timestamp It seems like https://github.com/huggingface/datasets/blob/master/src/datasets/features.py#L36-L43 could have a little bit of additional structure for handling these cases? I'd be happy to take a shot at opening a PR if I could receive some guidance on whether parsing something like `timestamp[ns]` and resolving it to timestamp('ns') is the goal of this method. Alternatively, if I'm using this incorrectly (e.g. is the expectation that we always provide a schema when timestamps are involved?), that would be very helpful to know as well! ``` $ pip list # only the relevant libraries/versions datasets 1.2.1 pandas 1.0.3 pyarrow 3.0.0 ```
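For illustration, here is a minimal sketch of the kind of parsing `string_to_arrow` could do to handle parameterized types such as `timestamp[ns]`; the helper name `parse_timestamp_type` and the exact regex are assumptions made for this sketch, not the fix that was eventually merged: ```python import re import pyarrow as pa def parse_timestamp_type(type_str: str) -> pa.DataType: # Accepts strings like "timestamp[ns]" or "timestamp[us, tz=UTC]" # and turns them into the corresponding pyarrow type. match = re.match(r"^timestamp\[(\w+)(?:,\s*tz=(.+))?\]$", type_str) if match is None: raise ValueError(f"{type_str} is not a recognized timestamp type") unit, tz = match.groups() return pa.timestamp(unit, tz=tz) assert parse_timestamp_type("timestamp[ns]") == pa.timestamp("ns") ```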
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1895/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1895/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/3977
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3977/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3977/comments
https://api.github.com/repos/huggingface/datasets/issues/3977/events
https://github.com/huggingface/datasets/issues/3977
1,175,049,927
I_kwDODunzps5GCdbH
3,977
Adapt `docs/README.md` for datasets
{ "avatar_url": "https://avatars.githubusercontent.com/u/24835382?v=4", "events_url": "https://api.github.com/users/qqaatw/events{/privacy}", "followers_url": "https://api.github.com/users/qqaatw/followers", "following_url": "https://api.github.com/users/qqaatw/following{/other_user}", "gists_url": "https://api.github.com/users/qqaatw/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/qqaatw", "id": 24835382, "login": "qqaatw", "node_id": "MDQ6VXNlcjI0ODM1Mzgy", "organizations_url": "https://api.github.com/users/qqaatw/orgs", "received_events_url": "https://api.github.com/users/qqaatw/received_events", "repos_url": "https://api.github.com/users/qqaatw/repos", "site_admin": false, "starred_url": "https://api.github.com/users/qqaatw/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/qqaatw/subscriptions", "type": "User", "url": "https://api.github.com/users/qqaatw" }
[ { "color": "0075ca", "default": true, "description": "Improvements or additions to documentation", "id": 1935892861, "name": "documentation", "node_id": "MDU6TGFiZWwxOTM1ODkyODYx", "url": "https://api.github.com/repos/huggingface/datasets/labels/documentation" } ]
closed
false
null
[]
null
[ "Thanks for reporting @qqaatw.\r\n\r\nYes, we should definitely adapt that file for `datasets`. " ]
"2022-03-21T08:26:49Z"
"2023-02-27T10:32:37Z"
"2023-02-27T10:32:37Z"
CONTRIBUTOR
null
null
null
## Describe the bug Currently `docs/README.md` is a direct copy of the one in `transformers`; we should probably adapt this file for `datasets`.
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/3977/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3977/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/173
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/173/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/173/comments
https://api.github.com/repos/huggingface/datasets/issues/173/events
https://github.com/huggingface/datasets/pull/173
621,764,932
MDExOlB1bGxSZXF1ZXN0NDIwNzUyNzQy
173
Rm extracted test dirs
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
[ "Thanks for cleaning up the extracted dummy data folders! Instead of changing the file_utils we could also just put these folders under `.gitignore` (or maybe already done?).", "Awesome! I guess you might have to add the changes for the MockDLManager now in a different file though because of my last PR - sorry!" ]
"2020-05-20T13:30:48Z"
"2020-05-22T16:34:36Z"
"2020-05-22T16:34:35Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/173.diff", "html_url": "https://github.com/huggingface/datasets/pull/173", "merged_at": "2020-05-22T16:34:35Z", "patch_url": "https://github.com/huggingface/datasets/pull/173.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/173" }
All the dummy data used for tests were duplicated: for each dataset, we had one zip file but also its extracted directory. I removed all these directories. Furthermore, instead of extracting next to the dummy_data.zip file, we now extract into the temporary `cached_dir` used for tests, so that all the extracted directories get removed after testing. Finally, there was a bug in the `mock_download_manager` that would let it create directories with invalid names, as in #172. I fixed that by encoding URL arguments. I had to rename the dummy data for `scientific_papers` and `cnn_dailymail` (the AWS tests don't pass for those 2 in this PR, but they will once AWS is synced, as the local ones do). Let me know if it sounds good to you @patrickvonplaten. I'm still not entirely familiar with the mock downloader.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/173/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/173/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/316
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/316/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/316/comments
https://api.github.com/repos/huggingface/datasets/issues/316/events
https://github.com/huggingface/datasets/pull/316
646,366,450
MDExOlB1bGxSZXF1ZXN0NDQwNjY5NzY5
316
add AG News dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/13238952?v=4", "events_url": "https://api.github.com/users/jxmorris12/events{/privacy}", "followers_url": "https://api.github.com/users/jxmorris12/followers", "following_url": "https://api.github.com/users/jxmorris12/following{/other_user}", "gists_url": "https://api.github.com/users/jxmorris12/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/jxmorris12", "id": 13238952, "login": "jxmorris12", "node_id": "MDQ6VXNlcjEzMjM4OTUy", "organizations_url": "https://api.github.com/users/jxmorris12/orgs", "received_events_url": "https://api.github.com/users/jxmorris12/received_events", "repos_url": "https://api.github.com/users/jxmorris12/repos", "site_admin": false, "starred_url": "https://api.github.com/users/jxmorris12/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jxmorris12/subscriptions", "type": "User", "url": "https://api.github.com/users/jxmorris12" }
[]
closed
false
null
[]
null
[ "Thanks @jxmorris12 for adding this adding. \r\nCan you please add a small description of the PR?" ]
"2020-06-26T16:11:58Z"
"2020-06-30T09:58:08Z"
"2020-06-30T08:31:55Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/316.diff", "html_url": "https://github.com/huggingface/datasets/pull/316", "merged_at": "2020-06-30T08:31:55Z", "patch_url": "https://github.com/huggingface/datasets/pull/316.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/316" }
Adds support for the AG News topic classification dataset.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/316/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/316/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/3074
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3074/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3074/comments
https://api.github.com/repos/huggingface/datasets/issues/3074/events
https://github.com/huggingface/datasets/pull/3074
1,025,940,085
PR_kwDODunzps4tLbe-
3,074
add XCSR dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/42788901?v=4", "events_url": "https://api.github.com/users/yangxqiao/events{/privacy}", "followers_url": "https://api.github.com/users/yangxqiao/followers", "following_url": "https://api.github.com/users/yangxqiao/following{/other_user}", "gists_url": "https://api.github.com/users/yangxqiao/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/yangxqiao", "id": 42788901, "login": "yangxqiao", "node_id": "MDQ6VXNlcjQyNzg4OTAx", "organizations_url": "https://api.github.com/users/yangxqiao/orgs", "received_events_url": "https://api.github.com/users/yangxqiao/received_events", "repos_url": "https://api.github.com/users/yangxqiao/repos", "site_admin": false, "starred_url": "https://api.github.com/users/yangxqiao/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yangxqiao/subscriptions", "type": "User", "url": "https://api.github.com/users/yangxqiao" }
[]
closed
false
null
[]
null
[ "> Hi ! Thanks for adding this dataset :)\r\n> \r\n> Do you know how the translations were done ? Maybe we can mention that in the dataset card.\r\n> \r\n> The rest looks all good to me :) good job with the dataset script and the dataset card !\r\n> \r\n> Just one thing: we try to have dummy_data.zip files that are as small as possible, however here each zip file is 70KB+. It think we can make them even smaller if we remove unnecessary files in them. In particular in the `ar` dummy data zip file, we don't need the data for all languages, but rather only the `ar` files. Could you try to remove the unnecessary files in the dummy data zip files ?\r\n\r\nHi! \r\n\r\nThank you so much for reviewing this PR. I've updated the README to briefly mention the translations and added a link to the paper, where a detailed description of the translation procedure can be found in the appendix.\r\n\r\nFor the dummy_data.zip files, is it possible to keep all the current files? I tried to remove some of the files, but the removal led to a failure in the local testing. We also think it may be better to keep the current dummy_data.zip files because all the data are useful actually. Thanks a lot!!", "Hi @lhoestq, just a gentle ping on this PR. :D " ]
"2021-10-14T04:39:59Z"
"2021-11-08T13:52:36Z"
"2021-11-08T13:52:36Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/3074.diff", "html_url": "https://github.com/huggingface/datasets/pull/3074", "merged_at": "2021-11-08T13:52:36Z", "patch_url": "https://github.com/huggingface/datasets/pull/3074.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3074" }
Hi, I wanted to add the [XCSR](https://inklab.usc.edu//XCSR/xcsr_datasets) dataset to huggingface! :) I followed the instructions for adding a new dataset to huggingface and have all the required files ready now! It would be super helpful if you could take a look and review them. Thanks in advance for your time and help. I look forward to hearing from you and can't wait to add XCSR to huggingface :D
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3074/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3074/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/5699
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5699/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5699/comments
https://api.github.com/repos/huggingface/datasets/issues/5699/events
https://github.com/huggingface/datasets/issues/5699
1,652,437,419
I_kwDODunzps5ifjGr
5,699
Issue when splitting a cached dataset in memory
{ "avatar_url": "https://avatars.githubusercontent.com/u/47528215?v=4", "events_url": "https://api.github.com/users/FrancoisNoyez/events{/privacy}", "followers_url": "https://api.github.com/users/FrancoisNoyez/followers", "following_url": "https://api.github.com/users/FrancoisNoyez/following{/other_user}", "gists_url": "https://api.github.com/users/FrancoisNoyez/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/FrancoisNoyez", "id": 47528215, "login": "FrancoisNoyez", "node_id": "MDQ6VXNlcjQ3NTI4MjE1", "organizations_url": "https://api.github.com/users/FrancoisNoyez/orgs", "received_events_url": "https://api.github.com/users/FrancoisNoyez/received_events", "repos_url": "https://api.github.com/users/FrancoisNoyez/repos", "site_admin": false, "starred_url": "https://api.github.com/users/FrancoisNoyez/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/FrancoisNoyez/subscriptions", "type": "User", "url": "https://api.github.com/users/FrancoisNoyez" }
[]
open
false
null
[]
null
[ "Hi ! Good catch, this is wrong indeed and thanks for opening a PR :)" ]
"2023-04-03T17:00:07Z"
"2023-04-04T16:52:42Z"
null
NONE
null
null
null
### Describe the bug **In the 'train_test_split' method of the Dataset class** (defined datasets/arrow_dataset.py), **if 'self.cache_files' is not empty**, then, **regarding the input parameters 'train_indices_cache_file_name' and 'test_indices_cache_file_name', if they are None**, we modify them to make them not None, to see if we can just provide back / work from cached data. But if we can't provide cached data, we move on with the call to the method, except those two values are not None anymore, which will conflict with the use of the 'keep_in_memory' parameter down the line. Indeed, at some point we end up calling the 'select' method, **and if 'keep_in_memory' is True**, since the value of this method's parameter 'indices_cache_file_name' is now not None anymore, **an exception is raised, whose message is "Please use either 'keep_in_memory' or 'indices_cache_file_name' but not both.".** Because of that, it's impossible to perform a train / test split of a cached dataset while requesting that the result not be cached. Which is inconvenient when one is just performing experiments, with no intention of caching the result. Aside from this being inconvenient, **the code which lead up to that situation seems simply wrong** to me: the input variable should not be modified so as to change the user's intention just to perform a test, if that test can fail and respecting the user's intention is necessary to proceed in that case. To fix this, I suggest to use other variables / other variable names, in order to host the value(s) needed to perform the test, so as not to change the originally input values needed by the rest of the method's code. Also, **I don't see why an exception should be raised when the 'select' method is called with both 'keep_in_memory'=True and 'indices_cache_file_name'!=None**: should the use of 'keep_in_memory' not prevail anyway, specifying that the user does not want to perform caching, and so making irrelevant the value of 'indices_cache_file_name'? This is indeed what happens when we look further in the code, in the '\_select_with_indices_mapping' method: when 'keep_in_memory' is True, then the value of indices_cache_file_name does not matter, the data will be written to a stream buffer anyway. Hence I suggest to remove the raising of exception in those circumstances. Notably, to remove the raising of it in the 'select', '\_select_with_indices_mapping', 'shuffle' and 'map' methods. ### Steps to reproduce the bug ```python import datasets def generate_examples(): for i in range(10): yield {"id": i} dataset_ = datasets.Dataset.from_generator( generate_examples, keep_in_memory=False, ) dataset_.train_test_split( test_size=3, shuffle=False, keep_in_memory=True, train_indices_cache_file_name=None, test_indices_cache_file_name=None, ) ``` ### Expected behavior The result of the above code should be a DatasetDict instance. 
Instead, we get the following exception stack: ```python --------------------------------------------------------------------------- ValueError Traceback (most recent call last) Cell In[3], line 1 ----> 1 dataset_.train_test_split( 2 test_size=3, 3 shuffle=False, 4 keep_in_memory=True, 5 train_indices_cache_file_name=None, 6 test_indices_cache_file_name=None, 7 ) File ~/Work/Developments/datasets/src/datasets/arrow_dataset.py:528, in transmit_format.<locals>.wrapper(*args, **kwargs) 521 self_format = { 522 "type": self._format_type, 523 "format_kwargs": self._format_kwargs, 524 "columns": self._format_columns, 525 "output_all_columns": self._output_all_columns, 526 } 527 # apply actual function --> 528 out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs) 529 datasets: List["Dataset"] = list(out.values()) if isinstance(out, dict) else [out] 530 # re-apply format to the output File ~/Work/Developments/datasets/src/datasets/fingerprint.py:511, in fingerprint_transform.<locals>._fingerprint.<locals>.wrapper(*args, **kwargs) 507 validate_fingerprint(kwargs[fingerprint_name]) 509 # Call actual function --> 511 out = func(dataset, *args, **kwargs) 513 # Update fingerprint of in-place transforms + update in-place history of transforms 515 if inplace: # update after calling func so that the fingerprint doesn't change if the function fails File ~/Work/Developments/datasets/src/datasets/arrow_dataset.py:4428, in Dataset.train_test_split(self, test_size, train_size, shuffle, stratify_by_column, seed, generator, keep_in_memory, load_from_cache_file, train_indices_cache_file_name, test_indices_cache_file_name, writer_batch_size, train_new_fingerprint, test_new_fingerprint) 4425 test_indices = permutation[:n_test] 4426 train_indices = permutation[n_test : (n_test + n_train)] -> 4428 train_split = self.select( 4429 indices=train_indices, 4430 keep_in_memory=keep_in_memory, 4431 indices_cache_file_name=train_indices_cache_file_name, 4432 writer_batch_size=writer_batch_size, 4433 new_fingerprint=train_new_fingerprint, 4434 ) 4435 test_split = self.select( 4436 indices=test_indices, 4437 keep_in_memory=keep_in_memory, (...) 4440 new_fingerprint=test_new_fingerprint, 4441 ) 4443 return DatasetDict({"train": train_split, "test": test_split}) File ~/Work/Developments/datasets/src/datasets/arrow_dataset.py:528, in transmit_format.<locals>.wrapper(*args, **kwargs) 521 self_format = { 522 "type": self._format_type, 523 "format_kwargs": self._format_kwargs, 524 "columns": self._format_columns, 525 "output_all_columns": self._output_all_columns, 526 } 527 # apply actual function --> 528 out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs) 529 datasets: List["Dataset"] = list(out.values()) if isinstance(out, dict) else [out] 530 # re-apply format to the output File ~/Work/Developments/datasets/src/datasets/fingerprint.py:511, in fingerprint_transform.<locals>._fingerprint.<locals>.wrapper(*args, **kwargs) 507 validate_fingerprint(kwargs[fingerprint_name]) 509 # Call actual function --> 511 out = func(dataset, *args, **kwargs) 513 # Update fingerprint of in-place transforms + update in-place history of transforms 515 if inplace: # update after calling func so that the fingerprint doesn't change if the function fails File ~/Work/Developments/datasets/src/datasets/arrow_dataset.py:3679, in Dataset.select(self, indices, keep_in_memory, indices_cache_file_name, writer_batch_size, new_fingerprint) 3645 """Create a new dataset with rows selected following the list/array of indices. 
3646 3647 Args: (...) 3676 ``` 3677 """ 3678 if keep_in_memory and indices_cache_file_name is not None: -> 3679 raise ValueError("Please use either `keep_in_memory` or `indices_cache_file_name` but not both.") 3681 if len(self.list_indexes()) > 0: 3682 raise DatasetTransformationNotAllowedError( 3683 "Using `.select` on a dataset with attached indexes is not allowed. You can first run `.drop_index() to remove your index and then re-add it." 3684 ) ValueError: Please use either `keep_in_memory` or `indices_cache_file_name` but not both. ``` ### Environment info - `datasets` version: 2.11.1.dev0 - Platform: Linux-5.4.236-1-MANJARO-x86_64-with-glibc2.2.5 - Python version: 3.8.12 - Huggingface_hub version: 0.13.3 - PyArrow version: 11.0.0 - Pandas version: 2.0.0 *** *** EDIT: Now with a pull request to fix this [here](https://github.com/huggingface/datasets/pull/5700)
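Until a fix lands, one possible workaround (a sketch, assuming a non-shuffled split is acceptable) is to build the split manually with `select`, which accepts `keep_in_memory=True` as long as no indices cache file name is passed: ```python import datasets def generate_examples(): for i in range(10): yield {"id": i} dataset_ = datasets.Dataset.from_generator(generate_examples, keep_in_memory=False) # Manual, non-shuffled split: no indices cache file is created. test_size = 3 n_total = len(dataset_) train_split = dataset_.select(range(n_total - test_size), keep_in_memory=True) test_split = dataset_.select(range(n_total - test_size, n_total), keep_in_memory=True) splits = datasets.DatasetDict({"train": train_split, "test": test_split}) ```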
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5699/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5699/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3478
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3478/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3478/comments
https://api.github.com/repos/huggingface/datasets/issues/3478/events
https://github.com/huggingface/datasets/pull/3478
1,087,860,180
PR_kwDODunzps4wPMWq
3,478
Extend support for streaming datasets that use os.walk
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
closed
false
null
[]
null
[ "Nice. I'll update the dataset viewer once merged, and test on these four datasets" ]
"2021-12-23T16:42:55Z"
"2021-12-24T10:50:20Z"
"2021-12-24T10:50:19Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/3478.diff", "html_url": "https://github.com/huggingface/datasets/pull/3478", "merged_at": "2021-12-24T10:50:19Z", "patch_url": "https://github.com/huggingface/datasets/pull/3478.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3478" }
This PR extends support for streaming mode to datasets that use `os.walk`, by patching that function. This PR adds streaming-mode support to the following datasets: 1. autshumato 1. code_x_glue_cd_code_to_text 1. code_x_glue_tc_nl_code_search_adv 1. nchlt CC: @severo
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3478/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3478/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/1776
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1776/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1776/comments
https://api.github.com/repos/huggingface/datasets/issues/1776/events
https://github.com/huggingface/datasets/issues/1776
792,755,249
MDU6SXNzdWU3OTI3NTUyNDk=
1,776
[Question & Bug Report] Can we preprocess a dataset on the fly?
{ "avatar_url": "https://avatars.githubusercontent.com/u/14048129?v=4", "events_url": "https://api.github.com/users/shuaihuaiyi/events{/privacy}", "followers_url": "https://api.github.com/users/shuaihuaiyi/followers", "following_url": "https://api.github.com/users/shuaihuaiyi/following{/other_user}", "gists_url": "https://api.github.com/users/shuaihuaiyi/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/shuaihuaiyi", "id": 14048129, "login": "shuaihuaiyi", "node_id": "MDQ6VXNlcjE0MDQ4MTI5", "organizations_url": "https://api.github.com/users/shuaihuaiyi/orgs", "received_events_url": "https://api.github.com/users/shuaihuaiyi/received_events", "repos_url": "https://api.github.com/users/shuaihuaiyi/repos", "site_admin": false, "starred_url": "https://api.github.com/users/shuaihuaiyi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/shuaihuaiyi/subscriptions", "type": "User", "url": "https://api.github.com/users/shuaihuaiyi" }
[]
closed
false
null
[]
null
[ "We are very actively working on this. How does your dataset look like in practice (number/size/type of files)?", "It's a text file with many lines (about 1B) of Chinese sentences. I use it to train language model using https://github.com/huggingface/transformers/blob/master/examples/language-modeling/run_mlm_wwm.py", "Indeed I will submit a PR in a fez days to enable processing on-the-fly :)\r\nThis can be useful in language modeling for tokenization, padding etc.\r\n", "any update on this issue? ...really look forward to use it ", "Hi @acul3,\r\n\r\nPlease look at the discussion on a related Issue #1825. I think using `set_transform` after building from source should do.", "@gchhablani thank you so much\r\n\r\nwill try look at it" ]
"2021-01-24T09:28:24Z"
"2021-05-20T04:15:58Z"
"2021-05-20T04:15:58Z"
NONE
null
null
null
I know we can use `Dataset.map` to preprocess a dataset, but I'm using it with a very large corpus, which generates a huge cache file (several TB of cache from a 400 GB text file). I have no disk large enough to store it. Can we preprocess a dataset on the fly without generating a cache? BTW, I tried raising `writer_batch_size`. It seems that argument doesn't have any effect when it's larger than `batch_size`, because the whole batch is saved as soon as it's processed. Please check the following code: https://github.com/huggingface/datasets/blob/0281f9d881f3a55c89aeaa642f1ba23444b64083/src/datasets/arrow_dataset.py#L1532
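As mentioned in the discussion on this issue, `set_transform` can apply preprocessing lazily, at access time, without writing a cache. A minimal sketch (the tokenizer checkpoint and the `corpus.txt` path are placeholders used only for illustration): ```python from datasets import load_dataset from transformers import AutoTokenizer # "bert-base-chinese" and "corpus.txt" are placeholder names for this sketch. tokenizer = AutoTokenizer.from_pretrained("bert-base-chinese") raw = load_dataset("text", data_files={"train": "corpus.txt"})["train"] def tokenize(batch): # Applied lazily on each accessed batch; nothing is written to the cache. return tokenizer(batch["text"], truncation=True, padding="max_length", max_length=128) raw.set_transform(tokenize) print(raw[0].keys()) # e.g. dict_keys(['input_ids', 'token_type_ids', 'attention_mask']) ```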
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1776/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1776/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/3573
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3573/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3573/comments
https://api.github.com/repos/huggingface/datasets/issues/3573/events
https://github.com/huggingface/datasets/pull/3573
1,101,157,676
PR_kwDODunzps4w5oE_
3,573
Add Mauve metric
{ "avatar_url": "https://avatars.githubusercontent.com/u/2321244?v=4", "events_url": "https://api.github.com/users/jthickstun/events{/privacy}", "followers_url": "https://api.github.com/users/jthickstun/followers", "following_url": "https://api.github.com/users/jthickstun/following{/other_user}", "gists_url": "https://api.github.com/users/jthickstun/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/jthickstun", "id": 2321244, "login": "jthickstun", "node_id": "MDQ6VXNlcjIzMjEyNDQ=", "organizations_url": "https://api.github.com/users/jthickstun/orgs", "received_events_url": "https://api.github.com/users/jthickstun/received_events", "repos_url": "https://api.github.com/users/jthickstun/repos", "site_admin": false, "starred_url": "https://api.github.com/users/jthickstun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jthickstun/subscriptions", "type": "User", "url": "https://api.github.com/users/jthickstun" }
[]
closed
false
null
[]
null
[ "Hi ! The CI was failing because `mauve-text` wasn't installed. I added it to the CI setup :)\r\n\r\nI also did some minor changes to the script itself, especially to remove `**kwargs` and explicitly mentioned all the supported arguments (this way if someone does a typo with some parameters they get an error)" ]
"2022-01-13T03:52:48Z"
"2022-01-20T15:00:08Z"
"2022-01-20T15:00:08Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/3573.diff", "html_url": "https://github.com/huggingface/datasets/pull/3573", "merged_at": "2022-01-20T15:00:07Z", "patch_url": "https://github.com/huggingface/datasets/pull/3573.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3573" }
Add support for the [Mauve](https://github.com/krishnap25/mauve) metric introduced in this [paper](https://arxiv.org/pdf/2102.01454.pdf) (NeurIPS 2021).
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3573/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3573/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/395
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/395/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/395/comments
https://api.github.com/repos/huggingface/datasets/issues/395/events
https://github.com/huggingface/datasets/issues/395
657,454,983
MDU6SXNzdWU2NTc0NTQ5ODM=
395
Memory issue when doing select
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" } ]
null
[]
"2020-07-15T15:43:38Z"
"2020-07-16T08:07:31Z"
"2020-07-16T08:07:31Z"
MEMBER
null
null
null
As noticed in #389, the following code loads the entire Wikipedia dataset into memory. ```python import nlp w = nlp.load_dataset("wikipedia", "20200501.en", split="train") w.select([0]) ``` This is caused by [this line](https://github.com/huggingface/nlp/blob/master/src/nlp/arrow_dataset.py#L626), which for some reason tries to serialize the function together with all the Wikipedia data. This is not the case with `.map` or `.filter`. However, functions that are based on `.select`, like `.shuffle`, `.shard`, `.train_test_split`, and `.sort`, are affected.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/395/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/395/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/1601
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1601/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1601/comments
https://api.github.com/repos/huggingface/datasets/issues/1601/events
https://github.com/huggingface/datasets/pull/1601
770,758,914
MDExOlB1bGxSZXF1ZXN0NTQyNDQzNDE3
1,601
second update of the id_newspapers_2018
{ "avatar_url": "https://avatars.githubusercontent.com/u/7669893?v=4", "events_url": "https://api.github.com/users/cahya-wirawan/events{/privacy}", "followers_url": "https://api.github.com/users/cahya-wirawan/followers", "following_url": "https://api.github.com/users/cahya-wirawan/following{/other_user}", "gists_url": "https://api.github.com/users/cahya-wirawan/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/cahya-wirawan", "id": 7669893, "login": "cahya-wirawan", "node_id": "MDQ6VXNlcjc2Njk4OTM=", "organizations_url": "https://api.github.com/users/cahya-wirawan/orgs", "received_events_url": "https://api.github.com/users/cahya-wirawan/received_events", "repos_url": "https://api.github.com/users/cahya-wirawan/repos", "site_admin": false, "starred_url": "https://api.github.com/users/cahya-wirawan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/cahya-wirawan/subscriptions", "type": "User", "url": "https://api.github.com/users/cahya-wirawan" }
[]
closed
false
null
[]
null
[ "I close this PR, since it based on 1 week old repo. And I will create a new one" ]
"2020-12-18T10:10:20Z"
"2020-12-18T12:15:31Z"
"2020-12-18T12:15:31Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/1601.diff", "html_url": "https://github.com/huggingface/datasets/pull/1601", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/1601.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1601" }
The feature "url" is currently wrongly set to data["date"]; this PR fixes it to data["url"]. I also added an additional POC.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1601/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1601/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/3109
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3109/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3109/comments
https://api.github.com/repos/huggingface/datasets/issues/3109/events
https://github.com/huggingface/datasets/pull/3109
1,030,543,284
PR_kwDODunzps4tZXmC
3,109
Update BibTeX entry
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
closed
false
null
[]
null
[]
"2021-10-19T16:59:31Z"
"2021-10-19T17:13:28Z"
"2021-10-19T17:13:27Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/3109.diff", "html_url": "https://github.com/huggingface/datasets/pull/3109", "merged_at": "2021-10-19T17:13:27Z", "patch_url": "https://github.com/huggingface/datasets/pull/3109.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3109" }
Update BibTeX entry.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3109/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3109/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/4484
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4484/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4484/comments
https://api.github.com/repos/huggingface/datasets/issues/4484/events
https://github.com/huggingface/datasets/pull/4484
1,269,383,811
PR_kwDODunzps45jywZ
4,484
Better ImportError message when a dataset script dependency is missing
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "Discussed offline with @mariosasko, merging :)", "Fwiw, i think this same issue is occurring on the datasets website page, where preview isn't available due to the `bigbench` import error", "For the preview of BigBench datasets, we're just waiting for bigbench to have a stable version on PyPI, instead of the one hosted on GCS ;)" ]
"2022-06-13T12:44:37Z"
"2022-07-08T14:30:44Z"
"2022-06-13T13:50:47Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/4484.diff", "html_url": "https://github.com/huggingface/datasets/pull/4484", "merged_at": "2022-06-13T13:50:47Z", "patch_url": "https://github.com/huggingface/datasets/pull/4484.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4484" }
When a dependency is missing for a dataset script, an ImportError message is shown, with a tip to install the missing dependencies. This message is not ideal at the moment: it may show duplicate dependencies, and is not very readable. I improved it from ``` ImportError: To be able to use bigbench, you need to install the following dependencies['bigbench', 'bigbench', 'bigbench', 'bigbench'] using 'pip install "bigbench @ https://storage.googleapis.com/public_research_data/bigbench/bigbench-0.0.1.tar.gz" bigbench bigbench bigbench' for instance' ``` to ``` ImportError: To be able to use bigbench, you need to install the following dependency: bigbench. Please install it using 'pip install "bigbench @ https://storage.googleapis.com/public_research_data/bigbench/bigbench-0.0.1.tar.gz"' for instance' ```
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4484/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4484/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/6008
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6008/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6008/comments
https://api.github.com/repos/huggingface/datasets/issues/6008/events
https://github.com/huggingface/datasets/issues/6008
1,789,869,344
I_kwDODunzps5qrz0g
6,008
Dataset.from_generator consistently freezes at ~1000 rows
{ "avatar_url": "https://avatars.githubusercontent.com/u/27695722?v=4", "events_url": "https://api.github.com/users/andreemic/events{/privacy}", "followers_url": "https://api.github.com/users/andreemic/followers", "following_url": "https://api.github.com/users/andreemic/following{/other_user}", "gists_url": "https://api.github.com/users/andreemic/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/andreemic", "id": 27695722, "login": "andreemic", "node_id": "MDQ6VXNlcjI3Njk1NzIy", "organizations_url": "https://api.github.com/users/andreemic/orgs", "received_events_url": "https://api.github.com/users/andreemic/received_events", "repos_url": "https://api.github.com/users/andreemic/repos", "site_admin": false, "starred_url": "https://api.github.com/users/andreemic/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/andreemic/subscriptions", "type": "User", "url": "https://api.github.com/users/andreemic" }
[]
closed
false
null
[]
null
[ "By default, we write data to disk (so it can be memory-mapped) every 1000 rows/samples. You can control this with the `writer_batch_size` parameter. Also, when working with fixed-size arrays, the `ArrayXD` feature types yield better performance (e.g., in your case, `features=datasets.Features({\"i\": datasets.Array3D(shape=(512,512,3), dtype=\"float32\")})` should be faster).\r\n\r\nOur support for multi-dim arrays could be better, and we plan to improve it as part of https://github.com/huggingface/datasets/issues/5272.", "> By default, we write data to disk (so it can be memory-mapped) every 1000 rows/samples. You can control this with the `writer_batch_size` parameter. Also, when working with fixed-size arrays, the `ArrayXD` feature types yield better performance (e.g., in your case, `features=datasets.Features({\"i\": datasets.Array3D(shape=(512,512,3), dtype=\"float32\")})` should be faster).\r\n> \r\n> Our support for multi-dim arrays could be better, and we plan to improve it as part of #5272.\r\n\r\nThanks for the explanation! The Image array was just for demonstration, I use PIL Images in practice. Does that make a difference? What's the best approach for a dataset with PIL Images as rows?", "It's best to use the `datasets.Image()` feature type for PIL images (to save space) :)" ]
"2023-07-05T16:06:48Z"
"2023-07-10T13:46:39Z"
"2023-07-10T13:46:39Z"
NONE
null
null
null
### Describe the bug Whenever I try to create a dataset which contains images using `Dataset.from_generator`, it freezes around 996 rows. I suppose it has something to do with memory consumption, but there's more memory available. Somehow it worked a few times, but mostly this makes the datasets library much more cumbersome to work with, because generators are the easiest way to turn an existing dataset into a Hugging Face dataset. I've let it run in the frozen state for way longer than it can possibly take to load the actual dataset. Let me know if you have ideas on how to resolve it! ### Steps to reproduce the bug ```python from datasets import Dataset import numpy as np def gen(): for row in range(10000): yield {"i": np.random.rand(512, 512, 3)} Dataset.from_generator(gen) # -> 90% of the time gets stuck around 1000 rows ``` ### Expected behavior Should continue and go through all the examples yielded by the generator, or at least throw an error or somehow communicate what's going on. ### Environment info - `datasets` version: 2.8.0 - Platform: Linux-5.15.0-52-generic-x86_64-with-glibc2.29 - Python version: 3.8.10 - PyArrow version: 12.0.1 - Pandas version: 1.5.1
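As suggested in the replies, here is a sketch that makes the periodic flushes smaller and declares a fixed-shape array feature; the `writer_batch_size` value of 100 is an arbitrary choice for illustration: ```python import numpy as np import datasets def gen(): for _ in range(10_000): yield {"i": np.random.rand(512, 512, 3).astype("float32")} features = datasets.Features({"i": datasets.Array3D(shape=(512, 512, 3), dtype="float32")}) # writer_batch_size controls how many examples are buffered before being # flushed to disk; the default of 1000 is where the apparent "freeze" occurs. ds = datasets.Dataset.from_generator(gen, features=features, writer_batch_size=100) ```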
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6008/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6008/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/3763
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3763/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3763/comments
https://api.github.com/repos/huggingface/datasets/issues/3763/events
https://github.com/huggingface/datasets/issues/3763
1,145,099,878
I_kwDODunzps5EQNZm
3,763
It's not possible to download the `20200501.pt` dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/1514798?v=4", "events_url": "https://api.github.com/users/jvanz/events{/privacy}", "followers_url": "https://api.github.com/users/jvanz/followers", "following_url": "https://api.github.com/users/jvanz/following{/other_user}", "gists_url": "https://api.github.com/users/jvanz/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/jvanz", "id": 1514798, "login": "jvanz", "node_id": "MDQ6VXNlcjE1MTQ3OTg=", "organizations_url": "https://api.github.com/users/jvanz/orgs", "received_events_url": "https://api.github.com/users/jvanz/received_events", "repos_url": "https://api.github.com/users/jvanz/repos", "site_admin": false, "starred_url": "https://api.github.com/users/jvanz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jvanz/subscriptions", "type": "User", "url": "https://api.github.com/users/jvanz" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
null
[]
null
[ "Hi @jvanz, thanks for reporting.\r\n\r\nPlease note that Wikimedia website does not longer host Wikipedia dumps for so old dates.\r\n\r\nFor a list of accessible dump dates of `pt` Wikipedia, please see: https://dumps.wikimedia.org/ptwiki/\r\n\r\nYou can load for example `20220220` `pt` Wikipedia:\r\n```python\r\ndataset = load_dataset(\"wikipedia\", language=\"pt\", date=\"20220220\", beam_runner=\"DirectRunner\")\r\n```", "> ```python\r\n> dataset = load_dataset(\"wikipedia\", language=\"pt\", date=\"20220220\", beam_runner=\"DirectRunner\")\r\n> ```\r\n\r\nThank you! I did not know that I can do this. I was following the example in the error message when I do not define which language dataset I'm trying to download.\r\n\r\nI've tried something similar changing the date in the `load_dataset` call that I've shared in the bug description. Obviously, it did not work. I need to read the docs more carefully next time. My bad!\r\n\r\nThanks again and sorry for the noise.\r\n\r\n" ]
"2022-02-20T18:34:58Z"
"2022-02-21T12:06:12Z"
"2022-02-21T09:25:06Z"
NONE
null
null
null
## Describe the bug The dataset `20200501.pt` is broken. The available datasets: https://dumps.wikimedia.org/ptwiki/ ## Steps to reproduce the bug ```python from datasets import load_dataset dataset = load_dataset("wikipedia", "20200501.pt", beam_runner='DirectRunner') ``` ## Expected results I expect to download the dataset locally. ## Actual results ``` >>> from datasets import load_dataset >>> dataset = load_dataset("wikipedia", "20200501.pt", beam_runner='DirectRunner') Downloading and preparing dataset wikipedia/20200501.pt to /home/jvanz/.cache/huggingface/datasets/wikipedia/20200501.pt/1.0.0/009f923d9b6dd00c00c8cdc7f408f2b47f45dd4f5fb7982a21f9448f4afbe475... /home/jvanz/anaconda3/envs/tf-gpu/lib/python3.9/site-packages/apache_beam/__init__.py:79: UserWarning: This version of Apache Beam has not been sufficiently tested on Python 3.9. You may encounter bugs or missing features. warnings.warn( 0%| | 0/1 [00:00<?, ?it/s] Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/jvanz/anaconda3/envs/tf-gpu/lib/python3.9/site-packages/datasets/load.py", line 1702, in load_dataset builder_instance.download_and_prepare( File "/home/jvanz/anaconda3/envs/tf-gpu/lib/python3.9/site-packages/datasets/builder.py", line 594, in download_and_prepare self._download_and_prepare( File "/home/jvanz/anaconda3/envs/tf-gpu/lib/python3.9/site-packages/datasets/builder.py", line 1245, in _download_and_prepare super()._download_and_prepare( File "/home/jvanz/anaconda3/envs/tf-gpu/lib/python3.9/site-packages/datasets/builder.py", line 661, in _download_and_prepare split_generators = self._split_generators(dl_manager, **split_generators_kwargs) File "/home/jvanz/.cache/huggingface/modules/datasets_modules/datasets/wikipedia/009f923d9b6dd00c00c8cdc7f408f2b47f45dd4f5fb7982a21f9448f4afbe475/wikipedia.py", line 420, in _split_generators downloaded_files = dl_manager.download_and_extract({"info": info_url}) File "/home/jvanz/anaconda3/envs/tf-gpu/lib/python3.9/site-packages/datasets/utils/download_manager.py", line 307, in download_and_extract return self.extract(self.download(url_or_urls)) File "/home/jvanz/anaconda3/envs/tf-gpu/lib/python3.9/site-packages/datasets/utils/download_manager.py", line 195, in download downloaded_path_or_paths = map_nested( File "/home/jvanz/anaconda3/envs/tf-gpu/lib/python3.9/site-packages/datasets/utils/py_utils.py", line 260, in map_nested mapped = [ File "/home/jvanz/anaconda3/envs/tf-gpu/lib/python3.9/site-packages/datasets/utils/py_utils.py", line 261, in <listcomp> _single_map_nested((function, obj, types, None, True)) File "/home/jvanz/anaconda3/envs/tf-gpu/lib/python3.9/site-packages/datasets/utils/py_utils.py", line 196, in _single_map_nested return function(data_struct) File "/home/jvanz/anaconda3/envs/tf-gpu/lib/python3.9/site-packages/datasets/utils/download_manager.py", line 216, in _download return cached_path(url_or_filename, download_config=download_config) File "/home/jvanz/anaconda3/envs/tf-gpu/lib/python3.9/site-packages/datasets/utils/file_utils.py", line 298, in cached_path output_path = get_from_cache( File "/home/jvanz/anaconda3/envs/tf-gpu/lib/python3.9/site-packages/datasets/utils/file_utils.py", line 612, in get_from_cache raise FileNotFoundError(f"Couldn't find file at {url}") FileNotFoundError: Couldn't find file at https://dumps.wikimedia.org/ptwiki/20200501/dumpstatus.json ``` ## Environment info ``` - `datasets` version: 1.18.3 - Platform: Linux-5.3.18-150300.59.49-default-x86_64-with-glibc2.31 - Python version: 3.9.7 - PyArrow version: 6.0.1 ```
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3763/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3763/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/1169
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1169/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1169/comments
https://api.github.com/repos/huggingface/datasets/issues/1169/events
https://github.com/huggingface/datasets/pull/1169
757,747,997
MDExOlB1bGxSZXF1ZXN0NTMzMDY5MzAx
1,169
Add Opus fiskmo dataset for Finnish and Swedish for MT task
{ "avatar_url": "https://avatars.githubusercontent.com/u/6419011?v=4", "events_url": "https://api.github.com/users/spatil6/events{/privacy}", "followers_url": "https://api.github.com/users/spatil6/followers", "following_url": "https://api.github.com/users/spatil6/following{/other_user}", "gists_url": "https://api.github.com/users/spatil6/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/spatil6", "id": 6419011, "login": "spatil6", "node_id": "MDQ6VXNlcjY0MTkwMTE=", "organizations_url": "https://api.github.com/users/spatil6/orgs", "received_events_url": "https://api.github.com/users/spatil6/received_events", "repos_url": "https://api.github.com/users/spatil6/repos", "site_admin": false, "starred_url": "https://api.github.com/users/spatil6/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/spatil6/subscriptions", "type": "User", "url": "https://api.github.com/users/spatil6" }
[]
closed
false
null
[]
null
[ "merging since the CI is fixed on master" ]
"2020-12-05T17:56:55Z"
"2020-12-07T11:04:11Z"
"2020-12-07T11:04:11Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/1169.diff", "html_url": "https://github.com/huggingface/datasets/pull/1169", "merged_at": "2020-12-07T11:04:11Z", "patch_url": "https://github.com/huggingface/datasets/pull/1169.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1169" }
Adding fiskmo, a massive parallel corpus for Finnish and Swedish. For more info: http://opus.nlpl.eu/fiskmo.php
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1169/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1169/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/566
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/566/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/566/comments
https://api.github.com/repos/huggingface/datasets/issues/566/events
https://github.com/huggingface/datasets/pull/566
691,160,208
MDExOlB1bGxSZXF1ZXN0NDc3OTM2NTIz
566
Remove logger pickling to fix gg colab issues
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
[]
"2020-09-02T16:16:21Z"
"2020-09-03T16:31:53Z"
"2020-09-03T16:31:52Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/566.diff", "html_url": "https://github.com/huggingface/datasets/pull/566", "merged_at": "2020-09-03T16:31:52Z", "patch_url": "https://github.com/huggingface/datasets/pull/566.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/566" }
`logger` objects are not picklable in Google Colab, contrary to `logger` objects in Jupyter notebooks or Python shells. This creates some issues in Google Colab right now. Indeed, by calling any `Dataset` method, the fingerprint update pickles the transform function, and as the logger comes with it, it results in an error (full stacktrace [here](http://pastebin.fr/64330)): ```python /usr/local/lib/python3.6/dist-packages/zmq/backend/cython/socket.cpython-36m-x86_64-linux-gnu.so in zmq.backend.cython.socket.Socket.__reduce_cython__() TypeError: no default __reduce__ due to non-trivial __cinit__ ``` To fix that, I no longer dump the transform itself (`_map_single`, `select`, etc.), but only its full name (`nlp.arrow_dataset.Dataset._map_single`, `nlp.arrow_dataset.Dataset.select`, etc.)
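As a toy illustration of the idea described above (not the actual library code), a callable can be recorded by its fully qualified name instead of being pickled as an object, so that nothing unpicklable such as a logger travels with it. The helper names below are hypothetical.

```python
# Toy sketch (hypothetical helpers, not the nlp/datasets implementation):
# store a callable as (module, qualified name) strings and resolve it back
# at use time, so no unpicklable attributes come along with it.
import importlib
import os.path


def dump_by_name(func):
    """Serialize a callable as its module and qualified name."""
    return func.__module__, func.__qualname__


def load_by_name(module_name, qualname):
    """Resolve the callable back from its dotted names."""
    obj = importlib.import_module(module_name)
    for attr in qualname.split("."):  # handles Class.method style names
        obj = getattr(obj, attr)
    return obj


# Round-trip a standard-library function by name only.
ref = dump_by_name(os.path.join)
assert load_by_name(*ref) is os.path.join
```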
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/566/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/566/timeline
null
null
true