Columns (dataset viewer schema):

url: string (lengths 61–61)
repository_url: string (1 class)
labels_url: string (lengths 75–75)
comments_url: string (lengths 70–70)
events_url: string (lengths 68–68)
html_url: string (lengths 49–51)
id: int64 (1.24B–2.76B)
node_id: string (lengths 18–19)
number: int64 (4.35k–7.35k)
title: string (lengths 1–290)
user: dict
labels: list (lengths 0–4)
state: string (2 classes)
locked: bool (1 class)
assignee: dict
assignees: list (lengths 0–3)
milestone: dict
comments: int64 (0–49)
created_at: timestamp[ms]
updated_at: timestamp[ms]
closed_at: timestamp[ms]
author_association: string (4 classes)
active_lock_reason: null
body: string (lengths 1–47.9k, nullable)
closed_by: dict
reactions: dict
timeline_url: string (lengths 70–70)
performed_via_github_app: null
state_reason: string (3 classes)
draft: bool (2 classes)
pull_request: dict
is_pull_request: bool (2 classes)
https://api.github.com/repos/huggingface/datasets/issues/6411 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6411/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6411/comments | https://api.github.com/repos/huggingface/datasets/issues/6411/events | https://github.com/huggingface/datasets/pull/6411 | 1,992,386,630 | PR_kwDODunzps5fZE9F | 6,411 | Fix dependency conflict within CI build documentation | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | 1 | 2023-11-14T09:52:51 | 2023-11-14T10:05:59 | 2023-11-14T10:05:35 | MEMBER | null | Manually fix dependency conflict on `typing-extensions` version originated by `apache-beam` + `pydantic` (now a dependency of `huggingface-hub`).
This is a temporary hotfix for our CI Build documentation workflow until we stop using `apache-beam`.
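For context, a minimal illustration of the conflict (a sketch; the exact version bounds are assumptions, not taken from the pinned requirements):

```python
# apache-beam's dependency pins hold typing-extensions below 4.x, while
# pydantic v2 (pulled in through huggingface-hub) relies on newer symbols
# such as TypeAliasType, added in typing-extensions 4.6.0.
from typing_extensions import TypeAliasType  # ImportError under the old pin
```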
Fix #6406. | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6411/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6411/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6411",
"html_url": "https://github.com/huggingface/datasets/pull/6411",
"diff_url": "https://github.com/huggingface/datasets/pull/6411.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6411.patch",
"merged_at": "2023-11-14T10:05:34"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/6410 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6410/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6410/comments | https://api.github.com/repos/huggingface/datasets/issues/6410/events | https://github.com/huggingface/datasets/issues/6410 | 1,992,100,209 | I_kwDODunzps52vQlx | 6,410 | Datasets does not load HuggingFace Repository properly | {
"login": "MikeDoes",
"id": 40600201,
"node_id": "MDQ6VXNlcjQwNjAwMjAx",
"avatar_url": "https://avatars.githubusercontent.com/u/40600201?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MikeDoes",
"html_url": "https://github.com/MikeDoes",
"followers_url": "https://api.github.com/users/MikeDoes/followers",
"following_url": "https://api.github.com/users/MikeDoes/following{/other_user}",
"gists_url": "https://api.github.com/users/MikeDoes/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MikeDoes/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MikeDoes/subscriptions",
"organizations_url": "https://api.github.com/users/MikeDoes/orgs",
"repos_url": "https://api.github.com/users/MikeDoes/repos",
"events_url": "https://api.github.com/users/MikeDoes/events{/privacy}",
"received_events_url": "https://api.github.com/users/MikeDoes/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | open | false | null | [] | null | 2 | 2023-11-14T06:50:49 | 2023-11-16T06:54:36 | null | NONE | null | ### Describe the bug
Dear Datasets team,
We have just published a dataset on Hugging Face:
https://huggingface.co/ai4privacy
However, when trying to read it with the Datasets library, we get an error. As far as we understand, JSONL files are compatible, so could you please clarify how we can solve this issue? We would be more than happy to adapt the structure of the repository or its metadata to make it work:
```python
from datasets import load_dataset
dataset = load_dataset("ai4privacy/pii-masking-200k")
```
```
Downloading readme: 100%
11.8k/11.8k [00:00<00:00, 512kB/s]
Downloading data files: 100%
1/1 [00:11<00:00, 11.16s/it]
Downloading data: 100%
64.3M/64.3M [00:02<00:00, 32.9MB/s]
Downloading data: 100%
113M/113M [00:03<00:00, 35.0MB/s]
Downloading data: 100%
97.7M/97.7M [00:02<00:00, 46.1MB/s]
Downloading data: 100%
90.8M/90.8M [00:02<00:00, 44.9MB/s]
Downloading data: 100%
7.63k/7.63k [00:00<00:00, 41.0kB/s]
Downloading data: 100%
1.03k/1.03k [00:00<00:00, 9.44kB/s]
Extracting data files: 100%
1/1 [00:00<00:00, 29.26it/s]
Generating train split:
209261/0 [00:05<00:00, 41201.25 examples/s]
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
[/usr/local/lib/python3.10/dist-packages/datasets/builder.py](https://localhost:8080/#) in _prepare_split_single(self, gen_kwargs, fpath, file_format, max_shard_size, job_id)
1939 )
-> 1940 writer.write_table(table)
1941 num_examples_progress_update += len(table)
8 frames
[/usr/local/lib/python3.10/dist-packages/datasets/arrow_writer.py](https://localhost:8080/#) in write_table(self, pa_table, writer_batch_size)
571 pa_table = pa_table.combine_chunks()
--> 572 pa_table = table_cast(pa_table, self._schema)
573 if self.embed_local_files:
[/usr/local/lib/python3.10/dist-packages/datasets/table.py](https://localhost:8080/#) in table_cast(table, schema)
2327 if table.schema != schema:
-> 2328 return cast_table_to_schema(table, schema)
2329 elif table.schema.metadata != schema.metadata:
[/usr/local/lib/python3.10/dist-packages/datasets/table.py](https://localhost:8080/#) in cast_table_to_schema(table, schema)
2285 if sorted(table.column_names) != sorted(features):
-> 2286 raise ValueError(f"Couldn't cast\n{table.schema}\nto\n{features}\nbecause column names don't match")
2287 arrays = [cast_array_to_feature(table[name], feature) for name, feature in features.items()]
ValueError: Couldn't cast
JOBTYPE: int64
PHONEIMEI: int64
ACCOUNTNAME: int64
VEHICLEVIN: int64
GENDER: int64
CURRENCYCODE: int64
CREDITCARDISSUER: int64
JOBTITLE: int64
SEX: int64
CURRENCYSYMBOL: int64
IP: int64
EYECOLOR: int64
MASKEDNUMBER: int64
SECONDARYADDRESS: int64
JOBAREA: int64
ACCOUNTNUMBER: int64
language: string
BITCOINADDRESS: int64
MAC: int64
SSN: int64
EMAIL: int64
ETHEREUMADDRESS: int64
DOB: int64
VEHICLEVRM: int64
IPV6: int64
AMOUNT: int64
URL: int64
PHONENUMBER: int64
PIN: int64
TIME: int64
CREDITCARDNUMBER: int64
FIRSTNAME: int64
IBAN: int64
BIC: int64
COUNTY: int64
STATE: int64
LASTNAME: int64
ZIPCODE: int64
HEIGHT: int64
ORDINALDIRECTION: int64
MIDDLENAME: int64
STREET: int64
USERNAME: int64
CURRENCY: int64
PREFIX: int64
USERAGENT: int64
CURRENCYNAME: int64
LITECOINADDRESS: int64
CREDITCARDCVV: int64
AGE: int64
CITY: int64
PASSWORD: int64
BUILDINGNUMBER: int64
IPV4: int64
NEARBYGPSCOORDINATE: int64
DATE: int64
COMPANYNAME: int64
to
{'masked_text': Value(dtype='string', id=None), 'unmasked_text': Value(dtype='string', id=None), 'privacy_mask': Value(dtype='string', id=None), 'span_labels': Value(dtype='string', id=None), 'bio_labels': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), 'tokenised_text': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None)}
because column names don't match
The above exception was the direct cause of the following exception:
DatasetGenerationError Traceback (most recent call last)
[<ipython-input-2-f1c6811e9c83>](https://localhost:8080/#) in <cell line: 3>()
1 from datasets import load_dataset
2
----> 3 dataset = load_dataset("ai4privacy/pii-masking-200k")
[/usr/local/lib/python3.10/dist-packages/datasets/load.py](https://localhost:8080/#) in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, verification_mode, ignore_verifications, keep_in_memory, save_infos, revision, token, use_auth_token, task, streaming, num_proc, storage_options, **config_kwargs)
2151
2152 # Download and prepare data
-> 2153 builder_instance.download_and_prepare(
2154 download_config=download_config,
2155 download_mode=download_mode,
[/usr/local/lib/python3.10/dist-packages/datasets/builder.py](https://localhost:8080/#) in download_and_prepare(self, output_dir, download_config, download_mode, verification_mode, ignore_verifications, try_from_hf_gcs, dl_manager, base_path, use_auth_token, file_format, max_shard_size, num_proc, storage_options, **download_and_prepare_kwargs)
952 if num_proc is not None:
953 prepare_split_kwargs["num_proc"] = num_proc
--> 954 self._download_and_prepare(
955 dl_manager=dl_manager,
956 verification_mode=verification_mode,
[/usr/local/lib/python3.10/dist-packages/datasets/builder.py](https://localhost:8080/#) in _download_and_prepare(self, dl_manager, verification_mode, **prepare_split_kwargs)
1047 try:
1048 # Prepare split will record examples associated to the split
-> 1049 self._prepare_split(split_generator, **prepare_split_kwargs)
1050 except OSError as e:
1051 raise OSError(
[/usr/local/lib/python3.10/dist-packages/datasets/builder.py](https://localhost:8080/#) in _prepare_split(self, split_generator, file_format, num_proc, max_shard_size)
1811 job_id = 0
1812 with pbar:
-> 1813 for job_id, done, content in self._prepare_split_single(
1814 gen_kwargs=gen_kwargs, job_id=job_id, **_prepare_split_args
1815 ):
[/usr/local/lib/python3.10/dist-packages/datasets/builder.py](https://localhost:8080/#) in _prepare_split_single(self, gen_kwargs, fpath, file_format, max_shard_size, job_id)
1956 if isinstance(e, SchemaInferenceError) and e.__context__ is not None:
1957 e = e.__context__
-> 1958 raise DatasetGenerationError("An error occurred while generating the dataset") from e
1959
1960 yield job_id, True, (total_num_examples, total_num_bytes, writer._features, num_shards, shard_lengths)
DatasetGenerationError: An error occurred while generating the dataset
```
Thank you and have a great day ahead
### Steps to reproduce the bug
Open Google Colab Notebook:
Run command:
```
!pip3 install datasets
```
Run code:
```python
from datasets import load_dataset
dataset = load_dataset("ai4privacy/pii-masking-200k")
```
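If the mismatch comes from stale `dataset_info` feature metadata in the repo's README, one possible workaround is to read the JSONL files directly with the `json` builder, which bypasses that metadata. This is only a sketch; the file URL below is an assumed example, not the repo's actual layout:

```python
from datasets import load_dataset

# Hypothetical file name; point data_files at the actual JSONL file(s) in the repo.
url = "https://huggingface.co/datasets/ai4privacy/pii-masking-200k/resolve/main/train.jsonl"
dataset = load_dataset("json", data_files={"train": url})
```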
### Expected behavior
Download the dataset successfully from Hugging Face to the notebook so that we can start working with it
### Environment info
- `datasets` version: 2.14.6
- Platform: Linux-5.15.120+-x86_64-with-glibc2.35
- Python version: 3.10.12
- Huggingface_hub version: 0.19.1
- PyArrow version: 9.0.0
- Pandas version: 1.5.3 | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6410/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6410/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6409 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6409/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6409/comments | https://api.github.com/repos/huggingface/datasets/issues/6409/events | https://github.com/huggingface/datasets/issues/6409 | 1,991,960,865 | I_kwDODunzps52uukh | 6,409 | using DownloadManager to download from local filesystem and disable_progress_bar, there will be an exception | {
"login": "neiblegy",
"id": 16574677,
"node_id": "MDQ6VXNlcjE2NTc0Njc3",
"avatar_url": "https://avatars.githubusercontent.com/u/16574677?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/neiblegy",
"html_url": "https://github.com/neiblegy",
"followers_url": "https://api.github.com/users/neiblegy/followers",
"following_url": "https://api.github.com/users/neiblegy/following{/other_user}",
"gists_url": "https://api.github.com/users/neiblegy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/neiblegy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/neiblegy/subscriptions",
"organizations_url": "https://api.github.com/users/neiblegy/orgs",
"repos_url": "https://api.github.com/users/neiblegy/repos",
"events_url": "https://api.github.com/users/neiblegy/events{/privacy}",
"received_events_url": "https://api.github.com/users/neiblegy/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | 0 | 2023-11-14T04:21:01 | 2023-11-22T16:42:09 | 2023-11-22T16:42:09 | NONE | null | ### Describe the bug
I'm using `datasets.download.download_manager.DownloadManager` to download files like "file:///a/b/c.txt", and I call `disable_progress_bar()` to disable the progress bar. This raises the following exception:

```
AttributeError: 'function' object has no attribute 'close'
Exception ignored in: <function TqdmCallback.__del__ at 0x7fa8683d84c0>
Traceback (most recent call last):
  File "/home/protoss.gao/.local/lib/python3.9/site-packages/fsspec/callbacks.py", line 233, in __del__
    self.tqdm.close()
```

I checked your source code: at datasets/utils/file_utils.py:348 you define a `TqdmCallback` that derives from `fsspec.callbacks.TqdmCallback`.
But in the newest fsspec code (https://github.com/fsspec/filesystem_spec/blob/master/fsspec/callbacks.py), line 146, `_DEFAULT_CALLBACK` takes effect in this case, while line 234 calls its `close()` method, which `_DEFAULT_CALLBACK` does not have.
So I think the `TqdmCallback` class in datasets/utils/file_utils.py should override the `__del__` method, or this bug should be reported to fsspec.
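A minimal sketch of that suggested override (not the actual `datasets` code), which simply tolerates a tqdm attribute without a `close()` method:

```python
import fsspec.callbacks

class SafeTqdmCallback(fsspec.callbacks.TqdmCallback):
    def __del__(self):
        # Guard against _DEFAULT_CALLBACK-style objects whose tqdm has no close().
        tqdm = getattr(self, "tqdm", None)
        if tqdm is not None and hasattr(tqdm, "close"):
            tqdm.close()
```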
### Steps to reproduce the bug
As described above.
### Expected behavior
No exception should be raised.
### Environment info
datasets: 2.14.4
python: 3.9
platform: x86_64 | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6409/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6409/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6408 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6408/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6408/comments | https://api.github.com/repos/huggingface/datasets/issues/6408/events | https://github.com/huggingface/datasets/issues/6408 | 1,991,902,972 | I_kwDODunzps52ugb8 | 6,408 | `IterableDataset` lost but not keep columns when map function adding columns with names in `remove_columns` | {
"login": "shmily326",
"id": 24571857,
"node_id": "MDQ6VXNlcjI0NTcxODU3",
"avatar_url": "https://avatars.githubusercontent.com/u/24571857?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/shmily326",
"html_url": "https://github.com/shmily326",
"followers_url": "https://api.github.com/users/shmily326/followers",
"following_url": "https://api.github.com/users/shmily326/following{/other_user}",
"gists_url": "https://api.github.com/users/shmily326/gists{/gist_id}",
"starred_url": "https://api.github.com/users/shmily326/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/shmily326/subscriptions",
"organizations_url": "https://api.github.com/users/shmily326/orgs",
"repos_url": "https://api.github.com/users/shmily326/repos",
"events_url": "https://api.github.com/users/shmily326/events{/privacy}",
"received_events_url": "https://api.github.com/users/shmily326/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | open | false | null | [] | null | 0 | 2023-11-14T03:12:08 | 2023-11-16T06:24:10 | null | NONE | null | ### Describe the bug
IterableDataset drops, rather than keeps, columns when the map function adds columns whose names are listed in remove_columns;
Dataset does not.
May be related to the code below:
https://github.com/huggingface/datasets/blob/06c3ffb8d068b6307b247164b10f7c7311cefed4/src/datasets/iterable_dataset.py#L750-L756
### Steps to reproduce the bug
```python
dataset: IterableDataset = load_dataset("Anthropic/hh-rlhf", streaming=True, split="train")
column_names = list(next(iter(dataset)).keys()) # ['chosen', 'rejected']
# map_fn will return dict {"chosen": xxx, "rejected": xxx, "prompt": xxx, "history": xxxx}
dataset = dataset.map(map_fn, batched=True, remove_columns=column_names)
next(iter(dataset))
# output
# {'prompt': xxx, 'history': xxx}
```
```python
# when load_dataset with streaming=False, the column_names are kept:
dataset: Dataset = load_dataset("Anthropic/hh-rlhf", streaming=False, split="train")
column_names = list(next(iter(dataset)).keys()) # ['chosen', 'rejected']
# map_fn will return dict {"chosen": xxx, "rejected": xxx, "prompt": xxx, "history": xxxx}
dataset = dataset.map(map_fn, batched=True, remove_columns=column_names)
next(iter(dataset))
# output
# {'prompt': xxx, 'history': xxx, 'chosen': xxx, 'rejected': xxx}
```
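A possible workaround sketch until streaming matches the non-streaming behavior: keep the columns that `map_fn` re-creates out of `remove_columns` (the four key names below are taken from the example above):

```python
# Only remove columns that map_fn does not return, so the streaming and
# non-streaming schemas end up identical.
new_columns = {"chosen", "rejected", "prompt", "history"}  # keys returned by map_fn
dataset = dataset.map(
    map_fn,
    batched=True,
    remove_columns=[c for c in column_names if c not in new_columns],
)
```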
### Expected behavior
IterableDataset should keep columns when the map function adds columns whose names are listed in remove_columns.
### Environment info
datasets==2.14.6 | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6408/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6408/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6407 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6407/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6407/comments | https://api.github.com/repos/huggingface/datasets/issues/6407/events | https://github.com/huggingface/datasets/issues/6407 | 1,991,514,079 | I_kwDODunzps52tBff | 6,407 | Loading the dataset from private S3 bucket gives "TypeError: cannot pickle '_contextvars.Context' object" | {
"login": "eawer",
"id": 1741779,
"node_id": "MDQ6VXNlcjE3NDE3Nzk=",
"avatar_url": "https://avatars.githubusercontent.com/u/1741779?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/eawer",
"html_url": "https://github.com/eawer",
"followers_url": "https://api.github.com/users/eawer/followers",
"following_url": "https://api.github.com/users/eawer/following{/other_user}",
"gists_url": "https://api.github.com/users/eawer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/eawer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/eawer/subscriptions",
"organizations_url": "https://api.github.com/users/eawer/orgs",
"repos_url": "https://api.github.com/users/eawer/repos",
"events_url": "https://api.github.com/users/eawer/events{/privacy}",
"received_events_url": "https://api.github.com/users/eawer/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | open | false | null | [] | null | 1 | 2023-11-13T21:27:43 | 2024-07-30T12:35:09 | null | NONE | null | ### Describe the bug
I'm trying to read a Parquet file from a private S3 bucket using the `load_dataset` function, but I receive a `TypeError: cannot pickle '_contextvars.Context' object` error.
I'm working on a machine with an `~/.aws/credentials` file. I can't share the credentials or the path to a file in a private bucket for obvious reasons, but I'll try to give all possible outputs.
### Steps to reproduce the bug
```python
import s3fs
from datasets import load_dataset
from aiobotocore.session import get_session
DATA_PATH = "s3://bucket_name/path/validation.parquet"
fs = s3fs.S3FileSystem(session=get_session())
```
`fs.stat` returns the data, so we can say that fs is working and we have all permissions
```python
fs.stat(DATA_PATH)
# Returns:
# {'ETag': '"123123a-19"',
# 'LastModified': datetime.datetime(2023, 11, 1, 10, 16, 57, tzinfo=tzutc()),
# 'size': 312237170,
# 'name': 'bucket_name/path/validation.parquet',
# 'type': 'file',
# 'StorageClass': 'STANDARD',
# 'VersionId': 'Abc.HtmsC9h.as',
# 'ContentType': 'binary/octet-stream'}
```
```python
fs.storage_options
# Returns:
# {'session': <aiobotocore.session.AioSession at 0x7f9193fa53c0>}
```
```python
ds = load_dataset("parquet", data_files={"train": DATA_PATH}, storage_options=fs.storage_options)
```
<details>
<summary>Returns such error (expandable)</summary>
```python
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
Cell In[88], line 1
----> 1 ds = load_dataset("parquet", data_files={"train": DATA_PATH}, storage_options=fs.storage_options)
File ~/miniconda3/envs/test-env/lib/python3.10/site-packages/datasets/load.py:2153, in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, verification_mode, ignore_verifications, keep_in_memory, save_infos, revision, token, use_auth_token, task, streaming, num_proc, storage_options, **config_kwargs)
2150 try_from_hf_gcs = path not in _PACKAGED_DATASETS_MODULES
2152 # Download and prepare data
-> 2153 builder_instance.download_and_prepare(
2154 download_config=download_config,
2155 download_mode=download_mode,
2156 verification_mode=verification_mode,
2157 try_from_hf_gcs=try_from_hf_gcs,
2158 num_proc=num_proc,
2159 storage_options=storage_options,
2160 )
2162 # Build dataset for splits
2163 keep_in_memory = (
2164 keep_in_memory if keep_in_memory is not None else is_small_dataset(builder_instance.info.dataset_size)
2165 )
File ~/miniconda3/envs/test-env/lib/python3.10/site-packages/datasets/builder.py:954, in DatasetBuilder.download_and_prepare(self, output_dir, download_config, download_mode, verification_mode, ignore_verifications, try_from_hf_gcs, dl_manager, base_path, use_auth_token, file_format, max_shard_size, num_proc, storage_options, **download_and_prepare_kwargs)
952 if num_proc is not None:
953 prepare_split_kwargs["num_proc"] = num_proc
--> 954 self._download_and_prepare(
955 dl_manager=dl_manager,
956 verification_mode=verification_mode,
957 **prepare_split_kwargs,
958 **download_and_prepare_kwargs,
959 )
960 # Sync info
961 self.info.dataset_size = sum(split.num_bytes for split in self.info.splits.values())
File ~/miniconda3/envs/test-env/lib/python3.10/site-packages/datasets/builder.py:1027, in DatasetBuilder._download_and_prepare(self, dl_manager, verification_mode, **prepare_split_kwargs)
1025 split_dict = SplitDict(dataset_name=self.dataset_name)
1026 split_generators_kwargs = self._make_split_generators_kwargs(prepare_split_kwargs)
-> 1027 split_generators = self._split_generators(dl_manager, **split_generators_kwargs)
1029 # Checksums verification
1030 if verification_mode == VerificationMode.ALL_CHECKS and dl_manager.record_checksums:
File ~/miniconda3/envs/test-env/lib/python3.10/site-packages/datasets/packaged_modules/parquet/parquet.py:34, in Parquet._split_generators(self, dl_manager)
32 if not self.config.data_files:
33 raise ValueError(f"At least one data file must be specified, but got data_files={self.config.data_files}")
---> 34 data_files = dl_manager.download_and_extract(self.config.data_files)
35 if isinstance(data_files, (str, list, tuple)):
36 files = data_files
File ~/miniconda3/envs/test-env/lib/python3.10/site-packages/datasets/download/download_manager.py:565, in DownloadManager.download_and_extract(self, url_or_urls)
549 def download_and_extract(self, url_or_urls):
550 """Download and extract given `url_or_urls`.
551
552 Is roughly equivalent to:
(...)
563 extracted_path(s): `str`, extracted paths of given URL(s).
564 """
--> 565 return self.extract(self.download(url_or_urls))
File ~/miniconda3/envs/test-env/lib/python3.10/site-packages/datasets/download/download_manager.py:420, in DownloadManager.download(self, url_or_urls)
401 def download(self, url_or_urls):
402 """Download given URL(s).
403
404 By default, only one process is used for download. Pass customized `download_config.num_proc` to change this behavior.
(...)
418 ```
419 """
--> 420 download_config = self.download_config.copy()
421 download_config.extract_compressed_file = False
422 if download_config.download_desc is None:
File ~/miniconda3/envs/test-env/lib/python3.10/site-packages/datasets/download/download_config.py:94, in DownloadConfig.copy(self)
93 def copy(self) -> "DownloadConfig":
---> 94 return self.__class__(**{k: copy.deepcopy(v) for k, v in self.__dict__.items()})
File ~/miniconda3/envs/test-env/lib/python3.10/site-packages/datasets/download/download_config.py:94, in <dictcomp>(.0)
93 def copy(self) -> "DownloadConfig":
---> 94 return self.__class__(**{k: copy.deepcopy(v) for k, v in self.__dict__.items()})
File ~/miniconda3/envs/test-env/lib/python3.10/copy.py:146, in deepcopy(x, memo, _nil)
144 copier = _deepcopy_dispatch.get(cls)
145 if copier is not None:
--> 146 y = copier(x, memo)
147 else:
148 if issubclass(cls, type):
File ~/miniconda3/envs/test-env/lib/python3.10/copy.py:231, in _deepcopy_dict(x, memo, deepcopy)
229 memo[id(x)] = y
230 for key, value in x.items():
--> 231 y[deepcopy(key, memo)] = deepcopy(value, memo)
232 return y
File ~/miniconda3/envs/test-env/lib/python3.10/copy.py:172, in deepcopy(x, memo, _nil)
170 y = x
171 else:
--> 172 y = _reconstruct(x, memo, *rv)
174 # If is its own copy, don't memoize.
175 if y is not x:
File ~/miniconda3/envs/test-env/lib/python3.10/copy.py:271, in _reconstruct(x, memo, func, args, state, listiter, dictiter, deepcopy)
269 if state is not None:
270 if deep:
--> 271 state = deepcopy(state, memo)
272 if hasattr(y, '__setstate__'):
273 y.__setstate__(state)
File ~/miniconda3/envs/test-env/lib/python3.10/copy.py:146, in deepcopy(x, memo, _nil)
144 copier = _deepcopy_dispatch.get(cls)
145 if copier is not None:
--> 146 y = copier(x, memo)
147 else:
148 if issubclass(cls, type):
File ~/miniconda3/envs/test-env/lib/python3.10/copy.py:231, in _deepcopy_dict(x, memo, deepcopy)
229 memo[id(x)] = y
230 for key, value in x.items():
--> 231 y[deepcopy(key, memo)] = deepcopy(value, memo)
232 return y
File ~/miniconda3/envs/test-env/lib/python3.10/copy.py:172, in deepcopy(x, memo, _nil)
170 y = x
171 else:
--> 172 y = _reconstruct(x, memo, *rv)
174 # If is its own copy, don't memoize.
175 if y is not x:
File ~/miniconda3/envs/test-env/lib/python3.10/copy.py:271, in _reconstruct(x, memo, func, args, state, listiter, dictiter, deepcopy)
269 if state is not None:
270 if deep:
--> 271 state = deepcopy(state, memo)
272 if hasattr(y, '__setstate__'):
273 y.__setstate__(state)
[... skipping similar frames: _deepcopy_dict at line 231 (2 times), deepcopy at line 146 (2 times)]
File ~/miniconda3/envs/test-env/lib/python3.10/copy.py:172, in deepcopy(x, memo, _nil)
170 y = x
171 else:
--> 172 y = _reconstruct(x, memo, *rv)
174 # If is its own copy, don't memoize.
175 if y is not x:
File ~/miniconda3/envs/test-env/lib/python3.10/copy.py:271, in _reconstruct(x, memo, func, args, state, listiter, dictiter, deepcopy)
269 if state is not None:
270 if deep:
--> 271 state = deepcopy(state, memo)
272 if hasattr(y, '__setstate__'):
273 y.__setstate__(state)
[... skipping similar frames: deepcopy at line 146 (1 times)]
File ~/miniconda3/envs/test-env/lib/python3.10/copy.py:231, in _deepcopy_dict(x, memo, deepcopy)
229 memo[id(x)] = y
230 for key, value in x.items():
--> 231 y[deepcopy(key, memo)] = deepcopy(value, memo)
232 return y
File ~/miniconda3/envs/test-env/lib/python3.10/copy.py:146, in deepcopy(x, memo, _nil)
144 copier = _deepcopy_dispatch.get(cls)
145 if copier is not None:
--> 146 y = copier(x, memo)
147 else:
148 if issubclass(cls, type):
File ~/miniconda3/envs/test-env/lib/python3.10/copy.py:206, in _deepcopy_list(x, memo, deepcopy)
204 append = y.append
205 for a in x:
--> 206 append(deepcopy(a, memo))
207 return y
File ~/miniconda3/envs/test-env/lib/python3.10/copy.py:172, in deepcopy(x, memo, _nil)
170 y = x
171 else:
--> 172 y = _reconstruct(x, memo, *rv)
174 # If is its own copy, don't memoize.
175 if y is not x:
File ~/miniconda3/envs/test-env/lib/python3.10/copy.py:271, in _reconstruct(x, memo, func, args, state, listiter, dictiter, deepcopy)
269 if state is not None:
270 if deep:
--> 271 state = deepcopy(state, memo)
272 if hasattr(y, '__setstate__'):
273 y.__setstate__(state)
File ~/miniconda3/envs/test-env/lib/python3.10/copy.py:146, in deepcopy(x, memo, _nil)
144 copier = _deepcopy_dispatch.get(cls)
145 if copier is not None:
--> 146 y = copier(x, memo)
147 else:
148 if issubclass(cls, type):
File ~/miniconda3/envs/test-env/lib/python3.10/copy.py:231, in _deepcopy_dict(x, memo, deepcopy)
229 memo[id(x)] = y
230 for key, value in x.items():
--> 231 y[deepcopy(key, memo)] = deepcopy(value, memo)
232 return y
File ~/miniconda3/envs/test-env/lib/python3.10/copy.py:146, in deepcopy(x, memo, _nil)
144 copier = _deepcopy_dispatch.get(cls)
145 if copier is not None:
--> 146 y = copier(x, memo)
147 else:
148 if issubclass(cls, type):
File ~/miniconda3/envs/test-env/lib/python3.10/copy.py:238, in _deepcopy_method(x, memo)
237 def _deepcopy_method(x, memo): # Copy instance methods
--> 238 return type(x)(x.__func__, deepcopy(x.__self__, memo))
File ~/miniconda3/envs/test-env/lib/python3.10/copy.py:172, in deepcopy(x, memo, _nil)
170 y = x
171 else:
--> 172 y = _reconstruct(x, memo, *rv)
174 # If is its own copy, don't memoize.
175 if y is not x:
File ~/miniconda3/envs/test-env/lib/python3.10/copy.py:271, in _reconstruct(x, memo, func, args, state, listiter, dictiter, deepcopy)
269 if state is not None:
270 if deep:
--> 271 state = deepcopy(state, memo)
272 if hasattr(y, '__setstate__'):
273 y.__setstate__(state)
File ~/miniconda3/envs/test-env/lib/python3.10/copy.py:146, in deepcopy(x, memo, _nil)
144 copier = _deepcopy_dispatch.get(cls)
145 if copier is not None:
--> 146 y = copier(x, memo)
147 else:
148 if issubclass(cls, type):
File ~/miniconda3/envs/test-env/lib/python3.10/copy.py:231, in _deepcopy_dict(x, memo, deepcopy)
229 memo[id(x)] = y
230 for key, value in x.items():
--> 231 y[deepcopy(key, memo)] = deepcopy(value, memo)
232 return y
File ~/miniconda3/envs/test-env/lib/python3.10/copy.py:146, in deepcopy(x, memo, _nil)
144 copier = _deepcopy_dispatch.get(cls)
145 if copier is not None:
--> 146 y = copier(x, memo)
147 else:
148 if issubclass(cls, type):
File ~/miniconda3/envs/test-env/lib/python3.10/copy.py:231, in _deepcopy_dict(x, memo, deepcopy)
229 memo[id(x)] = y
230 for key, value in x.items():
--> 231 y[deepcopy(key, memo)] = deepcopy(value, memo)
232 return y
File ~/miniconda3/envs/test-env/lib/python3.10/copy.py:172, in deepcopy(x, memo, _nil)
170 y = x
171 else:
--> 172 y = _reconstruct(x, memo, *rv)
174 # If is its own copy, don't memoize.
175 if y is not x:
File ~/miniconda3/envs/test-env/lib/python3.10/copy.py:271, in _reconstruct(x, memo, func, args, state, listiter, dictiter, deepcopy)
269 if state is not None:
270 if deep:
--> 271 state = deepcopy(state, memo)
272 if hasattr(y, '__setstate__'):
273 y.__setstate__(state)
[... skipping similar frames: _deepcopy_dict at line 231 (3 times), deepcopy at line 146 (3 times), deepcopy at line 172 (3 times), _reconstruct at line 271 (2 times)]
File ~/miniconda3/envs/test-env/lib/python3.10/copy.py:271, in _reconstruct(x, memo, func, args, state, listiter, dictiter, deepcopy)
269 if state is not None:
270 if deep:
--> 271 state = deepcopy(state, memo)
272 if hasattr(y, '__setstate__'):
273 y.__setstate__(state)
[... skipping similar frames: _deepcopy_dict at line 231 (1 times), deepcopy at line 146 (1 times)]
File ~/miniconda3/envs/test-env/lib/python3.10/copy.py:146, in deepcopy(x, memo, _nil)
144 copier = _deepcopy_dispatch.get(cls)
145 if copier is not None:
--> 146 y = copier(x, memo)
147 else:
148 if issubclass(cls, type):
File ~/miniconda3/envs/test-env/lib/python3.10/copy.py:231, in _deepcopy_dict(x, memo, deepcopy)
229 memo[id(x)] = y
230 for key, value in x.items():
--> 231 y[deepcopy(key, memo)] = deepcopy(value, memo)
232 return y
File ~/miniconda3/envs/test-env/lib/python3.10/copy.py:172, in deepcopy(x, memo, _nil)
170 y = x
171 else:
--> 172 y = _reconstruct(x, memo, *rv)
174 # If is its own copy, don't memoize.
175 if y is not x:
File ~/miniconda3/envs/test-env/lib/python3.10/copy.py:265, in _reconstruct(x, memo, func, args, state, listiter, dictiter, deepcopy)
263 if deep and args:
264 args = (deepcopy(arg, memo) for arg in args)
--> 265 y = func(*args)
266 if deep:
267 memo[id(x)] = y
File ~/miniconda3/envs/test-env/lib/python3.10/copy.py:264, in <genexpr>(.0)
262 deep = memo is not None
263 if deep and args:
--> 264 args = (deepcopy(arg, memo) for arg in args)
265 y = func(*args)
266 if deep:
File ~/miniconda3/envs/test-env/lib/python3.10/copy.py:146, in deepcopy(x, memo, _nil)
144 copier = _deepcopy_dispatch.get(cls)
145 if copier is not None:
--> 146 y = copier(x, memo)
147 else:
148 if issubclass(cls, type):
File ~/miniconda3/envs/test-env/lib/python3.10/copy.py:211, in _deepcopy_tuple(x, memo, deepcopy)
210 def _deepcopy_tuple(x, memo, deepcopy=deepcopy):
--> 211 y = [deepcopy(a, memo) for a in x]
212 # We're not going to put the tuple in the memo, but it's still important we
213 # check for it, in case the tuple contains recursive mutable structures.
214 try:
File ~/miniconda3/envs/test-env/lib/python3.10/copy.py:211, in <listcomp>(.0)
210 def _deepcopy_tuple(x, memo, deepcopy=deepcopy):
--> 211 y = [deepcopy(a, memo) for a in x]
212 # We're not going to put the tuple in the memo, but it's still important we
213 # check for it, in case the tuple contains recursive mutable structures.
214 try:
File ~/miniconda3/envs/test-env/lib/python3.10/copy.py:172, in deepcopy(x, memo, _nil)
170 y = x
171 else:
--> 172 y = _reconstruct(x, memo, *rv)
174 # If is its own copy, don't memoize.
175 if y is not x:
File ~/miniconda3/envs/test-env/lib/python3.10/copy.py:271, in _reconstruct(x, memo, func, args, state, listiter, dictiter, deepcopy)
269 if state is not None:
270 if deep:
--> 271 state = deepcopy(state, memo)
272 if hasattr(y, '__setstate__'):
273 y.__setstate__(state)
File ~/miniconda3/envs/test-env/lib/python3.10/copy.py:146, in deepcopy(x, memo, _nil)
144 copier = _deepcopy_dispatch.get(cls)
145 if copier is not None:
--> 146 y = copier(x, memo)
147 else:
148 if issubclass(cls, type):
File ~/miniconda3/envs/test-env/lib/python3.10/copy.py:211, in _deepcopy_tuple(x, memo, deepcopy)
210 def _deepcopy_tuple(x, memo, deepcopy=deepcopy):
--> 211 y = [deepcopy(a, memo) for a in x]
212 # We're not going to put the tuple in the memo, but it's still important we
213 # check for it, in case the tuple contains recursive mutable structures.
214 try:
File ~/miniconda3/envs/test-env/lib/python3.10/copy.py:211, in <listcomp>(.0)
210 def _deepcopy_tuple(x, memo, deepcopy=deepcopy):
--> 211 y = [deepcopy(a, memo) for a in x]
212 # We're not going to put the tuple in the memo, but it's still important we
213 # check for it, in case the tuple contains recursive mutable structures.
214 try:
File ~/miniconda3/envs/test-env/lib/python3.10/copy.py:146, in deepcopy(x, memo, _nil)
144 copier = _deepcopy_dispatch.get(cls)
145 if copier is not None:
--> 146 y = copier(x, memo)
147 else:
148 if issubclass(cls, type):
File ~/miniconda3/envs/test-env/lib/python3.10/copy.py:231, in _deepcopy_dict(x, memo, deepcopy)
229 memo[id(x)] = y
230 for key, value in x.items():
--> 231 y[deepcopy(key, memo)] = deepcopy(value, memo)
232 return y
File ~/miniconda3/envs/test-env/lib/python3.10/copy.py:161, in deepcopy(x, memo, _nil)
159 reductor = getattr(x, "__reduce_ex__", None)
160 if reductor is not None:
--> 161 rv = reductor(4)
162 else:
163 reductor = getattr(x, "__reduce__", None)
TypeError: cannot pickle '_contextvars.Context' object
```
</details>
### Expected behavior
If I choose to load the file from the public bucket with `anon=True` passed, everything works, so I expected loading from the private bucket to work as well
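Since the traceback fails while deep-copying `storage_options` (a live `aiobotocore` session is not picklable), one workaround sketch is to pass only plain, picklable options and let `s3fs` build its own session from `~/.aws/credentials`. This is untested here, so treat it as an assumption:

```python
from datasets import load_dataset

DATA_PATH = "s3://bucket_name/path/validation.parquet"
# No session object is passed: s3fs instantiates one internally from the
# default credential chain, and the options dict stays picklable.
ds = load_dataset(
    "parquet",
    data_files={"train": DATA_PATH},
    storage_options={"anon": False},
)
```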
### Environment info
- `datasets` version: 2.14.6
- Platform: macOS-10.16-x86_64-i386-64bit
- Python version: 3.10.13
- Huggingface_hub version: 0.19.1
- PyArrow version: 14.0.1
- Pandas version: 1.5.3
- s3fs version: 2023.10.0
- fsspec version: 2023.10.0
- aiobotocore version: 2.7.0 | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6407/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6407/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6406 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6406/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6406/comments | https://api.github.com/repos/huggingface/datasets/issues/6406/events | https://github.com/huggingface/datasets/issues/6406 | 1,990,469,045 | I_kwDODunzps52pCW1 | 6,406 | CI Build PR Documentation is broken: ImportError: cannot import name 'TypeAliasType' from 'typing_extensions' | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | 0 | 2023-11-13T11:36:10 | 2023-11-14T10:05:36 | 2023-11-14T10:05:36 | MEMBER | null | Our CI Build PR Documentation is broken. See: https://github.com/huggingface/datasets/actions/runs/6799554060/job/18486828777?pr=6390
```
ImportError: cannot import name 'TypeAliasType' from 'typing_extensions'
``` | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6406/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6406/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6405 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6405/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6405/comments | https://api.github.com/repos/huggingface/datasets/issues/6405/events | https://github.com/huggingface/datasets/issues/6405 | 1,990,358,743 | I_kwDODunzps52onbX | 6,405 | ConfigNamesError on a simple CSV file | {
"login": "severo",
"id": 1676121,
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/severo",
"html_url": "https://github.com/severo",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"organizations_url": "https://api.github.com/users/severo/orgs",
"repos_url": "https://api.github.com/users/severo/repos",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"received_events_url": "https://api.github.com/users/severo/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 3 | 2023-11-13T10:28:29 | 2023-11-13T20:01:24 | 2023-11-13T20:01:24 | COLLABORATOR | null | See https://huggingface.co/datasets/Nguyendo1999/mmath/discussions/1
```
Error code: ConfigNamesError
Exception: TypeError
Message: __init__() missing 1 required positional argument: 'dtype'
Traceback: Traceback (most recent call last):
File "/src/services/worker/src/worker/job_runners/dataset/config_names.py", line 65, in compute_config_names_response
for config in sorted(get_dataset_config_names(path=dataset, token=hf_token))
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/inspect.py", line 351, in get_dataset_config_names
dataset_module = dataset_module_factory(
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/load.py", line 1512, in dataset_module_factory
raise e1 from None
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/load.py", line 1489, in dataset_module_factory
return HubDatasetModuleFactoryWithoutScript(
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/load.py", line 1039, in get_module
dataset_infos = DatasetInfosDict.from_dataset_card_data(dataset_card_data)
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/info.py", line 468, in from_dataset_card_data
dataset_info = DatasetInfo._from_yaml_dict(dataset_card_data["dataset_info"])
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/info.py", line 399, in _from_yaml_dict
yaml_data["features"] = Features._from_yaml_list(yaml_data["features"])
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/features/features.py", line 1838, in _from_yaml_list
return cls.from_dict(from_yaml_inner(yaml_data))
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/features/features.py", line 1690, in from_dict
obj = generate_from_dict(dic)
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/features/features.py", line 1345, in generate_from_dict
return {key: generate_from_dict(value) for key, value in obj.items()}
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/features/features.py", line 1345, in <dictcomp>
return {key: generate_from_dict(value) for key, value in obj.items()}
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/features/features.py", line 1353, in generate_from_dict
return class_type(**{k: v for k, v in obj.items() if k in field_names})
TypeError: __init__() missing 1 required positional argument: 'dtype'
```
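The traceback points at a feature entry in the repo's README `dataset_info` metadata that deserializes without a `dtype`. For reference, a well-formed `features` spec round-trips through something like the following sketch (the column names are assumed, not taken from the CSV):

```python
from datasets import Features, Value

# Every leaf feature carries an explicit dtype, which is what the failing
# generate_from_dict() call expects to find in the metadata.
features = Features({"problem": Value("string"), "answer": Value("string")})
```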
This is the CSV file: https://huggingface.co/datasets/Nguyendo1999/mmath/blob/dbcdd7c2c6fc447f852ec136a7532292802bb46f/math_train.csv | {
"login": "severo",
"id": 1676121,
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/severo",
"html_url": "https://github.com/severo",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"organizations_url": "https://api.github.com/users/severo/orgs",
"repos_url": "https://api.github.com/users/severo/repos",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"received_events_url": "https://api.github.com/users/severo/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6405/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6405/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6404 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6404/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6404/comments | https://api.github.com/repos/huggingface/datasets/issues/6404/events | https://github.com/huggingface/datasets/pull/6404 | 1,990,211,901 | PR_kwDODunzps5fRrJ- | 6,404 | Support pyarrow 14.0.1 and fix vulnerability CVE-2023-47248 | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | 15 | 2023-11-13T09:15:39 | 2023-11-14T10:29:48 | 2023-11-14T10:23:29 | MEMBER | null | Support `pyarrow` 14.0.1 and fix vulnerability [CVE-2023-47248](https://github.com/advisories/GHSA-5wvp-7f3h-6wmm).
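For environments that cannot move to `pyarrow` 14.0.1 yet, the mitigation recommended in the advisory is the `pyarrow-hotfix` package, which disarms the vulnerable extension-type deserialization when imported (a sketch; whether this PR wires it in is not shown here):

```python
# pip install pyarrow-hotfix
import pyarrow_hotfix  # applying the hotfix only requires importing it
```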
Fix #6396. | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6404/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6404/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6404",
"html_url": "https://github.com/huggingface/datasets/pull/6404",
"diff_url": "https://github.com/huggingface/datasets/pull/6404.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6404.patch",
"merged_at": "2023-11-14T10:23:29"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/6403 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6403/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6403/comments | https://api.github.com/repos/huggingface/datasets/issues/6403/events | https://github.com/huggingface/datasets/issues/6403 | 1,990,098,817 | I_kwDODunzps52nn-B | 6,403 | Cannot import datasets on google colab (python 3.10.12) | {
"login": "nabilaannisa",
"id": 15389235,
"node_id": "MDQ6VXNlcjE1Mzg5MjM1",
"avatar_url": "https://avatars.githubusercontent.com/u/15389235?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nabilaannisa",
"html_url": "https://github.com/nabilaannisa",
"followers_url": "https://api.github.com/users/nabilaannisa/followers",
"following_url": "https://api.github.com/users/nabilaannisa/following{/other_user}",
"gists_url": "https://api.github.com/users/nabilaannisa/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nabilaannisa/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nabilaannisa/subscriptions",
"organizations_url": "https://api.github.com/users/nabilaannisa/orgs",
"repos_url": "https://api.github.com/users/nabilaannisa/repos",
"events_url": "https://api.github.com/users/nabilaannisa/events{/privacy}",
"received_events_url": "https://api.github.com/users/nabilaannisa/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | 2 | 2023-11-13T08:14:43 | 2023-11-16T05:04:22 | 2023-11-16T05:04:21 | NONE | null | ### Describe the bug
I'm trying the full Colab demo notebook of zero-shot distillation from https://github.com/huggingface/transformers/tree/main/examples/research_projects/zero-shot-distillation, but I get this error when importing datasets on Google Colab (Python version is 3.10.12):

I found the same problem, which seems to have been solved in #3326, but the error still occurs on Google Colab. I can't try it locally in a Jupyter notebook because my laptop doesn't fulfill the resource requirements.
Can anyone please help me solve this problem? Thank you 😅
### Steps to reproduce the bug
Error:
```
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
[<ipython-input-8-b6e092f83978>](https://localhost:8080/#) in <cell line: 1>()
----> 1 from datasets import load_dataset
2
3 # Print all the available datasets
4 from huggingface_hub import list_datasets
5 print([dataset.id for dataset in list_datasets()])
6 frames
[/usr/lib/python3.10/functools.py](https://localhost:8080/#) in update_wrapper(wrapper, wrapped, assigned, updated)
59 # Issue #17482: set __wrapped__ last so we don't inadvertently copy it
60 # from the wrapped function when updating __dict__
---> 61 wrapper.__wrapped__ = wrapped
62 # Return the wrapper so this can be used as a decorator via partial()
63 return wrapper
AttributeError: readonly attribute
```
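A common first step for this kind of import failure on Colab (an assumption, not a confirmed fix for this report) is upgrading `datasets` in the runtime and restarting it before importing:

```python
# Run in a Colab cell, then restart the runtime (Runtime > Restart runtime):
# !pip install -U datasets
import datasets
print(datasets.__version__)
```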
### Expected behavior
Run success on Google Colab (free)
### Environment info
Windows 11 x64, Google Colab free | {
"login": "nabilaannisa",
"id": 15389235,
"node_id": "MDQ6VXNlcjE1Mzg5MjM1",
"avatar_url": "https://avatars.githubusercontent.com/u/15389235?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nabilaannisa",
"html_url": "https://github.com/nabilaannisa",
"followers_url": "https://api.github.com/users/nabilaannisa/followers",
"following_url": "https://api.github.com/users/nabilaannisa/following{/other_user}",
"gists_url": "https://api.github.com/users/nabilaannisa/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nabilaannisa/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nabilaannisa/subscriptions",
"organizations_url": "https://api.github.com/users/nabilaannisa/orgs",
"repos_url": "https://api.github.com/users/nabilaannisa/repos",
"events_url": "https://api.github.com/users/nabilaannisa/events{/privacy}",
"received_events_url": "https://api.github.com/users/nabilaannisa/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6403/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6403/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6402 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6402/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6402/comments | https://api.github.com/repos/huggingface/datasets/issues/6402/events | https://github.com/huggingface/datasets/pull/6402 | 1,989,094,542 | PR_kwDODunzps5fOBdK | 6,402 | Update torch_formatter.py | {
"login": "varunneal",
"id": 32204417,
"node_id": "MDQ6VXNlcjMyMjA0NDE3",
"avatar_url": "https://avatars.githubusercontent.com/u/32204417?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/varunneal",
"html_url": "https://github.com/varunneal",
"followers_url": "https://api.github.com/users/varunneal/followers",
"following_url": "https://api.github.com/users/varunneal/following{/other_user}",
"gists_url": "https://api.github.com/users/varunneal/gists{/gist_id}",
"starred_url": "https://api.github.com/users/varunneal/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/varunneal/subscriptions",
"organizations_url": "https://api.github.com/users/varunneal/orgs",
"repos_url": "https://api.github.com/users/varunneal/repos",
"events_url": "https://api.github.com/users/varunneal/events{/privacy}",
"received_events_url": "https://api.github.com/users/varunneal/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | 2 | 2023-11-11T19:40:41 | 2024-03-15T11:31:53 | 2024-03-15T11:25:37 | CONTRIBUTOR | null | Ensure PyTorch images are converted to (C, H, W) instead of (H, W, C). See #6394 for motivation. | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6402/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6402/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6402",
"html_url": "https://github.com/huggingface/datasets/pull/6402",
"diff_url": "https://github.com/huggingface/datasets/pull/6402.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6402.patch",
"merged_at": "2024-03-15T11:25:36"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/6401 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6401/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6401/comments | https://api.github.com/repos/huggingface/datasets/issues/6401/events | https://github.com/huggingface/datasets/issues/6401 | 1,988,710,061 | I_kwDODunzps52iU6t | 6,401 | dataset = load_dataset("Hyperspace-Technologies/scp-wiki-text") not working | {
"login": "userbox020",
"id": 47074021,
"node_id": "MDQ6VXNlcjQ3MDc0MDIx",
"avatar_url": "https://avatars.githubusercontent.com/u/47074021?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/userbox020",
"html_url": "https://github.com/userbox020",
"followers_url": "https://api.github.com/users/userbox020/followers",
"following_url": "https://api.github.com/users/userbox020/following{/other_user}",
"gists_url": "https://api.github.com/users/userbox020/gists{/gist_id}",
"starred_url": "https://api.github.com/users/userbox020/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/userbox020/subscriptions",
"organizations_url": "https://api.github.com/users/userbox020/orgs",
"repos_url": "https://api.github.com/users/userbox020/repos",
"events_url": "https://api.github.com/users/userbox020/events{/privacy}",
"received_events_url": "https://api.github.com/users/userbox020/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | 2 | 2023-11-11T04:09:07 | 2023-11-20T17:45:20 | 2023-11-20T17:45:20 | NONE | null | ### Describe the bug
```
(datasets) mruserbox@guru-X99:/media/10TB_HHD/_LLM_DATASETS$ python dataset.py
Downloading readme: 100%|███████████████████████████████████| 360/360 [00:00<00:00, 2.16MB/s]
Downloading data: 100%|█████████████████████████████████| 65.1M/65.1M [00:19<00:00, 3.38MB/s]
Downloading data: 100%|█████████████████████████████████| 6.35k/6.35k [00:00<00:00, 20.7kB/s]
Downloading data: 100%|█████████████████████████████████| 7.29M/7.29M [00:01<00:00, 3.99MB/s]
Downloading data files: 100%|██████████████████████████████████| 3/3 [00:21<00:00, 7.14s/it]
Extracting data files: 100%|█████████████████████████████████| 3/3 [00:00<00:00, 1624.23it/s]
Generating train split: 100%|█████████████| 314294/314294 [00:00<00:00, 668186.58 examples/s]
Generating validation split: 120 examples [00:00, 100422.28 examples/s]
Generating test split: 100%|████████████████| 34922/34922 [00:00<00:00, 754683.41 examples/s]
Traceback (most recent call last):
File "/media/10TB_HHD/_LLM_DATASETS/dataset.py", line 3, in <module>
dataset = load_dataset("Hyperspace-Technologies/scp-wiki-text")
File "/home/mruserbox/miniconda3/envs/datasets/lib/python3.10/site-packages/datasets/load.py", line 2153, in load_dataset
builder_instance.download_and_prepare(
File "/home/mruserbox/miniconda3/envs/datasets/lib/python3.10/site-packages/datasets/builder.py", line 954, in download_and_prepare
self._download_and_prepare(
File "/home/mruserbox/miniconda3/envs/datasets/lib/python3.10/site-packages/datasets/builder.py", line 1067, in _download_and_prepare
verify_splits(self.info.splits, split_dict)
File "/home/mruserbox/miniconda3/envs/datasets/lib/python3.10/site-packages/datasets/utils/info_utils.py", line 93, in verify_splits
raise UnexpectedSplits(str(set(recorded_splits) - set(expected_splits)))
datasets.utils.info_utils.UnexpectedSplits: {'validation'}
```
### Steps to reproduce the bug
Name:
`dataset.py`
Code:
```
from datasets import load_dataset
dataset = load_dataset("Hyperspace-Technologies/scp-wiki-text")
```
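If you just need the data, a possible workaround is to skip the split verification that raises `UnexpectedSplits` (hedged: the likely root cause is stale split metadata in the dataset repository, so verify the result yourself):
```python
from datasets import load_dataset

# "no_checks" disables the split/size verification step
dataset = load_dataset(
    "Hyperspace-Technologies/scp-wiki-text",
    verification_mode="no_checks",
)
```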
### Expected behavior
Run without errors
### Environment info
```
name: datasets
channels:
- defaults
dependencies:
- _libgcc_mutex=0.1=main
- _openmp_mutex=5.1=1_gnu
- bzip2=1.0.8=h7b6447c_0
- ca-certificates=2023.08.22=h06a4308_0
- ld_impl_linux-64=2.38=h1181459_1
- libffi=3.4.4=h6a678d5_0
- libgcc-ng=11.2.0=h1234567_1
- libgomp=11.2.0=h1234567_1
- libstdcxx-ng=11.2.0=h1234567_1
- libuuid=1.41.5=h5eee18b_0
- ncurses=6.4=h6a678d5_0
- openssl=3.0.12=h7f8727e_0
- python=3.10.13=h955ad1f_0
- readline=8.2=h5eee18b_0
- setuptools=68.0.0=py310h06a4308_0
- sqlite=3.41.2=h5eee18b_0
- tk=8.6.12=h1ccaba5_0
- wheel=0.41.2=py310h06a4308_0
- xz=5.4.2=h5eee18b_0
- zlib=1.2.13=h5eee18b_0
- pip:
- aiohttp==3.8.6
- aiosignal==1.3.1
- async-timeout==4.0.3
- attrs==23.1.0
- certifi==2023.7.22
- charset-normalizer==3.3.2
- click==8.1.7
- datasets==2.14.6
- dill==0.3.7
- filelock==3.13.1
- frozenlist==1.4.0
- fsspec==2023.10.0
- huggingface-hub==0.19.0
- idna==3.4
- multidict==6.0.4
- multiprocess==0.70.15
- numpy==1.26.1
- openai==0.27.8
- packaging==23.2
- pandas==2.1.3
- pip==23.3.1
- platformdirs==4.0.0
- pyarrow==14.0.1
- python-dateutil==2.8.2
- pytz==2023.3.post1
- pyyaml==6.0.1
- requests==2.31.0
- six==1.16.0
- tomli==2.0.1
- tqdm==4.66.1
- typer==0.9.0
- typing-extensions==4.8.0
- tzdata==2023.3
- urllib3==2.0.7
- xxhash==3.4.1
- yarl==1.9.2
prefix: /home/mruserbox/miniconda3/envs/datasets
``` | {
"login": "userbox020",
"id": 47074021,
"node_id": "MDQ6VXNlcjQ3MDc0MDIx",
"avatar_url": "https://avatars.githubusercontent.com/u/47074021?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/userbox020",
"html_url": "https://github.com/userbox020",
"followers_url": "https://api.github.com/users/userbox020/followers",
"following_url": "https://api.github.com/users/userbox020/following{/other_user}",
"gists_url": "https://api.github.com/users/userbox020/gists{/gist_id}",
"starred_url": "https://api.github.com/users/userbox020/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/userbox020/subscriptions",
"organizations_url": "https://api.github.com/users/userbox020/orgs",
"repos_url": "https://api.github.com/users/userbox020/repos",
"events_url": "https://api.github.com/users/userbox020/events{/privacy}",
"received_events_url": "https://api.github.com/users/userbox020/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6401/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6401/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6400 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6400/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6400/comments | https://api.github.com/repos/huggingface/datasets/issues/6400/events | https://github.com/huggingface/datasets/issues/6400 | 1,988,571,317 | I_kwDODunzps52hzC1 | 6,400 | Safely load datasets by disabling execution of dataset loading script | {
"login": "irenedea",
"id": 14367635,
"node_id": "MDQ6VXNlcjE0MzY3NjM1",
"avatar_url": "https://avatars.githubusercontent.com/u/14367635?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/irenedea",
"html_url": "https://github.com/irenedea",
"followers_url": "https://api.github.com/users/irenedea/followers",
"following_url": "https://api.github.com/users/irenedea/following{/other_user}",
"gists_url": "https://api.github.com/users/irenedea/gists{/gist_id}",
"starred_url": "https://api.github.com/users/irenedea/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/irenedea/subscriptions",
"organizations_url": "https://api.github.com/users/irenedea/orgs",
"repos_url": "https://api.github.com/users/irenedea/repos",
"events_url": "https://api.github.com/users/irenedea/events{/privacy}",
"received_events_url": "https://api.github.com/users/irenedea/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | closed | false | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 4 | 2023-11-10T23:48:29 | 2024-06-13T15:56:13 | 2024-06-13T15:56:13 | NONE | null | ### Feature request
Is there a way to disable the execution of dataset loading scripts when using `load_dataset`? This is a security vulnerability that could lead to arbitrary code execution.
Any suggested workarounds are welcome as well.
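For context, later `datasets` releases addressed this with a `trust_remote_code` flag. A sketch of the intended usage, assuming a version that supports it:
```python
from datasets import load_dataset

# Refuses to execute any Python loading script shipped in the repo;
# only script-free (packaged) formats are loaded.
ds = load_dataset("some-org/some-dataset", trust_remote_code=False)
```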
### Motivation
This is a security vulnerability that could lead to arbitrary code execution.
### Your contribution
n/a | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6400/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6400/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6399 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6399/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6399/comments | https://api.github.com/repos/huggingface/datasets/issues/6399/events | https://github.com/huggingface/datasets/issues/6399 | 1,988,368,503 | I_kwDODunzps52hBh3 | 6,399 | TypeError: Cannot convert pyarrow.lib.ChunkedArray to pyarrow.lib.Array | {
"login": "y-hwang",
"id": 76236359,
"node_id": "MDQ6VXNlcjc2MjM2MzU5",
"avatar_url": "https://avatars.githubusercontent.com/u/76236359?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/y-hwang",
"html_url": "https://github.com/y-hwang",
"followers_url": "https://api.github.com/users/y-hwang/followers",
"following_url": "https://api.github.com/users/y-hwang/following{/other_user}",
"gists_url": "https://api.github.com/users/y-hwang/gists{/gist_id}",
"starred_url": "https://api.github.com/users/y-hwang/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/y-hwang/subscriptions",
"organizations_url": "https://api.github.com/users/y-hwang/orgs",
"repos_url": "https://api.github.com/users/y-hwang/repos",
"events_url": "https://api.github.com/users/y-hwang/events{/privacy}",
"received_events_url": "https://api.github.com/users/y-hwang/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | open | false | null | [] | null | 1 | 2023-11-10T20:48:46 | 2024-06-22T00:13:48 | null | NONE | null | ### Describe the bug
Hi, I am preprocessing a large custom dataset with NumPy arrays. I am running into this TypeError while results are being written inside a `dataset.map()` call. I've tried decreasing the writer batch size, but the error persists; it does not occur for smaller datasets.
Thank you!
### Steps to reproduce the bug
```
Traceback (most recent call last):
File "/n/home12/yhwang/.conda/envs/lib/python3.10/site-packages/multiprocess/pool.py", line 125, in worker
result = (True, func(*args, **kwds))
File "/n/home12/yhwang/.conda/envs/lib/python3.10/site-packages/datasets/utils/py_utils.py", line 1354, in _write_generator_to_queue
for i, result in enumerate(func(**kwargs)):
File "/n/home12/yhwang/.conda/envs/lib/python3.10/site-packages/datasets/arrow_dataset.py", line 3493, in _map_single
writer.write_batch(batch)
File "/n/home12/yhwang/.conda/envs/lib/python3.10/site-packages/datasets/arrow_writer.py", line 555, in write_batch
arrays.append(pa.array(typed_sequence))
File "pyarrow/array.pxi", line 243, in pyarrow.lib.array
File "pyarrow/array.pxi", line 110, in pyarrow.lib._handle_arrow_array_protocol
File "/n/home12/yhwang/.conda/envs/lib/python3.10/site-packages/datasets/arrow_writer.py", line 184, in __arrow_array__
out = numpy_to_pyarrow_listarray(data)
File "/n/home12/yhwang/.conda/envs/lib/python3.10/site-packages/datasets/features/features.py", line 1394, in numpy_to_pyarrow_listarray
values = pa.ListArray.from_arrays(offsets, values)
File "pyarrow/array.pxi", line 2004, in pyarrow.lib.ListArray.from_arrays
TypeError: Cannot convert pyarrow.lib.ChunkedArray to pyarrow.lib.Array
```
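The failing call can be reproduced in isolation: `pa.ListArray.from_arrays` accepts only a plain `Array` for its values, and pyarrow produces a `ChunkedArray` for very large buffers. A minimal sketch (the flattening step at the end is an assumption about a possible fix, and it costs an extra copy):
```python
import pyarrow as pa

values = pa.chunked_array([pa.array([0.0, 1.0]), pa.array([2.0, 3.0])])
offsets = pa.array([0, 2, 4], type=pa.int32())

# Raises: TypeError: Cannot convert pyarrow.lib.ChunkedArray to pyarrow.lib.Array
# pa.ListArray.from_arrays(offsets, values)

# Flattening the chunks into a single Array first avoids the error:
flat = pa.concat_arrays(values.chunks)
lists = pa.ListArray.from_arrays(offsets, flat)
```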
### Expected behavior
The values passed to `pa.ListArray.from_arrays` should not be a `ChunkedArray`; writing should complete without error.
### Environment info
datasets v2.14.5
arrow v1.2.3
pyarrow v12.0.1 | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6399/reactions",
"total_count": 3,
"+1": 3,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6399/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6398 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6398/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6398/comments | https://api.github.com/repos/huggingface/datasets/issues/6398/events | https://github.com/huggingface/datasets/pull/6398 | 1,987,786,446 | PR_kwDODunzps5fJlP7 | 6,398 | Remove redundant condition in builders | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | 3 | 2023-11-10T14:56:43 | 2023-11-14T10:49:15 | 2023-11-14T10:43:00 | MEMBER | null | Minor refactoring to remove redundant condition. | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6398/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6398/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6398",
"html_url": "https://github.com/huggingface/datasets/pull/6398",
"diff_url": "https://github.com/huggingface/datasets/pull/6398.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6398.patch",
"merged_at": "2023-11-14T10:43:00"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/6397 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6397/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6397/comments | https://api.github.com/repos/huggingface/datasets/issues/6397/events | https://github.com/huggingface/datasets/issues/6397 | 1,987,622,152 | I_kwDODunzps52eLUI | 6,397 | Raise a different exception for inexisting dataset vs files without known extension | {
"login": "severo",
"id": 1676121,
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/severo",
"html_url": "https://github.com/severo",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"organizations_url": "https://api.github.com/users/severo/orgs",
"repos_url": "https://api.github.com/users/severo/repos",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"received_events_url": "https://api.github.com/users/severo/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | 0 | 2023-11-10T13:22:14 | 2023-11-22T15:12:34 | 2023-11-22T15:12:34 | COLLABORATOR | null | See https://github.com/huggingface/datasets-server/issues/2082#issuecomment-1805716557
We have the same error for:
- https://huggingface.co/datasets/severo/a_dataset_that_does_not_exist: a dataset that does not exist
- https://huggingface.co/datasets/severo/test_files_without_extension: a dataset with files without a known extension
```
>>> import datasets
>>> datasets.get_dataset_config_names('severo/a_dataset_that_does_not_exist')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/slesage/hf/datasets-server/services/worker/.venv/lib/python3.9/site-packages/datasets/inspect.py", line 351, in get_dataset_config_names
dataset_module = dataset_module_factory(
File "/home/slesage/hf/datasets-server/services/worker/.venv/lib/python3.9/site-packages/datasets/load.py", line 1508, in dataset_module_factory
raise FileNotFoundError(
FileNotFoundError: Couldn't find a dataset script at /home/slesage/hf/datasets-server/services/worker/severo/a_dataset_that_does_not_exist/a_dataset_that_does_not_exist.py or any data file in the same directory. Couldn't find 'severo/a_dataset_that_does_not_exist' on the Hugging Face Hub either: FileNotFoundError: Dataset 'severo/a_dataset_that_does_not_exist' doesn't exist on the Hub. If the repo is private or gated, make sure to log in with `huggingface-cli login`.
>>> datasets.get_dataset_config_names('severo/test_files_without_extension')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/slesage/hf/datasets-server/services/worker/.venv/lib/python3.9/site-packages/datasets/inspect.py", line 351, in get_dataset_config_names
dataset_module = dataset_module_factory(
File "/home/slesage/hf/datasets-server/services/worker/.venv/lib/python3.9/site-packages/datasets/load.py", line 1508, in dataset_module_factory
raise FileNotFoundError(
FileNotFoundError: Couldn't find a dataset script at /home/slesage/hf/datasets-server/services/worker/severo/test_files_without_extension/test_files_without_extension.py or any data file in the same directory. Couldn't find 'severo/test_files_without_extension' on the Hugging Face Hub either: FileNotFoundError: No (supported) data files or dataset script found in severo/test_files_without_extension.
```
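Until a dedicated exception exists, the only signal is the tail of the message. A brittle string-matching sketch based on the two tracebacks above (`dataset_name` is a placeholder):
```python
import datasets

try:
    datasets.get_dataset_config_names(dataset_name)
except FileNotFoundError as err:
    if "doesn't exist on the Hub" in str(err):
        reason = "missing_or_gated_repo"
    elif "No (supported) data files" in str(err):
        reason = "no_supported_files"
    else:
        raise
```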
As the sketch above shows, differentiating currently requires parsing the error message (only the tail differs). We should raise a distinct exception for each of these two cases. | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6397/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6397/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6396 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6396/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6396/comments | https://api.github.com/repos/huggingface/datasets/issues/6396/events | https://github.com/huggingface/datasets/issues/6396 | 1,987,308,077 | I_kwDODunzps52c-ot | 6,396 | Issue with pyarrow 14.0.1 | {
"login": "severo",
"id": 1676121,
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/severo",
"html_url": "https://github.com/severo",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"organizations_url": "https://api.github.com/users/severo/orgs",
"repos_url": "https://api.github.com/users/severo/repos",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"received_events_url": "https://api.github.com/users/severo/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | 5 | 2023-11-10T10:02:12 | 2023-11-14T10:23:30 | 2023-11-14T10:23:30 | COLLABORATOR | null | See https://github.com/huggingface/datasets-server/pull/2089 for reference
```
from datasets import (Array2D, Dataset, Features)
feature_type = Array2D(shape=(2, 2), dtype="float32")
content = [[0.0, 0.0], [0.0, 0.0]]
features = Features({"col": feature_type})
dataset = Dataset.from_dict({"col": [content]}, features=features)
```
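A side note on mitigation, hedged and only for trusted data: the warning emitted by pyarrow 14.0.1 itself points at a stopgap, re-enabling pickle-based extension-type deserialization, which avoids the `UnknownExtensionType` failure shown below:
```python
import pyarrow as pa

# pyarrow >= 14.0.1; re-enables pickle-based PyExtensionType loading.
# Only do this if you fully trust the data being read.
pa.PyExtensionType.set_auto_load(True)
```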
Without that stopgap, the snippet generates:
```
/home/slesage/hf/datasets-server/libs/libcommon/.venv/lib/python3.9/site-packages/datasets/features/features.py:648: FutureWarning: pyarrow.PyExtensionType is deprecated and will refuse deserialization by default. Instead, please derive from pyarrow.ExtensionType and implement your own serialization mechanism.
pa.PyExtensionType.__init__(self, self.storage_dtype)
/home/slesage/hf/datasets-server/libs/libcommon/.venv/lib/python3.9/site-packages/datasets/features/features.py:1661: RuntimeWarning: pickle-based deserialization of pyarrow.PyExtensionType subclasses is disabled by default; if you only ingest trusted data files, you may re-enable this using `pyarrow.PyExtensionType.set_auto_load(True)`.
In the future, Python-defined extension subclasses should derive from pyarrow.ExtensionType (not pyarrow.PyExtensionType) and implement their own serialization mechanism.
obj = {field.name: generate_from_arrow_type(field.type) for field in pa_schema}
/home/slesage/hf/datasets-server/libs/libcommon/.venv/lib/python3.9/site-packages/datasets/features/features.py:1661: FutureWarning: pyarrow.PyExtensionType is deprecated and will refuse deserialization by default. Instead, please derive from pyarrow.ExtensionType and implement your own serialization mechanism.
obj = {field.name: generate_from_arrow_type(field.type) for field in pa_schema}
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/slesage/hf/datasets-server/libs/libcommon/.venv/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 924, in from_dict
return cls(pa_table, info=info, split=split)
File "/home/slesage/hf/datasets-server/libs/libcommon/.venv/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 693, in __init__
inferred_features = Features.from_arrow_schema(arrow_table.schema)
File "/home/slesage/hf/datasets-server/libs/libcommon/.venv/lib/python3.9/site-packages/datasets/features/features.py", line 1661, in from_arrow_schema
obj = {field.name: generate_from_arrow_type(field.type) for field in pa_schema}
File "/home/slesage/hf/datasets-server/libs/libcommon/.venv/lib/python3.9/site-packages/datasets/features/features.py", line 1661, in <dictcomp>
obj = {field.name: generate_from_arrow_type(field.type) for field in pa_schema}
File "/home/slesage/hf/datasets-server/libs/libcommon/.venv/lib/python3.9/site-packages/datasets/features/features.py", line 1381, in generate_from_arrow_type
return Value(dtype=_arrow_to_datasets_dtype(pa_type))
File "/home/slesage/hf/datasets-server/libs/libcommon/.venv/lib/python3.9/site-packages/datasets/features/features.py", line 111, in _arrow_to_datasets_dtype
raise ValueError(f"Arrow type {arrow_type} does not have a datasets dtype equivalent.")
ValueError: Arrow type extension<arrow.py_extension_type<pyarrow.lib.UnknownExtensionType>> does not have a datasets dtype equivalent.
``` | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6396/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6396/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6395 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6395/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6395/comments | https://api.github.com/repos/huggingface/datasets/issues/6395/events | https://github.com/huggingface/datasets/issues/6395 | 1,986,484,124 | I_kwDODunzps52Z1ec | 6,395 | Add ability to set lock type | {
"login": "leoleoasd",
"id": 37735580,
"node_id": "MDQ6VXNlcjM3NzM1NTgw",
"avatar_url": "https://avatars.githubusercontent.com/u/37735580?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/leoleoasd",
"html_url": "https://github.com/leoleoasd",
"followers_url": "https://api.github.com/users/leoleoasd/followers",
"following_url": "https://api.github.com/users/leoleoasd/following{/other_user}",
"gists_url": "https://api.github.com/users/leoleoasd/gists{/gist_id}",
"starred_url": "https://api.github.com/users/leoleoasd/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/leoleoasd/subscriptions",
"organizations_url": "https://api.github.com/users/leoleoasd/orgs",
"repos_url": "https://api.github.com/users/leoleoasd/repos",
"events_url": "https://api.github.com/users/leoleoasd/events{/privacy}",
"received_events_url": "https://api.github.com/users/leoleoasd/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | closed | false | null | [] | null | 1 | 2023-11-09T22:12:30 | 2023-11-23T18:50:00 | 2023-11-23T18:50:00 | NONE | null | ### Feature request
Allow setting file lock type, maybe from an environment variable
Currently, it only depends on whether `fcntl` is available:
https://github.com/huggingface/datasets/blob/12ebe695b4748c5a26e08b44ed51955f74f5801d/src/datasets/utils/filelock.py#L463-L470C16
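A sketch of what the requested switch could look like; the environment variable name here is purely hypothetical, and `SoftFileLock` is the lock-file-based fallback that avoids `flock`/`fcntl` entirely:
```python
import os
from filelock import FileLock, SoftFileLock

# Hypothetical opt-in to soft (lock-file based) locking for filesystems
# where flock is unavailable, e.g. some network-attached drives.
LockClass = SoftFileLock if os.getenv("HF_USE_SOFTFILELOCK") == "1" else FileLock

with LockClass("/tmp/my_cache.lock"):
    ...  # critical section
```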
### Motivation
In my environment, `flock` isn't supported on a network-attached drive.
### Your contribution
I'll be happy to submit a PR. | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6395/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6395/timeline | null | not_planned | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6394 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6394/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6394/comments | https://api.github.com/repos/huggingface/datasets/issues/6394/events | https://github.com/huggingface/datasets/issues/6394 | 1,985,947,116 | I_kwDODunzps52XyXs | 6,394 | TorchFormatter images (H, W, C) instead of (C, H, W) format | {
"login": "Modexus",
"id": 37351874,
"node_id": "MDQ6VXNlcjM3MzUxODc0",
"avatar_url": "https://avatars.githubusercontent.com/u/37351874?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Modexus",
"html_url": "https://github.com/Modexus",
"followers_url": "https://api.github.com/users/Modexus/followers",
"following_url": "https://api.github.com/users/Modexus/following{/other_user}",
"gists_url": "https://api.github.com/users/Modexus/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Modexus/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Modexus/subscriptions",
"organizations_url": "https://api.github.com/users/Modexus/orgs",
"repos_url": "https://api.github.com/users/Modexus/repos",
"events_url": "https://api.github.com/users/Modexus/events{/privacy}",
"received_events_url": "https://api.github.com/users/Modexus/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | 9 | 2023-11-09T16:02:15 | 2024-04-11T12:40:16 | 2024-04-11T12:40:16 | CONTRIBUTOR | null | ### Describe the bug
Using `.set_format("torch")` leads to images having shape (H, W, C), the same as in NumPy.
However, PyTorch normally uses the (C, H, W) format.
Maybe I'm missing something, but this makes the format a lot less useful, as I then have to permute it anyway.
Without the format, it is possible to use torchvision transforms directly, but then any non-transformed value will not be a tensor.
Is there a reason for this choice?
### Steps to reproduce the bug
```python
from datasets import Dataset, Features, Audio, Image
images = ["path/to/image.png"] * 10
features = Features({"image": Image()})
ds = Dataset.from_dict({"image": images}, features=features)
ds = ds.with_format("torch")
ds[0]["image"].shape
```
```python
torch.Size([512, 512, 4])
```
### Expected behavior
```python
from datasets import Dataset, Features, Audio, Image
images = ["path/to/image.png"] * 10
features = Features({"image": Image()})
ds = Dataset.from_dict({"image": images}, features=features)
ds = ds.with_format("torch")
ds[0]["image"].shape
```
```python
torch.Size([4, 512, 512])
```
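Until the formatter changes, a straightforward workaround is to permute after formatting. `permute` is the standard `torch.Tensor` method, so this sketch carries no extra assumptions:
```python
ds = ds.with_format("torch")
img = ds[0]["image"].permute(2, 0, 1)  # (H, W, C) -> (C, H, W)
```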
### Environment info
- `datasets` version: 2.14.6
- Platform: Linux-6.5.9-100.fc37.x86_64-x86_64-with-glibc2.31
- Python version: 3.11.6
- Huggingface_hub version: 0.18.0
- PyArrow version: 14.0.1
- Pandas version: 2.1.2 | {
"login": "Modexus",
"id": 37351874,
"node_id": "MDQ6VXNlcjM3MzUxODc0",
"avatar_url": "https://avatars.githubusercontent.com/u/37351874?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Modexus",
"html_url": "https://github.com/Modexus",
"followers_url": "https://api.github.com/users/Modexus/followers",
"following_url": "https://api.github.com/users/Modexus/following{/other_user}",
"gists_url": "https://api.github.com/users/Modexus/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Modexus/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Modexus/subscriptions",
"organizations_url": "https://api.github.com/users/Modexus/orgs",
"repos_url": "https://api.github.com/users/Modexus/repos",
"events_url": "https://api.github.com/users/Modexus/events{/privacy}",
"received_events_url": "https://api.github.com/users/Modexus/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6394/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6394/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6393 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6393/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6393/comments | https://api.github.com/repos/huggingface/datasets/issues/6393/events | https://github.com/huggingface/datasets/issues/6393 | 1,984,913,259 | I_kwDODunzps52T19r | 6,393 | Filter occasionally hangs | {
"login": "dakinggg",
"id": 43149077,
"node_id": "MDQ6VXNlcjQzMTQ5MDc3",
"avatar_url": "https://avatars.githubusercontent.com/u/43149077?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dakinggg",
"html_url": "https://github.com/dakinggg",
"followers_url": "https://api.github.com/users/dakinggg/followers",
"following_url": "https://api.github.com/users/dakinggg/following{/other_user}",
"gists_url": "https://api.github.com/users/dakinggg/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dakinggg/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dakinggg/subscriptions",
"organizations_url": "https://api.github.com/users/dakinggg/orgs",
"repos_url": "https://api.github.com/users/dakinggg/repos",
"events_url": "https://api.github.com/users/dakinggg/events{/privacy}",
"received_events_url": "https://api.github.com/users/dakinggg/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | open | false | null | [] | null | 11 | 2023-11-09T06:18:30 | 2024-03-05T16:03:12 | null | NONE | null | ### Describe the bug
A call to `.filter` occasionally hangs (after the filter is complete, according to tqdm)
There is a trace produced
```
Exception ignored in: <function Dataset.__del__ at 0x7efb48130c10>
Traceback (most recent call last):
File "/usr/lib/python3/dist-packages/datasets/arrow_dataset.py", line 1366, in __del__
if hasattr(self, "_indices"):
File "/usr/lib/python3/dist-packages/composer/core/engine.py", line 123, in sigterm_handler
sys.exit(128 + signal)
SystemExit: 143
```
but I'm not sure if the trace is actually from `datasets`, or from surrounding code that is trying to clean up after datasets gets stuck.
Unfortunately I can't reproduce this issue anywhere close to reliably. It happens infrequently when using `num_proc > 1`. Anecdotally, I started seeing it when using larger datasets (~10M samples).
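For the next time it hangs, one hedged way to find out where things are stuck is to register `faulthandler` up front, so a stack dump of every thread can be requested without killing the job:
```python
import faulthandler
import signal

# After this, `kill -USR1 <pid>` prints all thread stacks to stderr
faulthandler.register(signal.SIGUSR1)
```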
### Steps to reproduce the bug
N/A see description
### Expected behavior
map/filter calls always complete successfully
### Environment info
- `datasets` version: 2.14.6
- Platform: Linux-5.4.0-137-generic-x86_64-with-glibc2.31
- Python version: 3.10.13
- Huggingface_hub version: 0.17.3
- PyArrow version: 13.0.0
- Pandas version: 2.1.2 | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6393/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6393/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6392 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6392/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6392/comments | https://api.github.com/repos/huggingface/datasets/issues/6392/events | https://github.com/huggingface/datasets/issues/6392 | 1,984,369,545 | I_kwDODunzps52RxOJ | 6,392 | `push_to_hub` is not robust to hub closing connection | {
"login": "msis",
"id": 577139,
"node_id": "MDQ6VXNlcjU3NzEzOQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/577139?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/msis",
"html_url": "https://github.com/msis",
"followers_url": "https://api.github.com/users/msis/followers",
"following_url": "https://api.github.com/users/msis/following{/other_user}",
"gists_url": "https://api.github.com/users/msis/gists{/gist_id}",
"starred_url": "https://api.github.com/users/msis/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/msis/subscriptions",
"organizations_url": "https://api.github.com/users/msis/orgs",
"repos_url": "https://api.github.com/users/msis/repos",
"events_url": "https://api.github.com/users/msis/events{/privacy}",
"received_events_url": "https://api.github.com/users/msis/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | 12 | 2023-11-08T20:44:53 | 2023-12-20T07:28:24 | 2023-12-01T17:51:34 | NONE | null | ### Describe the bug
Similar to #6172, `push_to_hub` will crash if the Hub resets the connection, raising the following error:
```
Pushing dataset shards to the dataset hub: 32%|███▏ | 54/171 [06:38<14:23, 7.38s/it]
Traceback (most recent call last):
File "/admin/home-piraka9011/.virtualenvs/w2v2/lib/python3.8/site-packages/urllib3/connectionpool.py", line 715, in urlopen
httplib_response = self._make_request(
File "/admin/home-piraka9011/.virtualenvs/w2v2/lib/python3.8/site-packages/urllib3/connectionpool.py", line 467, in _make_request
six.raise_from(e, None)
File "<string>", line 3, in raise_from
File "/admin/home-piraka9011/.virtualenvs/w2v2/lib/python3.8/site-packages/urllib3/connectionpool.py", line 462, in _make_request
httplib_response = conn.getresponse()
File "/usr/lib/python3.8/http/client.py", line 1348, in getresponse
response.begin()
File "/usr/lib/python3.8/http/client.py", line 316, in begin
version, status, reason = self._read_status()
File "/usr/lib/python3.8/http/client.py", line 285, in _read_status
raise RemoteDisconnected("Remote end closed connection without"
http.client.RemoteDisconnected: Remote end closed connection without response
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/admin/home-piraka9011/.virtualenvs/w2v2/lib/python3.8/site-packages/requests/adapters.py", line 486, in send
resp = conn.urlopen(
File "/admin/home-piraka9011/.virtualenvs/w2v2/lib/python3.8/site-packages/urllib3/connectionpool.py", line 799, in urlopen
retries = retries.increment(
File "/admin/home-piraka9011/.virtualenvs/w2v2/lib/python3.8/site-packages/urllib3/util/retry.py", line 550, in increment
raise six.reraise(type(error), error, _stacktrace)
File "/admin/home-piraka9011/.virtualenvs/w2v2/lib/python3.8/site-packages/urllib3/packages/six.py", line 769, in reraise
raise value.with_traceback(tb)
File "/admin/home-piraka9011/.virtualenvs/w2v2/lib/python3.8/site-packages/urllib3/connectionpool.py", line 715, in urlopen
httplib_response = self._make_request(
File "/admin/home-piraka9011/.virtualenvs/w2v2/lib/python3.8/site-packages/urllib3/connectionpool.py", line 467, in _make_request
six.raise_from(e, None)
File "<string>", line 3, in raise_from
File "/admin/home-piraka9011/.virtualenvs/w2v2/lib/python3.8/site-packages/urllib3/connectionpool.py", line 462, in _make_request
httplib_response = conn.getresponse()
File "/usr/lib/python3.8/http/client.py", line 1348, in getresponse
response.begin()
File "/usr/lib/python3.8/http/client.py", line 316, in begin
version, status, reason = self._read_status()
File "/usr/lib/python3.8/http/client.py", line 285, in _read_status
raise RemoteDisconnected("Remote end closed connection without"
urllib3.exceptions.ProtocolError: ('Connection aborted.', RemoteDisconnected('Remote end closed connection without response'))
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/admin/home-piraka9011/.virtualenvs/w2v2/lib/python3.8/site-packages/huggingface_hub/_commit_api.py", line 383, in _wrapped_lfs_upload
lfs_upload(operation=operation, lfs_batch_action=batch_action, token=token)
File "/admin/home-piraka9011/.virtualenvs/w2v2/lib/python3.8/site-packages/huggingface_hub/lfs.py", line 223, in lfs_upload
_upload_multi_part(operation=operation, header=header, chunk_size=chunk_size, upload_url=upload_action["href"])
File "/admin/home-piraka9011/.virtualenvs/w2v2/lib/python3.8/site-packages/huggingface_hub/lfs.py", line 319, in _upload_multi_part
else _upload_parts_iteratively(operation=operation, sorted_parts_urls=sorted_parts_urls, chunk_size=chunk_size)
File "/admin/home-piraka9011/.virtualenvs/w2v2/lib/python3.8/site-packages/huggingface_hub/lfs.py", line 375, in _upload_parts_iteratively
part_upload_res = http_backoff("PUT", part_upload_url, data=fileobj_slice)
File "/admin/home-piraka9011/.virtualenvs/w2v2/lib/python3.8/site-packages/huggingface_hub/utils/_http.py", line 258, in http_backoff
response = session.request(method=method, url=url, **kwargs)
File "/admin/home-piraka9011/.virtualenvs/w2v2/lib/python3.8/site-packages/requests/sessions.py", line 589, in request
resp = self.send(prep, **send_kwargs)
File "/admin/home-piraka9011/.virtualenvs/w2v2/lib/python3.8/site-packages/requests/sessions.py", line 703, in send
r = adapter.send(request, **kwargs)
File "/admin/home-piraka9011/.virtualenvs/w2v2/lib/python3.8/site-packages/huggingface_hub/utils/_http.py", line 63, in send
return super().send(request, *args, **kwargs)
File "/admin/home-piraka9011/.virtualenvs/w2v2/lib/python3.8/site-packages/requests/adapters.py", line 501, in send
raise ConnectionError(err, request=request)
requests.exceptions.ConnectionError: (ProtocolError('Connection aborted.', RemoteDisconnected('Remote end closed connection without response')), '(Request ID: 2bab8c06-b701-4266-aead-fe2e0dc0e3ed)')
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "convert_to_hf.py", line 116, in <module>
main()
File "convert_to_hf.py", line 108, in main
audio_dataset.push_to_hub(
File "/admin/home-piraka9011/.virtualenvs/w2v2/lib/python3.8/site-packages/datasets/dataset_dict.py", line 1641, in push_to_hub
repo_id, split, uploaded_size, dataset_nbytes, _, _ = self[split]._push_parquet_shards_to_hub(
File "/admin/home-piraka9011/.virtualenvs/w2v2/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 5308, in _push_parquet_shards_to_hub
_retry(
File "/admin/home-piraka9011/.virtualenvs/w2v2/lib/python3.8/site-packages/datasets/utils/file_utils.py", line 290, in _retry
return func(*func_args, **func_kwargs)
File "/admin/home-piraka9011/.virtualenvs/w2v2/lib/python3.8/site-packages/huggingface_hub/utils/_validators.py", line 118, in _inner_fn
return fn(*args, **kwargs)
File "/admin/home-piraka9011/.virtualenvs/w2v2/lib/python3.8/site-packages/huggingface_hub/hf_api.py", line 828, in _inner
return fn(self, *args, **kwargs)
File "/admin/home-piraka9011/.virtualenvs/w2v2/lib/python3.8/site-packages/huggingface_hub/hf_api.py", line 3221, in upload_file
commit_info = self.create_commit(
File "/admin/home-piraka9011/.virtualenvs/w2v2/lib/python3.8/site-packages/huggingface_hub/utils/_validators.py", line 118, in _inner_fn
return fn(*args, **kwargs)
File "/admin/home-piraka9011/.virtualenvs/w2v2/lib/python3.8/site-packages/huggingface_hub/hf_api.py", line 828, in _inner
return fn(self, *args, **kwargs)
File "/admin/home-piraka9011/.virtualenvs/w2v2/lib/python3.8/site-packages/huggingface_hub/hf_api.py", line 2695, in create_commit
upload_lfs_files(
File "/admin/home-piraka9011/.virtualenvs/w2v2/lib/python3.8/site-packages/huggingface_hub/utils/_validators.py", line 118, in _inner_fn
return fn(*args, **kwargs)
File "/admin/home-piraka9011/.virtualenvs/w2v2/lib/python3.8/site-packages/huggingface_hub/_commit_api.py", line 393, in upload_lfs_files
_wrapped_lfs_upload(filtered_actions[0])
File "/admin/home-piraka9011/.virtualenvs/w2v2/lib/python3.8/site-packages/huggingface_hub/_commit_api.py", line 385, in _wrapped_lfs_upload
raise RuntimeError(f"Error while uploading '{operation.path_in_repo}' to the Hub.") from exc
RuntimeError: Error while uploading 'batch_19/train-00054-of-00171-932beb4082c034bf.parquet' to the Hub.
```
The function should retry if the operation fails, or at least offer a way to recover after such a failure.
Right now, calling the function again will resend all the Parquet files, leading to duplicates in the repository, with no guarantee that the push will actually succeed.
Previously, it would crash with a 400 error (#4677).
### Steps to reproduce the bug
Any large dataset pushed to the Hub:
```py
audio_dataset.push_to_hub(
repo_id="org/dataset",
)
```
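In the meantime, a crude client-side retry with exponential backoff is possible. Note that this sketch does not solve the duplicate-shard problem described above; it only papers over transient disconnects:
```py
import time

for attempt in range(5):
    try:
        audio_dataset.push_to_hub(repo_id="org/dataset")
        break
    except Exception:
        if attempt == 4:
            raise
        time.sleep(2 ** attempt)  # back off before retrying
```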
### Expected behavior
`push_to_hub` should have an option for max retries or resume.
### Environment info
- `datasets` version: 2.14.6
- Platform: Linux-5.15.0-1044-aws-x86_64-with-glibc2.29
- Python version: 3.8.10
- Huggingface_hub version: 0.16.4
- PyArrow version: 13.0.0
- Pandas version: 2.0.3 | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6392/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6392/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6391 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6391/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6391/comments | https://api.github.com/repos/huggingface/datasets/issues/6391/events | https://github.com/huggingface/datasets/pull/6391 | 1,984,091,776 | PR_kwDODunzps5e9BDO | 6,391 | Webdataset dataset builder | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | 5 | 2023-11-08T17:31:59 | 2024-05-22T16:51:08 | 2023-11-28T16:33:10 | MEMBER | null | Allow `load_dataset` to support the Webdataset format.
It allows users to download/stream data from local files or from the Hugging Face Hub.
Moreover, it will enable the Dataset Viewer for Webdataset datasets on the Hugging Face Hub.
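As a usage sketch (the repository name and shard pattern below are placeholders, and the exact spec may change before merge):
```python
from datasets import load_dataset

# local TAR shards
ds = load_dataset("webdataset", data_files={"train": "path/to/shard-*.tar"}, split="train")

# or stream a Webdataset-formatted repository from the Hub
ds = load_dataset("user/webdataset-repo", streaming=True, split="train")
```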
## Implementation details
- I added a new Webdataset builder
- datasets with TAR files are now read using the Webdataset builder
- basic decoders from `webdataset` are used by default, excluding unsafe ones like pickle
- HF authentication support is done with `xopen`
## TODOS
- [x] tests
- [x] docs | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6391/reactions",
"total_count": 4,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 4,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6391/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6391",
"html_url": "https://github.com/huggingface/datasets/pull/6391",
"diff_url": "https://github.com/huggingface/datasets/pull/6391.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6391.patch",
"merged_at": "2023-11-28T16:33:10"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/6390 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6390/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6390/comments | https://api.github.com/repos/huggingface/datasets/issues/6390/events | https://github.com/huggingface/datasets/pull/6390 | 1,983,725,707 | PR_kwDODunzps5e7xQ3 | 6,390 | handle future deprecation argument | {
"login": "winglian",
"id": 381258,
"node_id": "MDQ6VXNlcjM4MTI1OA==",
"avatar_url": "https://avatars.githubusercontent.com/u/381258?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/winglian",
"html_url": "https://github.com/winglian",
"followers_url": "https://api.github.com/users/winglian/followers",
"following_url": "https://api.github.com/users/winglian/following{/other_user}",
"gists_url": "https://api.github.com/users/winglian/gists{/gist_id}",
"starred_url": "https://api.github.com/users/winglian/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/winglian/subscriptions",
"organizations_url": "https://api.github.com/users/winglian/orgs",
"repos_url": "https://api.github.com/users/winglian/repos",
"events_url": "https://api.github.com/users/winglian/events{/privacy}",
"received_events_url": "https://api.github.com/users/winglian/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | 1 | 2023-11-08T14:21:25 | 2023-11-21T02:10:24 | 2023-11-14T15:15:59 | CONTRIBUTOR | null | Getting this warning:
```
/root/miniconda3/envs/py3.10/lib/python3.10/site-packages/datasets/table.py:1387: FutureWarning: promote has been superseded by mode='default'.
return cls._concat_blocks(pa_tables_to_concat_vertically, axis=0)
```
Since `datasets` supports pyarrow versions from 8.0.0 onwards, we need to handle both cases, e.g. along these lines:
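(A sketch of the version-gated handling; the cutoff assumes `promote=` was superseded by `promote_options=` in pyarrow 14.0.0, per the docs linked below:)
```python
import pyarrow as pa
from packaging import version

def concat_tables_compat(tables):
    # pyarrow >= 14 deprecates `promote=` in favour of `promote_options=`
    if version.parse(pa.__version__) >= version.parse("14.0.0"):
        return pa.concat_tables(tables, promote_options="default")
    return pa.concat_tables(tables, promote=True)
```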
[Arrow v14 docs](https://arrow.apache.org/docs/python/generated/pyarrow.concat_tables.html)
[Arrow v13 docs](https://arrow.apache.org/docs/13.0/python/generated/pyarrow.concat_tables.html) | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6390/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6390/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6390",
"html_url": "https://github.com/huggingface/datasets/pull/6390",
"diff_url": "https://github.com/huggingface/datasets/pull/6390.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6390.patch",
"merged_at": "2023-11-14T15:15:59"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/6389 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6389/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6389/comments | https://api.github.com/repos/huggingface/datasets/issues/6389/events | https://github.com/huggingface/datasets/issues/6389 | 1,983,545,744 | I_kwDODunzps52OoGQ | 6,389 | Index 339 out of range for dataset of size 339 <-- save_to_file() | {
"login": "jaggzh",
"id": 20318973,
"node_id": "MDQ6VXNlcjIwMzE4OTcz",
"avatar_url": "https://avatars.githubusercontent.com/u/20318973?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jaggzh",
"html_url": "https://github.com/jaggzh",
"followers_url": "https://api.github.com/users/jaggzh/followers",
"following_url": "https://api.github.com/users/jaggzh/following{/other_user}",
"gists_url": "https://api.github.com/users/jaggzh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jaggzh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jaggzh/subscriptions",
"organizations_url": "https://api.github.com/users/jaggzh/orgs",
"repos_url": "https://api.github.com/users/jaggzh/repos",
"events_url": "https://api.github.com/users/jaggzh/events{/privacy}",
"received_events_url": "https://api.github.com/users/jaggzh/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | open | false | null | [] | null | 2 | 2023-11-08T12:52:09 | 2023-11-24T09:14:13 | null | NONE | null | ### Describe the bug
This happens when saving out some Audio() data.
The data consists of audio recordings with associated 'sentences'.
(They use the audio 'bytes' approach because they're clips from within larger audio files.)
The code is below the traceback (I can't upload the voice audio/text; it isn't even my voice).
```
Traceback (most recent call last):
File "/mnt/ddrive/prj/voice/voice-training-dataset-create/./dataset.py", line 156, in <module>
create_dataset(args)
File "/mnt/ddrive/prj/voice/voice-training-dataset-create/./dataset.py", line 138, in create_dataset
hf_dataset.save_to_disk(args.outds, max_shard_size='50MB')
File "/home/j/src/py/datasets/src/datasets/arrow_dataset.py", line 1531, in save_to_disk
for kwargs in kwargs_per_job:
File "/home/j/src/py/datasets/src/datasets/arrow_dataset.py", line 1508, in <genexpr>
"shard": self.shard(num_shards=num_shards, index=shard_idx, contiguous=True),
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/j/src/py/datasets/src/datasets/arrow_dataset.py", line 4609, in shard
return self.select(
^^^^^^^^^^^^
File "/home/j/src/py/datasets/src/datasets/arrow_dataset.py", line 556, in wrapper
out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/j/src/py/datasets/src/datasets/fingerprint.py", line 511, in wrapper
out = func(dataset, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/j/src/py/datasets/src/datasets/arrow_dataset.py", line 3797, in select
return self._select_contiguous(start, length, new_fingerprint=new_fingerprint)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/j/src/py/datasets/src/datasets/arrow_dataset.py", line 556, in wrapper
out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/j/src/py/datasets/src/datasets/fingerprint.py", line 511, in wrapper
out = func(dataset, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/j/src/py/datasets/src/datasets/arrow_dataset.py", line 3857, in _select_contiguous
_check_valid_indices_value(start, len(self))
File "/home/j/src/py/datasets/src/datasets/arrow_dataset.py", line 648, in _check_valid_indices_value
raise IndexError(f"Index {index} out of range for dataset of size {size}.")
IndexError: Index 339 out of range for dataset of size 339.
```
### Steps to reproduce the bug
(I had to set the default max batch size down due to a different bug... or maybe it's related: https://github.com/huggingface/datasets/issues/5717)
```python3
#!/usr/bin/env python3
import argparse
import os
from pathlib import Path
import soundfile as sf
import datasets
datasets.config.DEFAULT_MAX_BATCH_SIZE=35
from datasets import Features, Array2D, Value, Dataset, Sequence, Audio
import numpy as np
import librosa
import io
import logging
logging.basicConfig(level=logging.DEBUG, filename='debug.log', filemode='w',
format='%(name)s - %(levelname)s - %(message)s')
# Define the arguments for the command-line interface
def parse_args():
parser = argparse.ArgumentParser(description="Create a Huggingface dataset from labeled audio files.")
parser.add_argument("--indir_labeled", action="append", help="Directory containing labeled audio files.", required=True)
parser.add_argument("--outds", help="Path to save the dataset file.", required=True)
parser.add_argument("--max_clips", type=int, help="Max count of audio samples to add to the dataset.", default=None)
parser.add_argument("-r", "--sr", type=int, help="Sample rate for the audio files.", default=16000)
parser.add_argument("--no-resample", action="store_true", help="Disable resampling of the audio files.")
parser.add_argument("--max_clip_secs", type=float, help="Max length of audio clips in seconds.", default=3.0)
parser.add_argument("-v", "--verbose", action='count', default=1, help="Increase verbosity")
return parser.parse_args()
# Convert the NumPy arrays to audio bytes in WAV format
def numpy_to_bytes(audio_array, sampling_rate=16000):
with io.BytesIO() as bytes_io:
sf.write(bytes_io, audio_array, samplerate=sampling_rate,
format='wav', subtype='FLOAT') # float32
return bytes_io.getvalue()
# Function to find audio and label files in a directory
def find_audio_label_pairs(indir_labeled):
audio_label_pairs = []
for root, _, files in os.walk(indir_labeled):
for file in files:
if file.endswith(('.mp3', '.wav', '.aac', '.flac')):
audio_path = Path(root) / file
if args.verbose>1:
print(f'File: {audio_path}')
label_path = audio_path.with_suffix('.labels.txt')
if label_path.exists():
if args.verbose>0:
print(f' Pair: {audio_path}')
audio_label_pairs.append((audio_path, label_path))
return audio_label_pairs
def process_audio_label_pair(audio_path, label_path, sampling_rate, no_resample, max_clip_secs):
# Read the label file
with open(label_path, 'r') as label_file:
labels = label_file.readlines()
# Load the full audio file
full_audio, current_sr = sf.read(audio_path)
if not no_resample and current_sr != sampling_rate:
# You can use librosa.resample here if librosa is available
full_audio = librosa.resample(full_audio, orig_sr=current_sr, target_sr=sampling_rate)
audio_segments = []
sentences = []
# Process each label
for label in labels:
start_secs, end_secs, label_text = label.strip().split('\t')
start_sample = int(float(start_secs) * sampling_rate)
end_sample = int(float(end_secs) * sampling_rate)
# Extract segment and truncate or pad to max_clip_secs
audio_segment = full_audio[start_sample:end_sample]
max_samples = int(max_clip_secs * sampling_rate)
if len(audio_segment) > max_samples: # Truncate
audio_segment = audio_segment[:max_samples]
elif len(audio_segment) < max_samples: # Pad
padding = np.zeros(max_samples - len(audio_segment), dtype=audio_segment.dtype)
audio_segment = np.concatenate((audio_segment, padding))
audio_segment = numpy_to_bytes(audio_segment)
audio_data = {
'path': str(audio_path),
'bytes': audio_segment,
}
audio_segments.append(audio_data)
sentences.append(label_text)
return audio_segments, sentences
# Main function to create the dataset
def create_dataset(args):
audio_label_pairs = []
for indir in args.indir_labeled:
audio_label_pairs.extend(find_audio_label_pairs(indir))
# Initialize our dataset data
dataset_data = {
'path': [], # This will be a list of strings
'audio': [], # This will be a list of dictionaries
'sentence': [], # This will be a list of strings
}
# Process each audio-label pair and add the data to the dataset
for audio_path, label_path in audio_label_pairs[:args.max_clips]:
audio_segments, sentences = process_audio_label_pair(audio_path, label_path, args.sr, args.no_resample, args.max_clip_secs)
if audio_segments and sentences:
for audio_data, sentence in zip(audio_segments, sentences):
if args.verbose>1:
print(f'Appending {audio_data["path"]}')
dataset_data['path'].append(audio_data['path'])
dataset_data['audio'].append({
'path': audio_data['path'],
'bytes': audio_data['bytes'],
})
dataset_data['sentence'].append(sentence)
features = Features({
'path': Value('string'), # Path is redundant in common voice set also
'audio': Audio(sampling_rate=16000),
'sentence': Value('string'),
})
hf_dataset = Dataset.from_dict(dataset_data, features=features)
for key in dataset_data:
for i, item in enumerate(dataset_data[key]):
if item is None or (isinstance(item, bytes) and len(item) == 0):
logging.error(f"Invalid {key} at index {i}: {item}")
import ipdb; ipdb.set_trace(context=16); pass
hf_dataset.save_to_disk(args.outds, max_shard_size='50MB')
# try:
# hf_dataset.save_to_disk(args.outds)
# except TypeError as e:
# # If there's a TypeError, log the exception and the dataset data that might have caused it
# logging.exception("An error occurred while saving the dataset.")
# import ipdb; ipdb.set_trace(context=16); pass
# for key in dataset_data:
# logging.debug(f"{key} length: {len(dataset_data[key])}")
# if key == 'audio':
# # Log the first 100 bytes of the audio data to avoid huge log files
# for i, audio in enumerate(dataset_data[key]):
# logging.debug(f"Audio {i}: {audio['bytes'][:100]}")
# raise
# Run the script
if __name__ == "__main__":
args = parse_args()
create_dataset(args)
```
### Expected behavior
It shouldn't fail.
### Environment info
- `datasets` version: 2.14.7.dev0
- Platform: Linux-6.1.0-13-amd64-x86_64-with-glibc2.36
- Python version: 3.11.2
- `huggingface_hub` version: 0.17.3
- PyArrow version: 13.0.0
- Pandas version: 2.1.2
- `fsspec` version: 2023.9.2
| null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6389/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6389/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6388 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6388/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6388/comments | https://api.github.com/repos/huggingface/datasets/issues/6388/events | https://github.com/huggingface/datasets/issues/6388 | 1,981,136,093 | I_kwDODunzps52Fbzd | 6,388 | How to create 3d medical imgae dataset? | {
"login": "QingYunA",
"id": 41177312,
"node_id": "MDQ6VXNlcjQxMTc3MzEy",
"avatar_url": "https://avatars.githubusercontent.com/u/41177312?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/QingYunA",
"html_url": "https://github.com/QingYunA",
"followers_url": "https://api.github.com/users/QingYunA/followers",
"following_url": "https://api.github.com/users/QingYunA/following{/other_user}",
"gists_url": "https://api.github.com/users/QingYunA/gists{/gist_id}",
"starred_url": "https://api.github.com/users/QingYunA/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/QingYunA/subscriptions",
"organizations_url": "https://api.github.com/users/QingYunA/orgs",
"repos_url": "https://api.github.com/users/QingYunA/repos",
"events_url": "https://api.github.com/users/QingYunA/events{/privacy}",
"received_events_url": "https://api.github.com/users/QingYunA/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | null | [] | null | 0 | 2023-11-07T11:27:36 | 2023-11-07T11:28:53 | null | NONE | null | ### Feature request
I am new to Hugging Face; after looking through the `datasets` docs, I can't find how to create a dataset containing 3D medical images (files ending in '.mhd', '.dcm', '.nii').
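One possible approach today, pending native support, is to read each volume into a NumPy array and store it with the existing `Array3D` feature. This is only a sketch: the file names, shape, and dtype are assumptions, and it uses `nibabel` for `.nii` (SimpleITK could cover `.mhd`/`.dcm`):
```python
import numpy as np
import nibabel as nib  # assumed reader for .nii volumes
from datasets import Dataset, Features, Array3D, Value

def gen():
    for path in ["scan_001.nii", "scan_002.nii"]:  # hypothetical files
        volume = np.asarray(nib.load(path).dataobj, dtype="float32")
        yield {"path": path, "volume": volume}

features = Features({
    "path": Value("string"),
    # only the first dimension of Array3D may be dynamic (None)
    "volume": Array3D(shape=(None, 512, 512), dtype="float32"),
})
ds = Dataset.from_generator(gen, features=features)
```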
### Motivation
Help us upload 3D medical datasets to Hugging Face!
### Your contribution
I'll submit a PR if I find a way to add this feature | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6388/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6388/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6387 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6387/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6387/comments | https://api.github.com/repos/huggingface/datasets/issues/6387/events | https://github.com/huggingface/datasets/issues/6387 | 1,980,224,020 | I_kwDODunzps52B9IU | 6,387 | How to load existing downloaded dataset ? | {
"login": "liming-ai",
"id": 73068772,
"node_id": "MDQ6VXNlcjczMDY4Nzcy",
"avatar_url": "https://avatars.githubusercontent.com/u/73068772?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/liming-ai",
"html_url": "https://github.com/liming-ai",
"followers_url": "https://api.github.com/users/liming-ai/followers",
"following_url": "https://api.github.com/users/liming-ai/following{/other_user}",
"gists_url": "https://api.github.com/users/liming-ai/gists{/gist_id}",
"starred_url": "https://api.github.com/users/liming-ai/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/liming-ai/subscriptions",
"organizations_url": "https://api.github.com/users/liming-ai/orgs",
"repos_url": "https://api.github.com/users/liming-ai/repos",
"events_url": "https://api.github.com/users/liming-ai/events{/privacy}",
"received_events_url": "https://api.github.com/users/liming-ai/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | closed | false | null | [] | null | 1 | 2023-11-06T22:51:44 | 2023-11-16T18:07:01 | 2023-11-16T18:07:01 | NONE | null | Hi @mariosasko @lhoestq @katielink
Thanks for your contribution and hard work.
### Feature request
First, I download a dataset as normal by:
```
from datasets import load_dataset
dataset = load_dataset('username/data_name', cache_dir='data')
```
The layout in the `data` directory will be:
```
-data
|-data_name
|-test-00000-of-00001-bf4c733542e35fcb.parquet
|-train-00000-of-00001-2a1df75c6bce91ab.parquet
```
Then I use SCP to copy this dataset to another machine, and then try:
```
from datasets import load_dataset
dataset = load_dataset('data/data_name') # load from local path
```
This regenerates the training and validation splits every time, and the data ends up occupying disk space twice.
How can I just load the dataset without generating and saving these splits again?
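One way to avoid the regeneration is to load the copied Parquet shards directly (a sketch; the paths match the layout shown above):
```python
from datasets import load_dataset

dataset = load_dataset(
    "parquet",
    data_files={
        "train": "data/data_name/train-00000-of-00001-2a1df75c6bce91ab.parquet",
        "test": "data/data_name/test-00000-of-00001-bf4c733542e35fcb.parquet",
    },
)
```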
### Motivation
I do not want to download the same dataset on two machines; scp is much faster than the Hugging Face API. I hope we can directly load the already-downloaded datasets (.parquet).
### Your contribution
Please refer to the feature | {
"login": "liming-ai",
"id": 73068772,
"node_id": "MDQ6VXNlcjczMDY4Nzcy",
"avatar_url": "https://avatars.githubusercontent.com/u/73068772?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/liming-ai",
"html_url": "https://github.com/liming-ai",
"followers_url": "https://api.github.com/users/liming-ai/followers",
"following_url": "https://api.github.com/users/liming-ai/following{/other_user}",
"gists_url": "https://api.github.com/users/liming-ai/gists{/gist_id}",
"starred_url": "https://api.github.com/users/liming-ai/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/liming-ai/subscriptions",
"organizations_url": "https://api.github.com/users/liming-ai/orgs",
"repos_url": "https://api.github.com/users/liming-ai/repos",
"events_url": "https://api.github.com/users/liming-ai/events{/privacy}",
"received_events_url": "https://api.github.com/users/liming-ai/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6387/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6387/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6386 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6386/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6386/comments | https://api.github.com/repos/huggingface/datasets/issues/6386/events | https://github.com/huggingface/datasets/issues/6386 | 1,979,878,014 | I_kwDODunzps52Aop- | 6,386 | Formatting overhead | {
"login": "d-miketa",
"id": 320321,
"node_id": "MDQ6VXNlcjMyMDMyMQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/320321?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/d-miketa",
"html_url": "https://github.com/d-miketa",
"followers_url": "https://api.github.com/users/d-miketa/followers",
"following_url": "https://api.github.com/users/d-miketa/following{/other_user}",
"gists_url": "https://api.github.com/users/d-miketa/gists{/gist_id}",
"starred_url": "https://api.github.com/users/d-miketa/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/d-miketa/subscriptions",
"organizations_url": "https://api.github.com/users/d-miketa/orgs",
"repos_url": "https://api.github.com/users/d-miketa/repos",
"events_url": "https://api.github.com/users/d-miketa/events{/privacy}",
"received_events_url": "https://api.github.com/users/d-miketa/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | 2 | 2023-11-06T19:06:38 | 2023-11-06T23:56:12 | 2023-11-06T23:56:12 | NONE | null | ### Describe the bug
Hi! I very recently noticed that my training time is dominated by batch formatting. Using Lightning's profilers, I located the bottleneck within `datasets.formatting.formatting` and then narrowed it down with `line-profiler`. It turns out that almost all of the overhead is due to creating new instances of `self.python_arrow_extractor`. I admit I'm confused why that could be the case - as far as I can tell there's no complex `__init__` logic to execute.

### Steps to reproduce the bug
1. Set up a dataset `ds` with potentially several (4+) columns (not sure if this is necessary, but at one point in the investigation it did make the overhead worse)
2. Process it using a custom transform, `ds = ds.with_transform(transform_func)` (see the minimal setup sketch after this list)
3. Decorate this function https://github.com/huggingface/datasets/blob/main/src/datasets/formatting/formatting.py#L512 with `@profile` from https://pypi.org/project/line-profiler/
4. Profile with `$ kernprof -l script_to_profile.py`
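A minimal, illustrative setup for steps 1-2 (column count and transform are arbitrary):
```python
from datasets import Dataset

ds = Dataset.from_dict({f"col{i}": list(range(1_000)) for i in range(6)})

def transform_func(batch):
    return batch  # identity transform; the overhead under study is in formatting itself

ds = ds.with_transform(transform_func)

for i in range(len(ds)):  # each access routes through datasets.formatting.formatting
    _ = ds[i]
```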
### Expected behavior
Batch formatting should have acceptable overhead.
### Environment info
```
datasets=2.14.6
pyarrow=14.0.0
``` | {
"login": "d-miketa",
"id": 320321,
"node_id": "MDQ6VXNlcjMyMDMyMQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/320321?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/d-miketa",
"html_url": "https://github.com/d-miketa",
"followers_url": "https://api.github.com/users/d-miketa/followers",
"following_url": "https://api.github.com/users/d-miketa/following{/other_user}",
"gists_url": "https://api.github.com/users/d-miketa/gists{/gist_id}",
"starred_url": "https://api.github.com/users/d-miketa/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/d-miketa/subscriptions",
"organizations_url": "https://api.github.com/users/d-miketa/orgs",
"repos_url": "https://api.github.com/users/d-miketa/repos",
"events_url": "https://api.github.com/users/d-miketa/events{/privacy}",
"received_events_url": "https://api.github.com/users/d-miketa/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6386/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6386/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6385 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6385/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6385/comments | https://api.github.com/repos/huggingface/datasets/issues/6385/events | https://github.com/huggingface/datasets/issues/6385 | 1,979,308,338 | I_kwDODunzps51-dky | 6,385 | Get an error when i try to concatenate the squad dataset with my own dataset | {
"login": "CCDXDX",
"id": 149378500,
"node_id": "U_kgDOCOdVxA",
"avatar_url": "https://avatars.githubusercontent.com/u/149378500?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/CCDXDX",
"html_url": "https://github.com/CCDXDX",
"followers_url": "https://api.github.com/users/CCDXDX/followers",
"following_url": "https://api.github.com/users/CCDXDX/following{/other_user}",
"gists_url": "https://api.github.com/users/CCDXDX/gists{/gist_id}",
"starred_url": "https://api.github.com/users/CCDXDX/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/CCDXDX/subscriptions",
"organizations_url": "https://api.github.com/users/CCDXDX/orgs",
"repos_url": "https://api.github.com/users/CCDXDX/repos",
"events_url": "https://api.github.com/users/CCDXDX/events{/privacy}",
"received_events_url": "https://api.github.com/users/CCDXDX/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | 2 | 2023-11-06T14:29:22 | 2023-11-06T16:50:45 | 2023-11-06T16:50:45 | NONE | null | ### Describe the bug
Hello,
I'm new here and I need to concatenate the squad dataset with my own dataset I created. I get the following error when I try to do it:
Traceback (most recent call last):
Cell In[9], line 1
concatenated_dataset = concatenate_datasets([train_dataset, dataset1])
File ~\anaconda3\Lib\site-packages\datasets\combine.py:213 in concatenate_datasets
return _concatenate_map_style_datasets(dsets, info=info, split=split, axis=axis)
File ~\anaconda3\Lib\site-packages\datasets\arrow_dataset.py:6002 in _concatenate_map_style_datasets
_check_if_features_can_be_aligned([dset.features for dset in dsets])
File ~\anaconda3\Lib\site-packages\datasets\features\features.py:2122 in _check_if_features_can_be_aligned
raise ValueError(
ValueError: The features can't be aligned because the key answers of features {'id': Value(dtype='string', id=None), 'title': Value(dtype='string', id=None), 'context': Value(dtype='string', id=None), 'question': Value(dtype='string', id=None), 'answers': Sequence(feature={'text': Value(dtype='string', id=None), 'answer_start': Value(dtype='int32', id=None)}, length=-1, id=None)} has unexpected type - Sequence(feature={'text': Value(dtype='string', id=None), 'answer_start': Value(dtype='int32', id=None)}, length=-1, id=None) (expected either {'answer_start': Sequence(feature=Value(dtype='int64', id=None), length=-1, id=None), 'text': Value(dtype='string', id=None)} or Value("null").
### Steps to reproduce the bug
```python
from huggingface_hub import notebook_login
from datasets import load_dataset
notebook_login("mymailadresse", "mypassword")
squad = load_dataset("squad", split="train[:5000]")
squad = squad.train_test_split(test_size=0.2)
dataset1 = squad["train"]
import json
mybase = [
{
"id": "1",
"context": "She lives in Nantes",
"question": "Where does she live?",
"answers": {
"text": "Nantes",
"answer_start": [13],
}
}
]
# Save the data to a JSON file
json_file_path = r"C:\Users\mypath\thefile.json"
with open(json_file_path, "w", encoding= "utf-8") as json_file:
json.dump(mybase, json_file, indent=4)
# Load the JSON file as a dataset
custom_dataset = load_dataset("json", data_files=json_file_path)
# Access the train split
train_dataset = custom_dataset["train"]
from datasets import concatenate_datasets
# Concatenate the datasets
concatenated_dataset = concatenate_datasets([train_dataset, dataset1])
```
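One possible fix sketch: squad stores answers as sequences, so `text` should be a list in the JSON; after reloading, casting to squad's features also aligns the integer widths (int64 vs int32). The lines below only illustrate this idea:
```python
# in the JSON, make `text` a list: "answers": {"text": ["Nantes"], "answer_start": [13]}
train_dataset = custom_dataset["train"].cast(dataset1.features)
concatenated_dataset = concatenate_datasets([train_dataset, dataset1])
```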
### Expected behavior
I would expect the two datasets to be concatenated without error. `len(dataset1)` is 4000 and `len(train_dataset)` is 1, so I would expect `concatenated_dataset` to be created with length 4001.
### Environment info
Python 3.11.4, running on Windows
Thank you for your help | {
"login": "CCDXDX",
"id": 149378500,
"node_id": "U_kgDOCOdVxA",
"avatar_url": "https://avatars.githubusercontent.com/u/149378500?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/CCDXDX",
"html_url": "https://github.com/CCDXDX",
"followers_url": "https://api.github.com/users/CCDXDX/followers",
"following_url": "https://api.github.com/users/CCDXDX/following{/other_user}",
"gists_url": "https://api.github.com/users/CCDXDX/gists{/gist_id}",
"starred_url": "https://api.github.com/users/CCDXDX/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/CCDXDX/subscriptions",
"organizations_url": "https://api.github.com/users/CCDXDX/orgs",
"repos_url": "https://api.github.com/users/CCDXDX/repos",
"events_url": "https://api.github.com/users/CCDXDX/events{/privacy}",
"received_events_url": "https://api.github.com/users/CCDXDX/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6385/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6385/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6384 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6384/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6384/comments | https://api.github.com/repos/huggingface/datasets/issues/6384/events | https://github.com/huggingface/datasets/issues/6384 | 1,979,117,069 | I_kwDODunzps519u4N | 6,384 | Load the local dataset folder from other place | {
"login": "OrangeSodahub",
"id": 54439582,
"node_id": "MDQ6VXNlcjU0NDM5NTgy",
"avatar_url": "https://avatars.githubusercontent.com/u/54439582?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/OrangeSodahub",
"html_url": "https://github.com/OrangeSodahub",
"followers_url": "https://api.github.com/users/OrangeSodahub/followers",
"following_url": "https://api.github.com/users/OrangeSodahub/following{/other_user}",
"gists_url": "https://api.github.com/users/OrangeSodahub/gists{/gist_id}",
"starred_url": "https://api.github.com/users/OrangeSodahub/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/OrangeSodahub/subscriptions",
"organizations_url": "https://api.github.com/users/OrangeSodahub/orgs",
"repos_url": "https://api.github.com/users/OrangeSodahub/repos",
"events_url": "https://api.github.com/users/OrangeSodahub/events{/privacy}",
"received_events_url": "https://api.github.com/users/OrangeSodahub/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | 1 | 2023-11-06T13:07:04 | 2023-11-19T05:42:06 | 2023-11-19T05:42:05 | NONE | null | This is from https://github.com/huggingface/diffusers/issues/5573
| {
"login": "OrangeSodahub",
"id": 54439582,
"node_id": "MDQ6VXNlcjU0NDM5NTgy",
"avatar_url": "https://avatars.githubusercontent.com/u/54439582?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/OrangeSodahub",
"html_url": "https://github.com/OrangeSodahub",
"followers_url": "https://api.github.com/users/OrangeSodahub/followers",
"following_url": "https://api.github.com/users/OrangeSodahub/following{/other_user}",
"gists_url": "https://api.github.com/users/OrangeSodahub/gists{/gist_id}",
"starred_url": "https://api.github.com/users/OrangeSodahub/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/OrangeSodahub/subscriptions",
"organizations_url": "https://api.github.com/users/OrangeSodahub/orgs",
"repos_url": "https://api.github.com/users/OrangeSodahub/repos",
"events_url": "https://api.github.com/users/OrangeSodahub/events{/privacy}",
"received_events_url": "https://api.github.com/users/OrangeSodahub/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6384/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6384/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6383 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6383/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6383/comments | https://api.github.com/repos/huggingface/datasets/issues/6383/events | https://github.com/huggingface/datasets/issues/6383 | 1,978,189,389 | I_kwDODunzps516MZN | 6,383 | imagenet-1k downloads over and over | {
"login": "seann999",
"id": 6847529,
"node_id": "MDQ6VXNlcjY4NDc1Mjk=",
"avatar_url": "https://avatars.githubusercontent.com/u/6847529?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/seann999",
"html_url": "https://github.com/seann999",
"followers_url": "https://api.github.com/users/seann999/followers",
"following_url": "https://api.github.com/users/seann999/following{/other_user}",
"gists_url": "https://api.github.com/users/seann999/gists{/gist_id}",
"starred_url": "https://api.github.com/users/seann999/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/seann999/subscriptions",
"organizations_url": "https://api.github.com/users/seann999/orgs",
"repos_url": "https://api.github.com/users/seann999/repos",
"events_url": "https://api.github.com/users/seann999/events{/privacy}",
"received_events_url": "https://api.github.com/users/seann999/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | 1 | 2023-11-06T02:58:58 | 2024-06-12T13:15:00 | 2023-11-06T06:02:39 | NONE | null | ### Describe the bug
What could be causing this?
```
$ python3
Python 3.8.13 (default, Mar 28 2022, 11:38:47)
[GCC 7.5.0] :: Anaconda, Inc. on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> from datasets import load_dataset
>>> load_dataset("imagenet-1k")
Downloading builder script: 100%|██████████| 4.72k/4.72k [00:00<00:00, 7.51MB/s]
Downloading readme: 100%|███████████████████| 85.4k/85.4k [00:00<00:00, 510kB/s]
Downloading extra modules: 100%|████████████| 46.4k/46.4k [00:00<00:00, 300kB/s]
Downloading data: 100%|████████████████████| 29.1G/29.1G [19:36<00:00, 24.8MB/s]
Downloading data: 100%|████████████████████| 29.3G/29.3G [08:38<00:00, 56.5MB/s]
Downloading data: 100%|████████████████████| 29.0G/29.0G [09:26<00:00, 51.2MB/s]
Downloading data: 100%|████████████████████| 29.2G/29.2G [09:38<00:00, 50.6MB/s]
Downloading data: 100%|███████████████████▉| 29.2G/29.2G [09:37<00:00, 44.1MB/s^Downloading data: 0%| | 106M/29.1G [00:05<23:49, 20.3MB/s]
```
### Steps to reproduce the bug
See above commands/code
### Expected behavior
imagenet-1k is downloaded
### Environment info
- `datasets` version: 2.14.6
- Platform: Linux-6.2.0-34-generic-x86_64-with-glibc2.17
- Python version: 3.8.13
- Huggingface_hub version: 0.15.1
- PyArrow version: 14.0.0
- Pandas version: 1.5.2 | {
"login": "seann999",
"id": 6847529,
"node_id": "MDQ6VXNlcjY4NDc1Mjk=",
"avatar_url": "https://avatars.githubusercontent.com/u/6847529?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/seann999",
"html_url": "https://github.com/seann999",
"followers_url": "https://api.github.com/users/seann999/followers",
"following_url": "https://api.github.com/users/seann999/following{/other_user}",
"gists_url": "https://api.github.com/users/seann999/gists{/gist_id}",
"starred_url": "https://api.github.com/users/seann999/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/seann999/subscriptions",
"organizations_url": "https://api.github.com/users/seann999/orgs",
"repos_url": "https://api.github.com/users/seann999/repos",
"events_url": "https://api.github.com/users/seann999/events{/privacy}",
"received_events_url": "https://api.github.com/users/seann999/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6383/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6383/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6382 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6382/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6382/comments | https://api.github.com/repos/huggingface/datasets/issues/6382/events | https://github.com/huggingface/datasets/issues/6382 | 1,977,400,799 | I_kwDODunzps513L3f | 6,382 | Add CheXpert dataset for vision | {
"login": "SauravMaheshkar",
"id": 61241031,
"node_id": "MDQ6VXNlcjYxMjQxMDMx",
"avatar_url": "https://avatars.githubusercontent.com/u/61241031?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SauravMaheshkar",
"html_url": "https://github.com/SauravMaheshkar",
"followers_url": "https://api.github.com/users/SauravMaheshkar/followers",
"following_url": "https://api.github.com/users/SauravMaheshkar/following{/other_user}",
"gists_url": "https://api.github.com/users/SauravMaheshkar/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SauravMaheshkar/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SauravMaheshkar/subscriptions",
"organizations_url": "https://api.github.com/users/SauravMaheshkar/orgs",
"repos_url": "https://api.github.com/users/SauravMaheshkar/repos",
"events_url": "https://api.github.com/users/SauravMaheshkar/events{/privacy}",
"received_events_url": "https://api.github.com/users/SauravMaheshkar/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
},
{
"id": 2067376369,
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request",
"name": "dataset request",
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset"
}
] | open | false | null | [] | null | 3 | 2023-11-04T15:36:11 | 2024-01-10T11:53:52 | null | NONE | null | ### Feature request
### Name
**CheXpert: A Large Chest Radiograph Dataset with Uncertainty Labels and Expert Comparison**
### Paper
https://arxiv.org/abs/1901.07031
### Data
https://stanfordaimi.azurewebsites.net/datasets/8cbd9ed4-2eb9-4565-affc-111cf4f7ebe2
### Motivation
CheXpert is one of the foundational datasets in medical image classification and can serve as a viable pre-training dataset for radiology classification or for small-scale ablation / exploratory studies.
This could also serve as a good pre-training dataset for Kaggle competitions.
### Your contribution
Would love to make a PR and pre-process / get this into 🤗 | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6382/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6382/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6381 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6381/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6381/comments | https://api.github.com/repos/huggingface/datasets/issues/6381/events | https://github.com/huggingface/datasets/pull/6381 | 1,975,028,470 | PR_kwDODunzps5eeYty | 6,381 | Add my dataset | {
"login": "keyur536",
"id": 103646675,
"node_id": "U_kgDOBi2F0w",
"avatar_url": "https://avatars.githubusercontent.com/u/103646675?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/keyur536",
"html_url": "https://github.com/keyur536",
"followers_url": "https://api.github.com/users/keyur536/followers",
"following_url": "https://api.github.com/users/keyur536/following{/other_user}",
"gists_url": "https://api.github.com/users/keyur536/gists{/gist_id}",
"starred_url": "https://api.github.com/users/keyur536/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/keyur536/subscriptions",
"organizations_url": "https://api.github.com/users/keyur536/orgs",
"repos_url": "https://api.github.com/users/keyur536/repos",
"events_url": "https://api.github.com/users/keyur536/events{/privacy}",
"received_events_url": "https://api.github.com/users/keyur536/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | 3 | 2023-11-02T20:59:52 | 2023-11-08T14:37:46 | 2023-11-06T15:50:14 | NONE | null | ## medical data
**Description:**
This dataset, named "medical data," is a collection of text data from various sources, carefully curated and cleaned for use in natural language processing (NLP) tasks. It consists of a diverse range of text, including articles, books, and online content, covering topics from science to literature.
**Citation:**
If applicable, please include a citation for this dataset to give credit to the original sources or contributors.
**Key Features:**
- Language: The text is primarily in English, but it may include content in other languages as well.
- Use Cases: This dataset is suitable for text classification, language modeling, sentiment analysis, and other NLP tasks.
**Usage:**
To access this dataset, use the `load_your_dataset` function provided in the `your_dataset.py` script within this repository. You can specify the dataset split you need, such as "train," "test," or "validation," to get the data for your specific task.
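For illustration only (the PR references `load_your_dataset` but does not show its signature, so the split argument below is an assumption):
```python
from your_dataset import load_your_dataset  # hypothetical helper from this PR

train_data = load_your_dataset(split="train")
```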
**Contributors:**
- [Keyur Chaudhari]
**Contact:**
If you have any questions or need assistance regarding this dataset, please feel free to contact [[email protected]].
Please note that this dataset is shared under a specific license, which can be found in the [LICENSE](link to your dataset's license) file. Make sure to review and adhere to the terms of the license when using this dataset for your projects.
| {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6381/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6381/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6381",
"html_url": "https://github.com/huggingface/datasets/pull/6381",
"diff_url": "https://github.com/huggingface/datasets/pull/6381.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6381.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/6380 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6380/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6380/comments | https://api.github.com/repos/huggingface/datasets/issues/6380/events | https://github.com/huggingface/datasets/pull/6380 | 1,974,741,221 | PR_kwDODunzps5edaO6 | 6,380 | Fix for continuation behaviour on broken dataset archives due to starving download connections via HTTP-GET | {
"login": "RuntimeRacer",
"id": 49956579,
"node_id": "MDQ6VXNlcjQ5OTU2NTc5",
"avatar_url": "https://avatars.githubusercontent.com/u/49956579?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/RuntimeRacer",
"html_url": "https://github.com/RuntimeRacer",
"followers_url": "https://api.github.com/users/RuntimeRacer/followers",
"following_url": "https://api.github.com/users/RuntimeRacer/following{/other_user}",
"gists_url": "https://api.github.com/users/RuntimeRacer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/RuntimeRacer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/RuntimeRacer/subscriptions",
"organizations_url": "https://api.github.com/users/RuntimeRacer/orgs",
"repos_url": "https://api.github.com/users/RuntimeRacer/repos",
"events_url": "https://api.github.com/users/RuntimeRacer/events{/privacy}",
"received_events_url": "https://api.github.com/users/RuntimeRacer/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | open | false | null | [] | null | 0 | 2023-11-02T17:28:23 | 2023-11-02T17:31:19 | null | NONE | null | This PR proposes a (slightly hacky) fix for an Issue that can occur when downloading large dataset parts over unstable connections.
The underlying issue is also being discussed in https://github.com/huggingface/datasets/issues/5594.
Issue Symptoms & Behaviour:
- Download of a large archive file during dataset download via HTTP-GET fails.
- A silent network exception (which I was unable to identify) is thrown within the `tqdm` download progress.
- Because there is no exception-handling code there, the process just continues, assuming `http_get` completed successfully.
- The pending archive file gets renamed to remove the `.incomplete` extension, even though not all data has been downloaded.
- Also, for reasons I did not investigate, there seems to be no real integrity check for the downloaded files, or it does not detect this problem. This is especially problematic, since the downloader script won't retry downloading this archive after CRC checking, even if it is manually restarted / executed again after running into extraction errors.
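For concreteness, a resume-capable retry loop of the kind described in the fix proposal below might look roughly like this (an illustrative sketch with assumed names, not the PR's actual diff; it assumes the server honours HTTP Range requests):
```python
import os
import requests

def http_get_with_resume(url, path, max_retries=5, chunk_size=1 << 20):
    for attempt in range(max_retries):
        resume_size = os.path.getsize(path) if os.path.exists(path) else 0
        headers = {"Range": f"bytes={resume_size}-"} if resume_size else {}
        with requests.get(url, headers=headers, stream=True, timeout=30) as r:
            r.raise_for_status()
            # with a Range request, Content-Length is the remaining byte count
            total = resume_size + int(r.headers.get("Content-Length", 0))
            with open(path, "ab") as f:
                for chunk in r.iter_content(chunk_size=chunk_size):
                    f.write(chunk)
        if os.path.getsize(path) >= total:
            return  # complete; otherwise the connection starved -> retry with new offset
    raise RuntimeError(f"download of {url} still incomplete after {max_retries} retries")
```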
Fix proposal: add a retry mechanism for HTTP-GET downloads, which adds the following behaviour:
- The download progress thread checks that the downloaded size is valid in case the HTTP connection starves mid-download. If the check fails, a `RuntimeError` is raised.
- The cache downloader code with the retry mechanism monitors for an exception thrown by the download progress thread, and retries the download with an updated `resume_size`.
- The cache downloader will not mark incomplete files that raised an exception during download and exceeded the retry limit as complete. | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6380/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6380/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6380",
"html_url": "https://github.com/huggingface/datasets/pull/6380",
"diff_url": "https://github.com/huggingface/datasets/pull/6380.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6380.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/6379 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6379/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6379/comments | https://api.github.com/repos/huggingface/datasets/issues/6379/events | https://github.com/huggingface/datasets/pull/6379 | 1,974,638,850 | PR_kwDODunzps5edDZL | 6,379 | Avoid redundant warning when encoding NumPy array as `Image` | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | 5 | 2023-11-02T16:37:58 | 2023-11-06T17:53:27 | 2023-11-02T17:08:07 | COLLABORATOR | null | Avoid a redundant warning in `encode_np_array` by removing the identity check as NumPy `dtype`s can be equal without having identical `id`s.
Additionally, fix "unreachable" checks in `encode_np_array`. | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6379/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6379/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6379",
"html_url": "https://github.com/huggingface/datasets/pull/6379",
"diff_url": "https://github.com/huggingface/datasets/pull/6379.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6379.patch",
"merged_at": "2023-11-02T17:08:07"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/6378 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6378/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6378/comments | https://api.github.com/repos/huggingface/datasets/issues/6378/events | https://github.com/huggingface/datasets/pull/6378 | 1,973,942,770 | PR_kwDODunzps5eaqhv | 6,378 | Support pyarrow 14.0.0 | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | 3 | 2023-11-02T10:25:10 | 2023-11-02T15:24:28 | 2023-11-02T15:15:44 | MEMBER | null | Support `pyarrow` 14.0.0.
Fix #6377 and fix #6374 (root cause).
This fix is analog to a previous one:
- #6175 | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6378/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6378/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6378",
"html_url": "https://github.com/huggingface/datasets/pull/6378",
"diff_url": "https://github.com/huggingface/datasets/pull/6378.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6378.patch",
"merged_at": "2023-11-02T15:15:44"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/6377 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6377/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6377/comments | https://api.github.com/repos/huggingface/datasets/issues/6377/events | https://github.com/huggingface/datasets/issues/6377 | 1,973,937,612 | I_kwDODunzps51p-XM | 6,377 | Support pyarrow 14.0.0 | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 0 | 2023-11-02T10:22:08 | 2023-11-02T15:15:45 | 2023-11-02T15:15:45 | MEMBER | null | Support pyarrow 14.0.0 by fixing the root cause of:
- #6374
and revert:
- #6375 | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6377/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6377/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6376 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6376/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6376/comments | https://api.github.com/repos/huggingface/datasets/issues/6376/events | https://github.com/huggingface/datasets/issues/6376 | 1,973,927,468 | I_kwDODunzps51p74s | 6,376 | Caching problem when deleting a dataset | {
"login": "clefourrier",
"id": 22726840,
"node_id": "MDQ6VXNlcjIyNzI2ODQw",
"avatar_url": "https://avatars.githubusercontent.com/u/22726840?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/clefourrier",
"html_url": "https://github.com/clefourrier",
"followers_url": "https://api.github.com/users/clefourrier/followers",
"following_url": "https://api.github.com/users/clefourrier/following{/other_user}",
"gists_url": "https://api.github.com/users/clefourrier/gists{/gist_id}",
"starred_url": "https://api.github.com/users/clefourrier/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/clefourrier/subscriptions",
"organizations_url": "https://api.github.com/users/clefourrier/orgs",
"repos_url": "https://api.github.com/users/clefourrier/repos",
"events_url": "https://api.github.com/users/clefourrier/events{/privacy}",
"received_events_url": "https://api.github.com/users/clefourrier/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | 3 | 2023-11-02T10:15:58 | 2023-12-04T16:53:34 | 2023-12-04T16:53:33 | MEMBER | null | ### Describe the bug
Pushing a dataset with n + m features to a repo that previously contained n features and has since been deleted will fail.
### Steps to reproduce the bug
1. Create a dataset with n features per row
2. `dataset.push_to_hub(YOUR_PATH, SPLIT, token=TOKEN)`
3. Go on the hub, delete the repo at `YOUR_PATH`
4. Update your local dataset to have n + m features per row
5. `dataset.push_to_hub(YOUR_PATH, SPLIT, token=TOKEN)` will fail because of a mismatch in the number of features (see the sketch below)
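A minimal sketch of those steps (the repo id and token are placeholders from the report; the toy columns are mine, not from the original):

```python
from datasets import Dataset

# n features per row
ds = Dataset.from_dict({"question": ["q1"], "answer": ["a1"]})
ds.push_to_hub("YOUR_PATH", token="TOKEN")

# ... delete the repo on the Hub, then extend the local dataset ...

# n + m features per row
ds = Dataset.from_dict({"question": ["q1"], "answer": ["a1"], "score": [0.5]})
ds.push_to_hub("YOUR_PATH", token="TOKEN")  # reported to fail: feature-count mismatch
```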
### Expected behavior
Step 5 should work, or display a message indicating that the cache has not been cleared
### Environment info
- `datasets` version: 2.12.0
- Platform: Linux-5.15.0-88-generic-x86_64-with-glibc2.31
- Python version: 3.10.10
- Huggingface_hub version: 0.16.4
- PyArrow version: 11.0.0
- Pandas version: 2.0.0
| {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6376/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6376/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6375 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6375/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6375/comments | https://api.github.com/repos/huggingface/datasets/issues/6375/events | https://github.com/huggingface/datasets/pull/6375 | 1,973,877,879 | PR_kwDODunzps5eacao | 6,375 | Temporarily pin pyarrow < 14.0.0 | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | 3 | 2023-11-02T09:48:58 | 2023-11-02T10:22:33 | 2023-11-02T10:11:19 | MEMBER | null | Temporarily pin `pyarrow` < 14.0.0 until a permanent solution is found.
Hot fix #6374. | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6375/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6375/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6375",
"html_url": "https://github.com/huggingface/datasets/pull/6375",
"diff_url": "https://github.com/huggingface/datasets/pull/6375.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6375.patch",
"merged_at": "2023-11-02T10:11:19"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/6374 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6374/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6374/comments | https://api.github.com/repos/huggingface/datasets/issues/6374/events | https://github.com/huggingface/datasets/issues/6374 | 1,973,857,428 | I_kwDODunzps51pqyU | 6,374 | CI is broken: TypeError: Couldn't cast array | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 0 | 2023-11-02T09:37:06 | 2023-11-02T10:11:20 | 2023-11-02T10:11:20 | MEMBER | null | See: https://github.com/huggingface/datasets/actions/runs/6730567226/job/18293518039
```
FAILED tests/test_table.py::test_cast_sliced_fixed_size_array_to_features - TypeError: Couldn't cast array of type
fixed_size_list<item: int32>[3]
to
Sequence(feature=Value(dtype='int64', id=None), length=3, id=None)
``` | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6374/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6374/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6373 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6373/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6373/comments | https://api.github.com/repos/huggingface/datasets/issues/6373/events | https://github.com/huggingface/datasets/pull/6373 | 1,973,349,695 | PR_kwDODunzps5eYsZc | 6,373 | Fix typo in `Dataset.map` docstring | {
"login": "bryant1410",
"id": 3905501,
"node_id": "MDQ6VXNlcjM5MDU1MDE=",
"avatar_url": "https://avatars.githubusercontent.com/u/3905501?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bryant1410",
"html_url": "https://github.com/bryant1410",
"followers_url": "https://api.github.com/users/bryant1410/followers",
"following_url": "https://api.github.com/users/bryant1410/following{/other_user}",
"gists_url": "https://api.github.com/users/bryant1410/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bryant1410/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bryant1410/subscriptions",
"organizations_url": "https://api.github.com/users/bryant1410/orgs",
"repos_url": "https://api.github.com/users/bryant1410/repos",
"events_url": "https://api.github.com/users/bryant1410/events{/privacy}",
"received_events_url": "https://api.github.com/users/bryant1410/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | 2 | 2023-11-02T01:36:49 | 2023-11-02T15:18:22 | 2023-11-02T10:11:38 | CONTRIBUTOR | null | null | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6373/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6373/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6373",
"html_url": "https://github.com/huggingface/datasets/pull/6373",
"diff_url": "https://github.com/huggingface/datasets/pull/6373.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6373.patch",
"merged_at": "2023-11-02T10:11:38"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/6372 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6372/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6372/comments | https://api.github.com/repos/huggingface/datasets/issues/6372/events | https://github.com/huggingface/datasets/pull/6372 | 1,972,837,794 | PR_kwDODunzps5eW9kO | 6,372 | do not try to download from HF GCS for generator | {
"login": "yundai424",
"id": 43726198,
"node_id": "MDQ6VXNlcjQzNzI2MTk4",
"avatar_url": "https://avatars.githubusercontent.com/u/43726198?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yundai424",
"html_url": "https://github.com/yundai424",
"followers_url": "https://api.github.com/users/yundai424/followers",
"following_url": "https://api.github.com/users/yundai424/following{/other_user}",
"gists_url": "https://api.github.com/users/yundai424/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yundai424/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yundai424/subscriptions",
"organizations_url": "https://api.github.com/users/yundai424/orgs",
"repos_url": "https://api.github.com/users/yundai424/repos",
"events_url": "https://api.github.com/users/yundai424/events{/privacy}",
"received_events_url": "https://api.github.com/users/yundai424/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | 2 | 2023-11-01T17:57:11 | 2023-11-02T16:02:52 | 2023-11-02T15:52:09 | CONTRIBUTOR | null | attempt to fix https://github.com/huggingface/datasets/issues/6371 | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6372/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6372/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6372",
"html_url": "https://github.com/huggingface/datasets/pull/6372",
"diff_url": "https://github.com/huggingface/datasets/pull/6372.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6372.patch",
"merged_at": "2023-11-02T15:52:09"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/6371 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6371/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6371/comments | https://api.github.com/repos/huggingface/datasets/issues/6371/events | https://github.com/huggingface/datasets/issues/6371 | 1,972,807,579 | I_kwDODunzps51lqeb | 6,371 | `Dataset.from_generator` should not try to download from HF GCS | {
"login": "yundai424",
"id": 43726198,
"node_id": "MDQ6VXNlcjQzNzI2MTk4",
"avatar_url": "https://avatars.githubusercontent.com/u/43726198?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yundai424",
"html_url": "https://github.com/yundai424",
"followers_url": "https://api.github.com/users/yundai424/followers",
"following_url": "https://api.github.com/users/yundai424/following{/other_user}",
"gists_url": "https://api.github.com/users/yundai424/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yundai424/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yundai424/subscriptions",
"organizations_url": "https://api.github.com/users/yundai424/orgs",
"repos_url": "https://api.github.com/users/yundai424/repos",
"events_url": "https://api.github.com/users/yundai424/events{/privacy}",
"received_events_url": "https://api.github.com/users/yundai424/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | 1 | 2023-11-01T17:36:17 | 2023-11-02T15:52:10 | 2023-11-02T15:52:10 | CONTRIBUTOR | null | ### Describe the bug
When using [`Dataset.from_generator`](https://github.com/huggingface/datasets/blob/c9c1166e1cf81d38534020f9c167b326585339e5/src/datasets/arrow_dataset.py#L1072) with `streaming=False`, the internal logic calls [`download_and_prepare`](https://github.com/huggingface/datasets/blob/main/src/datasets/io/generator.py#L47), which attempts to download from HF GCS. This is redundant, because the user has already provided the generator from which the data should be drawn.
If someone calls `Dataset.from_generator` from an environment without external internet access (for example, an internal production machine) and doesn't set `HF_DATASETS_OFFLINE=1`, the process gets stuck trying to establish the connection.
### Steps to reproduce the bug
```python
import datasets

def gen():
    for _ in range(100):
        yield {"text": "dummy text"}

dataset = datasets.Dataset.from_generator(gen)
```
Running this minimal example in any environment without access to HF GCS reproduces the problem
### Expected behavior
`try_from_hf_gcs` should be set to False here https://github.com/huggingface/datasets/blob/c9c1166e1cf81d38534020f9c167b326585339e5/src/datasets/io/generator.py#L51
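A sketch of what the linked call site could look like with the flag disabled (the surrounding arguments are abbreviated assumptions, not a tested patch):

```python
# datasets/io/generator.py (proposed change, sketched)
self.builder.download_and_prepare(
    download_config=download_config,
    download_mode=download_mode,
    verification_mode=verification_mode,
    try_from_hf_gcs=False,  # a generator-backed builder has nothing to fetch from HF GCS
    num_proc=self.num_proc,
)
```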
### Environment info
- `datasets` version: 2.14.4
- Platform: Linux-3.10.0-1160.90.1.el7.x86_64-x86_64-with-glibc2.17
- Python version: 3.10.12
- Huggingface_hub version: 0.17.1
- PyArrow version: 12.0.1
- Pandas version: 2.0.3 | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6371/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6371/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6370 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6370/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6370/comments | https://api.github.com/repos/huggingface/datasets/issues/6370/events | https://github.com/huggingface/datasets/issues/6370 | 1,972,073,909 | I_kwDODunzps51i3W1 | 6,370 | TensorDataset format does not work with Trainer from transformers | {
"login": "jinzzasol",
"id": 49014051,
"node_id": "MDQ6VXNlcjQ5MDE0MDUx",
"avatar_url": "https://avatars.githubusercontent.com/u/49014051?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jinzzasol",
"html_url": "https://github.com/jinzzasol",
"followers_url": "https://api.github.com/users/jinzzasol/followers",
"following_url": "https://api.github.com/users/jinzzasol/following{/other_user}",
"gists_url": "https://api.github.com/users/jinzzasol/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jinzzasol/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jinzzasol/subscriptions",
"organizations_url": "https://api.github.com/users/jinzzasol/orgs",
"repos_url": "https://api.github.com/users/jinzzasol/repos",
"events_url": "https://api.github.com/users/jinzzasol/events{/privacy}",
"received_events_url": "https://api.github.com/users/jinzzasol/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | 2 | 2023-11-01T10:09:54 | 2023-11-29T16:31:08 | 2023-11-29T16:31:08 | NONE | null | ### Describe the bug
The model was built to fine-tune a BERT model for relation extraction.
`trainer.train()` returns the error `TypeError: vars() argument must have __dict__ attribute` when `train_dataset` is generated from `torch.utils.data.TensorDataset`.
However, according to the documentation, `torch.utils.data.TensorDataset` is an accepted data format.

The Transformers `Trainer` is supposed to accept `train_dataset` in the `torch.utils.data.TensorDataset` format, but it returns the error message *"TypeError: vars() argument must have `__dict__` attribute"*.
```
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-30-5df728c929a2> in <cell line: 1>()
----> 1 trainer.train()
2 trainer.evaluate(test_dataset)
9 frames
/usr/local/lib/python3.10/dist-packages/transformers/data/data_collator.py in <listcomp>(.0)
107
108 if not isinstance(features[0], Mapping):
--> 109 features = [vars(f) for f in features]
110 first = features[0]
111 batch = {}
TypeError: vars() argument must have __dict__ attribute
```
### Steps to reproduce the bug
Create train_dataset using `torch.utils.data.TensorDataset`, for instance,
```train_dataset = torch.utils.data.TensorDataset(train_input_ids, train_attention_masks, train_labels)```
Feed this `train_dataset` to your trainer and run `trainer.train()`:
```
trainer = Trainer(
    model,
    training_args,
    train_dataset=train_dataset,
    eval_dataset=dev_dataset,
    compute_metrics=compute_metrics,
)
```
### Expected behavior
Trainer should start training
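For reference, a workaround that should satisfy the collator (my assumption, not part of the original report) is to pass mapping-style examples, e.g. via `datasets.Dataset.from_dict`, since the default data collator calls `vars()` on anything that is not a `Mapping`:

```python
from datasets import Dataset

# wrap the tensors from the report into a mapping-style dataset
train_dataset = Dataset.from_dict({
    "input_ids": train_input_ids,
    "attention_mask": train_attention_masks,
    "labels": train_labels,
})
```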
### Environment info
It is running on Google Colab
- `datasets` version: 2.14.6
- Platform: Linux-5.15.120+-x86_64-with-glibc2.35
- Python version: 3.10.12
- Huggingface_hub version: 0.17.3
- PyArrow version: 9.0.0
- Pandas version: 1.5.3 | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6370/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6370/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6369 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6369/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6369/comments | https://api.github.com/repos/huggingface/datasets/issues/6369/events | https://github.com/huggingface/datasets/issues/6369 | 1,971,794,108 | I_kwDODunzps51hzC8 | 6,369 | Multi process map did not load cache file correctly | {
"login": "enze5088",
"id": 14285786,
"node_id": "MDQ6VXNlcjE0Mjg1Nzg2",
"avatar_url": "https://avatars.githubusercontent.com/u/14285786?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/enze5088",
"html_url": "https://github.com/enze5088",
"followers_url": "https://api.github.com/users/enze5088/followers",
"following_url": "https://api.github.com/users/enze5088/following{/other_user}",
"gists_url": "https://api.github.com/users/enze5088/gists{/gist_id}",
"starred_url": "https://api.github.com/users/enze5088/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/enze5088/subscriptions",
"organizations_url": "https://api.github.com/users/enze5088/orgs",
"repos_url": "https://api.github.com/users/enze5088/repos",
"events_url": "https://api.github.com/users/enze5088/events{/privacy}",
"received_events_url": "https://api.github.com/users/enze5088/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | 3 | 2023-11-01T06:36:54 | 2023-11-30T16:04:46 | 2023-11-30T16:04:45 | NONE | null | ### Describe the bug
When training a model on multiple GPUs with DDP, the dataset is tokenized again in every process after the main process has already tokenized it.


Code is modified from [run_clm.py](https://github.com/huggingface/transformers/blob/7d8ff3629b2725ec43ace99c1a6e87ac1978d433/examples/pytorch/language-modeling/run_clm.py#L484)
### Steps to reproduce the bug
```python
block_size = data_args.block_size
IGNORE_INDEX = -100
Ignore_Input = False

def tokenize_function(examples):
    sources = []
    targets = []
    for instruction, inputs, output in zip(examples['instruction'], examples['input'], examples['output']):
        source = instruction + inputs
        target = f"{output}{tokenizer.eos_token}"
        sources.append(source)
        targets.append(target)

    tokenized_sources = tokenizer(sources, return_attention_mask=False)
    tokenized_targets = tokenizer(targets, return_attention_mask=False,
                                  add_special_tokens=False)

    all_input_ids = []
    all_labels = []
    for s, t in zip(tokenized_sources['input_ids'], tokenized_targets['input_ids']):
        if len(s) > block_size and Ignore_Input == False:
            # print(s)
            continue
        input_ids = torch.LongTensor(s + t)[:block_size]
        if Ignore_Input:
            labels = torch.LongTensor([IGNORE_INDEX] * len(s) + t)[:block_size]
        else:
            labels = input_ids
        assert len(input_ids) == len(labels)
        all_input_ids.append(input_ids)
        all_labels.append(labels)

    results = {
        'input_ids': all_input_ids,
        'labels': all_labels,
    }
    return results

with training_args.main_process_first(desc="dataset map tokenization ", local=False):
    # print('local_rank', training_args.local_rank)
    if not data_args.streaming:
        tokenized_datasets = raw_datasets.map(
            tokenize_function,
            batched=True,
            num_proc=data_args.preprocessing_num_workers,
            remove_columns=column_names,
            load_from_cache_file=not data_args.overwrite_cache,
            desc="Running tokenizer on dataset ",
        )
    else:
        tokenized_datasets = raw_datasets.map(
            tokenize_function,
            batched=True,
            remove_columns=column_names,
            desc="Running tokenizer on dataset "
        )
```
### Expected behavior
This code should tokenize the dataset only in the main process; the other processes should wait and then load it from the cache.
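One way to check whether the cache is actually reused across ranks (a debugging sketch; the split name is assumed):

```python
import torch.distributed as dist

# after the map call: every rank should report the same cache files
rank = dist.get_rank() if dist.is_initialized() else 0
print(f"rank {rank}: {tokenized_datasets['train'].cache_files}")
```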
### Environment info
transformers == 4.34.1
datasets == 2.14.5 | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6369/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6369/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6368 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6368/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6368/comments | https://api.github.com/repos/huggingface/datasets/issues/6368/events | https://github.com/huggingface/datasets/pull/6368 | 1,971,193,692 | PR_kwDODunzps5eRZwQ | 6,368 | Fix python formatting for complex types in `format_table` | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | 4 | 2023-10-31T19:48:08 | 2023-11-02T14:42:28 | 2023-11-02T14:21:16 | COLLABORATOR | null | Fix #6366 | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6368/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6368/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6368",
"html_url": "https://github.com/huggingface/datasets/pull/6368",
"diff_url": "https://github.com/huggingface/datasets/pull/6368.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6368.patch",
"merged_at": "2023-11-02T14:21:16"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/6367 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6367/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6367/comments | https://api.github.com/repos/huggingface/datasets/issues/6367/events | https://github.com/huggingface/datasets/pull/6367 | 1,971,015,861 | PR_kwDODunzps5eQy1D | 6,367 | Fix time measuring snippet in docs | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | 4 | 2023-10-31T17:57:17 | 2023-10-31T18:35:53 | 2023-10-31T18:24:02 | COLLABORATOR | null | Fix https://discuss.huggingface.co/t/attributeerror-enter/60509 | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6367/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6367/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6367",
"html_url": "https://github.com/huggingface/datasets/pull/6367",
"diff_url": "https://github.com/huggingface/datasets/pull/6367.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6367.patch",
"merged_at": "2023-10-31T18:24:02"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/6366 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6366/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6366/comments | https://api.github.com/repos/huggingface/datasets/issues/6366/events | https://github.com/huggingface/datasets/issues/6366 | 1,970,213,490 | I_kwDODunzps51bxJy | 6,366 | with_format() function returns bytes instead of PIL images even when image column is not part of "columns" | {
"login": "leot13",
"id": 17809020,
"node_id": "MDQ6VXNlcjE3ODA5MDIw",
"avatar_url": "https://avatars.githubusercontent.com/u/17809020?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/leot13",
"html_url": "https://github.com/leot13",
"followers_url": "https://api.github.com/users/leot13/followers",
"following_url": "https://api.github.com/users/leot13/following{/other_user}",
"gists_url": "https://api.github.com/users/leot13/gists{/gist_id}",
"starred_url": "https://api.github.com/users/leot13/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/leot13/subscriptions",
"organizations_url": "https://api.github.com/users/leot13/orgs",
"repos_url": "https://api.github.com/users/leot13/repos",
"events_url": "https://api.github.com/users/leot13/events{/privacy}",
"received_events_url": "https://api.github.com/users/leot13/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | 1 | 2023-10-31T11:10:48 | 2023-11-02T14:21:17 | 2023-11-02T14:21:17 | NONE | null | ### Describe the bug
When using the `with_format()` function on a dataset containing images, the type of the image column is changed to bytes even if that column is not among the columns passed to the function.
Here is a minimal reproduction of the bug:
https://colab.research.google.com/drive/1hyaOspgyhB41oiR1-tXE3k_gJCdJUQCf?usp=sharing
### Steps to reproduce the bug
1. Load the image dataset
2. apply with_format(columns=["text"])
3. Check the type of the images in the "image" column before and after applying `with_format` (see the sketch below)
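A condensed version of the linked notebook (the dataset name is a stand-in; any dataset with an `Image` column behaves the same):

```python
from datasets import load_dataset

ds = load_dataset("beans", split="train")
print(type(ds[0]["image"]))  # a PIL image, as expected

ds = ds.with_format(columns=["labels"], output_all_columns=True)
print(type(ds[0]["image"]))  # reported: raw bytes instead of a PIL image
```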
### Expected behavior
The type should stay the same, but it does not
### Environment info
datasets==2.14.6
| {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6366/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6366/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6365 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6365/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6365/comments | https://api.github.com/repos/huggingface/datasets/issues/6365/events | https://github.com/huggingface/datasets/issues/6365 | 1,970,140,392 | I_kwDODunzps51bfTo | 6,365 | Parquet size grows exponential for categorical data | {
"login": "aseganti",
"id": 82567957,
"node_id": "MDQ6VXNlcjgyNTY3OTU3",
"avatar_url": "https://avatars.githubusercontent.com/u/82567957?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/aseganti",
"html_url": "https://github.com/aseganti",
"followers_url": "https://api.github.com/users/aseganti/followers",
"following_url": "https://api.github.com/users/aseganti/following{/other_user}",
"gists_url": "https://api.github.com/users/aseganti/gists{/gist_id}",
"starred_url": "https://api.github.com/users/aseganti/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/aseganti/subscriptions",
"organizations_url": "https://api.github.com/users/aseganti/orgs",
"repos_url": "https://api.github.com/users/aseganti/repos",
"events_url": "https://api.github.com/users/aseganti/events{/privacy}",
"received_events_url": "https://api.github.com/users/aseganti/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | 1 | 2023-10-31T10:29:02 | 2023-10-31T10:49:17 | 2023-10-31T10:49:17 | NONE | null | ### Describe the bug
It seems that when saving a data frame that contains a categorical column, the size can grow exponentially.
This seems to happen because when we save categorical data to parquet, we save the data plus all the categories defined on the original column. This happens even when those categories are not present in the rows being saved.
### Steps to reproduce the bug
To reproduce the bug, it is enough to run this script:
```python
import pandas as pd
import os

if __name__ == "__main__":
    for n in [10, 1e2, 1e3, 1e4, 1e5]:
        for n_col in [1, 10, 100, 1000, 10000]:
            # note: the dict key must depend on `col` to actually create n_col columns
            input = pd.DataFrame([{f"{col}": f"{i}_cat" for col in range(n_col)} for i in range(int(n))])
            input.iloc[0:100].to_parquet("a.parquet")

            for col in input.columns:
                input[col] = input[col].astype("category")

            input.iloc[0:100].to_parquet("b.parquet")

            a_size_mb = os.stat("a.parquet").st_size / (1024 * 1024)
            b_size_mb = os.stat("b.parquet").st_size / (1024 * 1024)
            print(f"{n} {n_col} {a_size_mb} {b_size_mb} {100*b_size_mb/a_size_mb:.2f}")
```
That produces this output:
<img width="464" alt="Screenshot 2023-10-31 at 11 25 25" src="https://github.com/huggingface/datasets/assets/82567957/2b8a9284-7f9e-4c10-a006-0a27236ebd15">
### Expected behavior
In my opinion either:
1. The two files should have (almost) the same size
2. There should be a warning telling the user that such a difference in size is possible (a workaround is sketched below)
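For what it's worth, dropping the unused categories before saving avoids the blow-up (a sketch based on the script above, using the public pandas API):

```python
sliced = input.iloc[0:100].copy()
for col in sliced.columns:
    sliced[col] = sliced[col].cat.remove_unused_categories()
sliced.to_parquet("b.parquet")  # now comparable in size to the non-categorical file
```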
### Environment info
Python 3.8.18
pandas==2.0.3
numpy==1.24.4 | {
"login": "aseganti",
"id": 82567957,
"node_id": "MDQ6VXNlcjgyNTY3OTU3",
"avatar_url": "https://avatars.githubusercontent.com/u/82567957?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/aseganti",
"html_url": "https://github.com/aseganti",
"followers_url": "https://api.github.com/users/aseganti/followers",
"following_url": "https://api.github.com/users/aseganti/following{/other_user}",
"gists_url": "https://api.github.com/users/aseganti/gists{/gist_id}",
"starred_url": "https://api.github.com/users/aseganti/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/aseganti/subscriptions",
"organizations_url": "https://api.github.com/users/aseganti/orgs",
"repos_url": "https://api.github.com/users/aseganti/repos",
"events_url": "https://api.github.com/users/aseganti/events{/privacy}",
"received_events_url": "https://api.github.com/users/aseganti/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6365/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6365/timeline | null | not_planned | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6364 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6364/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6364/comments | https://api.github.com/repos/huggingface/datasets/issues/6364/events | https://github.com/huggingface/datasets/issues/6364 | 1,969,136,106 | I_kwDODunzps51XqHq | 6,364 | ArrowNotImplementedError: Unsupported cast from string to list using function cast_list | {
"login": "divyakrishna-devisetty",
"id": 32887094,
"node_id": "MDQ6VXNlcjMyODg3MDk0",
"avatar_url": "https://avatars.githubusercontent.com/u/32887094?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/divyakrishna-devisetty",
"html_url": "https://github.com/divyakrishna-devisetty",
"followers_url": "https://api.github.com/users/divyakrishna-devisetty/followers",
"following_url": "https://api.github.com/users/divyakrishna-devisetty/following{/other_user}",
"gists_url": "https://api.github.com/users/divyakrishna-devisetty/gists{/gist_id}",
"starred_url": "https://api.github.com/users/divyakrishna-devisetty/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/divyakrishna-devisetty/subscriptions",
"organizations_url": "https://api.github.com/users/divyakrishna-devisetty/orgs",
"repos_url": "https://api.github.com/users/divyakrishna-devisetty/repos",
"events_url": "https://api.github.com/users/divyakrishna-devisetty/events{/privacy}",
"received_events_url": "https://api.github.com/users/divyakrishna-devisetty/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | 2 | 2023-10-30T20:14:01 | 2023-10-31T19:21:23 | 2023-10-31T19:21:23 | NONE | null | Hi,
I am trying to load a local CSV dataset (similar to explodinggradients_fiqa) using `load_dataset`. When I try to pass `features`, I am facing the issue below.
CSV data sample (golden_dataset.csv):

| Question | Context | answer | groundtruth |
| --- | --- | --- | --- |
| "what is abc?" | "abc is this and that" | "abc is this " | "abc is this and that" |
```python
import csv

# built based on https://huggingface.co/datasets/explodinggradients/fiqa/viewer/ragas_eval?row=0
# note: the dict keys must match the DictWriter fieldnames below,
# so the original 'groundtruth' key is renamed to 'ground_truths'
mydict = [
    {'question': "what is abc?", 'contexts': ["abc is this and that"], 'answer': "abc is this ", 'ground_truths': ["abc is this and that"]},
    {'question': "what is abc?", 'contexts': ["abc is this and that"], 'answer': "abc is this ", 'ground_truths': ["abc is this and that"]},
    {'question': "what is abc?", 'contexts': ["abc is this and that"], 'answer': "abc is this ", 'ground_truths': ["abc is this and that"]},
]

fields = ['question', 'contexts', 'answer', 'ground_truths']

with open('golden_dataset.csv', 'w', newline='\n') as file:
    writer = csv.DictWriter(file, fieldnames=fields)
    writer.writeheader()
    for row in mydict:
        writer.writerow(row)
```
Retrieved dataset:
```
DatasetDict({
    train: Dataset({
        features: ['question', 'contexts', 'answer', 'ground_truths'],
        num_rows: 1
    })
})
```
Code to reproduce the issue:
```
from datasets import load_dataset, Features, Sequence, Value
encode_features = Features(
    {
        "question": Value(dtype='string', id=0),
        "contexts": Sequence(feature=Value(dtype='string', id=1)),
        "answer": Value(dtype='string', id=2),
        "ground_truths": Sequence(feature=Value(dtype='string', id=3)),
    }
)
eval_dataset = load_dataset('csv', data_files='/golden_dataset.csv', features=encode_features)
```
Error trace:
```
---------------------------------------------------------------------------
ArrowNotImplementedError Traceback (most recent call last)
File ~/anaconda3/envs/python3/lib/python3.10/site-packages/datasets/builder.py:1925, in ArrowBasedBuilder._prepare_split_single(self, gen_kwargs, fpath, file_format, max_shard_size, job_id)
1924 _time = time.time()
-> 1925 for _, table in generator:
1926 if max_shard_size is not None and writer._num_bytes > max_shard_size:
File ~/anaconda3/envs/python3/lib/python3.10/site-packages/datasets/packaged_modules/csv/csv.py:192, in Csv._generate_tables(self, files)
189 # Uncomment for debugging (will print the Arrow table size and elements)
190 # logger.warning(f"pa_table: {pa_table} num rows: {pa_table.num_rows}")
191 # logger.warning('\n'.join(str(pa_table.slice(i, 1).to_pydict()) for i in range(pa_table.num_rows)))
--> 192 yield (file_idx, batch_idx), self._cast_table(pa_table)
193 except ValueError as e:
File ~/anaconda3/envs/python3/lib/python3.10/site-packages/datasets/packaged_modules/csv/csv.py:167, in Csv._cast_table(self, pa_table)
165 if all(not require_storage_cast(feature) for feature in self.config.features.values()):
166 # cheaper cast
--> 167 pa_table = pa.Table.from_arrays([pa_table[field.name] for field in schema], schema=schema)
168 else:
169 # more expensive cast; allows str <-> int/float or str to Audio for example
File ~/anaconda3/envs/python3/lib/python3.10/site-packages/pyarrow/table.pxi:3781, in pyarrow.lib.Table.from_arrays()
File ~/anaconda3/envs/python3/lib/python3.10/site-packages/pyarrow/table.pxi:1449, in pyarrow.lib._sanitize_arrays()
File ~/anaconda3/envs/python3/lib/python3.10/site-packages/pyarrow/array.pxi:354, in pyarrow.lib.asarray()
File ~/anaconda3/envs/python3/lib/python3.10/site-packages/pyarrow/table.pxi:551, in pyarrow.lib.ChunkedArray.cast()
File ~/anaconda3/envs/python3/lib/python3.10/site-packages/pyarrow/compute.py:400, in cast(arr, target_type, safe, options, memory_pool)
399 options = CastOptions.safe(target_type)
--> 400 return call_function("cast", [arr], options, memory_pool)
File ~/anaconda3/envs/python3/lib/python3.10/site-packages/pyarrow/_compute.pyx:572, in pyarrow._compute.call_function()
File ~/anaconda3/envs/python3/lib/python3.10/site-packages/pyarrow/_compute.pyx:367, in pyarrow._compute.Function.call()
File ~/anaconda3/envs/python3/lib/python3.10/site-packages/pyarrow/error.pxi:144, in pyarrow.lib.pyarrow_internal_check_status()
File ~/anaconda3/envs/python3/lib/python3.10/site-packages/pyarrow/error.pxi:121, in pyarrow.lib.check_status()
ArrowNotImplementedError: Unsupported cast from string to list using function cast_list
The above exception was the direct cause of the following exception:
DatasetGenerationError Traceback (most recent call last)
Cell In[57], line 1
----> 1 eval_dataset = load_dataset('csv', data_files='/golden_dataset.csv', features = encode_features )
File ~/anaconda3/envs/python3/lib/python3.10/site-packages/datasets/load.py:2153, in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, verification_mode, ignore_verifications, keep_in_memory, save_infos, revision, token, use_auth_token, task, streaming, num_proc, storage_options, **config_kwargs)
2150 try_from_hf_gcs = path not in _PACKAGED_DATASETS_MODULES
2152 # Download and prepare data
-> 2153 builder_instance.download_and_prepare(
2154 download_config=download_config,
2155 download_mode=download_mode,
2156 verification_mode=verification_mode,
2157 try_from_hf_gcs=try_from_hf_gcs,
2158 num_proc=num_proc,
2159 storage_options=storage_options,
2160 )
2162 # Build dataset for splits
2163 keep_in_memory = (
2164 keep_in_memory if keep_in_memory is not None else is_small_dataset(builder_instance.info.dataset_size)
2165 )
File ~/anaconda3/envs/python3/lib/python3.10/site-packages/datasets/builder.py:954, in DatasetBuilder.download_and_prepare(self, output_dir, download_config, download_mode, verification_mode, ignore_verifications, try_from_hf_gcs, dl_manager, base_path, use_auth_token, file_format, max_shard_size, num_proc, storage_options, **download_and_prepare_kwargs)
952 if num_proc is not None:
953 prepare_split_kwargs["num_proc"] = num_proc
--> 954 self._download_and_prepare(
955 dl_manager=dl_manager,
956 verification_mode=verification_mode,
957 **prepare_split_kwargs,
958 **download_and_prepare_kwargs,
959 )
960 # Sync info
961 self.info.dataset_size = sum(split.num_bytes for split in self.info.splits.values())
File ~/anaconda3/envs/python3/lib/python3.10/site-packages/datasets/builder.py:1049, in DatasetBuilder._download_and_prepare(self, dl_manager, verification_mode, **prepare_split_kwargs)
1045 split_dict.add(split_generator.split_info)
1047 try:
1048 # Prepare split will record examples associated to the split
-> 1049 self._prepare_split(split_generator, **prepare_split_kwargs)
1050 except OSError as e:
1051 raise OSError(
1052 "Cannot find data file. "
1053 + (self.manual_download_instructions or "")
1054 + "\nOriginal error:\n"
1055 + str(e)
1056 ) from None
File ~/anaconda3/envs/python3/lib/python3.10/site-packages/datasets/builder.py:1813, in ArrowBasedBuilder._prepare_split(self, split_generator, file_format, num_proc, max_shard_size)
1811 job_id = 0
1812 with pbar:
-> 1813 for job_id, done, content in self._prepare_split_single(
1814 gen_kwargs=gen_kwargs, job_id=job_id, **_prepare_split_args
1815 ):
1816 if done:
1817 result = content
File ~/anaconda3/envs/python3/lib/python3.10/site-packages/datasets/builder.py:1958, in ArrowBasedBuilder._prepare_split_single(self, gen_kwargs, fpath, file_format, max_shard_size, job_id)
1956 if isinstance(e, SchemaInferenceError) and e.__context__ is not None:
1957 e = e.__context__
-> 1958 raise DatasetGenerationError("An error occurred while generating the dataset") from e
1960 yield job_id, True, (total_num_examples, total_num_bytes, writer._features, num_shards, shard_lengths)
DatasetGenerationError: An error occurred while generating the dataset
```
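For reference, here is a workaround sketch (my own assumption, not a confirmed fix): since `csv.DictWriter` stringifies the list cells, the list columns can be loaded as plain strings and parsed back into lists afterwards:
```python
import ast

from datasets import load_dataset

# Load without Sequence features first, so the stringified lists stay strings
raw_dataset = load_dataset("csv", data_files="golden_dataset.csv")

def parse_list_columns(example):
    # ast.literal_eval turns "['abc is this and that']" back into a Python list
    example["contexts"] = ast.literal_eval(example["contexts"])
    example["ground_truths"] = ast.literal_eval(example["ground_truths"])
    return example

eval_dataset = raw_dataset.map(parse_list_columns)
```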
Environment Info:
datasets version: 2.14.5
Python version: 3.10.8
PyArrow version: 12.0.1
Pandas version: 2.0.3
I have also tried loading the dataset first and then using `cast_column`, as well as `save_to_disk` followed by `load_from_disk`. | {
"login": "divyakrishna-devisetty",
"id": 32887094,
"node_id": "MDQ6VXNlcjMyODg3MDk0",
"avatar_url": "https://avatars.githubusercontent.com/u/32887094?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/divyakrishna-devisetty",
"html_url": "https://github.com/divyakrishna-devisetty",
"followers_url": "https://api.github.com/users/divyakrishna-devisetty/followers",
"following_url": "https://api.github.com/users/divyakrishna-devisetty/following{/other_user}",
"gists_url": "https://api.github.com/users/divyakrishna-devisetty/gists{/gist_id}",
"starred_url": "https://api.github.com/users/divyakrishna-devisetty/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/divyakrishna-devisetty/subscriptions",
"organizations_url": "https://api.github.com/users/divyakrishna-devisetty/orgs",
"repos_url": "https://api.github.com/users/divyakrishna-devisetty/repos",
"events_url": "https://api.github.com/users/divyakrishna-devisetty/events{/privacy}",
"received_events_url": "https://api.github.com/users/divyakrishna-devisetty/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6364/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6364/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6363 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6363/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6363/comments | https://api.github.com/repos/huggingface/datasets/issues/6363/events | https://github.com/huggingface/datasets/issues/6363 | 1,968,891,277 | I_kwDODunzps51WuWN | 6,363 | dataset.transform() hangs indefinitely while finetuning the stable diffusion XL | {
"login": "bhosalems",
"id": 10846405,
"node_id": "MDQ6VXNlcjEwODQ2NDA1",
"avatar_url": "https://avatars.githubusercontent.com/u/10846405?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bhosalems",
"html_url": "https://github.com/bhosalems",
"followers_url": "https://api.github.com/users/bhosalems/followers",
"following_url": "https://api.github.com/users/bhosalems/following{/other_user}",
"gists_url": "https://api.github.com/users/bhosalems/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bhosalems/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bhosalems/subscriptions",
"organizations_url": "https://api.github.com/users/bhosalems/orgs",
"repos_url": "https://api.github.com/users/bhosalems/repos",
"events_url": "https://api.github.com/users/bhosalems/events{/privacy}",
"received_events_url": "https://api.github.com/users/bhosalems/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | 7 | 2023-10-30T17:34:05 | 2023-11-22T00:29:21 | 2023-11-22T00:29:21 | NONE | null | ### Describe the bug
Multi-GPU fine-tuning of Stable Diffusion XL by following https://github.com/huggingface/diffusers/blob/main/examples/text_to_image/README_sdxl.md hangs indefinitely.
### Steps to reproduce the bug
```
accelerate launch train_text_to_image_sdxl.py \
  --pretrained_model_name_or_path=$MODEL_NAME \
  --pretrained_vae_model_name_or_path=$VAE_NAME \
  --dataset_name=$DATASET_NAME \
  --enable_xformers_memory_efficient_attention \
  --resolution=512 --center_crop --random_flip \
  --proportion_empty_prompts=0.2 \
  --train_batch_size=1 \
  --gradient_accumulation_steps=4 --gradient_checkpointing \
  --max_train_steps=10000 \
  --use_8bit_adam \
  --learning_rate=1e-06 --lr_scheduler="constant" --lr_warmup_steps=0 \
  --mixed_precision="fp16" \
  --report_to="wandb" \
  --validation_prompt="a cute Sundar Pichai creature" --validation_epochs 5 \
  --checkpointing_steps=5000 \
  --output_dir="sdxl-pokemon-model"
```
### Expected behavior
It should start the training as it does for single-GPU training. I opened an issue in diffusers (https://github.com/huggingface/diffusers/issues/5534), but it does seem to be an issue with the Pokemon dataset.
I added some debug prints:
```
print("==========HERE3=============")
with accelerator.main_process_first():
    print(accelerator.is_main_process)
    print("===========Here3.1===========")
    if args.max_train_samples is not None:
        dataset["train"] = dataset["train"].shuffle(seed=args.seed).select(range(args.max_train_samples))
    print("===========Here3.2===========")
    # Set the training transforms
    train_dataset = dataset["train"].with_transform(preprocess_train)
print("==========HERE4=============")
```
Corresponding output:
```
Detected kernel version 5.4.0, which is below the recommended minimum of 5.5.0; this can cause the process to hang. It is recommended to upgrade the kernel to the minimum version or higher.
10/25/2023 21:18:04 - INFO - main - Distributed environment: MULTI_GPU Backend: nccl
Num processes: 3
Process index: 1
Local process index: 1
Device: cuda:1
Mixed precision type: fp16
10/25/2023 21:18:04 - INFO - main - Distributed environment: MULTI_GPU Backend: nccl
Num processes: 3
Process index: 2
Local process index: 2
Device: cuda:2
Mixed precision type: fp16
10/25/2023 21:18:04 - INFO - main - Distributed environment: MULTI_GPU Backend: nccl
Num processes: 3
Process index: 0
Local process index: 0
Device: cuda:0
Mixed precision type: fp16
You are using a model of type clip_text_model to instantiate a model of type . This is not supported for all configurations of models and can yield errors.
You are using a model of type clip_text_model to instantiate a model of type . This is not supported for all configurations of models and can yield errors.
{'variance_type', 'clip_sample_range', 'thresholding', 'dynamic_thresholding_ratio'} was not found in config. Values will be initialized to default values.
{'attention_type', 'reverse_transformer_layers_per_block', 'dropout'} was not found in config. Values will be initialized to default values.
==========HERE1=============
==========HERE1=============
==========HERE1=============
==========HERE2=============
==========HERE2=============
==========HERE2=============
==========HERE3=============
True
===========Here3.1===========
===========Here3.2===========
==========HERE3=============
==========HERE3=========
```
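For what it's worth, here is a minimal isolation sketch (the dataset name matches the SDXL example; everything else is my assumption) to check whether `with_transform` alone hangs under the same launcher:
```python
# minimal_repro.py -- run with: accelerate launch minimal_repro.py
from accelerate import Accelerator
from datasets import load_dataset

accelerator = Accelerator()
dataset = load_dataset("lambdalabs/pokemon-blip-captions")

def preprocess_train(examples):
    # no-op transform, only here to exercise with_transform under multi-GPU
    return examples

with accelerator.main_process_first():
    train_dataset = dataset["train"].with_transform(preprocess_train)

print(f"process {accelerator.process_index}: dataset ready, {len(train_dataset)} rows")
```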
### Environment info
_libgcc_mutex 0.1 conda_forge conda-forge
_openmp_mutex 4.5 2_kmp_llvm conda-forge
absl-py 2.0.0 pypi_0 pypi
accelerate 0.24.0 pypi_0 pypi
aiohttp 3.8.6 pypi_0 pypi
aiosignal 1.3.1 pypi_0 pypi
appdirs 1.4.4 pyh9f0ad1d_0 conda-forge
async-timeout 4.0.3 pypi_0 pypi
attrs 23.1.0 pypi_0 pypi
bitsandbytes 0.41.1 pypi_0 pypi
blas 1.0 mkl
blessings 1.7 py39h06a4308_1002
brotli-python 1.0.9 py39h6a678d5_7
bzip2 1.0.8 h7b6447c_0
ca-certificates 2023.08.22 h06a4308_0
cachetools 5.3.2 pypi_0 pypi
certifi 2023.7.22 py39h06a4308_0
cffi 1.15.1 py39h5eee18b_3
charset-normalizer 2.0.4 pyhd3eb1b0_0
click 8.1.7 unix_pyh707e725_0 conda-forge
cryptography 41.0.3 py39hdda0065_0
cuda-cudart 11.7.99 0 nvidia
cuda-cupti 11.7.101 0 nvidia
cuda-libraries 11.7.1 0 nvidia
cuda-nvrtc 11.7.99 0 nvidia
cuda-nvtx 11.7.91 0 nvidia
cuda-runtime 11.7.1 0 nvidia
datasets 2.14.6 pypi_0 pypi
diffusers 0.22.0.dev0 pypi_0 pypi
dill 0.3.7 pypi_0 pypi
docker-pycreds 0.4.0 py_0 conda-forge
ffmpeg 4.3 hf484d3e_0 pytorch
filelock 3.12.4 pypi_0 pypi
freetype 2.12.1 h4a9f257_0
frozenlist 1.4.0 pypi_0 pypi
fsspec 2023.10.0 pypi_0 pypi
ftfy 6.1.1 pypi_0 pypi
giflib 5.2.1 h5eee18b_3
gitdb 4.0.11 pyhd8ed1ab_0 conda-forge
gitpython 3.1.40 pyhd8ed1ab_0 conda-forge
gmp 6.2.1 h295c915_3
gnutls 3.6.15 he1e5248_0
google-auth 2.23.3 pypi_0 pypi
google-auth-oauthlib 1.1.0 pypi_0 pypi
gpustat 0.6.0 pyhd3eb1b0_1
grpcio 1.59.0 pypi_0 pypi
huggingface-hub 0.17.3 pypi_0 pypi
idna 3.4 py39h06a4308_0
importlib-metadata 6.8.0 pypi_0 pypi
intel-openmp 2023.1.0 hdb19cb5_46305
jinja2 3.1.2 pypi_0 pypi
jpeg 9e h5eee18b_1
lame 3.100 h7b6447c_0
lcms2 2.12 h3be6417_0
ld_impl_linux-64 2.38 h1181459_1
lerc 3.0 h295c915_0
libcublas 11.10.3.66 0 nvidia
libcufft 10.7.2.124 h4fbf590_0 nvidia
libcufile 1.8.0.34 0 nvidia
libcurand 10.3.4.52 0 nvidia
libcusolver 11.4.0.1 0 nvidia
libcusparse 11.7.4.91 0 nvidia
libdeflate 1.17 h5eee18b_1
libffi 3.4.4 h6a678d5_0
libgcc-ng 13.2.0 h807b86a_2 conda-forge
libgfortran-ng 13.2.0 h69a702a_2 conda-forge
libgfortran5 13.2.0 ha4646dd_2 conda-forge
libiconv 1.16 h7f8727e_2
libidn2 2.3.4 h5eee18b_0
libnpp 11.7.4.75 0 nvidia
libnvjpeg 11.8.0.2 0 nvidia
libpng 1.6.39 h5eee18b_0
libprotobuf 3.20.3 he621ea3_0
libstdcxx-ng 13.2.0 h7e041cc_2 conda-forge
libtasn1 4.19.0 h5eee18b_0
libtiff 4.5.1 h6a678d5_0
libunistring 0.9.10 h27cfd23_0
libwebp 1.3.2 h11a3e52_0
libwebp-base 1.3.2 h5eee18b_0
llvm-openmp 14.0.6 h9e868ea_0
lz4-c 1.9.4 h6a678d5_0
markdown 3.5 pypi_0 pypi
markupsafe 2.1.3 pypi_0 pypi
mkl 2023.1.0 h213fc3f_46343
mkl-service 2.4.0 py39h5eee18b_1
mkl_fft 1.3.8 py39h5eee18b_0
mkl_random 1.2.4 py39hdb19cb5_0
multidict 6.0.4 pypi_0 pypi
multiprocess 0.70.15 pypi_0 pypi
ncurses 6.4 h6a678d5_0
nettle 3.7.3 hbbd107a_1
numpy 1.26.0 py39h5f9d8c6_0
numpy-base 1.26.0 py39hb5e798b_0
nvidia-ml 7.352.0 pyhd3eb1b0_0
oauthlib 3.2.2 pypi_0 pypi
openh264 2.1.1 h4ff587b_0
openjpeg 2.4.0 h3ad879b_0
openssl 3.0.11 h7f8727e_2
packaging 23.2 pypi_0 pypi
pandas 2.1.1 pypi_0 pypi
pathtools 0.1.2 py_1 conda-forge
pillow 10.0.1 py39ha6cbd5a_0
pip 23.3 py39h06a4308_0
protobuf 4.23.4 pypi_0 pypi
psutil 5.9.6 pypi_0 pypi
pyarrow 13.0.0 pypi_0 pypi
pyasn1 0.5.0 pypi_0 pypi
pyasn1-modules 0.3.0 pypi_0 pypi
pycparser 2.21 pyhd3eb1b0_0
pyopenssl 23.2.0 py39h06a4308_0
pysocks 1.7.1 py39h06a4308_0
python 3.9.18 h955ad1f_0
python-dateutil 2.8.2 pypi_0 pypi
python_abi 3.9 2_cp39 conda-forge
pytorch 1.13.1 py3.9_cuda11.7_cudnn8.5.0_0 pytorch
pytorch-cuda 11.7 h778d358_5 pytorch
pytorch-mutex 1.0 cuda pytorch
pytz 2023.3.post1 pypi_0 pypi
pyyaml 6.0.1 pypi_0 pypi
readline 8.2 h5eee18b_0
regex 2023.10.3 pypi_0 pypi
requests 2.31.0 py39h06a4308_0
requests-oauthlib 1.3.1 pypi_0 pypi
rsa 4.9 pypi_0 pypi
safetensors 0.4.0 pypi_0 pypi
scipy 1.11.3 py39h5f9d8c6_0
sentry-sdk 1.32.0 pyhd8ed1ab_0 conda-forge
setproctitle 1.1.10 py39h3811e60_1004 conda-forge
setuptools 68.0.0 py39h06a4308_0
six 1.16.0 pyh6c4a22f_0 conda-forge
smmap 5.0.0 pyhd8ed1ab_0 conda-forge
sqlite 3.41.2 h5eee18b_0
tbb 2021.8.0 hdb19cb5_0
tensorboard 2.15.0 pypi_0 pypi
tensorboard-data-server 0.7.2 pypi_0 pypi
tk 8.6.12 h1ccaba5_0
tokenizers 0.14.1 pypi_0 pypi
torchaudio 0.13.1 py39_cu117 pytorch
torchtriton 2.1.0 py39 pytorch
torchvision 0.14.1 py39_cu117 pytorch
tqdm 4.66.1 pypi_0 pypi
transformers 4.34.1 pypi_0 pypi
typing_extensions 4.7.1 py39h06a4308_0
tzdata 2023.3 pypi_0 pypi
urllib3 1.26.18 py39h06a4308_0
wandb 0.15.12 pyhd8ed1ab_0 conda-forge
wcwidth 0.2.8 pypi_0 pypi
werkzeug 3.0.1 pypi_0 pypi
wheel 0.41.2 py39h06a4308_0
xformers 0.0.22.post7 py39_cu11.7.1_pyt1.13.1 xformers
xxhash 3.4.1 pypi_0 pypi
xz 5.4.2 h5eee18b_0
yaml 0.2.5 h7f98852_2 conda-forge
yarl 1.9.2 pypi_0 pypi
zipp 3.17.0 pypi_0 pypi
zlib 1.2.13 h5eee18b_0
zstd 1.5.5 hc292b87_0 | {
"login": "bhosalems",
"id": 10846405,
"node_id": "MDQ6VXNlcjEwODQ2NDA1",
"avatar_url": "https://avatars.githubusercontent.com/u/10846405?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bhosalems",
"html_url": "https://github.com/bhosalems",
"followers_url": "https://api.github.com/users/bhosalems/followers",
"following_url": "https://api.github.com/users/bhosalems/following{/other_user}",
"gists_url": "https://api.github.com/users/bhosalems/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bhosalems/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bhosalems/subscriptions",
"organizations_url": "https://api.github.com/users/bhosalems/orgs",
"repos_url": "https://api.github.com/users/bhosalems/repos",
"events_url": "https://api.github.com/users/bhosalems/events{/privacy}",
"received_events_url": "https://api.github.com/users/bhosalems/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6363/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6363/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6362 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6362/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6362/comments | https://api.github.com/repos/huggingface/datasets/issues/6362/events | https://github.com/huggingface/datasets/pull/6362 | 1,965,794,569 | PR_kwDODunzps5d_MxD | 6,362 | Simplify filesystem logic | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | 13 | 2023-10-27T15:54:18 | 2023-11-15T14:08:29 | 2023-11-15T14:02:02 | COLLABORATOR | null | Simplifies the existing filesystem logic (e.g., to avoid unnecessary if-else as mentioned in https://github.com/huggingface/datasets/pull/6098#issue-1827655071) | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6362/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6362/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6362",
"html_url": "https://github.com/huggingface/datasets/pull/6362",
"diff_url": "https://github.com/huggingface/datasets/pull/6362.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6362.patch",
"merged_at": "2023-11-15T14:02:02"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/6360 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6360/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6360/comments | https://api.github.com/repos/huggingface/datasets/issues/6360/events | https://github.com/huggingface/datasets/issues/6360 | 1,965,672,950 | I_kwDODunzps51Kcn2 | 6,360 | Add support for `Sequence(Audio/Image)` feature in `push_to_hub` | {
"login": "Laurent2916",
"id": 21087104,
"node_id": "MDQ6VXNlcjIxMDg3MTA0",
"avatar_url": "https://avatars.githubusercontent.com/u/21087104?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Laurent2916",
"html_url": "https://github.com/Laurent2916",
"followers_url": "https://api.github.com/users/Laurent2916/followers",
"following_url": "https://api.github.com/users/Laurent2916/following{/other_user}",
"gists_url": "https://api.github.com/users/Laurent2916/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Laurent2916/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Laurent2916/subscriptions",
"organizations_url": "https://api.github.com/users/Laurent2916/orgs",
"repos_url": "https://api.github.com/users/Laurent2916/repos",
"events_url": "https://api.github.com/users/Laurent2916/events{/privacy}",
"received_events_url": "https://api.github.com/users/Laurent2916/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | closed | false | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 1 | 2023-10-27T14:39:57 | 2024-02-06T19:24:20 | 2024-02-06T19:24:20 | CONTRIBUTOR | null | ### Feature request
Allow for `Sequence` of `Image` (or `Audio`) to be embedded inside the shards.
### Motivation
Currently, thanks to #3685, when `embed_external_files` is set to True (which is the default) in `push_to_hub`, features of type `Image` and `Audio` are embedded inside the arrow/parquet shards, instead of only storing paths to the files.
I've noticed that this behavior does not extend to `Sequence` of `Image`, when working with a [dataset of timelapse images](https://huggingface.co/datasets/1aurent/Human-Embryo-Timelapse).
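A minimal illustration of the gap (file names and repo id are hypothetical):
```python
from datasets import Dataset, Features, Image, Sequence

features = Features({"frames": Sequence(Image())})
ds = Dataset.from_dict(
    {"frames": [["frame_000.png", "frame_001.png"]]},  # hypothetical local files
    features=features,
)

# With Image alone the files would be embedded in the shards; with
# Sequence(Image()) only the paths end up in the uploaded files.
ds.push_to_hub("user/timelapse-demo")  # hypothetical repo id
```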
### Your contribution
I'll submit a PR if I find a way to add this feature | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6360/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6360/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6359 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6359/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6359/comments | https://api.github.com/repos/huggingface/datasets/issues/6359/events | https://github.com/huggingface/datasets/issues/6359 | 1,965,378,583 | I_kwDODunzps51JUwX | 6,359 | Stuck in "Resolving data files..." | {
"login": "Luciennnnnnn",
"id": 20135317,
"node_id": "MDQ6VXNlcjIwMTM1MzE3",
"avatar_url": "https://avatars.githubusercontent.com/u/20135317?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Luciennnnnnn",
"html_url": "https://github.com/Luciennnnnnn",
"followers_url": "https://api.github.com/users/Luciennnnnnn/followers",
"following_url": "https://api.github.com/users/Luciennnnnnn/following{/other_user}",
"gists_url": "https://api.github.com/users/Luciennnnnnn/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Luciennnnnnn/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Luciennnnnnn/subscriptions",
"organizations_url": "https://api.github.com/users/Luciennnnnnn/orgs",
"repos_url": "https://api.github.com/users/Luciennnnnnn/repos",
"events_url": "https://api.github.com/users/Luciennnnnnn/events{/privacy}",
"received_events_url": "https://api.github.com/users/Luciennnnnnn/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | open | false | null | [] | null | 4 | 2023-10-27T12:01:51 | 2024-01-24T15:02:06 | null | NONE | null | ### Describe the bug
I have an image dataset with 300k images; each image is 768×768.
When I run `dataset = load_dataset("imagefolder", data_dir="/path/to/img_dir", split='train')` a second time, it takes 50 minutes to finish the "Resolving data files" part. What is going on in this step?
From my understanding, after the Arrow files have been created in the first run, the second run should not take longer than one or two minutes.
### Steps to reproduce the bug
```python
from datasets import load_dataset

# Run the following code two times
dataset = load_dataset("imagefolder", data_dir="/path/to/img_dir", split="train")
```
### Expected behavior
Fast dataset building
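One workaround sketch in the meantime (my assumption, untested at this scale): persist the built dataset once and bypass file resolution on later runs:
```python
from datasets import load_dataset, load_from_disk

# First run: resolve the image files once and persist the Arrow dataset
dataset = load_dataset("imagefolder", data_dir="/path/to/img_dir", split="train")
dataset.save_to_disk("/path/to/cached_dataset")

# Later runs: read the Arrow files directly, no "Resolving data files..." step
dataset = load_from_disk("/path/to/cached_dataset")
```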
### Environment info
- `datasets` version: 2.14.5
- Platform: Linux-5.15.0-60-generic-x86_64-with-glibc2.35
- Python version: 3.10.11
- Huggingface_hub version: 0.17.3
- PyArrow version: 10.0.1
- Pandas version: 1.5.3 | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6359/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6359/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6358 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6358/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6358/comments | https://api.github.com/repos/huggingface/datasets/issues/6358/events | https://github.com/huggingface/datasets/issues/6358 | 1,965,014,595 | I_kwDODunzps51H75D | 6,358 | Mounting datasets cache fails due to absolute paths. | {
"login": "charliebudd",
"id": 72921588,
"node_id": "MDQ6VXNlcjcyOTIxNTg4",
"avatar_url": "https://avatars.githubusercontent.com/u/72921588?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/charliebudd",
"html_url": "https://github.com/charliebudd",
"followers_url": "https://api.github.com/users/charliebudd/followers",
"following_url": "https://api.github.com/users/charliebudd/following{/other_user}",
"gists_url": "https://api.github.com/users/charliebudd/gists{/gist_id}",
"starred_url": "https://api.github.com/users/charliebudd/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/charliebudd/subscriptions",
"organizations_url": "https://api.github.com/users/charliebudd/orgs",
"repos_url": "https://api.github.com/users/charliebudd/repos",
"events_url": "https://api.github.com/users/charliebudd/events{/privacy}",
"received_events_url": "https://api.github.com/users/charliebudd/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | 5 | 2023-10-27T08:20:27 | 2024-04-10T08:50:06 | 2023-11-28T14:47:12 | NONE | null | ### Describe the bug
Creating a datasets cache and mounting it into, for example, a Docker container renders the data unreadable, due to absolute paths written into the cache.
### Steps to reproduce the bug
1. Create a datasets cache by downloading some data
2. Mount the dataset folder into a docker container or remote system.
3. (Edit) Set `HF_HOME` or `HF_DATASETS_CACHE` to point to the mounted cache (see the sketch after this list).
4. Attempt to access the data from within the docker container.
5. An error is thrown saying no file exists at \<absolute path to original cache location\>
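A sketch of steps 3–4 (the mount point and dataset name are hypothetical):
```python
import os

# Point the cache at the mount point *before* importing datasets,
# since the cache location is read at import time
os.environ["HF_HOME"] = "/mnt/hf_cache"  # hypothetical mount point inside the container

from datasets import load_dataset

dataset = load_dataset("imdb")  # fails, referencing the absolute host path instead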
### Expected behavior
The data is loaded without error
### Environment info
- `datasets` version: 2.14.4
- Platform: Linux-5.4.0-162-generic-x86_64-with-glibc2.29
- Python version: 3.8.10
- Huggingface_hub version: 0.16.4
- PyArrow version: 13.0.0
- Pandas version: 2.0.3 | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6358/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6358/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6357 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6357/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6357/comments | https://api.github.com/repos/huggingface/datasets/issues/6357/events | https://github.com/huggingface/datasets/issues/6357 | 1,964,653,995 | I_kwDODunzps51Gj2r | 6,357 | Allow passing a multiprocessing context to functions that support `num_proc` | {
"login": "bryant1410",
"id": 3905501,
"node_id": "MDQ6VXNlcjM5MDU1MDE=",
"avatar_url": "https://avatars.githubusercontent.com/u/3905501?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bryant1410",
"html_url": "https://github.com/bryant1410",
"followers_url": "https://api.github.com/users/bryant1410/followers",
"following_url": "https://api.github.com/users/bryant1410/following{/other_user}",
"gists_url": "https://api.github.com/users/bryant1410/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bryant1410/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bryant1410/subscriptions",
"organizations_url": "https://api.github.com/users/bryant1410/orgs",
"repos_url": "https://api.github.com/users/bryant1410/repos",
"events_url": "https://api.github.com/users/bryant1410/events{/privacy}",
"received_events_url": "https://api.github.com/users/bryant1410/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | null | [] | null | 0 | 2023-10-27T02:31:16 | 2023-10-27T02:31:16 | null | CONTRIBUTOR | null | ### Feature request
Allow specifying [a multiprocessing context](https://docs.python.org/3/library/multiprocessing.html#contexts-and-start-methods) to functions that support `num_proc` or use multiprocessing pools. For example, the following could be done:
```python
dataset = dataset.map(_func, num_proc=2, mp_context=multiprocess.get_context("spawn"))
```
Or at least the multiprocessing start method ("fork", "spawn", "forkserver" or `None`):
```python
dataset = dataset.map(_func, num_proc=2, mp_start_method="spawn")
```
Another option could be passing the `pool` as an argument.
### Motivation
By default, `multiprocess` (the fork of `multiprocessing` that this repo uses) uses the "fork" start method for multiprocessing pools (for the default context). This could be changed by using `set_start_method`. However, that conditions the multiprocessing start method for all processing in a Python program that uses the default context, because [you can't call that function more than once](https://docs.python.org/3/library/multiprocessing.html#contexts-and-start-methods:~:text=set_start_method()%20should%20not%20be%20used%20more%20than%20once%20in%20the%20program.). My proposal is to allow using a different multiprocessing context, rather than conditioning the whole Python program.
One reason to change the start method is that "fork" (the default) makes child processes likely to deadlock if thread pools were created before (and this is also not supported by POSIX). For example, this happens when using PyTorch, because OpenMP threads are used for CPU intra-op parallelism, which is enabled by default (e.g., for context see [`torch.set_num_threads`](https://pytorch.org/docs/stable/generated/torch.set_num_threads.html)). This can also be fixed by setting `torch.set_num_threads(1)` (or by similar methods), but that conditions the whole Python program, as it can only be set once to guarantee its behavior (similarly to `set_start_method`). There are noticeable performance differences when setting this number to 1, even when using GPU(s). Using, e.g., a "spawn" start method would solve this issue.
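For contrast, here is a sketch of the current global workaround that this proposal would make unnecessary (assuming `datasets` picks up the default `multiprocess` context):
```python
import multiprocess
from datasets import Dataset

def double(example):
    return {"y": example["x"] * 2}

if __name__ == "__main__":
    # Affects every pool created with the default context in this program --
    # exactly the limitation described above
    multiprocess.set_start_method("spawn", force=True)

    ds = Dataset.from_dict({"x": list(range(100))})
    ds = ds.map(double, num_proc=2)
```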
For more context, see:
* https://discuss.huggingface.co/t/dataset-map-stuck-with-torch-set-num-threads-set-to-2-or-larger/37984
* https://discuss.huggingface.co/t/using-num-proc-1-in-dataset-map-hangs/44310
### Your contribution
I'd be happy to review a PR that makes such a change. And if you really don't have the bandwidth for it, I'd consider creating one. | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6357/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6357/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6356 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6356/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6356/comments | https://api.github.com/repos/huggingface/datasets/issues/6356/events | https://github.com/huggingface/datasets/pull/6356 | 1,964,015,802 | PR_kwDODunzps5d5Jri | 6,356 | Add `fsspec` version to the `datasets-cli env` command output | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | 3 | 2023-10-26T17:19:25 | 2023-10-26T18:42:56 | 2023-10-26T18:32:21 | COLLABORATOR | null | ... to make debugging issues easier, as `fsspec`'s releases often introduce breaking changes. | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6356/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6356/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6356",
"html_url": "https://github.com/huggingface/datasets/pull/6356",
"diff_url": "https://github.com/huggingface/datasets/pull/6356.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6356.patch",
"merged_at": "2023-10-26T18:32:21"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/6355 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6355/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6355/comments | https://api.github.com/repos/huggingface/datasets/issues/6355/events | https://github.com/huggingface/datasets/pull/6355 | 1,963,979,896 | PR_kwDODunzps5d5B2B | 6,355 | More hub centric docs | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | 3 | 2023-10-26T16:54:46 | 2024-01-11T06:34:16 | 2023-10-30T17:32:57 | MEMBER | null | Let's have more hub-centric documentation in the datasets docs
Tutorials
- Add “Configure the dataset viewer” page
- Change order:
  - Overview
    - and more focused on the Hub rather than the library
  - Then all the hub related things
    - and mention how to read/write with other tools like pandas
  - Then all the datasets lib related things in a subsection
Also:
- Rename “know your dataset” page to “Explore your dataset”
- Remove “Evaluate Predictions” page since it's 'evaluate' stuff (or move it to a legacy section?)
TODO:
- [ ] write the “Configure the dataset viewer” page | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6355/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6355/timeline | null | null | true | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6355",
"html_url": "https://github.com/huggingface/datasets/pull/6355",
"diff_url": "https://github.com/huggingface/datasets/pull/6355.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6355.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/6354 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6354/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6354/comments | https://api.github.com/repos/huggingface/datasets/issues/6354/events | https://github.com/huggingface/datasets/issues/6354 | 1,963,483,324 | I_kwDODunzps51CGC8 | 6,354 | `IterableDataset.from_spark` does not support multiple workers in pytorch `Dataloader` | {
"login": "NazyS",
"id": 50199774,
"node_id": "MDQ6VXNlcjUwMTk5Nzc0",
"avatar_url": "https://avatars.githubusercontent.com/u/50199774?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/NazyS",
"html_url": "https://github.com/NazyS",
"followers_url": "https://api.github.com/users/NazyS/followers",
"following_url": "https://api.github.com/users/NazyS/following{/other_user}",
"gists_url": "https://api.github.com/users/NazyS/gists{/gist_id}",
"starred_url": "https://api.github.com/users/NazyS/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NazyS/subscriptions",
"organizations_url": "https://api.github.com/users/NazyS/orgs",
"repos_url": "https://api.github.com/users/NazyS/repos",
"events_url": "https://api.github.com/users/NazyS/events{/privacy}",
"received_events_url": "https://api.github.com/users/NazyS/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | open | false | null | [] | null | 3 | 2023-10-26T12:43:36 | 2024-12-10T14:06:06 | null | NONE | null | ### Describe the bug
Looks like `IterableDataset.from_spark` does not support multiple workers in a PyTorch `DataLoader`, if I'm not missing anything.
It also raises inconsistent error messages, which probably depend on the nondeterministic order of worker execution.
Some examples I've encountered:
```
File "/local_disk0/.ephemeral_nfs/envs/pythonEnv-68c05436-3512-41c4-88ca-5630012b70d1/lib/python3.10/site-packages/datasets/packaged_modules/spark/spark.py", line 79, in __iter__
yield from self.generate_examples_fn()
File "/local_disk0/.ephemeral_nfs/envs/pythonEnv-68c05436-3512-41c4-88ca-5630012b70d1/lib/python3.10/site-packages/datasets/packaged_modules/spark/spark.py", line 49, in generate_fn
df_with_partition_id = df.select("*", pyspark.sql.functions.spark_partition_id().alias("part_id"))
File "/databricks/spark/python/pyspark/instrumentation_utils.py", line 54, in wrapper
logger.log_failure(
File "/databricks/spark/python/pyspark/databricks/usage_logger.py", line 70, in log_failure
self.logger.recordFunctionCallFailureEvent(
File "/databricks/spark/python/lib/py4j-0.10.9.7-src.zip/py4j/java_gateway.py", line 1322, in __call__
return_value = get_return_value(
File "/databricks/spark/python/pyspark/errors/exceptions/captured.py", line 188, in deco
return f(*a, **kw)
File "/databricks/spark/python/lib/py4j-0.10.9.7-src.zip/py4j/protocol.py", line 342, in get_return_value
return OUTPUT_CONVERTER[type](answer[2:], gateway_client)
KeyError: 'c'
```
```
File "/local_disk0/.ephemeral_nfs/envs/pythonEnv-68c05436-3512-41c4-88ca-5630012b70d1/lib/python3.10/site-packages/datasets/packaged_modules/spark/spark.py", line 79, in __iter__
yield from self.generate_examples_fn()
File "/local_disk0/.ephemeral_nfs/envs/pythonEnv-68c05436-3512-41c4-88ca-5630012b70d1/lib/python3.10/site-packages/datasets/packaged_modules/spark/spark.py", line 49, in generate_fn
df_with_partition_id = df.select("*", pyspark.sql.functions.spark_partition_id().alias("part_id"))
File "/databricks/spark/python/pyspark/sql/utils.py", line 162, in wrapped
return f(*args, **kwargs)
File "/databricks/spark/python/pyspark/sql/functions.py", line 4893, in spark_partition_id
return _invoke_function("spark_partition_id")
File "/databricks/spark/python/pyspark/sql/functions.py", line 98, in _invoke_function
return Column(jf(*args))
File "/databricks/spark/python/lib/py4j-0.10.9.7-src.zip/py4j/java_gateway.py", line 1322, in __call__
return_value = get_return_value(
File "/databricks/spark/python/pyspark/errors/exceptions/captured.py", line 188, in deco
return f(*a, **kw)
File "/databricks/spark/python/lib/py4j-0.10.9.7-src.zip/py4j/protocol.py", line 342, in get_return_value
return OUTPUT_CONVERTER[type](answer[2:], gateway_client)
KeyError: 'm'
```
```
File "/local_disk0/.ephemeral_nfs/envs/pythonEnv-68c05436-3512-41c4-88ca-5630012b70d1/lib/python3.10/site-packages/datasets/packaged_modules/spark/spark.py", line 79, in __iter__
yield from self.generate_examples_fn()
File "/local_disk0/.ephemeral_nfs/envs/pythonEnv-68c05436-3512-41c4-88ca-5630012b70d1/lib/python3.10/site-packages/datasets/packaged_modules/spark/spark.py", line 49, in generate_fn
df_with_partition_id = df.select("*", pyspark.sql.functions.spark_partition_id().alias("part_id"))
File "/databricks/spark/python/pyspark/sql/utils.py", line 162, in wrapped
return f(*args, **kwargs)
File "/databricks/spark/python/pyspark/sql/functions.py", line 4893, in spark_partition_id
return _invoke_function("spark_partition_id")
File "/databricks/spark/python/pyspark/sql/functions.py", line 97, in _invoke_function
jf = _get_jvm_function(name, SparkContext._active_spark_context)
File "/databricks/spark/python/pyspark/sql/functions.py", line 88, in _get_jvm_function
return getattr(sc._jvm.functions, name)
File "/databricks/spark/python/lib/py4j-0.10.9.7-src.zip/py4j/java_gateway.py", line 1725, in __getattr__
raise Py4JError(message)
py4j.protocol.Py4JError: functions does not exist in the JVM
```
### Steps to reproduce the bug
```python
import pandas as pd
import numpy as np
batch_size = 16
pdf = pd.DataFrame({
    key: np.random.rand(16*100) for key in ['feature', 'target']
})
test_df = spark.createDataFrame(pdf)
from datasets import IterableDataset
from torch.utils.data import DataLoader
ids = IterableDataset.from_spark(test_df)
for batch in DataLoader(ids, batch_size=batch_size, num_workers=4):
    for k, b in batch.items():
        print(k, b.shape, sep='\t')
    print('\n')
```
### Expected behavior
With `num_workers` equal to 0 or 1, it works fine as expected:
```
feature torch.Size([16])
target torch.Size([16])
feature torch.Size([16])
target torch.Size([16])
....
```
Expected to support workers >1.
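A workaround sketch I considered (an assumption on my part, not verified on Databricks): materialize with `Dataset.from_spark` instead, which yields a map-style dataset that should work with multiple workers:
```python
from datasets import Dataset
from torch.utils.data import DataLoader

ds = Dataset.from_spark(test_df)  # materializes the Spark DataFrame to Arrow first

for batch in DataLoader(ds, batch_size=16, num_workers=4):
    for k, b in batch.items():
        print(k, b.shape, sep='\t')
```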
### Environment info
Databricks 13.3 LTS ML runtime - Spark 3.4.1
pyspark==3.4.1
py4j==0.10.9.7
datasets==2.13.1 and also tested with datasets==2.14.6 | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6354/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6354/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6353 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6353/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6353/comments | https://api.github.com/repos/huggingface/datasets/issues/6353/events | https://github.com/huggingface/datasets/issues/6353 | 1,962,646,450 | I_kwDODunzps50-5uy | 6,353 | load_dataset save_to_disk load_from_disk error | {
"login": "brisker",
"id": 13804492,
"node_id": "MDQ6VXNlcjEzODA0NDky",
"avatar_url": "https://avatars.githubusercontent.com/u/13804492?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/brisker",
"html_url": "https://github.com/brisker",
"followers_url": "https://api.github.com/users/brisker/followers",
"following_url": "https://api.github.com/users/brisker/following{/other_user}",
"gists_url": "https://api.github.com/users/brisker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/brisker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/brisker/subscriptions",
"organizations_url": "https://api.github.com/users/brisker/orgs",
"repos_url": "https://api.github.com/users/brisker/repos",
"events_url": "https://api.github.com/users/brisker/events{/privacy}",
"received_events_url": "https://api.github.com/users/brisker/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | 5 | 2023-10-26T03:47:06 | 2024-04-03T05:31:01 | 2023-10-26T10:18:04 | NONE | null | ### Describe the bug
datasets version: 2.10.1
I ran `load_dataset` and `save_to_disk` successfully on Windows 10 (**and `load_from_disk('/LLM/data/wiki')` also succeeds on Windows 10**), and I copied the dataset `/LLM/data/wiki`
to an Ubuntu system. But when I call `load_from_disk('/LLM/data/wiki')` on Ubuntu, something weird happens:
```
load_from_disk('/LLM/data/wiki')
File "/usr/local/miniconda3/lib/python3.8/site-packages/datasets/load.py", line 1874, in load_from_disk
return DatasetDict.load_from_disk(dataset_path, keep_in_memory=keep_in_memory, storage_options=storage_options)
File "/usr/local/miniconda3/lib/python3.8/site-packages/datasets/dataset_dict.py", line 1309, in load_from_disk
dataset_dict[k] = Dataset.load_from_disk(
File "/usr/local/miniconda3/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 1543, in load_from_disk
fs_token_paths = fsspec.get_fs_token_paths(dataset_path, storage_options=storage_options)
File "/usr/local/miniconda3/lib/python3.8/site-packages/fsspec/core.py", line 610, in get_fs_token_paths
chain = _un_chain(urlpath0, storage_options or {})
File "/usr/local/miniconda3/lib/python3.8/site-packages/fsspec/core.py", line 325, in _un_chain
cls = get_filesystem_class(protocol)
File "/usr/local/miniconda3/lib/python3.8/site-packages/fsspec/registry.py", line 232, in get_filesystem_class
raise ValueError(f"Protocol not known: {protocol}")
ValueError: Protocol not known: /LLM/data/wiki
```
It seems that something went wrong with the Arrow files?
How can I solve this, since currently I cannot `save_to_disk` on the Ubuntu system?
### Steps to reproduce the bug
datasets version: 2.10.1
### Expected behavior
datasets version: 2.10.1
### Environment info
datasets version: 2.10.1 | {
"login": "brisker",
"id": 13804492,
"node_id": "MDQ6VXNlcjEzODA0NDky",
"avatar_url": "https://avatars.githubusercontent.com/u/13804492?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/brisker",
"html_url": "https://github.com/brisker",
"followers_url": "https://api.github.com/users/brisker/followers",
"following_url": "https://api.github.com/users/brisker/following{/other_user}",
"gists_url": "https://api.github.com/users/brisker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/brisker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/brisker/subscriptions",
"organizations_url": "https://api.github.com/users/brisker/orgs",
"repos_url": "https://api.github.com/users/brisker/repos",
"events_url": "https://api.github.com/users/brisker/events{/privacy}",
"received_events_url": "https://api.github.com/users/brisker/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6353/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6353/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6352 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6352/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6352/comments | https://api.github.com/repos/huggingface/datasets/issues/6352/events | https://github.com/huggingface/datasets/issues/6352 | 1,962,296,057 | I_kwDODunzps509kL5 | 6,352 | Error loading wikitext data raise NotImplementedError(f"Loading a dataset cached in a {type(self._fs).__name__} is not supported.") | {
"login": "Ahmed-Roushdy",
"id": 68569076,
"node_id": "MDQ6VXNlcjY4NTY5MDc2",
"avatar_url": "https://avatars.githubusercontent.com/u/68569076?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Ahmed-Roushdy",
"html_url": "https://github.com/Ahmed-Roushdy",
"followers_url": "https://api.github.com/users/Ahmed-Roushdy/followers",
"following_url": "https://api.github.com/users/Ahmed-Roushdy/following{/other_user}",
"gists_url": "https://api.github.com/users/Ahmed-Roushdy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Ahmed-Roushdy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Ahmed-Roushdy/subscriptions",
"organizations_url": "https://api.github.com/users/Ahmed-Roushdy/orgs",
"repos_url": "https://api.github.com/users/Ahmed-Roushdy/repos",
"events_url": "https://api.github.com/users/Ahmed-Roushdy/events{/privacy}",
"received_events_url": "https://api.github.com/users/Ahmed-Roushdy/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | 13 | 2023-10-25T21:55:31 | 2024-03-19T16:46:22 | 2023-11-07T07:26:54 | NONE | null | I was trying to load the wiki dataset, but I got this error:
```
traindata = load_dataset('wikitext', 'wikitext-2-raw-v1', split='train')
  File "/home/aelkordy/.conda/envs/prune_llm/lib/python3.9/site-packages/datasets/load.py", line 1804, in load_dataset
    ds = builder_instance.as_dataset(split=split, verification_mode=verification_mode, in_memory=keep_in_memory)
  File "/home/aelkordy/.conda/envs/prune_llm/lib/python3.9/site-packages/datasets/builder.py", line 1108, in as_dataset
    raise NotImplementedError(f"Loading a dataset cached in a {type(self._fs).__name__} is not supported.")
NotImplementedError: Loading a dataset cached in a LocalFileSystem is not supported.
``` | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6352/reactions",
"total_count": 4,
"+1": 4,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6352/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6351 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6351/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6351/comments | https://api.github.com/repos/huggingface/datasets/issues/6351/events | https://github.com/huggingface/datasets/pull/6351 | 1,961,982,988 | PR_kwDODunzps5dyMvh | 6,351 | Fix use_dataset.mdx | {
"login": "angel-luis",
"id": 17672548,
"node_id": "MDQ6VXNlcjE3NjcyNTQ4",
"avatar_url": "https://avatars.githubusercontent.com/u/17672548?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/angel-luis",
"html_url": "https://github.com/angel-luis",
"followers_url": "https://api.github.com/users/angel-luis/followers",
"following_url": "https://api.github.com/users/angel-luis/following{/other_user}",
"gists_url": "https://api.github.com/users/angel-luis/gists{/gist_id}",
"starred_url": "https://api.github.com/users/angel-luis/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/angel-luis/subscriptions",
"organizations_url": "https://api.github.com/users/angel-luis/orgs",
"repos_url": "https://api.github.com/users/angel-luis/repos",
"events_url": "https://api.github.com/users/angel-luis/events{/privacy}",
"received_events_url": "https://api.github.com/users/angel-luis/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | 2 | 2023-10-25T18:21:08 | 2023-10-26T17:19:49 | 2023-10-26T17:10:27 | CONTRIBUTOR | null | The current example isn't working because it can't find `labels` inside the Dataset object. So I've added an extra step to the process. Tested and working in Colab. | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6351/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6351/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6351",
"html_url": "https://github.com/huggingface/datasets/pull/6351",
"diff_url": "https://github.com/huggingface/datasets/pull/6351.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6351.patch",
"merged_at": "2023-10-26T17:10:27"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/6350 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6350/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6350/comments | https://api.github.com/repos/huggingface/datasets/issues/6350/events | https://github.com/huggingface/datasets/issues/6350 | 1,961,869,203 | I_kwDODunzps5077-T | 6,350 | Different objects are returned from calls that should be returning the same kind of object. | {
"login": "phalexo",
"id": 4603365,
"node_id": "MDQ6VXNlcjQ2MDMzNjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/4603365?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/phalexo",
"html_url": "https://github.com/phalexo",
"followers_url": "https://api.github.com/users/phalexo/followers",
"following_url": "https://api.github.com/users/phalexo/following{/other_user}",
"gists_url": "https://api.github.com/users/phalexo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/phalexo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/phalexo/subscriptions",
"organizations_url": "https://api.github.com/users/phalexo/orgs",
"repos_url": "https://api.github.com/users/phalexo/repos",
"events_url": "https://api.github.com/users/phalexo/events{/privacy}",
"received_events_url": "https://api.github.com/users/phalexo/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | open | false | null | [] | null | 2 | 2023-10-25T17:08:39 | 2023-10-26T21:03:06 | null | NONE | null | ### Describe the bug
1. `dataset = load_dataset("togethercomputer/RedPajama-Data-1T-Sample", cache_dir=training_args.cache_dir, split='train[:1%]')`
2. `dataset = load_dataset("togethercomputer/RedPajama-Data-1T-Sample", cache_dir=training_args.cache_dir)`

The only difference I would expect between these calls is the size of the dataset. But while call 2. returns a `DatasetDict` with a "train" key in it, call 1. returns a plain `Dataset` without any "train" wrapper. Both calls are used in exactly the same context; they should return identically structured datasets of different sizes.
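To make the asymmetry concrete, here is a minimal sketch of what I observe (`cache_dir` omitted for brevity):
```python
from datasets import load_dataset

dd = load_dataset("togethercomputer/RedPajama-Data-1T-Sample")                      # call 2.
ds = load_dataset("togethercomputer/RedPajama-Data-1T-Sample", split="train[:1%]")  # call 1.

print(type(dd))  # <class 'datasets.dataset_dict.DatasetDict'>, has a "train" key
print(type(ds))  # <class 'datasets.arrow_dataset.Dataset'>, no "train" key

# workaround for now: index the dict so both code paths yield a Dataset
train = dd["train"]
```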
### Steps to reproduce the bug
See above.
### Expected behavior
I expect both calls to return the same dataset structure but with a different number of elements, i.e. call 1. should have 1% of the data of call 2.
### Environment info
Ubuntu 20.04
gcc 9.x.x.
It is really irrelevant. | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6350/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6350/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6349 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6349/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6349/comments | https://api.github.com/repos/huggingface/datasets/issues/6349/events | https://github.com/huggingface/datasets/issues/6349 | 1,961,435,673 | I_kwDODunzps506SIZ | 6,349 | Can't load ds = load_dataset("imdb") | {
"login": "vivianc2",
"id": 86415736,
"node_id": "MDQ6VXNlcjg2NDE1NzM2",
"avatar_url": "https://avatars.githubusercontent.com/u/86415736?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vivianc2",
"html_url": "https://github.com/vivianc2",
"followers_url": "https://api.github.com/users/vivianc2/followers",
"following_url": "https://api.github.com/users/vivianc2/following{/other_user}",
"gists_url": "https://api.github.com/users/vivianc2/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vivianc2/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vivianc2/subscriptions",
"organizations_url": "https://api.github.com/users/vivianc2/orgs",
"repos_url": "https://api.github.com/users/vivianc2/repos",
"events_url": "https://api.github.com/users/vivianc2/events{/privacy}",
"received_events_url": "https://api.github.com/users/vivianc2/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | 4 | 2023-10-25T13:29:51 | 2024-03-20T15:09:53 | 2023-10-31T19:59:35 | NONE | null | ### Describe the bug
I ran `from datasets import load_dataset, load_metric` and then `ds = load_dataset("imdb")`, which gave me the error:
```
ExpectedMoreDownloadedFiles: {'http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz'}
```
I tried `ds = load_dataset("imdb", download_mode="force_redownload")` as well as reinstalling `datasets`, but I still face this problem.
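One more thing I could try is deleting the cached files manually before reloading, e.g. (assuming the default cache location; the paths differ if `HF_DATASETS_CACHE` is set):
```python
import os
import shutil

cache_dir = os.path.expanduser("~/.cache/huggingface/datasets")
shutil.rmtree(os.path.join(cache_dir, "imdb"), ignore_errors=True)
shutil.rmtree(os.path.join(cache_dir, "downloads"), ignore_errors=True)
```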
### Steps to reproduce the bug
1. `from datasets import load_dataset, load_metric`
2. `ds = load_dataset("imdb")`
### Expected behavior
It should load and give me this when I run `ds`
```
DatasetDict({
    train: Dataset({
        features: ['text', 'label'],
        num_rows: 25000
    })
    test: Dataset({
        features: ['text', 'label'],
        num_rows: 25000
    })
    unsupervised: Dataset({
        features: ['text', 'label'],
        num_rows: 50000
    })
})
```
### Environment info
- `datasets` version: 2.14.6
- Platform: Linux-5.4.0-164-generic-x86_64-with-glibc2.17
- Python version: 3.8.18
- Huggingface_hub version: 0.16.2
- PyArrow version: 13.0.0
- Pandas version: 2.0.2 | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6349/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6349/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6348 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6348/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6348/comments | https://api.github.com/repos/huggingface/datasets/issues/6348/events | https://github.com/huggingface/datasets/issues/6348 | 1,961,268,504 | I_kwDODunzps505pUY | 6,348 | Parquet stream-conversion fails to embed images/audio files from gated repos | {
"login": "severo",
"id": 1676121,
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/severo",
"html_url": "https://github.com/severo",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"organizations_url": "https://api.github.com/users/severo/orgs",
"repos_url": "https://api.github.com/users/severo/repos",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"received_events_url": "https://api.github.com/users/severo/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | open | false | null | [] | null | 0 | 2023-10-25T12:12:44 | 2023-10-25T12:13:07 | null | COLLABORATOR | null | it seems to be an issue with datasets not passing the token to embed_table_storage when generating a dataset
See https://github.com/huggingface/datasets-server/issues/2010 | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6348/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6348/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6347 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6347/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6347/comments | https://api.github.com/repos/huggingface/datasets/issues/6347/events | https://github.com/huggingface/datasets/issues/6347 | 1,959,004,835 | I_kwDODunzps50xAqj | 6,347 | Incorrect example code in 'Create a dataset' docs | {
"login": "rwood-97",
"id": 72076688,
"node_id": "MDQ6VXNlcjcyMDc2Njg4",
"avatar_url": "https://avatars.githubusercontent.com/u/72076688?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rwood-97",
"html_url": "https://github.com/rwood-97",
"followers_url": "https://api.github.com/users/rwood-97/followers",
"following_url": "https://api.github.com/users/rwood-97/following{/other_user}",
"gists_url": "https://api.github.com/users/rwood-97/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rwood-97/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rwood-97/subscriptions",
"organizations_url": "https://api.github.com/users/rwood-97/orgs",
"repos_url": "https://api.github.com/users/rwood-97/repos",
"events_url": "https://api.github.com/users/rwood-97/events{/privacy}",
"received_events_url": "https://api.github.com/users/rwood-97/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | 2 | 2023-10-24T11:01:21 | 2023-10-25T13:05:21 | 2023-10-25T13:05:21 | NONE | null | ### Describe the bug
On [this](https://huggingface.co/docs/datasets/create_dataset) page, the example code for loading in images and audio is incorrect.
Currently, examples are:
``` python
from datasets import ImageFolder
dataset = load_dataset("imagefolder", data_dir="/path/to/pokemon")
```
and
``` python
from datasets import AudioFolder
dataset = load_dataset("audiofolder", data_dir="/path/to/folder")
```
I'm pretty sure the imports are wrong (`ImageFolder`/`AudioFolder` are not importable from `datasets`) and both examples should use `load_dataset` directly:
``` python
from datasets import load_dataset
dataset = load_dataset("audiofolder", data_dir="/path/to/folder")
```
The same fix applies to the image example (`dataset = load_dataset("imagefolder", data_dir="/path/to/pokemon")`). I am happy to update this if it's right, but I wanted to check before making any changes.
### Steps to reproduce the bug
Go to https://huggingface.co/docs/datasets/create_dataset
### Expected behavior
N/A
### Environment info
N/A | {
"login": "rwood-97",
"id": 72076688,
"node_id": "MDQ6VXNlcjcyMDc2Njg4",
"avatar_url": "https://avatars.githubusercontent.com/u/72076688?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rwood-97",
"html_url": "https://github.com/rwood-97",
"followers_url": "https://api.github.com/users/rwood-97/followers",
"following_url": "https://api.github.com/users/rwood-97/following{/other_user}",
"gists_url": "https://api.github.com/users/rwood-97/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rwood-97/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rwood-97/subscriptions",
"organizations_url": "https://api.github.com/users/rwood-97/orgs",
"repos_url": "https://api.github.com/users/rwood-97/repos",
"events_url": "https://api.github.com/users/rwood-97/events{/privacy}",
"received_events_url": "https://api.github.com/users/rwood-97/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6347/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6347/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6346 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6346/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6346/comments | https://api.github.com/repos/huggingface/datasets/issues/6346/events | https://github.com/huggingface/datasets/pull/6346 | 1,958,777,076 | PR_kwDODunzps5dnZM_ | 6,346 | Fix UnboundLocalError if preprocessing returns an empty list | {
"login": "cwallenwein",
"id": 40916592,
"node_id": "MDQ6VXNlcjQwOTE2NTky",
"avatar_url": "https://avatars.githubusercontent.com/u/40916592?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cwallenwein",
"html_url": "https://github.com/cwallenwein",
"followers_url": "https://api.github.com/users/cwallenwein/followers",
"following_url": "https://api.github.com/users/cwallenwein/following{/other_user}",
"gists_url": "https://api.github.com/users/cwallenwein/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cwallenwein/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cwallenwein/subscriptions",
"organizations_url": "https://api.github.com/users/cwallenwein/orgs",
"repos_url": "https://api.github.com/users/cwallenwein/repos",
"events_url": "https://api.github.com/users/cwallenwein/events{/privacy}",
"received_events_url": "https://api.github.com/users/cwallenwein/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | 2 | 2023-10-24T08:38:43 | 2023-10-25T17:39:17 | 2023-10-25T16:36:38 | CONTRIBUTOR | null | If this tokenization function is used with IterableDatasets and no sample is as big as the context length, `input_batch` will be an empty list.
```python
def tokenize(batch, tokenizer, context_length):
    outputs = tokenizer(
        batch["text"],
        truncation=True,
        max_length=context_length,
        return_overflowing_tokens=True,
        return_length=True,
    )
    input_batch = []
    for length, input_ids in zip(outputs["length"], outputs["input_ids"]):
        if length == context_length:
            input_batch.append(input_ids)
    return {"input_ids": input_batch}

dataset.map(
    tokenize,
    batched=True,
    batch_size=batch_size,
    fn_kwargs={"context_length": context_length, "tokenizer": tokenizer},
    remove_columns=dataset.column_names,
)
```
This will throw `UnboundLocalError: local variable 'batch_idx' referenced before assignment`, because the `for` loop is not executed a single time:
```python
for batch_idx, example in enumerate(_batch_to_examples(transformed_batch)):
    yield new_key, example
current_idx += batch_idx + 1
```
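The failure mode in isolation, independent of `datasets` (a minimal sketch):
```python
def consume(examples):
    for batch_idx, example in enumerate(examples):
        pass
    # if `examples` is empty, the loop body never runs, `batch_idx` is never
    # bound, and the next line raises UnboundLocalError
    return batch_idx + 1

consume([])  # UnboundLocalError: local variable 'batch_idx' referenced before assignment
```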
Some possible solutions:
```python
for batch_idx, example in enumerate(_batch_to_examples(transformed_batch)):
    yield new_key, example
try:
    current_idx += batch_idx + 1
except UnboundLocalError:
    current_idx += 1
```
or
```python
batch_idx = 0
for batch_idx, example in enumerate(_batch_to_examples(transformed_batch)):
    yield new_key, example
current_idx += batch_idx + 1
``` | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6346/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6346/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6346",
"html_url": "https://github.com/huggingface/datasets/pull/6346",
"diff_url": "https://github.com/huggingface/datasets/pull/6346.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6346.patch",
"merged_at": "2023-10-25T16:36:38"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/6345 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6345/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6345/comments | https://api.github.com/repos/huggingface/datasets/issues/6345/events | https://github.com/huggingface/datasets/issues/6345 | 1,957,707,870 | I_kwDODunzps50sEBe | 6,345 | support squad structure datasets using a YAML parameter | {
"login": "MajdTannous1",
"id": 138524319,
"node_id": "U_kgDOCEG2nw",
"avatar_url": "https://avatars.githubusercontent.com/u/138524319?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MajdTannous1",
"html_url": "https://github.com/MajdTannous1",
"followers_url": "https://api.github.com/users/MajdTannous1/followers",
"following_url": "https://api.github.com/users/MajdTannous1/following{/other_user}",
"gists_url": "https://api.github.com/users/MajdTannous1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MajdTannous1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MajdTannous1/subscriptions",
"organizations_url": "https://api.github.com/users/MajdTannous1/orgs",
"repos_url": "https://api.github.com/users/MajdTannous1/repos",
"events_url": "https://api.github.com/users/MajdTannous1/events{/privacy}",
"received_events_url": "https://api.github.com/users/MajdTannous1/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | null | [] | null | 0 | 2023-10-23T17:55:37 | 2023-10-23T17:55:37 | null | NONE | null | ### Feature request
Since the SQuAD structure is widely used, I think it would be beneficial to support it via a YAML parameter.
Could you implement automatic loading of SQuAD-like data in the SQuAD JSON format, so that it can be read from JSON files and viewed in the correct SQuAD structure?
The dataset structure should be like this:
https://huggingface.co/datasets/squad
Columns: `id`, `title`, `context`, `question`, `answers`
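For context, the closest no-script option today is the JSON builder's `field` argument, which reads SQuAD's top-level `data` field but keeps the paragraphs/qas nesting instead of flattening it into the columns above (file names here are placeholders):
```python
from datasets import load_dataset

ds = load_dataset(
    "json",
    data_files={"train": "train.json", "validation": "dev.json"},
    field="data",  # read the top-level "data" field of SQuAD-style JSON
)
```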
### Motivation
Without native support for this structure, a dataset repo needs a loading script, i.e. arbitrary Python code execution.
### Your contribution
The dataset structure should be like this:
https://huggingface.co/datasets/squad
Columns: `id`, `title`, `context`, `question`, `answers`
train and dev sets in squad structure JSON files | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6345/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
} | https://api.github.com/repos/huggingface/datasets/issues/6345/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6344 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6344/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6344/comments | https://api.github.com/repos/huggingface/datasets/issues/6344/events | https://github.com/huggingface/datasets/pull/6344 | 1,957,412,169 | PR_kwDODunzps5diyd5 | 6,344 | set dev version | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | 3 | 2023-10-23T15:13:28 | 2023-10-23T15:24:31 | 2023-10-23T15:13:38 | MEMBER | null | null | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6344/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6344/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6344",
"html_url": "https://github.com/huggingface/datasets/pull/6344",
"diff_url": "https://github.com/huggingface/datasets/pull/6344.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6344.patch",
"merged_at": "2023-10-23T15:13:38"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/6343 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6343/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6343/comments | https://api.github.com/repos/huggingface/datasets/issues/6343/events | https://github.com/huggingface/datasets/pull/6343 | 1,957,370,711 | PR_kwDODunzps5dipeb | 6,343 | Remove unused argument in `_get_data_files_patterns` | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | 3 | 2023-10-23T14:54:18 | 2023-11-16T09:09:42 | 2023-11-16T09:03:39 | MEMBER | null | null | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6343/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6343/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6343",
"html_url": "https://github.com/huggingface/datasets/pull/6343",
"diff_url": "https://github.com/huggingface/datasets/pull/6343.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6343.patch",
"merged_at": "2023-11-16T09:03:39"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/6342 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6342/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6342/comments | https://api.github.com/repos/huggingface/datasets/issues/6342/events | https://github.com/huggingface/datasets/pull/6342 | 1,957,344,445 | PR_kwDODunzps5dijxt | 6,342 | Release: 2.14.6 | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | 5 | 2023-10-23T14:43:26 | 2023-10-23T15:21:54 | 2023-10-23T15:07:25 | MEMBER | null | null | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6342/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6342/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6342",
"html_url": "https://github.com/huggingface/datasets/pull/6342",
"diff_url": "https://github.com/huggingface/datasets/pull/6342.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6342.patch",
"merged_at": "2023-10-23T15:07:25"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/6340 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6340/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6340/comments | https://api.github.com/repos/huggingface/datasets/issues/6340/events | https://github.com/huggingface/datasets/pull/6340 | 1,956,917,893 | PR_kwDODunzps5dhGpW | 6,340 | Release 2.14.5 | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | 1 | 2023-10-23T11:10:22 | 2023-10-23T14:20:46 | 2023-10-23T11:12:40 | MEMBER | null | (wrong release number - I was continuing the 2.14 branch but 2.14.5 was released from `main`) | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6340/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6340/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6340",
"html_url": "https://github.com/huggingface/datasets/pull/6340",
"diff_url": "https://github.com/huggingface/datasets/pull/6340.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6340.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/6339 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6339/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6339/comments | https://api.github.com/repos/huggingface/datasets/issues/6339/events | https://github.com/huggingface/datasets/pull/6339 | 1,956,912,627 | PR_kwDODunzps5dhFfg | 6,339 | minor release step improvement | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | 3 | 2023-10-23T11:07:04 | 2023-11-07T10:38:54 | 2023-11-07T10:32:41 | MEMBER | null | null | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6339/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6339/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6339",
"html_url": "https://github.com/huggingface/datasets/pull/6339",
"diff_url": "https://github.com/huggingface/datasets/pull/6339.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6339.patch",
"merged_at": "2023-11-07T10:32:41"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/6338 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6338/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6338/comments | https://api.github.com/repos/huggingface/datasets/issues/6338/events | https://github.com/huggingface/datasets/pull/6338 | 1,956,886,072 | PR_kwDODunzps5dg_sb | 6,338 | pin fsspec before it switches to glob.glob | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | 2 | 2023-10-23T10:50:54 | 2024-01-11T06:32:56 | 2023-10-23T10:51:52 | MEMBER | null | null | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6338/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6338/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6338",
"html_url": "https://github.com/huggingface/datasets/pull/6338",
"diff_url": "https://github.com/huggingface/datasets/pull/6338.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6338.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/6337 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6337/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6337/comments | https://api.github.com/repos/huggingface/datasets/issues/6337/events | https://github.com/huggingface/datasets/pull/6337 | 1,956,875,259 | PR_kwDODunzps5dg9Uu | 6,337 | Pin supported upper version of fsspec | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | 6 | 2023-10-23T10:44:16 | 2023-10-23T12:13:20 | 2023-10-23T12:04:36 | MEMBER | null | Pin upper version of `fsspec` to avoid disruptions introduced by breaking changes (and the need of urgent patch releases with hotfixes) on each release on their side. See:
- #6331
- #6210
- #5731
- #5617
- #5447
I propose that we explicitly test, fix, and support each new `fsspec` release.
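Concretely, this would mean an upper bound on the requirement in `setup.py`, along these lines (the bounds shown are illustrative, to be bumped with each supported release):
```python
# in setup.py; version bounds are illustrative
"fsspec[http]>=2023.1.0,<=2023.10.0",
```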
CC: @LysandreJik | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6337/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6337/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6337",
"html_url": "https://github.com/huggingface/datasets/pull/6337",
"diff_url": "https://github.com/huggingface/datasets/pull/6337.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6337.patch",
"merged_at": "2023-10-23T12:04:36"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/6336 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6336/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6336/comments | https://api.github.com/repos/huggingface/datasets/issues/6336/events | https://github.com/huggingface/datasets/pull/6336 | 1,956,827,232 | PR_kwDODunzps5dgy0w | 6,336 | unpin-fsspec | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | 3 | 2023-10-23T10:16:46 | 2024-02-07T12:41:35 | 2023-10-23T10:17:48 | MEMBER | null | Close #6333. | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6336/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6336/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6336",
"html_url": "https://github.com/huggingface/datasets/pull/6336",
"diff_url": "https://github.com/huggingface/datasets/pull/6336.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6336.patch",
"merged_at": "2023-10-23T10:17:48"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/6335 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6335/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6335/comments | https://api.github.com/repos/huggingface/datasets/issues/6335/events | https://github.com/huggingface/datasets/pull/6335 | 1,956,740,818 | PR_kwDODunzps5dggIV | 6,335 | Support fsspec 2023.10.0 | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | 7 | 2023-10-23T09:29:17 | 2024-01-11T06:33:35 | 2023-11-14T14:17:40 | MEMBER | null | Fix #6333. | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6335/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6335/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6335",
"html_url": "https://github.com/huggingface/datasets/pull/6335",
"diff_url": "https://github.com/huggingface/datasets/pull/6335.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6335.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/6334 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6334/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6334/comments | https://api.github.com/repos/huggingface/datasets/issues/6334/events | https://github.com/huggingface/datasets/pull/6334 | 1,956,719,774 | PR_kwDODunzps5dgbpR | 6,334 | datasets.filesystems: fix is_remote_filesystems | {
"login": "ap--",
"id": 1463443,
"node_id": "MDQ6VXNlcjE0NjM0NDM=",
"avatar_url": "https://avatars.githubusercontent.com/u/1463443?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ap--",
"html_url": "https://github.com/ap--",
"followers_url": "https://api.github.com/users/ap--/followers",
"following_url": "https://api.github.com/users/ap--/following{/other_user}",
"gists_url": "https://api.github.com/users/ap--/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ap--/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ap--/subscriptions",
"organizations_url": "https://api.github.com/users/ap--/orgs",
"repos_url": "https://api.github.com/users/ap--/repos",
"events_url": "https://api.github.com/users/ap--/events{/privacy}",
"received_events_url": "https://api.github.com/users/ap--/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | 3 | 2023-10-23T09:17:54 | 2024-02-07T12:41:15 | 2023-10-23T10:14:10 | CONTRIBUTOR | null | Close #6330, close #6333.
`fsspec.implementations.local.LocalFileSystem.protocol`
was changed from the `str` `"file"` to the `tuple[str, ...]` `("file", "local")` in `fsspec>=2023.10.0`.
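A minimal sketch of a check that tolerates both protocol styles (a hypothetical helper, not the PR's actual code):

```python
# Hypothetical helper, assuming `fs` is an fsspec AbstractFileSystem instance.
def is_local_filesystem(fs) -> bool:
    protocol = fs.protocol  # str in fsspec<2023.10.0, tuple like ("file", "local") afterwards
    protocols = (protocol,) if isinstance(protocol, str) else protocol
    return "file" in protocols
```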
This commit supports both styles. | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6334/reactions",
"total_count": 2,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 2,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6334/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6334",
"html_url": "https://github.com/huggingface/datasets/pull/6334",
"diff_url": "https://github.com/huggingface/datasets/pull/6334.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6334.patch",
"merged_at": "2023-10-23T10:14:10"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/6333 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6333/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6333/comments | https://api.github.com/repos/huggingface/datasets/issues/6333/events | https://github.com/huggingface/datasets/issues/6333 | 1,956,714,423 | I_kwDODunzps50oRe3 | 6,333 | Support fsspec 2023.10.0 | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 4 | 2023-10-23T09:14:53 | 2024-02-07T12:39:58 | 2024-02-07T12:39:58 | MEMBER | null | Once the root issue is fixed, remove the temporary pin of fsspec < 2023.10.0 introduced by:
- #6331
Related to issue:
- #6330
As @ZachNagengast suggested, the issue might be related to:
- https://github.com/fsspec/filesystem_spec/pull/1381 | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6333/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6333/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6332 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6332/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6332/comments | https://api.github.com/repos/huggingface/datasets/issues/6332/events | https://github.com/huggingface/datasets/pull/6332 | 1,956,697,328 | PR_kwDODunzps5dgW3w | 6,332 | Replace deprecated license_file in setup.cfg | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | 4 | 2023-10-23T09:05:26 | 2023-11-07T08:23:10 | 2023-11-07T08:09:06 | MEMBER | null | Replace deprecated license_file in `setup.cfg`.
See: https://github.com/huggingface/datasets/actions/runs/6610930650/job/17953825724?pr=6331
```
/tmp/pip-build-env-a51hls20/overlay/lib/python3.8/site-packages/setuptools/config/setupcfg.py:293: _DeprecatedConfig: Deprecated config in `setup.cfg`
!!
********************************************************************************
The license_file parameter is deprecated, use license_files instead.
By 2023-Oct-30, you need to update your project and remove deprecated calls
or your builds will no longer be supported.
See https://setuptools.pypa.io/en/latest/userguide/declarative_config.html for details.
********************************************************************************
!!
``` | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6332/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6332/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6332",
"html_url": "https://github.com/huggingface/datasets/pull/6332",
"diff_url": "https://github.com/huggingface/datasets/pull/6332.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6332.patch",
"merged_at": "2023-11-07T08:09:06"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/6331 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6331/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6331/comments | https://api.github.com/repos/huggingface/datasets/issues/6331/events | https://github.com/huggingface/datasets/pull/6331 | 1,956,671,256 | PR_kwDODunzps5dgRQt | 6,331 | Temporarily pin fsspec < 2023.10.0 | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | 3 | 2023-10-23T08:51:50 | 2023-10-23T09:26:42 | 2023-10-23T09:17:55 | MEMBER | null | Temporarily pin fsspec < 2023.10.0 until a permanent solution is found.
Hot fix #6330.
See: https://github.com/huggingface/datasets/actions/runs/6610904287/job/17953774987
```
...
ERROR tests/test_iterable_dataset.py::test_iterable_dataset_from_file - NotImplementedError: Loading a dataset cached in a LocalFileSystem is not supported.
= 373 failed, 2055 passed, 17 skipped, 8 warnings, 6 errors in 228.14s (0:03:48) =
``` | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6331/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6331/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6331",
"html_url": "https://github.com/huggingface/datasets/pull/6331",
"diff_url": "https://github.com/huggingface/datasets/pull/6331.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6331.patch",
"merged_at": "2023-10-23T09:17:55"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/6330 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6330/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6330/comments | https://api.github.com/repos/huggingface/datasets/issues/6330/events | https://github.com/huggingface/datasets/issues/6330 | 1,956,053,294 | I_kwDODunzps50lwEu | 6,330 | Latest fsspec==2023.10.0 issue with streaming datasets | {
"login": "ZachNagengast",
"id": 1981179,
"node_id": "MDQ6VXNlcjE5ODExNzk=",
"avatar_url": "https://avatars.githubusercontent.com/u/1981179?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ZachNagengast",
"html_url": "https://github.com/ZachNagengast",
"followers_url": "https://api.github.com/users/ZachNagengast/followers",
"following_url": "https://api.github.com/users/ZachNagengast/following{/other_user}",
"gists_url": "https://api.github.com/users/ZachNagengast/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ZachNagengast/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ZachNagengast/subscriptions",
"organizations_url": "https://api.github.com/users/ZachNagengast/orgs",
"repos_url": "https://api.github.com/users/ZachNagengast/repos",
"events_url": "https://api.github.com/users/ZachNagengast/events{/privacy}",
"received_events_url": "https://api.github.com/users/ZachNagengast/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 8 | 2023-10-22T20:57:10 | 2024-05-08T00:18:39 | 2023-10-23T09:17:56 | CONTRIBUTOR | null | ### Describe the bug
Loading a streaming dataset with this version of fsspec fails with the following error:
`NotImplementedError: Loading a streaming dataset cached in a LocalFileSystem is not supported yet.`
I suspect the issue is with this PR
https://github.com/fsspec/filesystem_spec/pull/1381
### Steps to reproduce the bug
1. Upgrade fsspec to version `2023.10.0`
2. Attempt to load a streaming dataset e.g. `load_dataset("laion/gpt4v-emotion-dataset", split="train", streaming=True)`
3. Observe the following exception:
```
File "/opt/hostedtoolcache/Python/3.11.6/x64/lib/python3.11/site-packages/datasets/load.py", line 2146, in load_dataset
return builder_instance.as_streaming_dataset(split=split)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/hostedtoolcache/Python/3.11.6/x64/lib/python3.11/site-packages/datasets/builder.py", line 1318, in as_streaming_dataset
raise NotImplementedError(
NotImplementedError: Loading a streaming dataset cached in a LocalFileSystem is not supported yet.
```
### Expected behavior
Should stream the dataset as normal.
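Until the root cause is fixed, a hedged workaround sketch is to pin fsspec below the breaking release and stream as before (assuming the regression indeed comes from fsspec 2023.10.0):

```python
# Workaround sketch: first run `pip install "fsspec<2023.10.0"`, then stream as usual.
from datasets import load_dataset

dataset = load_dataset("laion/gpt4v-emotion-dataset", split="train", streaming=True)
print(next(iter(dataset)))
```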
### Environment info
datasets@main
fsspec==2023.10.0 | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6330/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6330/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6329 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6329/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6329/comments | https://api.github.com/repos/huggingface/datasets/issues/6329/events | https://github.com/huggingface/datasets/issues/6329 | 1,955,858,020 | I_kwDODunzps50lAZk | 6,329 | Text-to-speech networks first convert the given text into an intermediate representation | {
"login": "shabnam706",
"id": 147399213,
"node_id": "U_kgDOCMkiLQ",
"avatar_url": "https://avatars.githubusercontent.com/u/147399213?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/shabnam706",
"html_url": "https://github.com/shabnam706",
"followers_url": "https://api.github.com/users/shabnam706/followers",
"following_url": "https://api.github.com/users/shabnam706/following{/other_user}",
"gists_url": "https://api.github.com/users/shabnam706/gists{/gist_id}",
"starred_url": "https://api.github.com/users/shabnam706/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/shabnam706/subscriptions",
"organizations_url": "https://api.github.com/users/shabnam706/orgs",
"repos_url": "https://api.github.com/users/shabnam706/repos",
"events_url": "https://api.github.com/users/shabnam706/events{/privacy}",
"received_events_url": "https://api.github.com/users/shabnam706/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | 0 | 2023-10-22T11:07:46 | 2023-10-23T09:22:58 | 2023-10-23T09:22:58 | NONE | null | Text-to-speech networks first convert the given text into an intermediate representation
| {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6329/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6329/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6328 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6328/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6328/comments | https://api.github.com/repos/huggingface/datasets/issues/6328/events | https://github.com/huggingface/datasets/issues/6328 | 1,955,857,904 | I_kwDODunzps50lAXw | 6,328 | Text-to-speech networks first convert the given text into an intermediate representation | {
"login": "shabnam706",
"id": 147399213,
"node_id": "U_kgDOCMkiLQ",
"avatar_url": "https://avatars.githubusercontent.com/u/147399213?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/shabnam706",
"html_url": "https://github.com/shabnam706",
"followers_url": "https://api.github.com/users/shabnam706/followers",
"following_url": "https://api.github.com/users/shabnam706/following{/other_user}",
"gists_url": "https://api.github.com/users/shabnam706/gists{/gist_id}",
"starred_url": "https://api.github.com/users/shabnam706/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/shabnam706/subscriptions",
"organizations_url": "https://api.github.com/users/shabnam706/orgs",
"repos_url": "https://api.github.com/users/shabnam706/repos",
"events_url": "https://api.github.com/users/shabnam706/events{/privacy}",
"received_events_url": "https://api.github.com/users/shabnam706/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | 1 | 2023-10-22T11:07:21 | 2023-10-23T09:22:38 | 2023-10-23T09:22:38 | NONE | null | null | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6328/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6328/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6327 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6327/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6327/comments | https://api.github.com/repos/huggingface/datasets/issues/6327/events | https://github.com/huggingface/datasets/issues/6327 | 1,955,470,755 | I_kwDODunzps50jh2j | 6,327 | FileNotFoundError when trying to load the downloaded dataset with `load_dataset(..., streaming=True)` | {
"login": "yzhangcs",
"id": 18402347,
"node_id": "MDQ6VXNlcjE4NDAyMzQ3",
"avatar_url": "https://avatars.githubusercontent.com/u/18402347?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yzhangcs",
"html_url": "https://github.com/yzhangcs",
"followers_url": "https://api.github.com/users/yzhangcs/followers",
"following_url": "https://api.github.com/users/yzhangcs/following{/other_user}",
"gists_url": "https://api.github.com/users/yzhangcs/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yzhangcs/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yzhangcs/subscriptions",
"organizations_url": "https://api.github.com/users/yzhangcs/orgs",
"repos_url": "https://api.github.com/users/yzhangcs/repos",
"events_url": "https://api.github.com/users/yzhangcs/events{/privacy}",
"received_events_url": "https://api.github.com/users/yzhangcs/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | 3 | 2023-10-21T12:27:03 | 2023-10-23T18:50:07 | 2023-10-23T18:50:07 | CONTRIBUTOR | null | ### Describe the bug
Hi, I'm trying to load the dataset `togethercomputer/RedPajama-Data-1T-Sample` with `load_dataset` in streaming mode, i.e., `streaming=True`, but a `FileNotFoundError` occurs.
### Steps to reproduce the bug
I've downloaded the dataset and saved it to the cache dir in advance. My hope is to load the files in an offline environment, without spending hours preprocessing the entire dataset before starting the training process.
So I tried the following code to load the files in streaming mode:
```py
dataset = load_dataset('togethercomputer/RedPajama-Data-1T-Sample', streaming=True)
print(next(iter(dataset['train'])))
```
Sadly, it raises the following:
```
FileNotFoundError: [Errno 2] No such file or directory: 'CURRENT_CODE_PATH/arxiv_sample.jsonl'
```
I've noticed that the dataset can be properly found at the beginning:
```
Using the latest cached version of the module from /root/.cache/huggingface/modules/datasets_modules/datasets/togethercomputer--RedPajama-Data-1T-Sample/6ea3bc8ec2e84ec6d2df1930942e9028ace8c5b9d9143823cf911c50bbd92039 (last modified on Sat Oct 21 20:12:57 2023) since it couldn't be found locally at togethercomputer/RedPajama-Data-1T-Sample., or remotely on the Hugging Face Hub.
```
But it seems that the paths couldn't be properly parsed when loading iteratively.
How should I fix this error? I've tried specifying `data_files` or `data_dir` as `.../arxiv_sample.jsonl`, but neither of them works.
Thanks.
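For reference, a hedged workaround sketch that bypasses the loading script and streams the downloaded JSONL files directly — the path below is hypothetical, so substitute the actual cache location of the files:

```python
# Hypothetical workaround: stream the local JSONL files with the generic "json" builder.
from datasets import load_dataset

dataset = load_dataset(
    "json",
    data_files={"train": "/path/to/cache/arxiv_sample.jsonl"},  # hypothetical path
    streaming=True,
)
print(next(iter(dataset["train"])))
```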
### Expected behavior
Properly load the dataset.
### Environment info
`datasets==2.14.5` | {
"login": "yzhangcs",
"id": 18402347,
"node_id": "MDQ6VXNlcjE4NDAyMzQ3",
"avatar_url": "https://avatars.githubusercontent.com/u/18402347?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yzhangcs",
"html_url": "https://github.com/yzhangcs",
"followers_url": "https://api.github.com/users/yzhangcs/followers",
"following_url": "https://api.github.com/users/yzhangcs/following{/other_user}",
"gists_url": "https://api.github.com/users/yzhangcs/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yzhangcs/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yzhangcs/subscriptions",
"organizations_url": "https://api.github.com/users/yzhangcs/orgs",
"repos_url": "https://api.github.com/users/yzhangcs/repos",
"events_url": "https://api.github.com/users/yzhangcs/events{/privacy}",
"received_events_url": "https://api.github.com/users/yzhangcs/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6327/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6327/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6326 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6326/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6326/comments | https://api.github.com/repos/huggingface/datasets/issues/6326/events | https://github.com/huggingface/datasets/pull/6326 | 1,955,420,536 | PR_kwDODunzps5dcSRa | 6,326 | Create battery_analysis.py | {
"login": "vinitkm",
"id": 130216732,
"node_id": "U_kgDOB8LzHA",
"avatar_url": "https://avatars.githubusercontent.com/u/130216732?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vinitkm",
"html_url": "https://github.com/vinitkm",
"followers_url": "https://api.github.com/users/vinitkm/followers",
"following_url": "https://api.github.com/users/vinitkm/following{/other_user}",
"gists_url": "https://api.github.com/users/vinitkm/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vinitkm/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vinitkm/subscriptions",
"organizations_url": "https://api.github.com/users/vinitkm/orgs",
"repos_url": "https://api.github.com/users/vinitkm/repos",
"events_url": "https://api.github.com/users/vinitkm/events{/privacy}",
"received_events_url": "https://api.github.com/users/vinitkm/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | 0 | 2023-10-21T10:07:48 | 2023-10-23T14:56:20 | 2023-10-23T14:56:20 | NONE | null | null | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6326/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6326/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6326",
"html_url": "https://github.com/huggingface/datasets/pull/6326",
"diff_url": "https://github.com/huggingface/datasets/pull/6326.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6326.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/6325 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6325/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6325/comments | https://api.github.com/repos/huggingface/datasets/issues/6325/events | https://github.com/huggingface/datasets/pull/6325 | 1,955,420,178 | PR_kwDODunzps5dcSM3 | 6,325 | Create battery_analysis.py | {
"login": "vinitkm",
"id": 130216732,
"node_id": "U_kgDOB8LzHA",
"avatar_url": "https://avatars.githubusercontent.com/u/130216732?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vinitkm",
"html_url": "https://github.com/vinitkm",
"followers_url": "https://api.github.com/users/vinitkm/followers",
"following_url": "https://api.github.com/users/vinitkm/following{/other_user}",
"gists_url": "https://api.github.com/users/vinitkm/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vinitkm/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vinitkm/subscriptions",
"organizations_url": "https://api.github.com/users/vinitkm/orgs",
"repos_url": "https://api.github.com/users/vinitkm/repos",
"events_url": "https://api.github.com/users/vinitkm/events{/privacy}",
"received_events_url": "https://api.github.com/users/vinitkm/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | 0 | 2023-10-21T10:06:37 | 2023-10-23T14:55:58 | 2023-10-23T14:55:58 | NONE | null | null | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6325/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6325/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6325",
"html_url": "https://github.com/huggingface/datasets/pull/6325",
"diff_url": "https://github.com/huggingface/datasets/pull/6325.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6325.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/6324 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6324/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6324/comments | https://api.github.com/repos/huggingface/datasets/issues/6324/events | https://github.com/huggingface/datasets/issues/6324 | 1,955,126,687 | I_kwDODunzps50iN2f | 6,324 | Conversion to Arrow fails due to wrong type heuristic | {
"login": "jphme",
"id": 2862336,
"node_id": "MDQ6VXNlcjI4NjIzMzY=",
"avatar_url": "https://avatars.githubusercontent.com/u/2862336?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jphme",
"html_url": "https://github.com/jphme",
"followers_url": "https://api.github.com/users/jphme/followers",
"following_url": "https://api.github.com/users/jphme/following{/other_user}",
"gists_url": "https://api.github.com/users/jphme/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jphme/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jphme/subscriptions",
"organizations_url": "https://api.github.com/users/jphme/orgs",
"repos_url": "https://api.github.com/users/jphme/repos",
"events_url": "https://api.github.com/users/jphme/events{/privacy}",
"received_events_url": "https://api.github.com/users/jphme/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | 2 | 2023-10-20T23:20:58 | 2023-10-23T20:52:57 | 2023-10-23T20:52:57 | NONE | null | ### Describe the bug
I have a list of dictionaries with valid/JSON-serializable values.
One key holds the denominator for a paragraph. In 99.9% of cases it's a number, but there are some occurrences of '1a', '2b' and so on.
When trying to convert this list to a dataset with `Dataset.from_list()`, I always get
`ArrowInvalid: Could not convert '1' with type str: tried to convert to int64`, presumably because pyarrow tries to convert the values of that key to integers.
Is there any way to circumvent this and fix dtypes? I didn't find anything in the documentation.
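For what it's worth, one possible workaround sketch is to declare the column type explicitly so the heuristic never runs; the column name `denominator` below is an assumption, since the issue doesn't name the key:

```python
# Sketch: force the ambiguous column to string via explicit Features.
from datasets import Dataset, Features, Value

data = [{"denominator": "1"}, {"denominator": "2"}, {"denominator": "1a"}]
features = Features({"denominator": Value("string")})
ds = Dataset.from_list(data, features=features)  # no int64 inference, no ArrowInvalid
```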
### Steps to reproduce the bug
* create a list of dicts with one key being a string of an integer for the first few thousand occurrences and try to convert it to a dataset.
### Expected behavior
There shouldn't be an error (e.g. some flag to turn off automatic str to numeric conversion).
### Environment info
- `datasets` version: 2.14.5
- Platform: Linux-5.15.0-84-generic-x86_64-with-glibc2.35
- Python version: 3.9.18
- Huggingface_hub version: 0.17.3
- PyArrow version: 13.0.0
- Pandas version: 2.1.1 | {
"login": "jphme",
"id": 2862336,
"node_id": "MDQ6VXNlcjI4NjIzMzY=",
"avatar_url": "https://avatars.githubusercontent.com/u/2862336?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jphme",
"html_url": "https://github.com/jphme",
"followers_url": "https://api.github.com/users/jphme/followers",
"following_url": "https://api.github.com/users/jphme/following{/other_user}",
"gists_url": "https://api.github.com/users/jphme/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jphme/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jphme/subscriptions",
"organizations_url": "https://api.github.com/users/jphme/orgs",
"repos_url": "https://api.github.com/users/jphme/repos",
"events_url": "https://api.github.com/users/jphme/events{/privacy}",
"received_events_url": "https://api.github.com/users/jphme/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6324/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6324/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6323 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6323/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6323/comments | https://api.github.com/repos/huggingface/datasets/issues/6323/events | https://github.com/huggingface/datasets/issues/6323 | 1,954,245,980 | I_kwDODunzps50e21c | 6,323 | Loading dataset from large GCS bucket very slow since 2.14 | {
"login": "jbcdnr",
"id": 6209990,
"node_id": "MDQ6VXNlcjYyMDk5OTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/6209990?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jbcdnr",
"html_url": "https://github.com/jbcdnr",
"followers_url": "https://api.github.com/users/jbcdnr/followers",
"following_url": "https://api.github.com/users/jbcdnr/following{/other_user}",
"gists_url": "https://api.github.com/users/jbcdnr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jbcdnr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jbcdnr/subscriptions",
"organizations_url": "https://api.github.com/users/jbcdnr/orgs",
"repos_url": "https://api.github.com/users/jbcdnr/repos",
"events_url": "https://api.github.com/users/jbcdnr/events{/privacy}",
"received_events_url": "https://api.github.com/users/jbcdnr/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | open | false | null | [] | null | 1 | 2023-10-20T12:59:55 | 2024-09-03T18:42:33 | null | NONE | null | ### Describe the bug
Since updating to >2.14, loading a dataset from our parquet files on GCS has become very slow (>30 min vs. 3 s). Our GCS bucket has many objects, and resolving globs is very slow. I was able to track the problem down to this change:
https://github.com/huggingface/datasets/blame/bade7af74437347a760830466eb74f7a8ce0d799/src/datasets/data_files.py#L348
The underlying implementation with gcsfs is really slow. Could you go back to the old behavior when we simply provide the parquet files and no glob pattern?
Thank you.
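As a stopgap, a hedged sketch of sidestepping glob resolution by listing the files once and passing them explicitly (the bucket and prefix are hypothetical):

```python
# Hypothetical workaround: enumerate the parquet files yourself, so no glob is resolved.
import gcsfs
from datasets import load_dataset

fs = gcsfs.GCSFileSystem()
files = ["gs://" + p for p in fs.ls("my-bucket/my-dataset")]  # hypothetical bucket/prefix
ds = load_dataset("parquet", data_files={"train": files})
```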
### Steps to reproduce the bug
Load a dataset from a GCS bucket that has many files.
### Expected behavior
Loading used to be fast (~3 s) in 2.13.
### Environment info
datasets==2.14.5 | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6323/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6323/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6322 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6322/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6322/comments | https://api.github.com/repos/huggingface/datasets/issues/6322/events | https://github.com/huggingface/datasets/pull/6322 | 1,952,947,461 | PR_kwDODunzps5dT5vG | 6,322 | Fix regex `get_data_files` formatting for base paths | {
"login": "ZachNagengast",
"id": 1981179,
"node_id": "MDQ6VXNlcjE5ODExNzk=",
"avatar_url": "https://avatars.githubusercontent.com/u/1981179?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ZachNagengast",
"html_url": "https://github.com/ZachNagengast",
"followers_url": "https://api.github.com/users/ZachNagengast/followers",
"following_url": "https://api.github.com/users/ZachNagengast/following{/other_user}",
"gists_url": "https://api.github.com/users/ZachNagengast/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ZachNagengast/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ZachNagengast/subscriptions",
"organizations_url": "https://api.github.com/users/ZachNagengast/orgs",
"repos_url": "https://api.github.com/users/ZachNagengast/repos",
"events_url": "https://api.github.com/users/ZachNagengast/events{/privacy}",
"received_events_url": "https://api.github.com/users/ZachNagengast/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | 4 | 2023-10-19T19:45:10 | 2023-10-23T14:40:45 | 2023-10-23T14:31:21 | CONTRIBUTOR | null | With PR https://github.com/huggingface/datasets/pull/6309, the entire base path is formatted into a regex, which results in the undesired `doesn't match the pattern` error because of the `.replace("//", "/")` line in `glob_pattern_to_regex`:
- Input: `hf://datasets/...`
- Output: `hf:/datasets/...`
This fix will only convert the `split_pattern` to regex and keep the `base_path` unchanged.
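A minimal sketch of the failure mode (a hypothetical reproduction, not the library code):

```python
# The "//" collapse applied during regex formatting mangles URI schemes:
base_path = "hf://datasets/user/repo"
print(base_path.replace("//", "/"))  # -> "hf:/datasets/user/repo"
```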
cc @albertvillanova hopefully this still works with your implementation | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6322/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6322/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6322",
"html_url": "https://github.com/huggingface/datasets/pull/6322",
"diff_url": "https://github.com/huggingface/datasets/pull/6322.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6322.patch",
"merged_at": "2023-10-23T14:31:21"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/6321 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6321/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6321/comments | https://api.github.com/repos/huggingface/datasets/issues/6321/events | https://github.com/huggingface/datasets/pull/6321 | 1,952,643,483 | PR_kwDODunzps5dS3Mc | 6,321 | Fix typos | {
"login": "python273",
"id": 3097956,
"node_id": "MDQ6VXNlcjMwOTc5NTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/3097956?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/python273",
"html_url": "https://github.com/python273",
"followers_url": "https://api.github.com/users/python273/followers",
"following_url": "https://api.github.com/users/python273/following{/other_user}",
"gists_url": "https://api.github.com/users/python273/gists{/gist_id}",
"starred_url": "https://api.github.com/users/python273/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/python273/subscriptions",
"organizations_url": "https://api.github.com/users/python273/orgs",
"repos_url": "https://api.github.com/users/python273/repos",
"events_url": "https://api.github.com/users/python273/events{/privacy}",
"received_events_url": "https://api.github.com/users/python273/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | 2 | 2023-10-19T16:24:35 | 2023-10-19T17:18:00 | 2023-10-19T17:07:35 | CONTRIBUTOR | null | null | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6321/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6321/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6321",
"html_url": "https://github.com/huggingface/datasets/pull/6321",
"diff_url": "https://github.com/huggingface/datasets/pull/6321.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6321.patch",
"merged_at": "2023-10-19T17:07:35"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/6320 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6320/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6320/comments | https://api.github.com/repos/huggingface/datasets/issues/6320/events | https://github.com/huggingface/datasets/issues/6320 | 1,952,618,316 | I_kwDODunzps50YpdM | 6,320 | Dataset slice splits can't load training and validation at the same time | {
"login": "timlac",
"id": 32488097,
"node_id": "MDQ6VXNlcjMyNDg4MDk3",
"avatar_url": "https://avatars.githubusercontent.com/u/32488097?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/timlac",
"html_url": "https://github.com/timlac",
"followers_url": "https://api.github.com/users/timlac/followers",
"following_url": "https://api.github.com/users/timlac/following{/other_user}",
"gists_url": "https://api.github.com/users/timlac/gists{/gist_id}",
"starred_url": "https://api.github.com/users/timlac/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/timlac/subscriptions",
"organizations_url": "https://api.github.com/users/timlac/orgs",
"repos_url": "https://api.github.com/users/timlac/repos",
"events_url": "https://api.github.com/users/timlac/events{/privacy}",
"received_events_url": "https://api.github.com/users/timlac/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | 1 | 2023-10-19T16:09:22 | 2023-11-30T16:21:15 | 2023-11-30T16:21:15 | NONE | null | ### Describe the bug
According to the [documentation](https://huggingface.co/docs/datasets/v2.14.5/loading#slice-splits) it should be possible to run the following command:
`train_test_ds = datasets.load_dataset("bookcorpus", split="train+test")`
to load the train and test sets from the dataset.
However, executing the equivalent code:
`speech_commands_v1 = load_dataset("superb", "ks", split="train+test")`
only yields the following output:
> Dataset({
> features: ['file', 'audio', 'label'],
> num_rows: 54175
> })
Where loading the dataset without the split argument yields:
> DatasetDict({
> train: Dataset({
> features: ['file', 'audio', 'label'],
> num_rows: 51094
> })
> validation: Dataset({
> features: ['file', 'audio', 'label'],
> num_rows: 6798
> })
> test: Dataset({
> features: ['file', 'audio', 'label'],
> num_rows: 3081
> })
> })
Thus, the API seems to be broken in this regard.
This is a bit annoying, since I want to be able to use the split argument with `split="train[:10%]+test[:10%]"` to have a smaller dataset to work with while validating that my model works correctly.
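In the meantime, a sketch of the smaller-dataset workflow using per-split slices, which do load correctly when requested as a list (reusing the `superb`/`ks` example above):

```python
# Sketch: request sliced splits separately instead of concatenating them.
from datasets import load_dataset

train_ds, test_ds = load_dataset("superb", "ks", split=["train[:10%]", "test[:10%]"])
```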
### Steps to reproduce the bug
`speech_commands_v1 = load_dataset("superb", "ks", split="train+test")`
### Expected behavior
> DatasetDict({
> train: Dataset({
> features: ['file', 'audio', 'label'],
> num_rows: 51094
> })
> test: Dataset({
> features: ['file', 'audio', 'label'],
> num_rows: 3081
> })
> })
### Environment info
```
import datasets
print(datasets.__version__)
```
> 2.14.5
```
import sys
print(sys.version)
```
> 3.9.17 (main, Jul 5 2023, 20:41:20)
> [GCC 11.2.0] | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6320/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6320/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6319 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6319/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6319/comments | https://api.github.com/repos/huggingface/datasets/issues/6319/events | https://github.com/huggingface/datasets/issues/6319 | 1,952,101,717 | I_kwDODunzps50WrVV | 6,319 | Datasets.map is severely broken | {
"login": "phalexo",
"id": 4603365,
"node_id": "MDQ6VXNlcjQ2MDMzNjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/4603365?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/phalexo",
"html_url": "https://github.com/phalexo",
"followers_url": "https://api.github.com/users/phalexo/followers",
"following_url": "https://api.github.com/users/phalexo/following{/other_user}",
"gists_url": "https://api.github.com/users/phalexo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/phalexo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/phalexo/subscriptions",
"organizations_url": "https://api.github.com/users/phalexo/orgs",
"repos_url": "https://api.github.com/users/phalexo/repos",
"events_url": "https://api.github.com/users/phalexo/events{/privacy}",
"received_events_url": "https://api.github.com/users/phalexo/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | open | false | null | [] | null | 15 | 2023-10-19T12:19:33 | 2024-08-08T17:05:08 | null | NONE | null | ### Describe the bug
Regardless of how many cores I use (I have 16 or 32 threads), `map` slows to a crawl at around 80% done, lingers extremely slowly until maybe 97%, and NEVER finishes the job. It just hangs.
After watching this for 27 hours I Ctrl-C out of it. Until the end, one process appears to be doing something, but it never finishes.
I saw some comments about fast tokenizers using Rust and tried different variations. NOTHING works.
### Steps to reproduce the bug
Running it without breaking the dataset into parts results in the same behavior. The loop was an attempt to see if this was a RAM issue.
```
for idx in range(100):
    dataset = load_dataset("togethercomputer/RedPajama-Data-1T-Sample", cache_dir=cache_dir, split=f'train[{idx}%:{idx+1}%]')
    dataset = dataset.map(partial(tokenize_fn, tokenizer), batched=False, num_proc=1, remove_columns=["text", "meta"])
    dataset.save_to_disk(training_args.cache_dir + f"/training_data_{idx}")
```
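The `tokenize_fn` used above is not included in the report; a plausible minimal stand-in, purely for illustration (the real function may differ):

```
from functools import partial

def tokenize_fn(tokenizer, example):
    # Hypothetical reconstruction: tokenize the raw text field of a sample.
    return tokenizer(example["text"], truncation=True)
```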
### Expected behavior
I expect map to run at more or less the same speed it starts with and FINISH its processing.
### Environment info
Python 3.8 (3.10 makes no difference).
Ubuntu 20.04. | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6319/reactions",
"total_count": 6,
"+1": 6,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6319/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6318 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6318/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6318/comments | https://api.github.com/repos/huggingface/datasets/issues/6318/events | https://github.com/huggingface/datasets/pull/6318 | 1,952,100,706 | PR_kwDODunzps5dRC9V | 6,318 | Deterministic set hash | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | 3 | 2023-10-19T12:19:13 | 2023-10-19T16:27:20 | 2023-10-19T16:16:31 | MEMBER | null | Sort the items in a set according to their `datasets.fingerprint.Hasher.hash` hash to get a deterministic hash of sets.
This is useful to get deterministic hashes of tokenizers that use a trie based on Python sets.
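A minimal sketch of the idea (not the exact patch): hash each element, sort the element hashes, then hash the sorted list, so the result no longer depends on set iteration order.

```
from datasets.fingerprint import Hasher

def deterministic_set_hash(s: set) -> str:
    # Equal sets now always produce the same fingerprint, regardless of
    # the order in which Python iterates over their elements.
    return Hasher.hash(sorted(Hasher.hash(item) for item in s))
```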
reported in https://github.com/huggingface/datasets/issues/3847 | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6318/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6318/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6318",
"html_url": "https://github.com/huggingface/datasets/pull/6318",
"diff_url": "https://github.com/huggingface/datasets/pull/6318.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6318.patch",
"merged_at": "2023-10-19T16:16:31"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/6317 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6317/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6317/comments | https://api.github.com/repos/huggingface/datasets/issues/6317/events | https://github.com/huggingface/datasets/issues/6317 | 1,951,965,668 | I_kwDODunzps50WKHk | 6,317 | sentiment140 dataset unavailable | {
"login": "AndreasKarasenko",
"id": 52670382,
"node_id": "MDQ6VXNlcjUyNjcwMzgy",
"avatar_url": "https://avatars.githubusercontent.com/u/52670382?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/AndreasKarasenko",
"html_url": "https://github.com/AndreasKarasenko",
"followers_url": "https://api.github.com/users/AndreasKarasenko/followers",
"following_url": "https://api.github.com/users/AndreasKarasenko/following{/other_user}",
"gists_url": "https://api.github.com/users/AndreasKarasenko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/AndreasKarasenko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/AndreasKarasenko/subscriptions",
"organizations_url": "https://api.github.com/users/AndreasKarasenko/orgs",
"repos_url": "https://api.github.com/users/AndreasKarasenko/repos",
"events_url": "https://api.github.com/users/AndreasKarasenko/events{/privacy}",
"received_events_url": "https://api.github.com/users/AndreasKarasenko/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 2 | 2023-10-19T11:25:21 | 2023-10-19T13:04:56 | 2023-10-19T13:04:56 | NONE | null | ### Describe the bug
Loading the dataset using `load_dataset("sentiment140")` returns the following error:
ConnectionError: Couldn't reach http://cs.stanford.edu/people/alecmgo/trainingandtestdata.zip (error 403)
### Steps to reproduce the bug
Run the following code (version should not matter).
```
from datasets import load_dataset
data = load_dataset("sentiment140")
```
### Expected behavior
The dataset should be loaded just like any other.
The main issue is that it is no longer hosted by Stanford. It is still available from a [Google Drive link](https://docs.google.com/file/d/0B04GJPshIjmPRnZManQwWEdTZjg/edit).
### Environment info
- `datasets` version: 2.14.5
- Platform: Windows-10-10.0.19045-SP0
- Python version: 3.10.8
- Huggingface_hub version: 0.17.3
- PyArrow version: 13.0.0
- Pandas version: 2.1.1 | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6317/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6317/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6316 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6316/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6316/comments | https://api.github.com/repos/huggingface/datasets/issues/6316/events | https://github.com/huggingface/datasets/pull/6316 | 1,951,819,869 | PR_kwDODunzps5dQGpg | 6,316 | Fix loading Hub datasets with CSV metadata file | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | 4 | 2023-10-19T10:21:34 | 2023-10-20T06:23:21 | 2023-10-20T06:14:09 | MEMBER | null | Currently, the reading of the metadata file infers the file extension (.jsonl or .csv) from the passed filename. However, downloaded files from the Hub don't have a file extension. For example:
- the original file: `hf://datasets/__DUMMY_TRANSFORMERS_USER__/test-dataset-5916a4-16977085077831/metadata.jsonl`
- corresponds to the downloaded path: `/tmp/pytest-of-username/pytest-46/cache/datasets/downloads/9f5374dbb470f711f6b89d66a5eec1f19cc96324b26bcbebe29138bda6cb20e6`, which does not have extension
In the case where the metadata file does not have an extension, the reader assumes it is a JSONL file, thus the reported error when trying to read a CSV file as a JSONL one: `ArrowInvalid: JSON parse error: Invalid value. in row 0`
This behavior was introduced by:
- #4837
This PR extracts the metadata file extension from the original filename (instead of the downloaded one) and passes it as a parameter to the read_metadata function.
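A sketch of the approach with illustrative names (the actual function and parameter names in the patch may differ):

```
from pathlib import Path

def resolve_metadata_format(original_filename: str) -> str:
    # Derive the format from the original filename rather than the
    # extensionless downloaded path, and pass it to the metadata reader.
    suffix = Path(original_filename).suffix  # ".jsonl" or ".csv"
    return suffix.lstrip(".") or "jsonl"  # keep the old default as fallback

assert resolve_metadata_format("hf://datasets/u/d/metadata.csv") == "csv"
```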
Fix #6315. | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6316/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6316/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6316",
"html_url": "https://github.com/huggingface/datasets/pull/6316",
"diff_url": "https://github.com/huggingface/datasets/pull/6316.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6316.patch",
"merged_at": "2023-10-20T06:14:09"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/6315 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6315/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6315/comments | https://api.github.com/repos/huggingface/datasets/issues/6315/events | https://github.com/huggingface/datasets/issues/6315 | 1,951,800,819 | I_kwDODunzps50Vh3z | 6,315 | Hub datasets with CSV metadata raise ArrowInvalid: JSON parse error: Invalid value. in row 0 | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 0 | 2023-10-19T10:11:29 | 2023-10-20T06:14:10 | 2023-10-20T06:14:10 | MEMBER | null | When trying to load a Hub dataset that contains a CSV metadata file, it raises an `ArrowInvalid` error:
```
E pyarrow.lib.ArrowInvalid: JSON parse error: Invalid value. in row 0
pyarrow/error.pxi:100: ArrowInvalid
```
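A minimal trigger sketch, using the dataset named in the discussion linked below:

```
from datasets import load_dataset

# The repo uses metadata.csv; the reader guesses JSONL for the
# extensionless downloaded copy and fails with ArrowInvalid.
ds = load_dataset("lukarape/public_small_papers")
```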
See: https://huggingface.co/datasets/lukarape/public_small_papers/discussions/1 | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6315/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6315/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6314 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6314/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6314/comments | https://api.github.com/repos/huggingface/datasets/issues/6314/events | https://github.com/huggingface/datasets/pull/6314 | 1,951,684,763 | PR_kwDODunzps5dPo25 | 6,314 | Support creating new branch in push_to_hub | {
"login": "jmif",
"id": 1000442,
"node_id": "MDQ6VXNlcjEwMDA0NDI=",
"avatar_url": "https://avatars.githubusercontent.com/u/1000442?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmif",
"html_url": "https://github.com/jmif",
"followers_url": "https://api.github.com/users/jmif/followers",
"following_url": "https://api.github.com/users/jmif/following{/other_user}",
"gists_url": "https://api.github.com/users/jmif/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmif/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmif/subscriptions",
"organizations_url": "https://api.github.com/users/jmif/orgs",
"repos_url": "https://api.github.com/users/jmif/repos",
"events_url": "https://api.github.com/users/jmif/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmif/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | 0 | 2023-10-19T09:12:39 | 2023-10-19T09:20:06 | 2023-10-19T09:19:48 | NONE | null | This adds support for creating a new branch when pushing a dataset to the hub. Tested both methods locally and branches are created. | {
"login": "jmif",
"id": 1000442,
"node_id": "MDQ6VXNlcjEwMDA0NDI=",
"avatar_url": "https://avatars.githubusercontent.com/u/1000442?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmif",
"html_url": "https://github.com/jmif",
"followers_url": "https://api.github.com/users/jmif/followers",
"following_url": "https://api.github.com/users/jmif/following{/other_user}",
"gists_url": "https://api.github.com/users/jmif/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmif/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmif/subscriptions",
"organizations_url": "https://api.github.com/users/jmif/orgs",
"repos_url": "https://api.github.com/users/jmif/repos",
"events_url": "https://api.github.com/users/jmif/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmif/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6314/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6314/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6314",
"html_url": "https://github.com/huggingface/datasets/pull/6314",
"diff_url": "https://github.com/huggingface/datasets/pull/6314.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6314.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/6313 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6313/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6313/comments | https://api.github.com/repos/huggingface/datasets/issues/6313/events | https://github.com/huggingface/datasets/pull/6313 | 1,951,527,712 | PR_kwDODunzps5dPGmL | 6,313 | Fix commit message formatting in multi-commit uploads | {
"login": "qgallouedec",
"id": 45557362,
"node_id": "MDQ6VXNlcjQ1NTU3MzYy",
"avatar_url": "https://avatars.githubusercontent.com/u/45557362?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/qgallouedec",
"html_url": "https://github.com/qgallouedec",
"followers_url": "https://api.github.com/users/qgallouedec/followers",
"following_url": "https://api.github.com/users/qgallouedec/following{/other_user}",
"gists_url": "https://api.github.com/users/qgallouedec/gists{/gist_id}",
"starred_url": "https://api.github.com/users/qgallouedec/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/qgallouedec/subscriptions",
"organizations_url": "https://api.github.com/users/qgallouedec/orgs",
"repos_url": "https://api.github.com/users/qgallouedec/repos",
"events_url": "https://api.github.com/users/qgallouedec/events{/privacy}",
"received_events_url": "https://api.github.com/users/qgallouedec/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | 2 | 2023-10-19T07:53:56 | 2023-10-20T14:06:13 | 2023-10-20T13:57:39 | MEMBER | null | Currently, the commit message keeps on adding:
- `Upload dataset (part 00000-of-00002)`
- `Upload dataset (part 00000-of-00002) (part 00001-of-00002)`
Introduced in https://github.com/huggingface/datasets/pull/6269
This PR fixes this issue to have
- `Upload dataset (part 00000-of-00002)`
- `Upload dataset (part 00001-of-00002)` | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6313/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6313/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6313",
"html_url": "https://github.com/huggingface/datasets/pull/6313",
"diff_url": "https://github.com/huggingface/datasets/pull/6313.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6313.patch",
"merged_at": "2023-10-20T13:57:38"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/6312 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6312/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6312/comments | https://api.github.com/repos/huggingface/datasets/issues/6312/events | https://github.com/huggingface/datasets/pull/6312 | 1,950,128,416 | PR_kwDODunzps5dKWDF | 6,312 | docs: resolving namespace conflict, refactored variable | {
"login": "smty2018",
"id": 74114936,
"node_id": "MDQ6VXNlcjc0MTE0OTM2",
"avatar_url": "https://avatars.githubusercontent.com/u/74114936?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/smty2018",
"html_url": "https://github.com/smty2018",
"followers_url": "https://api.github.com/users/smty2018/followers",
"following_url": "https://api.github.com/users/smty2018/following{/other_user}",
"gists_url": "https://api.github.com/users/smty2018/gists{/gist_id}",
"starred_url": "https://api.github.com/users/smty2018/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/smty2018/subscriptions",
"organizations_url": "https://api.github.com/users/smty2018/orgs",
"repos_url": "https://api.github.com/users/smty2018/repos",
"events_url": "https://api.github.com/users/smty2018/events{/privacy}",
"received_events_url": "https://api.github.com/users/smty2018/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | 1 | 2023-10-18T16:10:59 | 2023-10-19T16:31:59 | 2023-10-19T16:23:07 | CONTRIBUTOR | null | In the docs of about_arrow.md, in the example code below

The variable name 'time' was being used in a way that could lead to a namespace conflict with Python's built-in 'time' module. This is not a good convention and can lead to unintended variable shadowing for any user reusing the example code.
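A self-contained illustration of the problem and the rename (the actual docs snippet is only shown in the screenshot above):

```
import timeit

stmt = "sum(range(1000))"

# Before: rebinding `time` shadows the `time` module for any later code
# that expects the module under that name in the same namespace.
time = timeit.timeit(stmt=stmt, number=1)

# After: a descriptive name with no shadowing risk.
elapsed_time = timeit.timeit(stmt=stmt, number=1)
```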
To ensure code clarity and prevent potential naming conflicts, I renamed the variable 'time' to 'elapsed_time' in the example code. | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6312/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6312/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6312",
"html_url": "https://github.com/huggingface/datasets/pull/6312",
"diff_url": "https://github.com/huggingface/datasets/pull/6312.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6312.patch",
"merged_at": "2023-10-19T16:23:07"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/6311 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6311/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6311/comments | https://api.github.com/repos/huggingface/datasets/issues/6311/events | https://github.com/huggingface/datasets/issues/6311 | 1,949,304,993 | I_kwDODunzps50MAih | 6,311 | cast_column to Sequence with length=4 occur exception raise in datasets/table.py:2146 | {
"login": "neiblegy",
"id": 16574677,
"node_id": "MDQ6VXNlcjE2NTc0Njc3",
"avatar_url": "https://avatars.githubusercontent.com/u/16574677?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/neiblegy",
"html_url": "https://github.com/neiblegy",
"followers_url": "https://api.github.com/users/neiblegy/followers",
"following_url": "https://api.github.com/users/neiblegy/following{/other_user}",
"gists_url": "https://api.github.com/users/neiblegy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/neiblegy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/neiblegy/subscriptions",
"organizations_url": "https://api.github.com/users/neiblegy/orgs",
"repos_url": "https://api.github.com/users/neiblegy/repos",
"events_url": "https://api.github.com/users/neiblegy/events{/privacy}",
"received_events_url": "https://api.github.com/users/neiblegy/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | 4 | 2023-10-18T09:38:05 | 2024-02-06T19:24:20 | 2024-02-06T19:24:20 | NONE | null | ### Describe the bug
I load a dataset from a local CSV file which has 187383612 examples, then use `map` to generate new columns for testing.
Here is my code:
```
import os
from datasets import load_dataset
from datasets.features import Sequence, Value
def add_new_path(example):
    example["ais_bbox"] = [100,100,200,200]
    example["ais_image_path"] = os.path.join("images", example["image_path"]) if example["image_path"] else ""
    return example
ais_dataset = load_dataset("/data/ryan.gao/ais_dataset_cache/raw/1749/")
hf_ds = ais_dataset.map(add_new_path, batched=False, num_proc=32)
ds = hf_ds.cast_column("ais_bbox", Sequence(Value("int32"), length=4))
```
and the `cast_column` call raises an exception:
```
Casting the dataset: 3%|███▉
...
File "/home/protoss.gao/.local/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 2110, in cast_column
return self.cast(features)
File "/home/protoss.gao/.local/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 2055, in cast
dataset = dataset.map(
File "/home/protoss.gao/.local/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 592, in wrapper
out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
File "/home/protoss.gao/.local/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 557, in wrapper
out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
File "/home/protoss.gao/.local/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 3097, in map
for rank, done, content in Dataset._map_single(**dataset_kwargs):
File "/home/protoss.gao/.local/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 3474, in _map_single
batch = apply_function_on_filtered_inputs(
File "/home/protoss.gao/.local/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 3353, in apply_function_on_filtered_inputs
processed_inputs = function(*fn_args, *additional_args, **fn_kwargs)
File "/home/protoss.gao/.local/lib/python3.9/site-packages/datasets/table.py", line 2329, in table_cast
return cast_table_to_schema(table, schema)
File "/home/protoss.gao/.local/lib/python3.9/site-packages/datasets/table.py", line 2288, in cast_table_to_schema
arrays = [cast_array_to_feature(table[name], feature) for name, feature in features.items()]
File "/home/protoss.gao/.local/lib/python3.9/site-packages/datasets/table.py", line 2288, in <listcomp>
arrays = [cast_array_to_feature(table[name], feature) for name, feature in features.items()]
File "/home/protoss.gao/.local/lib/python3.9/site-packages/datasets/table.py", line 1831, in wrapper
return pa.chunked_array([func(chunk, *args, **kwargs) for chunk in array.chunks])
File "/home/protoss.gao/.local/lib/python3.9/site-packages/datasets/table.py", line 1831, in <listcomp>
return pa.chunked_array([func(chunk, *args, **kwargs) for chunk in array.chunks])
File "/home/protoss.gao/.local/lib/python3.9/site-packages/datasets/table.py", line 2145, in cast_array_to_feature
raise TypeError(f"Couldn't cast array of type\n{array.type}\nto\n{feature}")
TypeError: Couldn't cast array of type
list<item: int64>
to
Sequence(feature=Value(dtype='int32', id=None), length=4, id=None)
```
I checked the source code and added some debug output in datasets/table.py:2092:
```
2091     if feature.length > -1:
2092         if feature.length * len(array) == len(array.values):
2093             return pa.FixedSizeListArray.from_arrays(_c(array.values, feature.feature), feature.length)
2094         print(len(array))
2095         print(len(array.values))
```
My `feature.length` is 4, but `feature.length * len(array) == len(array.values)` is false:
`print(len(array))` gives 262
`print(len(array.values))` gives 4000
Then I used `for item in array` to print each item and got 262 copies of `[100,100,200,200]`,
while `for item in array.values` printed 4000 int32 values, i.e. 1000 copies of `[100,100,200,200]`.
I'm wondering whether, for a `chunk` in `array.chunks`, `chunk.values` returns the values of all chunks rather than of that single chunk; however, the PyArrow docs suggest `chunk.values` returns only the chunk's own values.
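The reported counts can be reproduced with plain PyArrow: `.values` on a sliced `ListArray` ignores the slice offset/length and returns the full underlying child array (`.flatten()` respects them):

```
import pyarrow as pa

# 1000 rows of length-4 lists, then a 262-row slice mimicking one chunk.
arr = pa.array([[100, 100, 200, 200]] * 1000, type=pa.list_(pa.int32()))
chunk = arr.slice(0, 262)

print(len(chunk))            # 262
print(len(chunk.values))     # 4000 -> .values ignores the slice
print(len(chunk.flatten()))  # 1048 -> 262 * 4, offsets respected
```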
### Steps to reproduce the bug
Code provided above.
### Expected behavior
`feature.length * len(array) == len(array.values)` should be true, and no exception should be raised.
### Environment info
python3.9
x86_64
datasets: 2.14.4
pyarrow: 13.0.0 or 10.0.0 | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6311/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6311/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6310 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6310/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6310/comments | https://api.github.com/repos/huggingface/datasets/issues/6310/events | https://github.com/huggingface/datasets/pull/6310 | 1,947,457,988 | PR_kwDODunzps5dBPnY | 6,310 | Add return_file_name in load_dataset | {
"login": "juliendenize",
"id": 40604584,
"node_id": "MDQ6VXNlcjQwNjA0NTg0",
"avatar_url": "https://avatars.githubusercontent.com/u/40604584?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/juliendenize",
"html_url": "https://github.com/juliendenize",
"followers_url": "https://api.github.com/users/juliendenize/followers",
"following_url": "https://api.github.com/users/juliendenize/following{/other_user}",
"gists_url": "https://api.github.com/users/juliendenize/gists{/gist_id}",
"starred_url": "https://api.github.com/users/juliendenize/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/juliendenize/subscriptions",
"organizations_url": "https://api.github.com/users/juliendenize/orgs",
"repos_url": "https://api.github.com/users/juliendenize/repos",
"events_url": "https://api.github.com/users/juliendenize/events{/privacy}",
"received_events_url": "https://api.github.com/users/juliendenize/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | 7 | 2023-10-17T13:36:57 | 2024-08-09T11:51:55 | 2024-07-31T13:56:50 | NONE | null | Proposal to fix #5806.
Added an optional parameter `return_file_name` in the dataset builder config. When set to `True`, the function will include the file name corresponding to the sample in the returned output.
Arrow-based and folder-based datasets differ in how the file name is returned:
- for arrow-based: a column is concatenated after the table is cast.
- for folder-based: `dataset.info.features` has the entry `file_name` and the original file name is passed to the `sample_metadata` dictionary.
The difference in behavior might be a concern; I also do not know whether `file_name` should contain the original file path or the downloaded one for folder-based datasets.
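Hypothetical usage of the proposed option (this is the interface proposed in this PR, not a released API):

```
from datasets import load_dataset

# Each sample gains a "file_name" entry pointing at the file it came from.
ds = load_dataset("imagefolder", data_dir="path/to/images", return_file_name=True)
print(ds["train"][0]["file_name"])
```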
I added some tests for the datasets that already had a test file. | {
"login": "juliendenize",
"id": 40604584,
"node_id": "MDQ6VXNlcjQwNjA0NTg0",
"avatar_url": "https://avatars.githubusercontent.com/u/40604584?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/juliendenize",
"html_url": "https://github.com/juliendenize",
"followers_url": "https://api.github.com/users/juliendenize/followers",
"following_url": "https://api.github.com/users/juliendenize/following{/other_user}",
"gists_url": "https://api.github.com/users/juliendenize/gists{/gist_id}",
"starred_url": "https://api.github.com/users/juliendenize/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/juliendenize/subscriptions",
"organizations_url": "https://api.github.com/users/juliendenize/orgs",
"repos_url": "https://api.github.com/users/juliendenize/repos",
"events_url": "https://api.github.com/users/juliendenize/events{/privacy}",
"received_events_url": "https://api.github.com/users/juliendenize/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6310/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6310/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6310",
"html_url": "https://github.com/huggingface/datasets/pull/6310",
"diff_url": "https://github.com/huggingface/datasets/pull/6310.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6310.patch",
"merged_at": null
} | true |